Dataset Preparation

Before Preparation

It is recommended to symlink the dataset root to $MMDETECTION3D/data. If your folder structure is different from the following, you may need to change the corresponding paths in config files.
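For example, a minimal symlink sketch (assuming your raw datasets live under a hypothetical /path/to/datasets):

# hypothetical source location; adjust to wherever your datasets actually live
mkdir -p ./data
ln -s /path/to/datasets/kitti ./data/kitti
ln -s /path/to/datasets/nuscenes ./data/nuscenes

The expected overall structure is: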

mmdetection3d
├── mmdet3d
├── tools
├── configs
├── data
│   ├── nuscenes
│   │   ├── maps
│   │   ├── samples
│   │   ├── sweeps
│   │   ├── v1.0-test
│   │   ├── v1.0-trainval
│   ├── kitti
│   │   ├── ImageSets
│   │   ├── testing
│   │   │   ├── calib
│   │   │   ├── image_2
│   │   │   ├── velodyne
│   │   ├── training
│   │   │   ├── calib
│   │   │   ├── image_2
│   │   │   ├── label_2
│   │   │   ├── velodyne
│   ├── waymo
│   │   ├── waymo_format
│   │   │   ├── training
│   │   │   ├── validation
│   │   │   ├── testing
│   │   │   ├── gt.bin
│   │   ├── kitti_format
│   │   │   ├── ImageSets
│   ├── lyft
│   │   ├── v1.01-train
│   │   │   ├── v1.01-train (train_data)
│   │   │   ├── lidar (train_lidar)
│   │   │   ├── images (train_images)
│   │   │   ├── maps (train_maps)
│   │   ├── v1.01-test
│   │   │   ├── v1.01-test (test_data)
│   │   │   ├── lidar (test_lidar)
│   │   │   ├── images (test_images)
│   │   │   ├── maps (test_maps)
│   │   ├── train.txt
│   │   ├── val.txt
│   │   ├── test.txt
│   │   ├── sample_submission.csv
│   ├── s3dis
│   │   ├── meta_data
│   │   ├── Stanford3dDataset_v1.2_Aligned_Version
│   │   ├── collect_indoor3d_data.py
│   │   ├── indoor3d_util.py
│   │   ├── README.md
│   ├── scannet
│   │   ├── meta_data
│   │   ├── scans
│   │   ├── scans_test
│   │   ├── batch_load_scannet_data.py
│   │   ├── load_scannet_data.py
│   │   ├── scannet_utils.py
│   │   ├── README.md
│   ├── sunrgbd
│   │   ├── OFFICIAL_SUNRGBD
│   │   ├── matlab
│   │   ├── sunrgbd_data.py
│   │   ├── sunrgbd_utils.py
│   │   ├── README.md
│   ├── semantickitti
│   │   ├── sequences
│   │   │   ├── 00
│   │   │   │   ├── labels
│   │   │   │   ├── velodyne
│   │   │   ├── 01
│   │   │   ├── ..
│   │   │   ├── 22

Download and Data Preparation

KITTI

  1. Download KITTI 3D detection data HERE. Alternatively, you can download the dataset from OpenDataLab using MIM. The commands are as follows:

# install OpenDataLab CLI tools
pip install -U opendatalab
# log in to OpenDataLab (you need to register an account at https://opendatalab.com/ first)
pip install odl
odl login
# download and preprocess by MIM
mim download mmdet3d --dataset kitti
  2. Prepare KITTI data splits by running:

mkdir ./data/kitti/ && mkdir ./data/kitti/ImageSets

# Download data split
wget -c  https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/test.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/test.txt
wget -c  https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/train.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/train.txt
wget -c  https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/val.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/val.txt
wget -c  https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/trainval.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/trainval.txt
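Optionally, verify that the split files downloaded completely; the standard KITTI split contains 3712 train, 3769 val, 7481 trainval and 7518 test frame IDs:

# each line in a split file is one frame ID, so line counts should match the split sizes
wc -l ./data/kitti/ImageSets/*.txt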
  3. Generate info files by running:

python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti

In a Slurm environment, you can run the following command instead:

sh tools/create_data.sh <partition> kitti

Tips:

  • Ready-made Annotations. We also provide KITTI annotation files generated offline here. You can download them and place them under data/kitti/. However, if you want to use the ObjectSample augmentation in LiDAR-based detection methods, you still need to generate the ground-truth database files and annotations:

    python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti --only-gt-database
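As a quick sanity check (a sketch assuming the v1.1-style info format, where each pkl stores a dict with metainfo and data_list keys), you can inspect a generated info file:

# print the dataset metainfo and the number of samples in the train info file
python -c "import pickle; d = pickle.load(open('./data/kitti/kitti_infos_train.pkl', 'rb')); print(d['metainfo']); print(len(d['data_list']), 'samples')"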
    

Waymo

Download Waymo open dataset V1.4.1 HERE and its data split HERE. Then put the .tfrecord files into the corresponding folders in data/waymo/waymo_format/ and put the data split .txt files into data/waymo/kitti_format/ImageSets. Download the ground truth .bin file for the validation set HERE and put it into data/waymo/waymo_format/. Tip: you can use gsutil to download this large-scale dataset from the command line; you can take this tool as an example, or see the sketch below.
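A minimal gsutil sketch (the bucket path below is an assumption; check the official Waymo download page for the exact layout of your dataset version):

# NOTE: the bucket path is an assumption; verify it on the Waymo download page
gsutil -m cp -r \
  gs://waymo_open_dataset_v_1_4_1/individual_files/training \
  ./data/waymo/waymo_format/

Subsequently, prepare Waymo data by running: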

# TF_CPP_MIN_LOG_LEVEL=3 will disable all logging output from TensorFlow.
# The number of `--workers` depends on the maximum number of cores in your CPU.
TF_CPP_MIN_LOG_LEVEL=3 python tools/create_data.py waymo --root-path ./data/waymo --out-dir ./data/waymo --workers 128 --extra-tag waymo --version v1.4

Note that:

  • In case the preprocessing of the Waymo dataset is slow or blocked, consider reducing the value of --workers. If this doesn't resolve the issue, set --workers to 0 to disable multiprocessing.

  • If your local disk does not have enough space for the converted data, you can point --out-dir somewhere else. Just remember to create the folders and prepare the data there in advance, then link them back to data/waymo/kitti_format after the conversion, as sketched below.
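For example, assuming a hypothetical larger disk mounted at /mnt/big_disk:

# hypothetical location with enough free space
mkdir -p /mnt/big_disk/waymo_kitti_format
# run the conversion with --out-dir /mnt/big_disk/waymo_kitti_format, then:
ln -s /mnt/big_disk/waymo_kitti_format ./data/waymo/kitti_format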

Tips:

  • Ready-made Annotations. We provide the annotation files generated offline here. However, you still need to convert the original Waymo data to KITTI format yourself.

  • Waymo-mini. If you just want to use a small part of the Waymo dataset to verify a method or debug quickly, you can use our Waymo-mini, which contains only two segments from the train split and one segment from the val split of the original dataset. All the images, point clouds and annotations in this archive have been processed offline, so you can directly download and extract it to data/waymo_mini:

    tar -xzvf waymo_mini.tar.gz -C ./data/waymo_mini
    

NuScenes

  1. Download the nuScenes V1.0 full dataset HERE. Alternatively, you can download the dataset from OpenDataLab using MIM. The download and unzip commands are as follows:

# install OpenDataLab CLI tools
pip install -U opendatalab
# log in to OpenDataLab (you need to register an account at https://opendatalab.com/ first)
pip install odl
odl login
# download and preprocess by MIM
mim download mmdet3d --dataset nuscenes
  2. Prepare nuScenes data by running:

python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes

Tips:

  • Ready-made Annotations. We also provide nuScenes annotation files generated offline here. You can download them and place them under data/nuscenes/. However, if you want to use the ObjectSample augmentation in LiDAR-based detection methods, you still need to generate the ground-truth database files and annotations:

python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes --only-gt-database

Lyft

Download Lyft 3D detection data HERE. Prepare Lyft data by running:

python tools/create_data.py lyft --root-path ./data/lyft --out-dir ./data/lyft --extra-tag lyft --version v1.01
python tools/dataset_converters/lyft_data_fixer.py --version v1.01 --root-folder ./data/lyft

Note that we follow the original folder names for clear organization; please rename the raw folders as shown in the folder structure above. Also note that the second command fixes a corrupted lidar data file; please refer to the discussion for more details.

SemanticKITTI

  1. Download the SemanticKITTI dataset HERE and unzip all zip files. Alternatively, you can download the dataset from OpenDataLab using MIM. The download and unzip commands are as follows:

# install OpenDataLab CLI tools
pip install -U opendatalab
# log in to OpenDataLab (you need to register an account at https://opendatalab.com/ first)
pip install odl
odl login
# download and preprocess by MIM
mim download mmdet3d --dataset semantickitti
  2. Generate info files by running:

python ./tools/create_data.py semantickitti --root-path ./data/semantickitti --out-dir ./data/semantickitti --extra-tag semantickitti
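Optionally, check that the info files were generated (assuming the default output names produced by the command above):

# the conversion should have produced info pkl files under the output directory
ls ./data/semantickitti/semantickitti_infos_*.pkl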

Tips:

  • Ready-made Annotations. We also provide SemanticKITTI annotation files generated offline here. You can download them and place them under data/semantickitti/.

S3DIS, ScanNet and SUN RGB-D

To prepare S3DIS data, please see its README.

To prepare ScanNet data, please see its README.

To prepare SUN RGB-D data, please see its README.

Tips: For the S3DIS, ScanNet and SUN RGB-D datasets, we also provide annotation files generated offline here. You can download them and place them under data/${DATASET}/. However, you still need to generate the point cloud files and the semantic/instance mask files (if the dataset provides them) yourself.

Customized Datasets

For using custom datasets, please refer to Customize Datasets.

Update Data Infos

If you created data infos with mmdetection3d v1.0.0rc1-v1.0.0rc4 and now want to use the latest v1.1.0, you need to update the info files:

python tools/dataset_converters/update_infos_to_v2.py --dataset ${DATA_SET} --pkl-path ${PKL_PATH} --out-dir ${OUT_DIR}

  • --dataset: name of the dataset.

  • --pkl-path: path to the data info pkl file.

  • --out-dir: output directory for the updated pkl file.

Example:

python tools/dataset_converters/update_infos_to_v2.py --dataset kitti --pkl-path ./data/kitti/kitti_infos_trainval.pkl --out-dir ./data/kitti
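If you need to update several info files, a simple loop over the splits works (a sketch assuming the default KITTI info file names):

# update the train/val/trainval/test info files in one go
for split in train val trainval test; do
  python tools/dataset_converters/update_infos_to_v2.py \
    --dataset kitti \
    --pkl-path ./data/kitti/kitti_infos_${split}.pkl \
    --out-dir ./data/kitti
done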