
MobileNet SSD Recipes

Step One: Evaluating and Exporting a Pre-Trained MobileNet SSD Model

The workflow for the MobileNet SSD recipes follows a similar path to the YOLOv5 example. We have provided two MobileNet SSD model options. Note that our YOLOv5 and EfficientDet models are pre-trained on MS COCO, while the MobileNet SSD models are pre-trained on Pascal VOC. Pascal VOC's smaller label set (20 classes, versus 80 for MS COCO) is often a better starting point for the smaller SSD model sizes. Using Pascal VOC will require an additional step to convert the data from that format to the COCO format supported by leip evaluate.

First, choose your MobileNet SSD model architecture from the available options. Note that the available SSD models only support 300 x 300 input shapes. We will use the mb1-ssd architecture in the example below. Also note that when we export the model, we set the export.include_preprocessor=True flag. This associates the pre-processor calls with the exported model, enabling tools such as leip evaluate to apply identical pre-processing when evaluating the compiled models.

CODE
export MNAME=mb1-ssd
export MODEL_PATH=/latentai/artifacts/export/${MNAME}_batch1_300-300

# evaluate
af --config-name=ssd \
  command=evaluate \
  model.architecture=$MNAME \
  task.moniker=$MNAME

# evaluation results will be written to:
# /latentai/artifacts/evaluate/pascal-voc-detection/val/metrics_report.json
  
# export
af --config-name=ssd \
  command=export \
  model.architecture=$MNAME \
  task.moniker=$MNAME \
  +export.include_preprocessor=True
  
# model will be exported to $MODEL_PATH
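
If you would like to inspect the evaluation results, the metrics report is plain JSON. Below is a minimal sketch for pretty-printing it; it assumes python3 is available inside the SDK container.

CODE
# Pretty-print the metrics report written by `af command=evaluate`
python3 -m json.tool \
  /latentai/artifacts/evaluate/pascal-voc-detection/val/metrics_report.json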

Exporting Pascal VOC Dataset

You will need the Pascal VOC dataset in COCO format for Step Three. Export it using the following command:

CODE
# export data from af in default (coco) format
af data=torchvision/pascal-voc-detection command=export_data
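
The exported dataset provides the ground-truth annotations referenced by the --test_path schema (voc_as_coco_dataset_schema.json) used with leip evaluate in Step Three.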

Step Two: Optimize and Compile your MobileNet SSD Model

The MobileNet SSD optimization pipeline requires calibration images, which are specified in /latentai/recipes/ssd/rep_dataset.txt. Those images are downloaded either by running the af command=evaluate command in Step One or by following the Exporting Pascal VOC Dataset instructions in the previous section. The dataset must be downloaded before you can run the pipeline steps below.
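
Before launching a pipeline, you can verify that the calibration images are in place. The sketch below assumes rep_dataset.txt lists one image path per line:

CODE
# Show the first few calibration image paths referenced by the SSD recipe
head -n 5 /latentai/recipes/ssd/rep_dataset.txt

# Report any referenced images that are missing on disk
while read -r img; do
  [ -f "$img" ] || echo "missing: $img"
done < /latentai/recipes/ssd/rep_dataset.txt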

Please refer to the YOLOv5 Step Two example for more details on building your recipe model. The instructions for MobileNet SSD are very similar, with one additional step to install the requisite post-processing files.

Pipeline build files that represent a good starting point for the supported MobileNet SSD architectures are provided in the recipes/ssd directory. You may find that modifying the default settings provides you with better inference speed or accuracy, depending on the specific model or target architecture.
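
To review the provided defaults before modifying them, you can list and inspect the configuration files directly:

CODE
# List the pipeline build files shipped with the SSD recipes
ls /latentai/recipes/ssd/*.yaml

# Inspect one of the configs before editing a copy
cat /latentai/recipes/ssd/pipeline_x86_64.yaml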

CODE
# Install processors referred to in the build pipeline config file
cp /latentai/recipes/ssd/processors/*.py $MODEL_PATH/processors/

# Choose the pipeline command below that matches your target:

# Run the pipeline for ssd (x86_64, no GPU)
leip pipeline \
  --input_path $MODEL_PATH \
  --output_path /latentai/workspace/output/${MNAME}/x86_64 \
  --config_path /latentai/recipes/ssd/pipeline_x86_64.yaml
  
# Run the pipeline for ssd (x86_64, with GPU)
leip pipeline \
  --input_path $MODEL_PATH \
  --output_path /latentai/workspace/output/${MNAME}/x86_64_cuda \
  --config_path /latentai/recipes/ssd/pipeline_x86_64_cuda.yaml
  
# Run the pipeline for ssd (ARM, no GPU)
leip pipeline \
  --input_path $MODEL_PATH \
  --output_path /latentai/workspace/output/${MNAME}/aarch64 \
  --config_path /latentai/recipes/ssd/pipeline_aarch64.yaml
  
# Run the pipeline for ssd (ARM, with GPU) (Xavier JetPack 4.6)
leip pipeline \
  --input_path $MODEL_PATH \
  --output_path /latentai/workspace/output/${MNAME}/aarch64_cuda \
  --config_path /latentai/recipes/ssd/pipeline_aarch64_cuda_xavier_jp4.yaml

# Run the pipeline for ssd (ARM, with GPU) (Orin JetPack 5)
leip pipeline \
  --input_path $MODEL_PATH \
  --output_path /latentai/workspace/output/${MNAME}/aarch64_cuda \
  --config_path /latentai/recipes/ssd/pipeline_aarch64_cuda_orin_jp5.yaml

# If you are using Xavier with JetPack 5, you will need to create your own
# pipeline configuration from these examples (see the sketch below), or
# contact Latent AI for assistance.
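
As a starting point for a Xavier JetPack 5 configuration, one option is to copy the closest provided example and adjust it by hand. This is only a sketch: the JP5 file name is hypothetical, and which fields must change (for example, CUDA or TensorRT versions) depends on your JetPack install.

CODE
# Hypothetical: clone the nearest example config as a starting point
cp /latentai/recipes/ssd/pipeline_aarch64_cuda_xavier_jp4.yaml \
   /latentai/recipes/ssd/pipeline_aarch64_cuda_xavier_jp5.yaml

# Update any JetPack-4-specific settings to match your JetPack 5 install
vi /latentai/recipes/ssd/pipeline_aarch64_cuda_xavier_jp5.yaml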

Step Three: Evaluating Within the SDK Docker Container

For Step Three, refer to the instructions for the YOLOv5 example for more details. An example (for x86_64, no GPU) is provided below to illustrate the proper --test_path to use for MobileNet SSD with the dataset exported above.

When executing Step Three, make sure the environment variables set in Step One are still defined in your shell.
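
For example, if you are working in a new shell, the variables from Step One may no longer be set:

CODE
# Confirm the Step One variables are still defined in this shell
echo "MNAME=${MNAME} MODEL_PATH=${MODEL_PATH}"

# Re-export them if either is empty (values from Step One)
export MNAME=mb1-ssd
export MODEL_PATH=/latentai/artifacts/export/${MNAME}_batch1_300-300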

CODE
leip evaluate \
  --input_path /latentai/workspace/output/${MNAME}/x86_64/Float32-compile/ \
  --test_path /latentai/recipes/voc_as_coco_dataset_schema.json \
  --task_family detection

leip evaluate \
  --input_path /latentai/workspace/output/${MNAME}/x86_64/Int8-optimize/ \
  --test_path /latentai/recipes/voc_as_coco_dataset_schema.json \
  --task_family detection

Step Four: Deploying Your Model

For Step Four, you can follow the same instructions provided for the YOLOv5 example.

The C++ examples described there will also provide you with timing benchmarks for your model on your device.

BYOD Example

Once you are ready to train your model with your own data, refer to our instructions on BYOD.
