
MobileNet SSD Recipes

Step One: Evaluating and Exporting a Pre-Trained MobileNet SSD Model

The workflow for the MobileNet SSD recipes follows a similar path to the YOLOv5 example. We have provided two MobileNet SSD model options. Note that our YOLOv5 and EfficientDet models are pre-trained on MS COCO, while the MobileNet SSD models are pre-trained on Pascal VOC. The smaller number of classes is often a better starting point for the smaller SSD model sizes. Using Pascal VOC requires an additional step to export the data from that format to the COCO format supported by leip evaluate.

First, choose your MobileNet SSD model architecture from the available options. Note that the available SSD models only support 300 x 300 input shapes. In the example below, we will use the mb1-ssd model architecture. Also note that when we export the model, we set the export.include_preprocessor=True flag. This will associate the pre-processor calls with the exported model, enabling tools such as leip evaluate to provide identical pre-processing when evaluating the compiled models.

CODE
export MNAME=mb1-ssd
export MODEL_PATH=/latentai/artifacts/export/${MNAME}_batch1_300-300

# evaluate
af --config-name=ssd \
  command=evaluate \
  model.architecture=$MNAME \
  task.moniker=$MNAME

# evaluation results will be written to:
# /latentai/artifacts/evaluate/pascal-voc-detection/val/metrics_report.json
  
# export
af --config-name=ssd \
  command=export \
  model.architecture=$MNAME \
  task.moniker=$MNAME \
  +export.include_preprocessor=True
  
# model will be exported to $MODEL_PATH
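
Before moving on, it can help to confirm the export landed where $MODEL_PATH points. This is only a sanity-check sketch built from the variables defined above; nothing in it is a LEIP command:

CODE
# Re-derive the export path from the model name, as in the commands above.
MNAME=mb1-ssd
MODEL_PATH=/latentai/artifacts/export/${MNAME}_batch1_300-300

# Report whether the exported model directory is present yet.
if [ -d "$MODEL_PATH" ]; then
  echo "export found: $MODEL_PATH"
else
  echo "export not found yet: $MODEL_PATH"
fi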

Exporting Pascal VOC Dataset

To use the Pascal VOC dataset in Step Three, you will need to export it into COCO format using the following commands:

CODE
# export data from af in default (coco) format
af data=torchvision/pascal-voc-detection command=export_data

# install the instances file where leip evaluate expects it
cd /latentai
cp artifacts/export_data/pascal-voc-detection_ascoco/instances_val2017.json \
  artifacts/export_data/pascal-voc-detection_ascoco/val2017/
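
The copy above only relocates one JSON file, but the resulting layout matters: the instances file must sit inside the val2017/ image directory. A throwaway mock of that layout (stand-in paths and a minimal, empty COCO skeleton) illustrates the end state:

CODE
# Build a disposable mock of the export_data layout in a temp directory.
root=$(mktemp -d)
mkdir -p "$root/pascal-voc-detection_ascoco/val2017"
echo '{"images": [], "annotations": [], "categories": []}' \
  > "$root/pascal-voc-detection_ascoco/instances_val2017.json"

# Same copy as in the real commands: place the instances file
# alongside the validation images.
cp "$root/pascal-voc-detection_ascoco/instances_val2017.json" \
   "$root/pascal-voc-detection_ascoco/val2017/"

ls "$root/pascal-voc-detection_ascoco/val2017/"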

Step Two: Optimize and Compile your MobileNet SSD Model

Please refer to the YOLOv5 Step Two example for more details on building your recipe model. The instructions for MobileNet SSD are very similar, with one additional step to install requisite post-processing files.

Pipeline build files that represent a good starting point for the supported MobileNet SSD architectures are provided in the recipes/ssd directory. You may find that modifying the default settings provides you with better inference speed or accuracy, depending on the specific model or target architecture.

CODE
# Install processors referred to in the build pipeline config file
cp /latentai/recipes/ssd/processors/*.py $MODEL_PATH/processors/

# Choose the below pipeline command to meet your target:

# Run the pipeline for ssd (x86_64, no GPU)
leip pipeline \
  --input_path $MODEL_PATH \
  --output_path /latentai/workspace/output/${MNAME}/x86_64 \
  --config_path /latentai/recipes/ssd/pipeline_x86_64.yaml \
  --loglevel DEBUG
  
# Run the pipeline for ssd (x86_64, with GPU)
leip pipeline \
  --input_path $MODEL_PATH \
  --output_path /latentai/workspace/output/${MNAME}/x86_64_cuda \
  --config_path /latentai/recipes/ssd/pipeline_x86_64_cuda.yaml \
  --loglevel DEBUG
  
# Run the pipeline for ssd (ARM, no GPU)
leip pipeline \
  --input_path $MODEL_PATH \
  --output_path /latentai/workspace/output/${MNAME}/aarch64 \
  --config_path /latentai/recipes/ssd/pipeline_aarch64.yaml \
  --loglevel DEBUG
  
# Run the pipeline for ssd (ARM, with GPU)
leip pipeline \
  --input_path $MODEL_PATH \
  --output_path /latentai/workspace/output/${MNAME}/aarch64_cuda \
  --config_path /latentai/recipes/ssd/pipeline_aarch64_cuda.yaml \
  --loglevel DEBUG
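
The four invocations above differ only in the target name, which appears in both the output path and the config filename. If you build for several targets, a loop makes that pattern explicit. Here the commands are echoed rather than executed, with paths mirroring the ones above:

CODE
MNAME=mb1-ssd
MODEL_PATH=/latentai/artifacts/export/${MNAME}_batch1_300-300

# Echo one pipeline command per supported target.
for target in x86_64 x86_64_cuda aarch64 aarch64_cuda; do
  config=/latentai/recipes/ssd/pipeline_${target}.yaml
  out=/latentai/workspace/output/${MNAME}/${target}
  echo "leip pipeline --input_path $MODEL_PATH --output_path $out --config_path $config"
done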

Step Three: Evaluating Within the SDK Docker Container

For Step Three, refer to the instructions for the YOLOv5 example for more details. An example (for x86_64, no GPU) is provided below to illustrate the proper --test_path to use for MobileNet SSD with the dataset exported above.

When executing Step Three, make sure your environment variables are accurate as set in Step One.

CODE
leip evaluate \
  --input_path /latentai/workspace/output/${MNAME}/x86_64/Float32-compile/ \
  --test_path /latentai/artifacts/export_data/pascal-voc-detection_ascoco/val2017/instances_val2017.json \
  --dataset_type coco

leip evaluate \
  --input_path /latentai/workspace/output/${MNAME}/x86_64/Int8-optimize/ \
  --test_path /latentai/artifacts/export_data/pascal-voc-detection_ascoco/val2017/instances_val2017.json \
  --dataset_type coco
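
The two evaluations differ only in which compiled variant (Float32-compile vs. Int8-optimize) is passed as --input_path. A small loop, shown here with the commands echoed rather than run, covers both:

CODE
MNAME=mb1-ssd
TEST_PATH=/latentai/artifacts/export_data/pascal-voc-detection_ascoco/val2017/instances_val2017.json

# Echo one evaluate command per compiled variant.
for variant in Float32-compile Int8-optimize; do
  echo "leip evaluate --input_path /latentai/workspace/output/${MNAME}/x86_64/${variant}/ --test_path $TEST_PATH --dataset_type coco"
done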

Step Four: Deploying your Model

For Step Four, you can follow the same instructions provided for the YOLOv5 example.

The C++ examples described there will also provide you with timing benchmarks for your model on your device.

BYOD Example

Once you are ready to train your model with your own data, refer to our instructions on BYOD.
