EfficientDet Recipes
Step One: Evaluating and Exporting a Pre-Trained EfficientDet Model
The workflow for the EfficientDet recipes follows a similar path to the YOLOv5 example. We provide several EfficientDet model options, each pre-trained on the MS COCO dataset. Note that there are a few additional parameters you need to specify when selecting an EfficientDet model.
First, choose your EfficientDet model architecture from the available options and note the input size it supports. In the example below, we will use the efficientdet_d0 model architecture, which expects a 512 x 512 input. Also note that when we export the model, we set the export.include_preprocessor=True flag. This associates the pre-processor calls with the exported model, so that tools such as leip evaluate apply identical pre-processing when evaluating the compiled models.
export EFFICIENTDET_SIZE=512
export MNAME=efficientdet_d0
export MODEL_PATH=/latentai/artifacts/export/${MNAME}_batch1_${EFFICIENTDET_SIZE}-${EFFICIENTDET_SIZE}
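# (Optional) To target a different EfficientDet variant, change both values together.
# Input sizes below follow the standard EfficientDet family (e.g. 640 for efficientdet_d1);
# confirm which variants and sizes your SDK release actually provides:
# export MNAME=efficientdet_d1
# export EFFICIENTDET_SIZE=640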
# evaluate
af --config-name=efficientdet \
command=evaluate \
model.module.model_architecture=$MNAME \
task.moniker=$MNAME \
++task.width=$EFFICIENTDET_SIZE ++task.height=$EFFICIENTDET_SIZE
# evaluation results will be written to:
# /latentai/artifacts/evaluate/coco-detection-90class/val/metrics_report.json
# export
af --config-name=efficientdet \
command=export \
model.module.model_architecture=$MNAME \
task.moniker=$MNAME \
++task.width=$EFFICIENTDET_SIZE ++task.height=$EFFICIENTDET_SIZE \
+export.include_preprocessor=True
# model will be exported to $MODEL_PATH
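After both commands complete, a quick sanity check of the artifacts can save debugging time later. The paths below are the ones noted in the comments above; python3 -m json.tool is used here only as a convenient JSON pretty-printer and is not part of the SDK tooling:
# Pretty-print the evaluation metrics written by command=evaluate:
python3 -m json.tool /latentai/artifacts/evaluate/coco-detection-90class/val/metrics_report.json
# Confirm that the exported model directory exists:
ls -l $MODEL_PATH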
Set Up the Dataset for Steps Two and Three
The command=evaluate step above downloads the MS COCO dataset. This needs to happen before you optimize and compile an EfficientDet model, because part of that dataset is used for calibration. To set up the dataset for the rest of this tutorial, do the following (unless you already set this up in the YOLOv5 example):
# MS COCO datasets are downloaded in step 1 when you ran af ... command=evaluate
# These steps will make it easy to use that dataset with leip evaluate
/latentai/recipes/mscoco_dataset_schema.json
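As a quick check before moving on, confirm that the dataset schema file referenced above is present inside the SDK container (the path is taken from the listing above; the COCO data itself is downloaded and managed by the af tooling):
ls -l /latentai/recipes/mscoco_dataset_schema.json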
Step Two: Optimize and Compile Your EfficientDet Model
Please refer to the YOLOv5 Step Two example for more details on building your recipe model. The instructions for EfficientDet are very similar, with one additional step to copy the requisite post-processing files into place.
Pipeline build files that represent a good starting point for the supported EfficientDet architectures are provided in the recipes/efficientdet directory. You may find that modifying the default settings provides you with better inference speed or accuracy, depending on the specific model or target architecture. See the LEIP Pipeline documentation for more information.
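If you want to see which pipeline configurations ship with the recipe before choosing one, a directory listing is sufficient (the absolute path below mirrors the relative recipes/efficientdet path used in the commands that follow):
ls /latentai/recipes/efficientdet/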
# See Step One above for setting environment variables
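# If the exported model does not already contain a processors/ directory,
# create it first (a defensive step; harmless if the directory already exists):
mkdir -p $MODEL_PATH/processors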
# Copy the processor files to the target directory:
cp /latentai/recipes/efficientdet/processors/*.py $MODEL_PATH/processors/
# Compile / Optimize for x86 (no GPU)
leip pipeline \
--input_path $MODEL_PATH \
--output_path workspace/output/$MNAME/x86_64 \
--config_path recipes/efficientdet/pipeline_x86_64.yaml
# Compile / Optimize for x86 with CUDA
leip pipeline \
--input_path $MODEL_PATH \
--output_path workspace/output/$MNAME/x86_64_cuda \
--config_path recipes/efficientdet/pipeline_x86_64_cuda.yaml
# Compile / Optimize for ARM (no GPU)
leip pipeline \
--input_path $MODEL_PATH \
--output_path workspace/output/$MNAME/aarch64 \
--config_path recipes/efficientdet/pipeline_aarch64.yaml
# Compile / Optimize for ARM with CUDA (Xavier + Jetpack 4.6)
leip pipeline \
--input_path $MODEL_PATH \
--output_path workspace/output/$MNAME/aarch64_cuda \
--config_path recipes/efficientdet/pipeline_aarch64_cuda_xavier_jp4.yaml
# Compile / Optimize for ARM with CUDA (Orin + Jetpack 5)
leip pipeline \
--input_path $MODEL_PATH \
--output_path workspace/output/$MNAME/aarch64_cuda \
--config_path recipes/efficientdet/pipeline_aarch64_cuda_orin_jp5.yaml
# If you are using Xavier with Jetpack 5, you will need to create your own pipeline
# configuration based on these examples, or contact Latent AI for assistance.
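Whichever target you choose, the compiled artifacts are written under the --output_path passed to leip pipeline. Run from the same working directory as the commands above, a listing of that directory is a quick way to confirm the build completed (the path below simply reuses the output locations chosen above):
ls -R workspace/output/$MNAME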
Step Three: Evaluating Within the SDK Docker Container
For Step Three, you can follow the same instructions provided for the YOLOv5 example. Before executing Step Three, make sure the environment variables from the Step One example above are still set in your shell (see below).
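If you are returning in a new shell session, re-export the variables from Step One before running the evaluation commands; the values below simply repeat the ones used earlier in this recipe:
export EFFICIENTDET_SIZE=512
export MNAME=efficientdet_d0
export MODEL_PATH=/latentai/artifacts/export/${MNAME}_batch1_${EFFICIENTDET_SIZE}-${EFFICIENTDET_SIZE}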
Step Four: Deploying Your Model
For Step Four, you can follow the same instructions provided for the YOLOv5 example. The C++ examples described there will also provide you with timing benchmarks for your model on your device.
EfficientDet BYOD
Refer to our instructions on BYOD once you are ready to train your model with your own data.