Step One: Evaluating and Exporting a Pre-Trained EfficientDet Model

The workflow for the EfficientDet recipes follows a similar path to the YOLOv5 example. We have provided several EfficientDet model options, each pre-trained on the MS COCO dataset. Note that there are a few additional parameters you need to specify when selecting EfficientDet.

First, choose your EfficientDet model architecture from the available options and note its supported input size. In the example below, we will use the efficientdet_d0 model architecture, which expects 512 x 512 input. Also note that when we export the model, we set the export.include_preprocessor=True flag. This associates the pre-processor calls with the exported model, enabling tools such as leip evaluate to apply identical pre-processing when evaluating the compiled models.

export EFFICIENTDET_SIZE=512
export MNAME=efficientdet_d0
export MODEL_PATH=/latentai/artifacts/export/${MNAME}_batch1_${EFFICIENTDET_SIZE}-${EFFICIENTDET_SIZE}

# evaluate

af --config-name=efficientdet \
  command=evaluate \
  model.module.model_architecture=$MNAME \
  task.moniker=$MNAME \
  ++task.width=$EFFICIENTDET_SIZE ++task.height=$EFFICIENTDET_SIZE

# evaluation results will be written to:
# /latentai/artifacts/evaluate/coco-detection-90class/val/metrics_report.json
  
# export
af --config-name=efficientdet \
  command=export \
  model.module.model_architecture=$MNAME \
  task.moniker=$MNAME \
  ++task.width=$EFFICIENTDET_SIZE ++task.height=$EFFICIENTDET_SIZE \
  +export.include_preprocessor=True

# model will be exported to $MODEL_PATH
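
You can quickly confirm the export and inspect the evaluation metrics from the command line. This is a minimal sketch; it assumes python3 is available inside the SDK container and that the report was written to the default path shown above.

# Confirm the exported model is present
ls $MODEL_PATH

# Pretty-print the evaluation metrics report
python3 -m json.tool /latentai/artifacts/evaluate/coco-detection-90class/val/metrics_report.json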

Step Two: Optimizing and Compiling your EfficientDet Model

Please refer to the YOLOv5 Step Two example for more details on building your recipe model. The instructions for EfficientDet are very similar, with one additional step to install the requisite post-processing files.

Pipeline build files that represent a good starting point for the supported EfficientDet architectures are provided in the recipes/efficientdet directory. You may find that modifying the default settings provides you with better inference speed or accuracy, depending on the specific model or target architecture. See the LEIP Pipeline documentation for more information.
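
If you want to experiment with the pipeline settings, a low-risk approach is to copy one of the provided configurations into your workspace, edit the copy, and pass it to leip pipeline via --config_path in place of the defaults used below. A minimal sketch, assuming the x86_64 configuration; the file name my_pipeline_x86_64.yaml is only an example:

# Copy the default x86_64 pipeline configuration, then edit the copy
cp /latentai/recipes/efficientdet/pipeline_x86_64.yaml \
   /latentai/workspace/my_pipeline_x86_64.yaml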

# See Step One above for setting environment variables

# Copy the processor files to the target directory:
cp /latentai/recipes/efficientdet/processors/*.py $MODEL_PATH/processors/

# Download mscoco for calibration and evaluation
# (If not previously downloaded for YOLOv5)
mkdir -p /latentai/workspace
cd /latentai/workspace
sh /latentai/recipes/yolov5/evaluation/download_mscoco.sh
cd /latentai

# Compile / Optimize for x86 (no GPU)
leip pipeline \
  --input_path $MODEL_PATH \
  --output_path workspace/output/$MNAME/x86_64 \
  --config_path recipes/efficientdet/pipeline_x86_64.yaml

# Compile / Optimize for x86 with CUDA
leip pipeline \
  --input_path $MODEL_PATH \
  --output_path workspace/output/$MNAME/x86_64_cuda \
  --config_path recipes/efficientdet/pipeline_x86_64_cuda.yaml

# Compile / Optimize for ARM (no GPU)
leip pipeline \
  --input_path $MODEL_PATH \
  --output_path workspace/output/$MNAME/aarch64 \
  --config_path recipes/efficientdet/pipeline_aarch64.yaml

# Compile / Optimize for ARM with CUDA
leip pipeline \
  --input_path $MODEL_PATH \
  --output_path workspace/output/$MNAME/aarch64_cuda \
  --config_path recipes/efficientdet/pipeline_aarch64_cuda.yaml
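
After the pipelines complete, you can confirm that artifacts were produced for each target you compiled. This is only a directory listing; the exact contents will depend on the pipeline configuration used.

# List the compiled outputs for each target
ls -R workspace/output/$MNAME/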

Step Three: Evaluating Within the SDK Docker Container

For Step Three, you can follow the same instructions provided for the YOLOv5 example. When executing Step Three, make sure your environment variables are still set as shown in the Step One example above.
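
If you have opened a new shell or restarted the container since Step One, re-export those variables before running the Step Three commands:

# Re-export the variables from Step One
export EFFICIENTDET_SIZE=512
export MNAME=efficientdet_d0
export MODEL_PATH=/latentai/artifacts/export/${MNAME}_batch1_${EFFICIENTDET_SIZE}-${EFFICIENTDET_SIZE}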

Step Four: Deploying your Model

For Step Four, you can follow the same instructions provided for the YOLOv5 example. The C++ examples described there will also provide you with timing benchmarks for your model on your device.
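
As a rough sketch of moving the compiled artifacts onto an ARM target over SSH, you can copy the relevant output directory from Step Two. The hostname user@my-device and the destination path are placeholders; follow the YOLOv5 Step Four instructions for building and running the C++ examples once the files are on the device.

# Example: copy the ARM + CUDA build to the target device (hostname and path are placeholders)
scp -r workspace/output/$MNAME/aarch64_cuda user@my-device:~/models/$MNAME/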

EfficientDet BYOD

Once you are ready to train your model with your own data, refer to our instructions on BYOD.