Detector Recipe Step Two: Compile and Optimize
In step one, you used LEIP Recipes to export a pre-trained, traced model. The second step of the recipe compiles and compresses this model in the same Docker container, producing a binary artifact optimized for the architecture of your target device. You will then copy this artifact and its associated files to the target device for evaluation.
Step two of the recipe uses the LEIP Pipeline command with a predefined build configuration provided in /latentai/recipes to ensure the best compilation results.
Optimizing for a Xavier Target
If your hardware target is an Xavier NX or AGX, use the following commands to compile and optimize your model:
export PYTHONPATH=/latentai/recipes/yolov5_L_RT/evaluation/utils:$PYTHONPATH

# Pipeline build for ARM64 with NVIDIA Volta (sm_72) GPU
# To target a different NVIDIA architecture, edit the pipeline yaml file
leip pipeline \
  --input_path /latentai/artifacts/export/recipe_yolov5l_batch1_640-640/traced_model.pt \
  --output_path /latentai/workspace/recipes_output/yolov5_L_RT/aarch64_cuda \
  --config_path /latentai/recipes/yolov5_L_RT/pipeline_aarch64_cuda.yaml
The binary artifacts will be created in the following directories. It is important that you maintain these output paths for use with the scripts provided to prepare the model for testing.
A Float32 version at:
An Int8 version at:
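As a quick sanity check before moving on, you can list the contents of the pipeline's output directory inside the Docker container. The exact subdirectory layout depends on the pipeline configuration, so treat the listing below as illustrative rather than authoritative:

```shell
# List the compiled artifacts produced by the pipeline build.
# The output path matches the --output_path passed to leip pipeline above.
OUT=/latentai/workspace/recipes_output/yolov5_L_RT/aarch64_cuda
ls -R "$OUT"
```

If the directory is missing or empty, recheck the pipeline command's output for compilation errors before proceeding.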
Optimizing for an x86_64 Target with GPU
Use the following commands to compile and optimize your model if you do not have access to a target device and want to test the model within the SDK Docker container:
export PYTHONPATH=/latentai/recipes/yolov5_L_RT/evaluation/utils:$PYTHONPATH

# Pipeline build for x86_64 with NVIDIA GPU
# To optimize for a specific NVIDIA architecture, edit the pipeline yaml file
leip pipeline \
  --input_path /latentai/artifacts/export/recipe_yolov5l_batch1_640-640/traced_model.pt \
  --output_path /latentai/workspace/recipes_output/yolov5_L_RT/x86_64_cuda \
  --config_path /latentai/recipes/yolov5_L_RT/pipeline_x86_64_cuda.yaml
The binary artifacts will be created in the following directories. It is important you maintain these output paths for use with the scripts provided to prepare the model for testing.
A Float32 version at:
An Int8 version at:
Preparing the Model Files
Now that you have generated the model artifacts, you will need to copy them out of the Docker container to run the model on the target. There are several files and directories that you will need to copy over. A script has been provided to facilitate copying these files into a tar archive file:
In the Docker container:
# Gather the model artifacts to copy to the target
cd /latentai/recipes/yolov5_L_RT

# If you targeted aarch64_cuda, use the following script command:
sh ./create-artifacts-tar.sh aarch64_cuda

# If you targeted x86_64_cuda, use the following script command:
sh ./create-artifacts-tar.sh x86_64_cuda

# An archive will be created at:
# /latentai/recipes/yolov5_L_RT/model-artifacts.tar.gz
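The provided script handles the packaging for you, but if you need to adapt it for a custom layout, its core can be sketched roughly as follows. This is a hypothetical sketch, not the contents of the actual create-artifacts-tar.sh, which may gather additional evaluation files:

```shell
#!/bin/sh
# Hypothetical sketch of an artifact-gathering script.
# The real create-artifacts-tar.sh may include extra files and checks.
TARGET="$1"   # aarch64_cuda or x86_64_cuda
OUT=/latentai/workspace/recipes_output/yolov5_L_RT/"$TARGET"

# Bundle the compiled model artifacts into a single compressed archive
tar czf model-artifacts.tar.gz -C "$OUT" .
```

The `-C` flag archives the directory contents relative to the output path, so the archive unpacks cleanly into any destination directory on the target.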
Copying the Dependency and Model Files
Before you can evaluate the model on the target device, you need to retrieve two archive files that will be copied to the AGX target. The first archive is a set of dependencies that you will need to install on the AGX the first time that you configure the target for testing. If you previously installed the dependencies for an older version of the SDK, you will need to reinstall the dependencies from the current SDK.
You can skip these copy steps if you are going to test the model within the SDK Docker container.
Note: The following examples assume you have named your running SDK Docker container leip. If you have used another name, please modify these Docker commands accordingly.
# Run this command on the host machine to copy the dependency file from the container
# You will later copy this file to the AGX target
docker cp leip:/agx-install.tar.gz .
Next, copy the model files that you prepared above with the create-artifacts-tar.sh script. You will need to repeat this step each time you have generated new model files:
# Copy the model artifacts from the docker container to the host
docker cp leip:/latentai/recipes/yolov5_L_RT/model-artifacts.tar.gz .
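After transferring both archives to the target device, unpack them there. A minimal sketch, assuming the archives sit in your current directory on the target (destination paths are illustrative; follow the instructions bundled with the dependency archive for the actual install procedure):

```shell
# On the target: unpack the dependency archive (first-time setup only)
tar xzf agx-install.tar.gz

# Unpack the model artifacts into a working directory
mkdir -p model-artifacts
tar xzf model-artifacts.tar.gz -C model-artifacts
```

Keeping the model artifacts in their own directory makes it easy to replace them wholesale each time you regenerate and re-copy the archive.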
Next, you will evaluate your optimized model on the target device.