Detector Recipe Step Three: Evaluate on the Target Device
Run the leip evaluate SDK command to evaluate the model you optimized, compiled, and packaged in Step Two. The command-line arguments will differ slightly depending on whether you are running inference locally in the SDK Docker container or connecting to a networked device to run inference remotely.
Installing the Data in the SDK Docker Container
The pre-trained model was trained on the MS COCO dataset, so data from this dataset must be installed before you can evaluate the model:
# Install MS COCO data for validation
cd workspace
sh /latentai/recipes/yolov5/evaluation/download_mscoco.sh
cd /latentai
ls workspace/mscoco
If you have trained your model with a different dataset, instructions for installing different validation data on the target are covered in the section on evaluating and deploying your model with BYOD.
For the following examples, make sure that the environment variable MNAME is still set appropriately from Step Two.
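If you have opened a new shell since Step Two, the variable may no longer be set. You can re-export it; the value below is illustrative and should match the model name you used in Step Two:
# Re-export MNAME if it is no longer set (example value; use your model name)
export MNAME=yolov5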
Evaluating Within the SDK Docker Container
To evaluate the model entirely within the SDK Docker container, pass the compiled model directly to leip evaluate along with the test path. Assuming you followed the paths and naming conventions used earlier in this tutorial, you can evaluate the model as follows.
Perform the following to evaluate in an x86_64 Docker container with an NVIDIA graphics card:
# Evaluating Float32:
leip evaluate \
--input_path workspace/output/$MNAME/x86_64_cuda/Float32-compile \
--test_path workspace/mscoco/val2017 \
--dataset_type coco
# Evaluating Int8:
leip evaluate \
--input_path workspace/output/$MNAME/x86_64_cuda/Int8-optimize \
--test_path workspace/mscoco/val2017 \
--dataset_type coco
Replace x86_64_cuda in the above examples with x86_64 for an x86_64 target without a GPU.
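For example, the Float32 evaluation on a CPU-only x86_64 host changes only the target directory:
# Evaluating Float32 on x86_64 without a GPU:
leip evaluate \
--input_path workspace/output/$MNAME/x86_64/Float32-compile \
--test_path workspace/mscoco/val2017 \
--dataset_type coco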
If evaluating a GPU-targeted model fails with a CUDA_ERROR_NO_BINARY_FOR_GPU error, the model was optimized/compiled with the wrong arch flag.
Evaluating with Remote Inference
The leip evaluate command is run in the SDK Docker container to evaluate the model on a remote target device, with inference performed on the device under test. You will first need to set up your target by installing the Latent AI Object Runner (LOR). You will then evaluate the model using the LRE objects created by leip pipeline in Step Two. The following examples assume you followed the naming conventions and paths from earlier in the tutorial.
Perform the following when using an ARM processor without a GPU:
# Substitute the IP address of your target device for <IP_ADDR>
# The default port for LOR is 50051
# Evaluating Float32:
leip evaluate \
--input_path workspace/output/$MNAME/aarch64/Float32-package \
--host <IP_ADDR> --port 50051 \
--test_path workspace/mscoco/val2017 \
--dataset_type coco
# Evaluating Int8:
leip evaluate \
--input_path workspace/output/$MNAME/aarch64/Int8-package \
--host <IP_ADDR> --port 50051 \
--test_path workspace/mscoco/val2017 \
--dataset_type coco
Replace aarch64 in the above examples with aarch64_cuda for an ARM processor with a GPU, x86_64 for an x86_64 without a GPU, or x86_64_cuda for an x86_64 with a GPU, as appropriate for your device under test.
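For example, to evaluate the Int8 package remotely on an ARM target with a GPU, only the target directory changes:
# Evaluating Int8 remotely on aarch64 with a GPU:
leip evaluate \
--input_path workspace/output/$MNAME/aarch64_cuda/Int8-package \
--host <IP_ADDR> --port 50051 \
--test_path workspace/mscoco/val2017 \
--dataset_type coco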
It is also possible to test an LRE object with leip evaluate inside the Docker container by running the LOR in the container itself. Launch the LOR within the SDK by calling python3 -m lor.lor_server. You can then use leip evaluate within the same container by passing --host 0.0.0.0.
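A minimal sketch of that workflow, assuming the x86_64 Float32 package from the earlier naming conventions and the default port (running the server in the background with & is illustrative):
# Start the LOR server in the background inside the SDK container
python3 -m lor.lor_server &
# Evaluate against the local LOR instance
leip evaluate \
--input_path workspace/output/$MNAME/x86_64/Float32-package \
--host 0.0.0.0 --port 50051 \
--test_path workspace/mscoco/val2017 \
--dataset_type coco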
If you want to access the LOR in one container from a leip evaluate process running in another, you will need to expose the LOR port. If you use the default port, you can do this by adding -p 50051:50051 to the docker run command. Use the IP address of the Docker container running the LOR when passing the --host flag to leip evaluate.
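As a sketch of that two-container setup, assuming the default port: <SDK_IMAGE> is a placeholder for the SDK image name, the container name lor-sdk is illustrative, and the docker inspect template is one common way to look up a container's IP address.
# Container A: launch the SDK with the LOR port published
docker run -it --name lor-sdk -p 50051:50051 <SDK_IMAGE>
# Inside container A: start the LOR
python3 -m lor.lor_server
# From the host: look up container A's IP address to pass as --host
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' lor-sdk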
Next Steps
Once you have finished evaluating the model on the target, you can either integrate the model into your code for deployment, or retrain it with your own data and then export, compile, and evaluate it again.