
Detector Recipe Step Three: Evaluate on the Target Device

Run the leip evaluate SDK command to evaluate the model you optimized, compiled, and packaged in Step Two. The command-line arguments differ slightly depending on whether you are running inference locally in the SDK Docker container or connecting to a networked device to run inference remotely.

Installing the Data on the SDK Docker Container

The pre-trained model has been trained on the MS COCO dataset. Data from this dataset must be installed to evaluate the model:

CODE
# MS COCO datasets are downloaded in step 1 when you ran af ... command=evaluate
# These steps will make it easy to use that dataset with leip evaluate

mkdir /latentai/workspace/mscoco
ln -s /latentai/workspace/datasets/MSCocodata/validation/data \
  /latentai/workspace/mscoco/val2017
cp /latentai/workspace/datasets/MSCocodata/raw/instances_val2017.json \
  /latentai/workspace/mscoco/val2017
  

If you have trained your model with a different dataset, instructions for installing different validation data on the target are covered in the section on evaluating and deploying your model with BYOD.

For the following examples, make sure that the environment variable MNAME is still set appropriately from Step Two.
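If you are unsure whether the variable is still set, a quick check such as the following can help (the model name shown is only a placeholder for whatever you used in Step Two):

CODE
# Confirm MNAME is still set from Step Two
echo $MNAME

# If it prints nothing, re-export it with the name used in Step Two, e.g.:
# export MNAME=<your-model-name>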

Evaluating Within the SDK Docker Container

To evaluate the model entirely within the SDK Docker container, pass the compiled model directly to leip evaluate along with the test path. Assuming you followed the path and naming conventions used earlier in this tutorial, you can evaluate the model as follows.

Perform the following to evaluate in an x86_64 Docker container with an NVIDIA graphics card:

CODE
# Evaluating Float32:
leip evaluate \
  --input_path workspace/output/$MNAME/x86_64_cuda/Float32-compile \
  --test_path workspace/mscoco/val2017 \
  --dataset_type coco

# Evaluating Int8:
leip evaluate \
  --input_path workspace/output/$MNAME/x86_64_cuda/Int8-optimize \
  --test_path workspace/mscoco/val2017 \
  --dataset_type coco

Replace x86_64_cuda in the above examples with x86_64 for an x86_64 without a GPU.
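For example, the Float32 command for a CPU-only x86_64 target would look like this (only the target directory in --input_path changes):

CODE
# Evaluating Float32 on x86_64 without a GPU:
leip evaluate \
  --input_path workspace/output/$MNAME/x86_64/Float32-compile \
  --test_path workspace/mscoco/val2017 \
  --dataset_type coco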

If evaluating a GPU-targeted model fails with a CUDA_ERROR_NO_BINARY_FOR_GPU error, the model was optimized or compiled with the wrong arch flag.

Evaluating with Remote Inference

To evaluate the model on a remote target device, run the leip evaluate command in the SDK Docker container while inference is performed on the device under test. First, set up your target by installing the Latent AI Object Runner (LOR). You will then evaluate the model using the LRE objects created by leip pipeline in Step Two. The following examples assume you followed the naming conventions and paths from earlier in the tutorial.

Perform the following when using an ARM processor without a GPU:

CODE
# Substitute the IP address of your target device for <IP_ADDR>
# The default port for LOR is 50051

# Evaluating Float32:
leip evaluate \
  --input_path workspace/output/$MNAME/aarch64/Float32-package \
  --host <IP_ADDR> --port 50051 \
  --test_path workspace/mscoco/val2017 \
  --dataset_type coco

# Evaluating Int8:
leip evaluate \
  --input_path workspace/output/$MNAME/aarch64/Int8-package \
  --host <IP_ADDR> --port 50051 \
  --test_path workspace/mscoco/val2017 \
  --dataset_type coco

Replace aarch64 in the above example with aarch64_cuda for an ARM processor with a GPU, x86_64 for an x86_64 without a GPU, or x86_64_cuda for an x86_64 with a GPU, as appropriate for your device under test.

It is also possible to test an LRE object with leip evaluate inside the Docker container by running the LOR inside the container itself. Launch the LOR within the SDK by calling python3 -m lor.lor_server. You can then use leip evaluate within the same container by passing --host 0.0.0.0.
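A minimal sketch of that workflow, assuming the default LOR port (50051) and an x86_64 Float32 package built with the paths used above, could look like the following:

CODE
# Start the LOR inside the SDK container (backgrounded here for illustration)
python3 -m lor.lor_server &

# Evaluate against the in-container LOR
leip evaluate \
  --input_path workspace/output/$MNAME/x86_64/Float32-package \
  --host 0.0.0.0 --port 50051 \
  --test_path workspace/mscoco/val2017 \
  --dataset_type coco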

If you want to access the LOR in one container from a leip evaluate process running in another, you will need to expose the LOR port (see the sketch after this list):

  1. If you use the default port, you can enable this by adding -p 50051:50051 to the docker run command.

  2. Use the IP address of the Docker container when passing the --host flag to leip evaluate.
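For example, a hypothetical docker run invocation that exposes the default port might look like this (the image name is only a placeholder for your SDK image):

CODE
# Publish the default LOR port when starting the container that will run the LOR
docker run -it -p 50051:50051 <your-sdk-image>

# From the other container, point leip evaluate at that container's IP address:
#   leip evaluate --host <CONTAINER_IP> --port 50051 ...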

Next Steps

Once you have finished evaluating the model on the target, you can integrate the model into your application for deployment, or retrain it with your own data and then export, compile, and evaluate it again.
