Classifier Recipe Step Three: Evaluating a Model on Target Hardware

To evaluate the model you optimized, compiled, and packaged in Step Two, you will run the leip evaluate SDK command. The command-line arguments differ slightly depending on whether you run inference locally in the SDK Docker container or connect to a networked device to run inference remotely.

Running leip evaluate on a GPU-enabled device will report skewed (slow) inference numbers if you do not set up a compute engine cache and pass its location by environment variable. Please see the LEIP Evaluate documentation for more details. This feature is not currently supported when running leip evaluate against a remote target device.
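The pattern, sketched here with placeholders only (the actual environment variable name is given in the LEIP Evaluate documentation, and the x86_64_cuda paths from the next section are used purely as an illustration), is to export the cache location before invoking leip evaluate:

CODE
# Substitute the variable name documented for LEIP Evaluate for <CACHE_ENV_VAR>;
# the name and path here are placeholders, not real settings.
export <CACHE_ENV_VAR>=/path/to/compute-engine-cache

leip evaluate \
  --input_path workspace/output/timm-gernet_m/x86_64_cuda/Float32-compile \
  --test_path workspace/datasets/open-images-10-classes/eval/index.txt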

Evaluating Within the SDK Docker Container

To evaluate the model entirely within the SDK Docker container, pass the compiled model directly to leip evaluate along with the test path. Assuming you followed the path and naming conventions used earlier in this tutorial, you can evaluate with the following:

Evaluating in an x86_64 Docker container with an NVIDIA graphics card:

CODE
# Evaluating Float32:
leip evaluate \
  --input_path workspace/output/timm-gernet_m/x86_64_cuda/Float32-compile \
  --test_path workspace/datasets/open-images-10-classes/eval/index.txt

# Evaluating Int8:
leip evaluate \
  --input_path workspace/output/timm-gernet_m/x86_64_cuda/Int8-optimize \
  --test_path workspace/datasets/open-images-10-classes/eval/index.txt

For x86_64 without a GPU, replace x86_64_cuda in the above examples with x86_64.
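For example, the Float32 evaluation in a CPU-only x86_64 container, assuming the same output and dataset paths as above:

CODE
# Evaluating Float32 on x86_64 without a GPU:
leip evaluate \
  --input_path workspace/output/timm-gernet_m/x86_64/Float32-compile \
  --test_path workspace/datasets/open-images-10-classes/eval/index.txt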

If leip evaluate fails for a GPU-targeted model with a CUDA_ERROR_NO_BINARY_FOR_GPU error, the model was optimized/compiled with the wrong arch flag.

Evaluating with Remote Inference

To evaluate on a remote target device, you will run leip evaluate in the SDK Docker container while inference is performed on the device under test. First, set up your target by installing the Latent AI Object Runner (LOR). Then evaluate using the LRE objects created by leip pipeline in Step Two. The following examples assume you followed the naming conventions and paths from earlier in the tutorial.

For an ARM processor without a GPU:

CODE
# Substitute the IP address of your target device for <IP_ADDR>
# The default port for LOR is 50051

# Evaluating Float32:
leip evaluate \
  --input_path workspace/output/timm-gernet_m/aarch64/Float32-package \
  --host <IP_ADDR> --port 50051 \
  --test_path workspace/datasets/open-images-10-classes/eval/index.txt

# Evaluating Int8:
leip evaluate \
  --input_path workspace/output/timm-gernet_m/aarch64/Int8-package \
  --host <IP_ADDR> --port 50051 \
  --test_path workspace/datasets/open-images-10-classes/eval/index.txt


For ARM with a GPU, x86_64, or x86_64 with a GPU, replace aarch64 in the above examples with aarch64_cuda, x86_64, or x86_64_cuda, as appropriate for your device under test.
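For instance, evaluating the Float32 package remotely on an ARM device with a GPU (aarch64_cuda), assuming the same paths and conventions:

CODE
# Evaluating Float32 on an aarch64_cuda target:
leip evaluate \
  --input_path workspace/output/timm-gernet_m/aarch64_cuda/Float32-package \
  --host <IP_ADDR> --port 50051 \
  --test_path workspace/datasets/open-images-10-classes/eval/index.txt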

It is also possible to test an LRE object with leip evaluate inside the Docker container by running the LOR in the container itself. To do so, launch the LOR within the SDK by calling python3 -m lor.lor_server.
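As a minimal sketch, assuming the default LOR port and (as an assumption) an x86_64 Float32 package built with the conventions above:

CODE
# Start the LOR in the background inside the SDK container (default port 50051)
python3 -m lor.lor_server &

# Point leip evaluate at the local LOR instance
leip evaluate \
  --input_path workspace/output/timm-gernet_m/x86_64/Float32-package \
  --host localhost --port 50051 \
  --test_path workspace/datasets/open-images-10-classes/eval/index.txt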

If you want to access the LOR in one container from a leip evaluate process running in another, you will need to expose the LOR port:

  1. If you use the default port, add -p 50051:50051 to the docker run command for the container running the LOR.

  2. Pass the IP address of the Docker container running the LOR to the --host flag of leip evaluate, as sketched below.
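A sketch of the cross-container setup, under the assumption that the image name <SDK_IMAGE> and container name lor-container are placeholders for your own; docker inspect is a standard way to look up a container's IP address on the default bridge network:

CODE
# Start the container that will run the LOR, exposing the default port
# (<SDK_IMAGE> is a placeholder for your SDK image name)
docker run -it --name lor-container -p 50051:50051 <SDK_IMAGE>

# Inside that container, start the LOR
python3 -m lor.lor_server

# From the host, look up the container's IP address
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' lor-container

# In the container running leip evaluate, point --host at that address
leip evaluate \
  --input_path workspace/output/timm-gernet_m/x86_64/Float32-package \
  --host <CONTAINER_IP> --port 50051 \
  --test_path workspace/datasets/open-images-10-classes/eval/index.txt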

Next Steps

Once you have finished evaluating the model on the target, you can either integrate it into your code for deployment or try out different models, including training with your own datasets.
