How To: Configure the Recipe to Compile Artifacts for Different Targets

Important Notice: The TensorRT Compiler ingredient is being deprecated. It is recommended to use ONNXRuntime with TensorRT enabled for NVIDIA GPU targets. This provides better compatibility and maintenance going forward.

This page will walk you through configuring the LEIP Recipe to compile artifacts for different hardware targets based on the supportability matrix. The target device dictates the configuration required to build the model artifact that can be deployed and run efficiently on an edge device or server.

Prerequisites

  1. Install LEIP Design: Ensure LEIP Design is installed on your development machine.
  2. Consult the Supportability Matrix: Review the installation prerequisites and supportability matrices to confirm the compatibility of your devices, OS, CUDA, and NVIDIA dependencies.

Check the Target Matrix and Choose Appropriate Target

Before configuring your recipe, check the Target Supportability Matrix to find your target device.

Assign the Compiler

Once you have a recipe, you can choose the appropriate compiler from the available options. Here's an example of configuring ONNXRuntime with TensorRT enabled for NVIDIA GPU targets:

  1. View available compiler options:

    recipe.options("compiler")
    

  2. Assign ONNXRuntime and enable TensorRT:

    recipe.assign_ingredients('compiler', "ONNXRuntime")
    recipe["compiler.is_tensorrt"] = True
    

Note: With ONNXRuntime, you no longer need to explicitly set compiler targets. The appropriate optimizations are automatically selected based on your hardware configuration.
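The two assignments above can be sketched end to end. The RecipeStub class below is a hypothetical stand-in for the LEIP recipe object, not part of the LEIP API; it only records the ingredient assignment and the dict-style option key so the flow can be exercised outside a LEIP installation:

```python
class RecipeStub:
    """Hypothetical stand-in for a LEIP recipe: records ingredient
    assignments and dict-style option settings."""

    def __init__(self):
        self.ingredients = {}
        self.settings = {}

    def assign_ingredients(self, slot, name):
        # Mirrors recipe.assign_ingredients('compiler', ...)
        self.ingredients[slot] = name

    def __setitem__(self, key, value):
        # Mirrors recipe["compiler.is_tensorrt"] = ...
        self.settings[key] = value


recipe = RecipeStub()

# The same two calls as in the steps above:
recipe.assign_ingredients('compiler', "ONNXRuntime")
recipe["compiler.is_tensorrt"] = True
```

Against a real recipe object, only the final two lines are needed; the stub simply makes the assignment pattern visible.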

Device-Specific Configurations

Different hardware targets require different configurations. Below are some example configurations based on device types.

1. Raspberry Pi 4 (ARM)

  • Compiler: TVM
  • Target: raspberry-pi/4b64

    recipe.assign_ingredients('compiler', "TVM Compiler")
    recipe["compiler.target"] = "raspberry-pi/4b64"

2. NVIDIA GPU Devices (AGX Orin, Xavier, RTX, etc.)

For all NVIDIA GPU devices (including Jetson and desktop/server GPUs), use ONNXRuntime with TensorRT enabled:

    recipe.assign_ingredients('compiler', "ONNXRuntime")
    recipe["compiler.is_tensorrt"] = True

3. Intel CPU Devices (Skylake, Tigerlake, Cascadelake)

  • Compiler: TVM

    recipe.assign_ingredients('compiler', "TVM Compiler")
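The three device cases above can be collected into one table-driven helper. This is a sketch, not part of the LEIP API: DEVICE_CONFIGS and configure_for_device are hypothetical names, and the recipe argument is assumed to support the assign_ingredients and dict-style calls shown above:

```python
# Hypothetical table mapping a device class to the settings documented above.
DEVICE_CONFIGS = {
    "raspberry-pi-4": {
        "compiler": "TVM Compiler",
        "options": {"compiler.target": "raspberry-pi/4b64"},
    },
    "nvidia-gpu": {
        "compiler": "ONNXRuntime",
        "options": {"compiler.is_tensorrt": True},
    },
    "intel-cpu": {
        "compiler": "TVM Compiler",
        "options": {},
    },
}


def configure_for_device(recipe, device: str):
    """Apply the compiler assignment and options for one device class."""
    cfg = DEVICE_CONFIGS[device]
    recipe.assign_ingredients('compiler', cfg["compiler"])
    for key, value in cfg["options"].items():
        recipe[key] = value
```

A table like this keeps the per-device settings in one place, so adding a new target means adding one dictionary entry rather than another copy of the assignment calls.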

Note: When using ONNXRuntime with TensorRT enabled, the compiled artifact is accessed via compiler.output_file rather than compiler.output_path, which is used for TVM-based compilation.
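Because the artifact location key differs by compiler, a deployment script can branch on the assigned compiler name. A minimal sketch of that branching, using the two key names from the note above (the artifact_key helper itself is a hypothetical name, not a LEIP function):

```python
def artifact_key(compiler_name: str) -> str:
    """Return the recipe key that holds the compiled artifact's location.

    ONNXRuntime (with TensorRT enabled) exposes it as compiler.output_file;
    TVM-based compilation exposes it as compiler.output_path.
    """
    if compiler_name == "ONNXRuntime":
        return "compiler.output_file"
    return "compiler.output_path"
```

For example, artifact_key("ONNXRuntime") returns "compiler.output_file", while artifact_key("TVM Compiler") returns "compiler.output_path".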