How To: Configure the Recipe to Compile Artifacts for Different Targets

This page will walk you through configuring the LEIP Recipe to compile artifacts for different hardware targets based on the supportability matrix. The target device dictates the configuration required to build the model artifact that can be deployed and run efficiently on an edge device or server.

Prerequisites

  1. LEIP Design installed: Ensure LEIP Design is installed on your development machine.
  2. Access to the Supportability Matrix: Refer to the official Supportability Matrix to check the compatibility of devices, OS, CUDA, and NVIDIA dependencies.

Check the Supportability Matrix and Choose an Appropriate Target

Before configuring your recipe, check the Supportability Matrix to find your target device.

Assign the Compiler

Once you have a recipe, you can choose the appropriate compiler from the available options. Here's an example of selecting the TensorRT compiler:

  1. View available compiler options:

    recipe.options("compiler")
    

  2. Assign the TensorRT compiler:

    recipe.assign_ingredients('compiler', "TensorRT Compiler")
    

Setting the Target and Host

Next, configure the target hardware (e.g., CUDA for GPUs) and the host (e.g., LLVM for CPUs). You can check the available options and assign them accordingly:

  1. View available target options:

    recipe.options("compiler.target")
    

  2. Assign the target (e.g., "cuda" for GPUs):

    recipe["compiler.target"] = "cuda"
    

  3. View available host options:

    recipe.options("compiler.host")
    

  4. Assign the host (e.g., "llvm" for CPU):

    recipe["compiler.host"] = "llvm"
    

It's as simple as that!
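Putting the steps above together, a minimal end-to-end configuration for an x86 machine with an NVIDIA GPU might look like the following. This assumes a `recipe` object already created with LEIP Design, as in the steps above; it is a sketch of the flow, not a complete script.

```python
# Sketch: configure an existing LEIP `recipe` for a CUDA target
# with an LLVM (CPU) host, combining the steps shown above.

# 1. Inspect the available compilers, then pick TensorRT
recipe.options("compiler")
recipe.assign_ingredients('compiler', "TensorRT Compiler")

# 2. Compile the model for the GPU...
recipe["compiler.target"] = "cuda"

# 3. ...and the host-side code for the CPU
recipe["compiler.host"] = "llvm"
```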

Device-Specific Target Configuration

Depending on your target device, you can configure the recipe for specific hardware. Below are some example configurations based on device types.

1. Raspberry Pi 4 (ARM)

  • Compiler: TVM
  • Target: raspberry-pi/4b64
  • Host: None

    recipe.assign_ingredients('compiler', "TVM Compiler")
    recipe["compiler.target"] = "raspberry-pi/4b64"

2. AGX Orin (ARM)

  • Compiler: TensorRT
  • Target: nvidia/jetson-orin-nano
  • Host: None

    recipe.assign_ingredients('compiler', "TensorRT Compiler")
    recipe["compiler.target"] = "nvidia/jetson-orin-nano"

3. AGX Xavier (ARM)

  • Compiler: TensorRT
  • Target: nvidia/jetson-agx-xavier
  • Host: None

    recipe.assign_ingredients('compiler', "TensorRT Compiler")
    recipe["compiler.target"] = "nvidia/jetson-agx-xavier"

4. Desktop/Server (x86) with Intel Skylake CPU

  • Compiler: TVM
  • Target: intel/skylake
  • Host: None

    recipe.assign_ingredients('compiler', "TVM Compiler")
    recipe["compiler.target"] = "intel/skylake"

5. Desktop/Server (x86) with Intel Tigerlake CPU

  • Compiler: TVM
  • Target: intel/tigerlake
  • Host: None

    recipe.assign_ingredients('compiler', "TVM Compiler")
    recipe["compiler.target"] = "intel/tigerlake"

6. Desktop/Server (x86) with Intel Cascadelake CPU

  • Compiler: TVM
  • Target: intel/cascadelake
  • Host: None

    recipe.assign_ingredients('compiler', "TVM Compiler")
    recipe["compiler.target"] = "intel/cascadelake"

7. Desktop/Server (x86) with NVIDIA RTX A4500 GPU

  • Compiler: TensorRT
  • Target: nvidia/rtx-a4500
  • Host: LLVM

    recipe.assign_ingredients('compiler', "TensorRT Compiler")
    recipe["compiler.target"] = "nvidia/rtx-a4500"
    recipe["compiler.host"] = "llvm"

8. Desktop/Server (x86) with CUDA Support

  • Compiler: TensorRT
  • Target: cuda
  • Host: None

    recipe.assign_ingredients('compiler', "TensorRT Compiler")
    recipe["compiler.target"] = "cuda"