The Latent AI Object Runner (LOR) runs on an edge device and provides a gRPC interface enabling a service to remotely install, run inference on, and collect metrics for a model that has been packaged into a Latent AI Runtime Object (LRE). For more information on the LRE, please review the SDK documentation.
One valuable role of the LOR is that it allows edge devices to be integrated into CI/CD-style workflows. The LEIP Classifier Recipes illustrate this by demonstrating training, exporting, optimizing, compiling, and evaluating a model within a LEIP SDK Docker container, with inference testing performed on an edge device such as a Raspberry Pi or an NVIDIA AGX.
You will need to install the LOR package onto the device before you can connect an edge device to your SDK workflow. Currently, the LOR is delivered as a Python wheel as part of the SDK, and is available as a compressed archive file inside the SDK.
Copy the LOR to the Target Device
Find the <lor_file> filename of the provided LOR inside your SDK Docker container:
# Find the archive file in the SDK:
ls /latentai/latentai_object_runner/
latentai-lor-1.0.0.tar.gz
Copy the above archive file to your target device.
# Replace leip and the <lor_file>.tar.gz filename in the example below as appropriate
scp leip:/latentai/latentai_object_runner/<lor_file>.tar.gz <username>@<target>:~/
Setting up the LOR
Now, on the target device, install the package with the following commands:
pip3 install Cython
pip3 install <lor_file>.tar.gz
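After installing, you can confirm that the server entry point landed on your PATH. This is a quick sanity check, not part of the LOR tooling itself; it assumes the wheel installs the `lor_server` executable used in the launch step below:

```shell
# Sanity check: confirm the lor_server entry point is on PATH after install.
LOR_BIN=$(command -v lor_server || true)
if [ -n "$LOR_BIN" ]; then
    echo "lor_server installed at $LOR_BIN"
else
    echo "lor_server not found - check the output of pip3 install"
fi
```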
By default, port 50051 is used by the LOR. Make sure that this port is open and available.
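One way to check that the port is available before launching is to attempt to bind it. The snippet below is a sketch, not part of the LOR tooling; it uses Python's standard socket module from the shell:

```shell
# Attempt to bind port 50051; success means the port is free.
PORT_STATUS=$(python3 - <<'EOF'
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.bind(("0.0.0.0", 50051))
    print("free")
except OSError:
    print("in use")
finally:
    s.close()
EOF
)
echo "port 50051 is $PORT_STATUS"
```

If the port is in use, stop the conflicting process or free the port before launching the LOR.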
Once the LOR is installed, you can launch it with the following command:
# You may wish to launch with nohup to prevent the server from exiting with the
# current shell session.
lor_server
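For example, one common pattern for keeping the server alive after you log out is shown below. This is a sketch, assuming `lor_server` is on your PATH; the log and PID filenames are arbitrary choices, not LOR conventions:

```shell
# Run the server in the background, detached from the current terminal;
# output goes to a log file and the PID is recorded for later shutdown.
nohup lor_server > lor_server.log 2>&1 &
echo $! > lor_server.pid
echo "lor_server started with PID $(cat lor_server.pid)"
```

To stop the server later, run `kill $(cat lor_server.pid)`.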
The target device now exposes a gRPC interface. You may use this service to run inference on the device via