
LEIP Overview

The Latent AI Efficient Inference Platform (LEIP) is a modular, fully-integrated workflow designed to harmonize the end-to-end workspace between AI scientists and embedded software engineers. The LEIP software development kit (SDK) enables developers to train, quantize and deploy efficient deep neural networks. Its modular architecture enables the platform to be expanded to incorporate new functionality to meet the current and future needs of an evolving edge AI market. LEIP is made up of the service modules described below.

LEIP Core Modules


LEIP Optimize

An all-in-one optimizer for models from several supported frameworks, available as a Python API and a CLI familiar to AI scientists and software developers. It consists of two internal phases:

  • LEIP Compress - A state-of-the-art quantization optimizer that makes it easy to run compression experiments on trained neural networks and tune the trade-off between model size reduction and inference accuracy. LEIP supports a broad set of advanced quantization algorithms for compressing neural networks.

  • LEIP Compile - An automated compiler framework that optimizes neural network performance for hardware processor targets. Unlike traditional compilers, LEIP approaches compilation from a machine learning point of view, which enables it to deliver an additional level of performance optimization while retaining the flexibility to generate code for a broad range of hardware processor targets. You may also use LEIP Compile independently to compile models without any quantization.

LEIP Train

A tool that automates Quantization Guided Training (QGT) on a model. It is intended for first-time to intermediate users of the LEIP SDK; more advanced users may want details on using the Python API for Quantization Guided Training directly. A conceptual sketch of the idea behind QGT follows this module list.

LEIP Pipeline

Executes one or more flows of LEIP commands against a single model, driven by a JSON configuration file. This lets you group the entire configuration of the commands you want to apply to your model and share it with anyone to run. An illustrative configuration sketch follows this module list.

LEIP Adapt

An automated dynamic inference framework that optimizes neural network performance for hardware processor targets. It enables AI model inference to run efficiently and dynamically manages execution of the neural network.

LEIP Zoo

A broad collection of pre-trained models, spanning applications from audio to computer vision, that you can use to test the LEIP SDK. The accompanying documentation and models illustrate how the LEIP SDK optimizes neural networks for size and performance to handle inference workloads on edge devices.
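
As a conceptual illustration of the idea behind QGT (not the LEIP Train API), the sketch below adds a regularization term to the training loss that penalizes weights for drifting far from their nearest quantized values. All names are illustrative; consult the LEIP Train documentation for the actual interface.

    # Conceptual sketch of Quantization Guided Training (QGT).
    # This is NOT the LEIP Train API; all names are illustrative.
    import torch

    def quantization_penalty(model, bits=8):
        """Penalize weights that sit far from their nearest quantized value."""
        penalty = torch.zeros(())
        levels = 2 ** bits - 1
        for param in model.parameters():
            scale = (param.max() - param.min()).clamp(min=1e-8) / levels
            quantized = torch.round(param / scale) * scale
            penalty = penalty + torch.mean((param - quantized) ** 2)
        return penalty

    # During training, the penalty is added to the task loss so weights
    # are gently pulled toward quantization-friendly values:
    #   loss = task_loss + lambda_qgt * quantization_penalty(model, bits=8)

Similarly, the following sketch shows the kind of JSON configuration file LEIP Pipeline consumes. The step and field names are assumptions made for illustration, not the documented schema.

    # Illustrative LEIP Pipeline configuration. The field names below
    # are assumptions for the sake of example, not the documented schema.
    import json

    pipeline = {
        "name": "optimize-example-model",
        "steps": [
            {"command": "compress", "options": {"bits": 8}},
            {"command": "compile", "options": {"target": "aarch64"}},
        ],
    }

    with open("pipeline.json", "w") as f:
        json.dump(pipeline, f, indent=2)

The resulting pipeline.json can then be shared with anyone running the LEIP SDK.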

LEIP Workflow

The LEIP SDK supports an end-to-end development workflow. Starting from your set of pre-trained neural network models, LEIP Optimize generates an optimized model in the form of a Latent AI Runtime Environment (LRE) object, quantized to your desired bit precision and containing executable code native to the target hardware processor.

Compiler Artifacts

LRE Object

LEIP Optimize (or LEIP Compile) generates an LRE object that is optimized for a hardware target. The LRE object is a standalone executable binary or a linkable object in the processor's native binary format. The LEIP SDK is highly flexible and can generate different variants of the LRE object, each with a different level of optimization complexity offering a range of compute and memory efficiencies. The main LRE object variants are: (a) parameters and computation in floating point, normally used as a baseline for evaluation; and (b) parameters and computation entirely in integers.
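
To make the distinction between the two variants concrete, the short sketch below contrasts floating-point arithmetic with scaled 8-bit integer arithmetic. It illustrates the underlying idea only; it is not LEIP code.

    # (a) float parameters/computation vs. (b) integer parameters/computation.
    # This demonstrates the underlying arithmetic only; it is not LEIP code.
    import numpy as np

    weights = np.array([0.02, -0.73, 0.40, 1.10], dtype=np.float32)

    # (a) Floating-point baseline: values are used as-is.
    float_out = weights * 2.0

    # (b) Integer variant: values are stored and computed as int8,
    # with a per-tensor scale used to recover real values afterwards.
    scale = np.abs(weights).max() / 127.0            # symmetric 8-bit scale
    q_weights = np.round(weights / scale).astype(np.int8)
    int_out = (q_weights.astype(np.int32) * 2) * scale

    print(float_out)  # baseline result
    print(int_out)    # integer result, close to baseline within scale error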

After a neural network (NN) model is compiled, the resulting binary artifacts can be incorporated into an end-user application. Latent AI provides C/C++ and Python API examples that include pre-processing of the inputs before they are fed into the NN binary artifact and post-processing of the outputs from the binary artifact. End users can modify or extend these examples to suit their particular needs. The C/C++ API examples additionally include a Makefile that builds an executable for the target device. The Python API example can be transferred to the target device, along with the binary artifacts produced by the compiler, and executed there.
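
As a hedged sketch of what such an end-user Python application looks like, the snippet below wraps a compiled LRE object with pre- and post-processing steps. The loader and run calls are hypothetical stand-ins; consult the Latent AI Python API examples for the actual calls.

    # Hedged sketch of an end-user application around a compiled LRE
    # object. load_lre and model.run are hypothetical stand-ins, not
    # the actual Latent AI Python API.
    import numpy as np

    def preprocess(image):
        """Prepare the input for the NN binary artifact
        (illustrative: float32 in [0, 1] with a batch dimension)."""
        return (image.astype(np.float32) / 255.0)[np.newaxis, ...]

    def postprocess(raw_output):
        """Map the artifact's raw output to an application-level result."""
        return int(np.argmax(raw_output))

    # model = load_lre("path/to/lre_object")     # hypothetical loader
    # raw = model.run(preprocess(frame))         # inference on the target
    # label = postprocess(raw)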

Compiler Output

LEIP Optimize (or LEIP Compile) also produces a number of ancillary artifacts that support deployment of the LRE object. These artifacts include metadata (JSON files) providing details such as timestamps, tool versions, and security keys, which can optionally be used for model management during deployment.
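
For example, a deployment script might read such a metadata file to check which tool version produced an artifact. The file name and field names below are assumptions made for illustration; the actual keys are defined by the LEIP SDK release in use.

    # Illustrative: inspect a compiler metadata file during deployment.
    # The file name and keys ("timestamp", "tool_version") are assumed
    # for illustration; see your LEIP SDK release for the actual schema.
    import json

    with open("metadata.json") as f:
        metadata = json.load(f)

    print(metadata.get("timestamp"))
    print(metadata.get("tool_version"))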
