Release Notes

Version History

  • 2.9 (6/21/23)

    • Enhancements and Improvements

      • Quantization Guided Training (QGT) has been added as an option in LEIP Application Frameworks (AF).

      • The TVM SDK components have been updated to TVM 0.11.

      • The PyTorch SDK components have been updated to 2.0.

      • The runtime library has been renamed liblre.

      • Recipes have been provided for JetPack 5.

      • Runtimes are no longer included in the LEIP Package .LRE object, in order to support compatibility with multiple JetPack versions (4.6 and 5). The appropriate runtime packages must be installed as part of the target device setup; see the example_applications repository for sample installation scripts. (https://github.com/latentai/example-applications)

      • The LEIP SDK now supports CUDA 12.

      • The format of the C++ Classifier and Detector examples has been improved for efficiency and consistency.

      • The pre- and post-processing examples for the C++ Classifier and Detector recipes have been optimized.

      • The LRE now supports setting a UUID in the model artifact at build time with an additional API to access the UUID from your application.

      • All detector recipes now support the AF +export.include_preprocessor=True flag, ensuring a consistent pre-processor when evaluating in AF and in LEIP Evaluate.

      • Support for multiple inputs has been added (Python API only; CLI support will be released in the future). An illustrative sketch of multi-input inference appears at the end of this release's notes.

      • The Bring Your Own Data (BYOD) tutorials have been improved.

    • Fixes

      • A bug that prevented the SDK from running on a system without a network connection has been fixed.

      • The ‘--bits’ option has been removed from LEIP Optimize.

      • A bug where the SDK would inaccurately detect more inputs than it should has been fixed.

      • An occasional benign error message reported by the LOR has been removed.
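
      The following is a minimal sketch of what multi-input inference through the Python API can look like. The module and method names below (leip_runtime, load_model, infer) are illustrative assumptions rather than the documented LEIP interface; refer to the SDK documentation and the example_applications repository for the actual API.

        import numpy as np

        # Hypothetical runtime wrapper; the real module, class, and method names may differ.
        from leip_runtime import load_model

        model = load_model("path/to/compiled_model.so")

        # A multi-input model takes one array per declared input, in declaration order.
        image = np.zeros((1, 3, 224, 224), dtype=np.float32)
        aux = np.zeros((1, 10), dtype=np.float32)

        outputs = model.infer([image, aux])
        print([o.shape for o in outputs])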

  • 2.8 (2/6/23)

    • Added EfficientDet and MobilenetSSD recipes.

    • Added the ability to serialize an albumentations Compose and export it as a separate JSON file from LEIP Application Frameworks when using command=export with +export.include_preprocessor=True. A short albumentations example appears at the end of this release's notes.

    • Added inference examples for the EfficientDet and MobilenetSSD recipes, released in a publicly available Git repository. Inference examples for other recipes will be moved to this repository in a future release.

    • Implemented a bias correction for Quantization.

    • Added asymmetric-per-channel quantization option for CPU targets.

    • Finalized the implementation of Matting as a dataset generator wrapper. Matting is a type of dataset augmentation that places segments of images from one dataset on top of images of another. Matting uses two datasets: a foreground and a background. A generic illustration of the idea appears at the end of this release's notes.

    • Implemented RandomSubset as a dataset generator wrapper.

    • Updated LEIP Evaluate and LEIP Optimize for the new preprocessing. The new preprocessing signature requires additional arguments: the processor path when fetching the preprocessing function, and ground truths where required.

    • Updated LEIP Package and the LRE to be able to find and set up Python albumentations preprocessors.

    • Added support for ground truth preprocessing in LEIP Evaluate. This adds the ability to preprocess bounding boxes for cases where the image is manipulated in a way that the bounding boxes cannot simply be scaled.

    • The leip run command has been deprecated.

    • Known Issues:

      • The asymmetricpc (asymmetric per-channel) quantization option does not currently work with certain CPU architecture flags (e.g., cascadelake, skylake).

      • tf_efficientdet_d# recipes will run on Ampere cards for INT8, but require that calibration be performed on a pre-Ampere card.
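
    The preprocessor export and ground-truth preprocessing items above build on the albumentations library. The sketch below shows only the underlying albumentations mechanics (serializing a Compose pipeline to JSON and transforming bounding boxes together with the image); how LEIP AF wires this into command=export is not shown, and the file name is illustrative.

      import numpy as np
      import albumentations as A

      # A pipeline that resizes the image and keeps Pascal VOC boxes in sync.
      transform = A.Compose(
          [A.Resize(640, 640), A.Normalize()],
          bbox_params=A.BboxParams(format="pascal_voc", label_fields=["labels"]),
      )

      # Serialize the pipeline to JSON and restore it elsewhere.
      A.save(transform, "preprocessor.json", data_format="json")
      restored = A.load("preprocessor.json", data_format="json")

      image = np.zeros((480, 640, 3), dtype=np.uint8)
      out = restored(image=image, bboxes=[[50, 60, 200, 220]], labels=[1])
      print(out["image"].shape, out["bboxes"])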

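    As a generic illustration of the Matting idea (not the LEIP dataset generator wrapper itself), the following sketch composites a masked foreground segment onto a background image with NumPy; all names and shapes are illustrative.

      import numpy as np

      def matte(foreground, mask, background, top=0, left=0):
          # Paste the masked foreground segment onto the background at (top, left).
          out = background.copy()
          h, w = foreground.shape[:2]
          region = out[top:top + h, left:left + w]
          keep = mask.astype(bool)
          region[keep] = foreground[keep]  # foreground pixels replace background where the mask is set
          return out

      # A 64x64 foreground segment pasted into a 256x256 background.
      fg = np.full((64, 64, 3), 255, dtype=np.uint8)
      fg_mask = np.ones((64, 64), dtype=np.uint8)
      bg = np.zeros((256, 256, 3), dtype=np.uint8)
      composite = matte(fg, fg_mask, bg, top=96, left=96)
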
  • 2.7.5 (2/6/23)

    • Support was added for batch size > 1 in the SDK. This changes how the SDK provides the calibration data in the TRT compilation path.

  • 2.7.3 (10/25/22)

    • The CUDA 11 Docker container build process was updated and verified.

    • Fixed an issue with LEIP Application Frameworks that prevented certain operations from working when using multiple GPUs for training and evaluation.

  • 2.7.2 (10/7/22)

    • Added test cases from recipes for Mosaic. These test cases follow the AF requirements.

    • Mosaic Augmentation was implemented in the AF training.

  • 2.7.1 (9/21/22)

    • Fixes

      • Enabled the LEIP pipeline to properly build the LRE object for use on the Raspberry Pi.

  • 2.7.0 (9/6/22)

    • Enhancements and Improvements

      • Added Classifier Recipes: New classifier recipes with 22 different classifier backbones for ARM and x86_64 (both with and without GPU)

      • Added PyTorch ImageFolder format for classifier recipe Bring Your Own Data (BYOD). A short loading example appears at the end of this release's notes.

      • Added the LatentAI Object Runner (LOR) and extended leip evaluate support to enable evaluation in the SDK container with remote inference on a target device.

      • Improved Yolov5 BYOD examples for COCO and Pascal VOC formats

      • Added Intel Architecture and ARM pipeline configurations for Yolov5 recipes

      • Added GPU support for the Python LRE Object

      • Added Ship Detection and Global Wheat Head datasets to Application Frameworks

      • Added support for tags in the CLI and API to enable user defined meta-data labeling in the Analytics dashboard

      • Prediction label sizes are now configurable in Application Frameworks

    • Fixes

      • Fixed --loglevel DEBUG in CUDA containers

      • Improved Batchnorm Folding support

      • Improved checkpoint handling in Application Frameworks

      • Fixed multi-GPU dataloading for Pascal VOC format

      • Corrected the progress bar display of the number of samples when using AF command=evaluate

      • Fixed a bug with the AF Neptune logger
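
      The PyTorch ImageFolder format referenced above is the standard torchvision layout: one sub-directory per class, with each directory containing that class's images. The snippet below only illustrates that layout using torchvision; how the BYOD recipe config points at the directory is not shown, and the paths are illustrative.

        # Expected directory layout (one folder per class):
        #   dataset/train/cat/0001.jpg
        #   dataset/train/dog/0001.jpg
        from torchvision import datasets, transforms

        train_set = datasets.ImageFolder(
            "dataset/train",
            transform=transforms.Compose(
                [transforms.Resize((224, 224)), transforms.ToTensor()]
            ),
        )
        print(train_set.classes)  # class names are inferred from the folder names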

  • 2.6.0 (07/12/22)

    • Enhancements and Improvements

      • Support for tags was added to the logging API, allowing you to add custom labels to metadata generated during LEIP operations. Support for querying and adding supplemental tags was also included in the LEIP Enterprise dashboard.

      • Additional metadata was added to the neural network during LEIP Optimize, and the event sent to the dashboard now includes this additional metadata. Additional information can be found in the LEIP Optimize documentation.

      • The SDK was upgraded to support PyTorch version 1.10. Additional information about LEIP SDK support for PyTorch can be found in the documentation.

      • Application Frameworks, including LEIP Recipes, were added to all GPU SDK containers.

    • Fixes

      • Repaired the batch norm ingestion deconstruction that was causing a customer’s model to fail. 

      • PyTorch was updated so that it can be used in the CUDA 11 container.

      • Models going through LEIP Optimize may have an operator that breaks during calibration and outputs faulty data. Warnings are now emitted when this occurs.

  • 2.5.0 (5/23/22)

    • Add recipe support for YoloV5 medium and small, including build recipe pipelines

    • Add support for the Full operator in Glacier to accommodate mobilenetV1-ssd

    • Fix a bug in MnetV2-SSD where in certain instances it was returning a mAP score of zero

  • 2.4.1 (4/22/22)

    • Minor usability improvements for LEIP Recipes

    • Recipe configs are now stored in container by default for offline use cases.

  • 2.4.0 (4/8/22)

    • Initial release of LEIP Recipes

    • Support for generation of Python and C++ libraries in LEIP Package

    • Improve LEIP Pipeline metrics to support the 'experiments' endpoint in LEIP Enterprise

    • Add support for the power op when using a Keras batchnorm

  • 2.3.0 (3/18/22)

    • Update LEIP Evaluate with a more robust and modular implementation

    • Add initial detector support for LEIP Evaluate

    • Add support for custom pre/post processors in LEIP Evaluate

    • Update LEIP Pipeline with the ability to only run certain flows

    • Numerous bug fixes for GlacierVM

  • 2.2.1 (2/23/22)

    • Add enhanced metrics for LEIP Evaluate

  • 2.2 (2/15/22)

    • Add support for the LEIP Enterprise server API

    • Add initial support for LEIP Package

    • Add support for PyTorch and per-channel quantization

    • Fix an issue where compression reports were not rendering for all LEIP Pipeline flows.

    • Replace calibration with GlacierVM

  • 2.1 (11/01/21)

    • Add support for LEIP Enterprise event processing

    • Container level performance and stability optimizations

  • 2.0.2 (9/30/21)

    • Fix an accuracy issue that occurred in Yolo models

  • 2.0.1 (8/27/21)

    • Fix an issue in LEIP Optimize where an INT8 target failed for detection and segmentation models in PyTorch.

    • Update LEIP preprocessors with a new signature to support image resizing

    • Add support for TF 2.4.x

    • Add support for ONNX models

  • 2.0 (8/1/21)

    • Introduce LEIP Optimize to encapsulate compress + compile

    • Support for INT8 in CUDA-optimized environments

    • Introduce LEIP Pipelines, a workflow module designed for experiment tracking, collaboration and automation

    • Numerous improvements in model ingestion and the pre/post-processing components

    • Support for GPU and CPU container variants, each optimized for specific use-cases

  • 1.9.3 (4/6/21)

    • Update LEIP Compile with an additional --optimization category for CUDA

  • 1.9.2 (3/24/21)

    • Update LEIP Compile so you can specify different kinds of optimizations (kernel or graph) using the --optimization parameter

  • 1.9.1 (3/18/21)

    • Include support for the Android SDK

    • Fix a PyTorch issue with compress and compile throwing a ‘device not found’ error with a GPU config

  • 1.9.0 (2/18/21)

    • Include compiler logs/schedules to remove an external dependency

    • LEIP Convert is now rolled into LEIP Compress

  • 1.8.5 (2/03/21)

    • Add support for transpose_conv layer in LEIP Convert

  • 1.8.4 (1/20/21)

    • Fix an issue in LEIP Evaluate where, if the batch size was > 1, the reported inference speed was artificially slow

    • Update documentation to include more detailed information about using a GPU and Docker

    • Refactor preprocessors into a new module

    • Apply updated security patches for TF, Pillow, OpenSSL, etc.

  • 1.8.3 (12/15/20)

    • If the input shape is None in LEIP Compile, it is now set to 1 to avoid a segfault.

    • Update the Post Quantizer to resolve an issue where a float32 could be upcast to a float64

  • 1.8.2 (12/11/20)

    • Add batch size support for LEIP evaluate and run

  • 1.8.1 (12/03/20)

    • Add more robust support for TF eager execution

  • 1.8.0 (10/24/20)

    • Add PyTorch support for PTQ, including compression reports

    • Add PyTorch support for LEIP Compile

  • 1.7.3 (09/29/20)

    • Add channel-wise regularizer to QGT

  • 1.7.2 (09/09/20)

    • Add support for batchnorm folding in an .h5 model

    • Add config validation to LEIP train

    • Update LEIP Train to provide a shortcut way to configure homogeneous/uniform quantization

    • Fix a bug where LEIP Train did not load a pre-trained ckpt properly

    • Fix a bug in LEIP Train where modify_scheme() was being applied to the model JSON instead of the attachment scheme JSON

    • Fix a bug in LEIP Train where self.attach_regularizers_step() did not load the model's weights back

  • 1.7.1 (08/03/20)

    • Add support for Tensorflow 2.3 in the docker image

  • 1.7.0 (00/01/20)

    • Initial release of LEIP Train and QGT (Quantization Guided Training)

    • Update LRE to address an issue with INT8 performance on certain models

    • Update LEIP Evaluate to support model zoo object detection models

    • Fix a performance issue with LEIP Visualize rendering larger models slowly

  • 1.6.0 (07/15/20)

    • Initial release of the LEIP Visualize compression reports

    • Fix an issue where the base ‘Zoo’ command was not loaded properly in the leip-sdk docker container

    • Add a LEIP ‘install’ command to the base set of commands (the install script is still present)

    • Fix a regression where TF was loaded on all commands even though it was not required

  • 1.5.1 (07/07/20)

    • Fix an issue where a model wasn't being saved if model_schema.json was not present

  • 1.5.0 (06/26/20)

    • Add Tensor Splitting and Bias Correction optimizations to post training quantization

  • 1.4.2

    • GA Release of the LEIP SDK

Known Issues

The following is a list of known issues and limitations for this LEIP SDK release:

Compile

  • The memory size of an INT8 compile may not be optimal for all hardware, and Compile may not be able to generate an INT8 solution for all models.
