Enable LEIP pipeline to properly build the LRE object for use on the Raspberry Pi
Enhancements and Improvements
Added Classifier Recipes: New classifier recipes with 22 different classifier backbones for ARM7 and x86_64 (both with and without GPU)
Added Pytorch ImageFolder format for Classifier Recipe: Bring Your Own Data (BYOD).
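For BYOD, the PyTorch ImageFolder convention expects one subdirectory per class under the split directory. A stdlib-only sketch of that layout and of the alphabetical class-to-index mapping torchvision's `ImageFolder` derives (class and file names here are illustrative):

```python
import os
import tempfile

# Illustrative BYOD layout in the ImageFolder convention:
#   <root>/train/<class_name>/<image files>
classes = ["cat", "dog"]
root = tempfile.mkdtemp()
for cls in classes:
    d = os.path.join(root, "train", cls)
    os.makedirs(d)
    # placeholder image file
    open(os.path.join(d, "img_0.jpg"), "wb").close()

# ImageFolder sorts the class directory names and maps them to indices.
train_dir = os.path.join(root, "train")
found = sorted(e.name for e in os.scandir(train_dir) if e.is_dir())
class_to_idx = {name: i for i, name in enumerate(found)}
print(class_to_idx)  # {'cat': 0, 'dog': 1}
```

Because the mapping comes from sorted directory names, renaming a class folder changes the label indices, so keep the layout stable across experiments.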
Added the LatentAI Object Runner (LOR) and extended leip evaluate support to enable evaluation in the SDK container with remote inference on a target device.
Improved Yolov5 BYOD examples for COCO and Pascal VOC formats
Added Intel Architecture and ARM7 pipeline configurations for Yolov5 recipes
Added GPU support for the Python LRE Object
Added Ship Detection and Global Wheat Head datasets to Application Frameworks
Added support for tags in the CLI and API to enable user defined meta-data labeling in the Analytics dashboard
Prediction label sizes are now configurable in Application Frameworks
Fixed --loglevel DEBUG in CUDA containers
Improved Batchnorm Folding support
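Batch-norm folding merges a BatchNorm layer's scale and shift into the preceding layer's weights, so inference runs one op instead of two with identical output. A minimal scalar sketch in pure Python (values are illustrative, not tied to the LEIP implementation):

```python
import math

def fold_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold batch-norm parameters into a preceding linear layer's weight w
    and bias b, given y = gamma * (w*x + b - mean) / sqrt(var + eps) + beta."""
    s = gamma / math.sqrt(var + eps)
    return w * s, (b - mean) * s + beta

# Original two-op pipeline vs. the folded single layer:
w, b = 0.5, 0.1
gamma, beta, mean, var = 1.2, -0.3, 0.05, 0.8
x = 2.0

bn_out = gamma * ((w * x + b) - mean) / math.sqrt(var + 1e-5) + beta
wf, bf = fold_bn(w, b, gamma, beta, mean, var)
folded_out = wf * x + bf

assert abs(bn_out - folded_out) < 1e-9
print("folded output matches")
```

The same algebra applies per-channel for convolutions, with one `(gamma, beta, mean, var)` tuple per output channel.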
Improved checkpoint handling in Application Frameworks
Fixed multi-GPU dataloading for Pascal VOC format
Corrected the progress bar display of the number of samples when using AF command=evaluate
Fixed bug with AF Neptune logger
Enhancements and Improvements
Support for tags was added to the logging API, allowing you to add custom labels to metadata generated during LEIP operations. Support for querying and adding supplemental tags was also included in the LEIP enterprise dashboard.
Additional metadata was added to the neural network during LEIP Optimize. The event is then sent to the dashboard with the additional metadata. Additional information about LEIP Optimize can be found here.
The SDK was upgraded to support PyTorch version 1.10. Additional information about LEIP SDK supporting PyTorch can be found here.
Application Frameworks, including LEIP Recipes, was added to all GPU SDK containers.
Repaired a batch-norm ingestion issue that was causing a customer’s model to fail.
PyTorch was updated to work in the CUDA 11 container.
Models going through LEIP Optimize may have an operator that breaks during calibration and outputs faulty data. Warnings are now emitted when this occurs.
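A generic way to surface this class of problem is to scan each operator's calibration outputs for non-finite values and warn before quantization proceeds. A hedged sketch, not the LEIP implementation (the function and operator names are illustrative):

```python
import math
import warnings

def check_calibration_outputs(op_name, outputs):
    """Warn if an operator produced NaN/Inf values during calibration;
    such operators would yield faulty quantization statistics."""
    bad = sum(1 for v in outputs if not math.isfinite(v))
    if bad:
        warnings.warn(
            f"operator {op_name!r}: {bad}/{len(outputs)} calibration "
            "outputs are NaN/Inf; quantized results may be faulty"
        )
    return bad == 0

# One NaN in the sampled outputs triggers the warning.
ok = check_calibration_outputs("conv_3", [0.1, float("nan"), 2.5])
print(ok)  # False
```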
Add recipe support for YoloV5 medium and small, including build recipe pipelines
Add support for the Full operator in Glacier to accommodate mobilenetV1-ssd
Fix a bug in MnetV2-SSD where in certain instances it was returning a mAP score of zero
Minor usability improvements for LEIP Recipes
Recipe configs are now stored in container by default for offline use cases.
Update LEIP Evaluate with a more robust and modular implementation
Add initial detector support for LEIP Evaluate
Add support for custom pre/post processors in LEIP Evaluate
Update LEIP Pipeline with the ability to only run certain flows
Numerous bug fixes for GlacierVM
Add enhanced metrics for LEIP Evaluate
Add support for the LEIP Enterprise server API
Add initial support for LEIP Package
Add support for Pytorch and per-channel quantization
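Per-channel quantization picks a separate scale for each output channel instead of one scale for the whole tensor, which typically cuts quantization error when channel magnitudes differ widely. A symmetric-INT8 round-trip sketch in pure Python (weights are illustrative):

```python
def quant_dequant(vals, scale):
    """Symmetric INT8 quantize/dequantize round trip with a given scale."""
    return [max(-127, min(127, round(v / scale))) * scale for v in vals]

# Two channels with very different magnitudes.
channels = [[0.01, -0.02, 0.015], [1.0, -2.0, 1.5]]

# Per-tensor: a single scale from the global max |w|.
g = max(abs(v) for ch in channels for v in ch) / 127
per_tensor = [quant_dequant(ch, g) for ch in channels]

# Per-channel: one scale per channel's own max |w|.
per_channel = [quant_dequant(ch, max(abs(v) for v in ch) / 127)
               for ch in channels]

def err(q):
    """Total absolute round-trip error across all channels."""
    return sum(abs(a - b) for ch, qc in zip(channels, q)
               for a, b in zip(ch, qc))

print(err(per_channel) < err(per_tensor))  # True
```

The small-magnitude channel is the beneficiary: under the shared scale its values collapse onto a handful of integer levels, while its own scale spreads them across the full INT8 range.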
Fix an issue where compression reports were not rendering for all LEIP Pipeline flows.
Replace calibration with GlacierVM
Add support for LEIP Enterprise event processing
Container level performance and stability optimizations
Fix an accuracy issue that occurred in Yolo models
Fix an issue in LEIP Optimize where an INT8 target failed for Pytorch detection and segmentation models.
Update LEIP preprocessors with a new signature to support image resizing
Add support for TF 2.4.x
Add support for ONNX models
Introduce LEIP Optimize to encapsulate compress + compile
Support for INT8 in Cuda optimized environments
Introduce LEIP Pipelines, a workflow module designed for experiment tracking, collaboration and automation
Numerous improvements in model ingestion and the pre/post-processing components
Support for GPU and CPU container variants, each optimized for specific use-cases
Update LEIP Compile with an additional --optimization category for CUDA
Update LEIP Compile so you can specify different kinds of optimizations (kernel or graph) using the --optimization parameter
Include support for the Android SDK
Fix a Pytorch issue where compress and compile threw a ‘device not found’ error with a GPU config
Include compiler logs/schedules to remove an external dependency
LEIP Convert is now rolled into LEIP Compress
Add support for transpose_conv layer in LEIP Convert
Fix an issue in LEIP evaluate where a batch size > 1 caused the reported inference speed to be artificially slow
Update documentation to include more detailed information about using a GPU and docker
Refactor preprocessors out into a new module
Updated security patches for TF, Pillow, Open SSL, etc
LEIP Compile now defaults an input shape of None to 1 to avoid a segfault.
Update the Post Quantizer to resolve an issue where a float32 could be upcast to a float64
Add batch size support for LEIP evaluate and run
Add more robust support for TF eager execution
Add Pytorch support for PTQ, including compressing reports
Add Pytorch support for LEIP Compile
Add channel-wise regularizer to QGT
Add support for batchnorm folding in an .h5 model
Add config validation to LEIP train
Update LEIP Train to provide a shortcut way to configure homogenous/uniform quantization
Fix a bug where LEIP train does not load pre-trained ckpt properly
Fix a bug in LEIP train where modify_scheme() was being applied to the model json instead of the attachment scheme json
Fix a bug in LEIP train where self.attach_regularizers_step() did not load the model’s weights back
Add support for Tensorflow 2.3 in the docker image
Initial release of LEIP Train and QGT (Quantization Guided Training)
Update LRE to address an issue with INT8 performance on certain models
Update LEIP Evaluate to support model zoo object detection models
Fix a performance issue with LEIP Visualize rendering larger models slowly
Initial release of the LEIP Visualize compression reports
Fix an issue where the base ‘Zoo’ command was not loaded properly in the leip-sdk docker container
Add a LEIP ‘install’ command to the base set of commands (the install script is still present)
Fix a regression where TF loads on all commands even though it’s not required
Fix an issue when a model wasn’t being saved if the model_schema.json was not present
Add Tensor Splitting and Bias Correction optimizations to post training quantization
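Bias correction is a standard post-training-quantization technique: quantizing weights shifts the mean of a layer's output, and subtracting the expected shift from the bias restores it. A hedged scalar sketch in pure Python (the calibration set and values are illustrative):

```python
def quantize(w, scale):
    """Symmetric INT8 quantize/dequantize round trip for one weight."""
    return max(-127, min(127, round(w / scale))) * scale

# One weight and bias, quantized symmetrically.
w, b = 0.37, 0.05
scale = 0.5 / 127
wq = quantize(w, scale)

# Expected output shift caused by the weight error, measured
# over a small calibration set.
calib = [0.2, 0.9, 1.4, 0.6, 1.1]
shift = sum((wq - w) * x for x in calib) / len(calib)

# Corrected bias cancels the mean error introduced by quantization.
b_corr = b - shift

orig_mean = sum(w * x + b for x in calib) / len(calib)
corr_mean = sum(wq * x + b_corr for x in calib) / len(calib)
assert abs(orig_mean - corr_mean) < 1e-9
print("mean output restored")
```

The correction only needs calibration inputs, not labels, which is why it fits post-training quantization pipelines.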
GA Release of the LEIP SDK
The following is a list of known issues and limitations for this LEIP SDK release:
Compile may not produce an optimal INT8 memory size for all hardware, and may not be able to generate an INT8 solution for all models.