This page discusses the generally supported model file formats and versions for each tool in the LEIP SDK.

Depending on the operators used in a particular model architecture, you may encounter unforeseen compatibility issues with specific tools.

Model Requirements




TensorFlow

  • Compatible with TensorFlow 2.3

  • Has not been frozen


PyTorch

  • Compatible with PyTorch 1.7

  • When using LEIP Optimize or LEIP Compile, your input model should be traced and saved using torch.jit.trace() and torch.jit.save() (recommended), or alternatively the whole eager model saved using torch.save() (but it must still be traceable).

  • If you are using LEIP Optimize with the -use_legacy_quantizer true option, your model should be a quantizable and traceable eager model (e.g., torchvision.models.quantization.resnet50(pretrained=True)).
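The tracing workflow above can be sketched as follows; TinyNet and the file names are illustrative stand-ins, not part of the LEIP SDK:

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Minimal stand-in for your model; any traceable module works."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3)

    def forward(self, x):
        return self.conv(x)

model = TinyNet().eval()
example = torch.rand(1, 3, 32, 32)

# Recommended: trace the model and save the TorchScript artifact.
traced = torch.jit.trace(model, example)
torch.jit.save(traced, "model_traced.pt")

# Alternative: save the whole eager module (it must still be traceable).
torch.save(model, "model_eager.pt")
```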




LEIP Train

For the leip_train command and the QGT API (input and output):

Keras models from TensorFlow 2.x onwards, in eager execution mode only.
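A minimal sketch of such a model; the architecture below is purely illustrative, and TensorFlow 2.x runs eagerly by default:

```python
import tensorflow as tf

# Eager execution is on by default in TensorFlow 2.x, as LEIP Train requires.
assert tf.executing_eagerly()

# Illustrative Keras model; any eager-mode Keras model applies.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```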

LEIP Optimize

LEIP Compile

The leip_optimize and leip_compile commands support the following input formats. The output format depends on the compilation target.

  • TF (SavedModel)

  • TF (Keras)

  • TF (Graph Proto)

  • TF (ckpt meta)

  • TFLite

  • PyTorch 1.7
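For reference, a TensorFlow 2.x Keras model can be exported to two of the formats listed above using the standard tf.keras save API; the model and file names below are illustrative:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

# TF (SavedModel): a directory containing the serialized graph and weights.
model.save("my_saved_model")

# TF (Keras): a single HDF5 file.
model.save("my_model.h5")
```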

LEIP Evaluate

LEIP Run

The leip_evaluate and leip_run commands can currently execute in the following inference frameworks, which are included in the LEIP SDK Docker images:

  • TensorFlow v2.3

  • TensorFlow Lite v2.3

  • LRE (LEIP Runtime Environment)

  • PyTorch 1.7

Supported Input Formats



TensorFlow v2.3

  • TF (ckpt meta)

  • TF (Graph Proto)

  • TF (SavedModel)

  • TF (Keras)

TensorFlow Lite v2.3

Any .tflite file converted by TensorFlow or produced by LEIP Compress.
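The TensorFlow-side conversion can be done with the standard TFLite converter; the Keras model and file name below are illustrative:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

# Convert the Keras model to a .tflite flatbuffer and write it to disk.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_bytes)
```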

PyTorch v1.7

A complete PyTorch module (eager or traced) saved with a .pt extension.