ONNX-LRE
C++ API documentation
Latent Runtime Engine for ONNX models. More...
Classes
- struct Cryption: Encryption parameters for protected model access.
- class LatentRuntimeEngine: Provides a C++ interface to load and run ONNX models using ONNX Runtime.
- struct Options: Configuration parameters for the inference engine.
Enumerations
- enum class ExecutionProvider { TensorRT, CUDA, CPU, UNSET }: Hardware acceleration backends for ONNX model execution.
- enum class Precision { Float32, Float16, Int8, UNSET }: Numeric precision options for model execution.
The OnnxLre namespace contains all classes, functions, and types that form the Latent Runtime Engine for executing ONNX models with hardware acceleration. It provides abstractions for model loading, inference execution, and optimized tensor management on various compute devices.
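Based on the class names listed above, a typical load-and-run sequence might look like the following sketch. Only the type names `LatentRuntimeEngine`, `Options`, `ExecutionProvider`, and `Precision` appear in this documentation; the constructor, the `Options` fields, and the method names (`loadModel`, `run`) are assumptions for illustration, so consult the individual class pages for the actual signatures.

```cpp
#include <string>
#include <vector>

// Hypothetical usage sketch: field and method names below are assumed,
// not taken from the documented API.
int main() {
    OnnxLre::Options opts;
    opts.executionProvider = OnnxLre::ExecutionProvider::CUDA;  // assumed field
    opts.precision = OnnxLre::Precision::Float16;               // assumed field

    OnnxLre::LatentRuntimeEngine engine(opts);   // assumed constructor
    engine.loadModel("model.onnx");              // assumed method

    std::vector<float> input(3 * 224 * 224);     // preprocessed tensor data
    auto output = engine.run(input);             // assumed method
    return 0;
}
```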
enum class OnnxLre::ExecutionProvider (strongly typed)
Hardware acceleration backends for ONNX model execution.

enum class OnnxLre::Precision (strongly typed)
Numeric precision options for model execution.