ONNX-LRE
C++ API documentation

Latent Runtime Engine for ONNX models.
Classes

| Kind   | Name                | Description |
| ------ | ------------------- | ----------- |
| struct | Cryption            | Encryption parameters for protected model access. |
| class  | LatentRuntimeEngine | Runtime engine for ONNX model execution. |
| struct | Options             | Configuration parameters for the inference engine. |
Enumerations

| Enumeration                  | Enumerators          | Description |
| ---------------------------- | -------------------- | ----------- |
| enum class ExecutionProvider | TensorRT, CUDA, CPU  | Hardware acceleration backends for ONNX model execution. |
| enum class Precision         | Float32, Float16, Int8 | Numeric precision options for model execution. |
Latent Runtime Engine for ONNX models.
The OnnxLre namespace contains all classes, functions, and types that form the Latent Runtime Engine for executing ONNX models with hardware acceleration. It provides abstractions for model loading, inference execution, and optimized tensor management on various compute devices.
enum class OnnxLre::ExecutionProvider

Hardware acceleration backends for ONNX model execution.

- TensorRT
- CUDA
- CPU
enum class OnnxLre::Precision

Numeric precision options for model execution.

- Float32
- Float16
- Int8