Tasks
Refer to the What to do with a Recipe getting started page for a more detailed guide on using the Tasks API.
leip_recipe_designer.tasks.visualize_data
¶
visualize_data(recipe: RecipeNode, ignore_constraints: bool = False) -> dict
Generates a visualization of the recipe's data by calling the LatentAI Application Frameworks with the command 'vizdata'. A helpful tool for diagnosing whether data ingestion is configured properly.
The LatentAI Application Frameworks module must be installed in the current context.
Parameters:
- recipe (RecipeNode) – A LEIP Recipe Designer API RecipeNode to execute.
- ignore_constraints (bool, default: False) – If True, the recipe validation is not performed.
Returns:
- telemetry (dict) – A dict containing the path or paths to some data samples as interpreted by the recipe.
Configuring your visualize_data task
These are some parameters that you can use to configure the data visualization:
- "data_generator"
- "minimum_display_sample"
- "data_subset"
- "vizdata.labels"
To learn more about what each of these fields can take as values, run recipe.options("<field_name_here>").
To assign a value to any of these, run recipe["<field_name_here>"] = value
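Putting the above together, a minimal sketch (it assumes `recipe` is an already-built RecipeNode and the LatentAI Application Frameworks module is installed; the assigned value is illustrative, not a recommended default):

```python
from leip_recipe_designer import tasks

# Inspect the accepted values before assigning (field name taken from the list above).
print(recipe.options("data_subset"))

# Illustrative assignment; pick a value that recipe.options(...) reports as valid.
recipe["minimum_display_sample"] = 8

# telemetry contains the path(s) to the rendered data samples.
telemetry = tasks.visualize_data(recipe)
```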
leip_recipe_designer.tasks.train
¶
train(recipe: RecipeNode, ignore_constraints: bool = False) -> dict
Triggers training of the provided recipe by calling the LatentAI Application Frameworks with the command 'train'.
The LatentAI Application Frameworks module must be installed in the current context.
Parameters:
- recipe (RecipeNode) – A LEIP Recipe Designer API RecipeNode to execute.
- ignore_constraints (bool, default: False) – If True, the recipe validation is not performed.
Returns:
- telemetry (dict) – A dict containing a path to the model checkpoint and other relevant information gathered during training.
Configuring your train task
These are some parameters that you can use to configure training:
- "train.num_epochs"
- "train.batch_size_train"
- "train.batch_size_val"
- "train.num_workers"
- "trainer.precision"
- "trainer.devices"
- "train.max_time"
- some of the available "callbacks"
To learn more about what each of these fields can take as values, run recipe.options("<field_name_here>").
To assign a value to any of these, run recipe["<field_name_here>"] = value
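A short training sketch using the fields listed above (assumes an already-built `recipe`; the numeric values are illustrative, so check recipe.options("<field>") for what your recipe accepts):

```python
from leip_recipe_designer import tasks

# Illustrative values -- not recommended defaults.
recipe["train.num_epochs"] = 50
recipe["train.batch_size_train"] = 32
recipe["train.num_workers"] = 4

# The returned dict includes the checkpoint path and other
# information gathered during training.
telemetry = tasks.train(recipe)
```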
leip_recipe_designer.tasks.evaluate
¶
evaluate(recipe: RecipeNode, ignore_constraints: bool = False) -> dict
Runs evaluation of the recipe's checkpoint by calling the LatentAI Application Frameworks with the command 'evaluate'. The evaluation scores can be used to compare the performance of different recipes.
The LatentAI Application Frameworks module must be installed in the current context.
Parameters:
- recipe (RecipeNode) – A LEIP Recipe Designer API RecipeNode to execute.
- ignore_constraints (bool, default: False) – If True, the recipe validation is not performed.
Returns:
- telemetry (dict) – A dict containing the metrics report.
Configuring your evaluate task
These are some parameters that you can use to configure the evaluation of a model:
- "evaluation.save_directory"
- "evaluation.max_batches"
- "evaluation.subset"
- "metrics.evaluation.algorithm"
To learn more about what each of these fields can take as values, run recipe.options("<field_name_here>").
To assign a value to any of these, run recipe["<field_name_here>"] = value
Evaluating compiled models
You can also change the kind of model you are evaluating, from a trained checkpoint to a compiled LRE artifact, by replacing the "checkpoint" type. See what kinds of models you can evaluate with recipe.options("checkpoint").
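An evaluation sketch using the fields above (assumes an already-built `recipe`; the value is illustrative):

```python
from leip_recipe_designer import tasks

# Limit evaluation to a few batches for a quick sanity check (illustrative value).
recipe["evaluation.max_batches"] = 10

# See which model kinds (e.g. trained checkpoint, compiled LRE artifact) can be evaluated.
print(recipe.options("checkpoint"))

# telemetry holds the metrics report, which can be compared across recipes.
telemetry = tasks.evaluate(recipe)
```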
leip_recipe_designer.tasks.visualize_predictions
¶
visualize_predictions(recipe: RecipeNode, ignore_constraints: bool = False) -> dict
Generates a visualization of the checkpoint's predictions over the data by calling the LatentAI Application Frameworks with the command 'predict'. A helpful visual tool to diagnose the performance of the model.
The LatentAI Application Frameworks module must be installed in the current context.
Parameters:
- recipe (RecipeNode) – A LEIP Recipe Designer API RecipeNode to execute.
- ignore_constraints (bool, default: False) – If True, the recipe validation is not performed.
Returns:
- telemetry (dict) – A dict containing the path to the images showing model predictions.
Configuring your visualize_predictions task
These are some parameters that you can use to configure the visualization of predictions of a model:
- "prediction.display.ground_truths"
- "prediction.display.confidence"
- "prediction.display.errors"
- "prediction.max_batches"
- "prediction.subset"
- "prediction.labels"
To learn more about what each of these fields can take as values, run recipe.options("<field_name_here>").
To assign a value to any of these, run recipe["<field_name_here>"] = value
Visualizing predictions of compiled models
You can also change the kind of model you are using to predict, from a trained checkpoint to a compiled LRE artifact, by replacing the "checkpoint" type. See what kinds of models you can use via recipe.options("checkpoint").
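A prediction-visualization sketch using the fields above (assumes an already-built `recipe`; the toggles shown are illustrative):

```python
from leip_recipe_designer import tasks

# Illustrative toggles; check recipe.options(...) for each field's accepted values.
recipe["prediction.display.ground_truths"] = True
recipe["prediction.display.confidence"] = True

# telemetry contains the path to the images showing model predictions.
telemetry = tasks.visualize_predictions(recipe)
```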
leip_recipe_designer.tasks.export_model
¶
export_model(recipe: RecipeNode, ignore_constraints: bool = False) -> dict
Traces the recipe's checkpoint by calling the LatentAI Application Frameworks with the command 'export'. Exporting a model is necessary before it can be compiled and deployed. A user may skip this step, since the compile task will perform the export internally if needed.
The LatentAI Application Frameworks module must be installed in the current context.
Parameters:
- recipe (RecipeNode) – A LEIP Recipe Designer API RecipeNode to execute.
- ignore_constraints (bool, default: False) – If True, the recipe validation is not performed.
Returns:
- telemetry (dict) – A dict containing the paths to the exported artifact and every other relevant output, such as pre- and post-processors, and the input shape the artifact will expect as [N, C, H, W].
Configuring your export_model task
These are some parameters that you can use to configure how to export your model for deployment or further processing:
- "export"
- "export.output_directory"
- "export.output_file"
- "export.batch_size"
To learn more about what each of these fields can take as values, run recipe.options("<field_name_here>").
The available options differ if you select a different type of export, for example a traced graph or TorchDynamo ONNX.
To assign a value to any of these, run recipe["<field_name_here>"] = value
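An export sketch using the fields above (assumes an already-built `recipe`; the directory name and batch size are illustrative):

```python
from leip_recipe_designer import tasks

# Illustrative output location and batch size.
recipe["export.output_directory"] = "exported"
recipe["export.batch_size"] = 1

# telemetry includes the exported artifact path and the
# expected input shape as [N, C, H, W].
telemetry = tasks.export_model(recipe)
```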
leip_recipe_designer.tasks.data_report
¶
data_report(recipe: RecipeNode, ignore_constraints: bool = False) -> dict
Generates a statistical report about the recipe's dataset by calling the LatentAI Application Frameworks with the command 'data_report'.
The LatentAI Application Frameworks module must be installed in the current context.
Parameters:
- recipe (RecipeNode) – A LEIP Recipe Designer API RecipeNode to execute.
- ignore_constraints (bool, default: False) – If True, the recipe validation is not performed.
Returns:
- telemetry (dict) – A dict containing the path to the data report file.
Configuring your data_report task
These are some parameters that you can use to configure how to generate a report containing useful statistical information about your dataset:
- "data_report.class_specific_report"
- "data_report.number_of_samples"
- "data_report.save_directory"
- "data_report.sections"
To learn more about what each of these fields can take as values, run recipe.options("<field_name_here>").
To assign a value to any of these, run recipe["<field_name_here>"] = value
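A data-report sketch using the fields above (assumes an already-built `recipe`; the values shown are illustrative):

```python
from leip_recipe_designer import tasks

# Illustrative settings; run recipe.options(...) to see each field's valid values.
recipe["data_report.class_specific_report"] = True
recipe["data_report.number_of_samples"] = 100

# telemetry holds the path to the generated report file.
telemetry = tasks.data_report(recipe)
```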
leip_recipe_designer.tasks.compile
¶
compile(recipe: RecipeNode, model_path: Optional[str] = None, input_shape: Optional[list] = None, ignore_constraints: bool = False) -> dict
Generates a compiled model binary that can be run with the LRE.
If the recipe defines a quantizer and calibrator, it will also quantize and calibrate to generate the compiled model exactly as defined in the recipe.
Parameters:
- recipe (RecipeNode) – A LEIP Recipe Designer API RecipeNode containing the compiler information.
- model_path (Optional[str], default: None) – A path to the traced model artifact (.pt) if the user has already exported. If not provided, the compile task will handle the export automatically.
- input_shape (Optional[list], default: None) – If model_path is provided, the input shape must be provided as well, as a list in the format [N, C, H, W].
- ignore_constraints (bool, default: False) – If True, the recipe validation is not performed.
Returns:
- dict (dict) – A dict containing the path to the compiled model.
Configuring your compile task
These are some parameters that you can use to configure how to compile your model:
- "compiler"
- "compiler.target"
To learn more about what each of these fields can take as values, run recipe.options("<field_name_here>").
To assign a value to any of these, run recipe["<field_name_here>"] = value
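A compile sketch covering both documented call patterns (assumes an already-built `recipe`; the model path and input shape are illustrative):

```python
from leip_recipe_designer import tasks

# Option 1: let the compile task perform the export internally.
result = tasks.compile(recipe)

# Option 2: reuse an already-exported traced model; input_shape is then
# required, in [N, C, H, W] order. The path below is illustrative.
result = tasks.compile(
    recipe,
    model_path="exported/model.pt",
    input_shape=[1, 3, 224, 224],
)
# result contains the path to the compiled model.
```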
leip_recipe_designer.tasks.generate_stub_code
¶
generate_stub_code(recipe: RecipeNode) -> dict
Generates an example deployment package in the folder specified by the given recipe.
The deployment package includes:
- The compiled model (a compiled model must be specified in the checkpoint section of the recipe)
- An example inference script
- Setup instructions for the selected programming language
- A sample image from the dataset
The package configuration is determined by the following sections of the recipe:
- stub_code: Specifies the setup and structure of the generated code
- data_module: Provides an example image and the label map from the dataset
- checkpoint: Copies the compiled model artifact into the resulting directory if provided
Below is an example of the output directory structure for a Python deployment if all fields are set:
demo/
└── python
├── README.md
├── infer.py
├── modelLibrary.so
├── requirements.txt
└── sample_image.jpg
Parameters:
- recipe (RecipeNode) – A LEIP Recipe Designer API object containing the compilation details.
Returns:
- dict – A dictionary with the path to the root folder of the generated stub code.
Configuring your generate_stub_code task
To ensure the stub_code.stubs component is set, use the following:
recipe.assign_ingredients(
    'stub_code.stubs', {'python': 'python'}
)
This will ensure the Python stub code parameters are filled.
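With the stubs ingredient assigned, generating the package is a single call (assumes `recipe` already carries a compiled model in its checkpoint section):

```python
from leip_recipe_designer import tasks

# Generates the deployment package; the returned dict holds the path to
# its root folder (the demo/ layout shown earlier).
result = tasks.generate_stub_code(recipe)
```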