# Set the log level to CRITICAL to reduce the log output you will see while running this tutorial
import logging
logger = logging.getLogger('leip_recipe_designer.tasks')
logger.setLevel(logging.CRITICAL)
Getting Started with LEIP Design¶
In this tutorial, we will select, train, and deploy an object detection model optimized for specific hardware using LEIP Design. To make the process quicker and easier, we will use one of the datasets included with LEIP Design.
We will:
- Select a pre-configured object detection recipe and dataset from the LEIP Design environment.
- Train the selected object detection model on the Fire and Smoke dataset.
- Evaluate the trained model’s performance.
- Optimize and compile the model for target hardware (in this case, CPU).
- Export the deployable model and stub code.
Environment Setup¶
If you haven't done so already, ensure that your environment is set up correctly; consult the installation guide for step-by-step instructions.
Step 1: Get our pantry¶
To design recipes, we need ingredients. Ingredients live in the pantry. A pantry can have all sorts of ingredients: models, datasets, optimizers, learning rate schedulers, and even training-aware quantization techniques. The code below will download a pantry full of useful ingredients to get started.
from pathlib import Path
import leip_recipe_designer as rd
# Define the workspace path
workspace = Path('./workspace')
# Build the pantry (do not rebuild if it already exists)
pantry = rd.Pantry.build(workspace / "my_combined_pantry", force_rebuild=False)
We will select an object detection recipe that has been trained across different datasets, providing a good balance between performance and efficiency.
To create this recipe, simply use its recipe ID: 59845. This recipe is specifically for detectors and can be loaded directly from the pantry.
For loading recipes and exploring other GRDBs, refer to the Recipe Creators API Reference:
recipe = rd.create.from_recipe_id('59845', pantry=pantry, allow_upgrade=True)
Skipped downloading goldenrecipedb with name "xval_det" and variant "Xval0.3" (0), as it already exists. This is the Cross-validation volume. Available methods are: get_golden_df, describe_table
To learn how to use the Golden Recipe Database (GRDB) API to explore Golden Recipe volumes and select a recipe from the Golden DataFrame, consult the Explore Golden Volumes guide.
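As a hedged sketch of that workflow, exploring a volume might look like the following. Only the two method names are taken from the log output above; how you obtain the volume object is an assumption here, so consult the guide for the real entry point.
# Hypothetical sketch: `volume` stands in for a Golden Recipe volume object
# obtained as described in the Explore Golden Volumes guide. Only the two
# method names below appear in the log output above.
def summarize_volume(volume):
    print(volume.describe_table())       # column descriptions for the volume's table
    golden_df = volume.get_golden_df()   # DataFrame of golden recipes
    print(golden_df.head())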
Step 2: Select a dataset¶
We will select the Fire and Smoke dataset, one of our included datasets.
data = rd.helpers.data.get_data_generator_by_name(pantry=pantry, regex_ingredient_name="Fire and smoke")
This method instantiates a dataset that has already been integrated. If you want to use your own dataset, or to see what else we offer, refer to the Bring Your Own Data (BYOD) guide. Now, replace the dataset in the recipe with the one we just loaded:
rd.helpers.data.replace_data_generator(recipe, data)
Step 3: Set Up the Recipe Before Training¶
Let's assign a logger to monitor training metrics and progress. For this tutorial, we'll use Tensorboard.
recipe.assign_ingredients('loggers', {"my_local_training_log": "Tensorboard"})
[{'choice_id': 'd139774776112d54355b171d97cff8627840b55995e9d852577242a5ed152a81',
'choice_name': 'Tensorboard',
'synonym': 'loggers',
'parent': 'Full Recipe',
'slot': 'slot:loggers',
'path': ['slot:loggers', 'name:my_local_training_log'],
'name': 'name:my_local_training_log'}]
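If you want to watch training live, you can launch TensorBoard from another notebook cell and point it at the logger's output directory. This is a hedged sketch: the --logdir path below is an assumption, so adjust it to wherever the Tensorboard ingredient writes in your setup.
# The log directory below is an assumption for illustration; point --logdir
# at the directory the Tensorboard logger actually writes to
%load_ext tensorboard
%tensorboard --logdir ./workspace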
For this tutorial, we're going to limit training to one epoch and restrict each epoch to 10% of the training batches for faster execution. Training the recipe until convergence takes noticeably longer; on an NVIDIA RTX A4500 GPU, it took about 15 minutes to train this model.
If you would prefer to wait for full training to converge instead of downloading the checkpoints, comment out the cell below:
recipe["train.num_epochs"] = 1
recipe["trainer.train_batches_percentage"] = 0.1
If the recipe contains any missing elements, we can automatically fill them with default values prior to training. Let's do that now:
recipe.fill_empty_recursively()
[{'choice_id': '4adc4dcfbc1c805eaff72bb8460c6e5991b3f0e7fa7c779443086d3b85c9051f',
'choice_name': 'No Quantizer',
'synonym': 'quantizer',
'parent': 'TVM Compiler',
'slot': 'slot:quantizer',
'queries': ['quantizer'],
'req_map_key': '',
'path': ['slot:compiler', 'slot:quantizer']}]
Step 4: Train the selected recipe¶
train_output = rd.tasks.train(recipe)
If you would like to train one or more candidate models, evaluate the results, and pick a winner, consult our Model Training and Optimization guide.
Step 5: Evaluate the model and visualize the predictions¶
Now we can evaluate the model we trained using the candidate recipe.
# Assign a checkpoint ingredient to the recipe for later use (e.g., evaluation)
recipe.assign_ingredients("checkpoint", "Local ckpt file")
recipe['checkpoint.path'] = str(train_output["best_model_path"])
# Evaluate the trained recipe
eval_output = rd.tasks.evaluate(recipe)
print(f"mAP: {eval_output['evaluate.metric_single']:.4f}")
mAP: 0.0179
Bonus: Detailed Evaluation Metrics¶
In addition to the primary mAP score, we can dive deeper into the evaluation results to get more detailed metrics. These additional metrics can give us better insights into the model's performance across different thresholds and object sizes.
To display detailed evaluation metrics, run the following:
from pprint import pprint
print("Evaluation Metrics:")
pprint(eval_output['evaluate.metric_dict'], indent=2, width=80, compact=True)
Evaluation Metrics:
{ 'classes': [1.0, 2.0],
'map': 0.017924292013049126,
'map_50': 0.0824517086148262,
'map_75': 0.0010657395469024777,
'map_large': 0.04469352588057518,
'map_medium': 0.028310101479291916,
'map_per_class': [0.020685071125626564, 0.015163512900471687],
'map_small': 0.16993780434131622,
'mar_1': 0.06800151616334915,
'mar_10': 0.2026388794183731,
'mar_100': 0.29363811016082764,
'mar_100_per_class': [0.2680981457233429, 0.3191780745983124],
'mar_large': 0.2780952453613281,
'mar_medium': 0.30813083052635193,
'mar_small': 0.17619048058986664,
'metric': 0.017924292013049126}
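For example, the per-class entries can be paired with their class IDs directly from this dictionary. A minimal sketch, using only the keys shown in the output above:
# Pair each class ID with its per-class mAP and mAR@100
metrics = eval_output['evaluate.metric_dict']
for cls, ap, ar in zip(metrics['classes'], metrics['map_per_class'], metrics['mar_100_per_class']):
    print(f"class {int(cls)}: mAP={ap:.4f}, mAR@100={ar:.4f}")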
Next, we will visualize the predictions from the model:
predict_output = rd.tasks.visualize_predictions(recipe)
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import math
%matplotlib inline
def display_helper(images_path, count=4, image_extension="jpeg"):
    fig = plt.figure(figsize=(20, 10), facecolor='w')
    columns = 4
    rows = math.ceil(count / columns)
    # Pair each of the first `count` grid slots with an image path
    for idx, path in zip(range(count), Path(images_path).rglob(f"*.{image_extension}")):
        img = mpimg.imread(path)
        fig.add_subplot(rows, columns, idx + 1)
        plt.imshow(img)
        plt.axis("off")
display_helper(predict_output["predict.output_directory"], count=8)
Recall that for this tutorial, we restricted training. Allowing the recipe to complete training on the entire dataset would result in improved evaluation metrics and more accurate predictions.
Step 6: Compile Your Recipe¶
Next, we will compile the model for your local CPU. To compile artifacts for different hardware targets, refer to the Setting Compiler Targets guide.
# Inspect the compiler configuration (renders an interactive widget in Jupyter)
recipe['compiler']
recipe['compiler.output_path'] = './rd_forge_output'
recipe['compiler.force_overwrite'] = True
compiled_model = rd.tasks.compile(recipe)
If you want to use the LEIP Design API to explore LEIP Optimize's compilation, quantization, and calibration parameters, refer to the Optimize guide.
Step 7: Evaluate Your Compiled Model¶
First, we will point the recipe's checkpoint at the compiled model so that evaluation runs against the compiled artifact.
recipe.assign_ingredients("checkpoint", "compiled")
# Note: For ONNXRuntime (with or without TensorRT), use 'compiler.output_file'
recipe['checkpoint.path'] = str(compiled_model['compiler.output_path'])
Next, we will change the evaluation batch size to match the batch size for which we compiled the model.
# Set the validation batch size to match the compiled batch size
recipe["train.batch_size_val"] = recipe["export.batch_size"]
# Limit the number of evaluation batches to speed up the example evaluation
recipe['evaluation.max_batches'] = 1
Evaluate the compiled model!
# Evaluate the compiled recipe
compiled_eval_output = rd.tasks.evaluate(recipe)
# Print the metric (mean Average Precision) after compilation
print(f"mAP after compilation: {compiled_eval_output['evaluate.metric_single']:.4f}")
mAP after compilation: 0.1586
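To see the two results side by side, both values are already in hand. Keep in mind that the compiled run was limited to a single evaluation batch above, so the numbers are not directly comparable.
# Compare the metric before and after compilation (both values come from
# the evaluations above; the compiled run used evaluation.max_batches = 1)
print(f"mAP (trained):  {eval_output['evaluate.metric_single']:.4f}")
print(f"mAP (compiled): {compiled_eval_output['evaluate.metric_single']:.4f}")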
Step 8: Generate Stub Code¶
In this step, we will generate the necessary stub code for inference, along with the compiled model artifact. This package will include everything required to deploy the model on your target edge hardware.
# Disable pretrained weights so the package uses the checkpoint trained above
recipe['model.use_pretrained'] = False
demo_dir = rd.tasks.generate_stub_code(recipe)
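To check what was produced, you can list the contents of the generated package. This is a sketch that assumes demo_dir is a directory path; if your version returns a mapping like the other tasks, extract the directory entry first.
# List everything in the generated stub-code package (assumes `demo_dir`
# is a directory path; adapt if a mapping is returned instead)
stub_root = Path(str(demo_dir))
for item in sorted(stub_root.rglob("*")):
    print(item.relative_to(stub_root))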