# Set the log level to CRITICAL to reduce the amount of log output while running this tutorial
import logging
logger = logging.getLogger('leip_recipe_designer.tasks')
logger.setLevel(logging.CRITICAL)
From Classifier GRDB to Deployable Artifacts: A Step-by-Step Guide¶
This How-To Guide walks you through the process of loading a recipe from the Classifier GRDB, training the model, and generating deployable artifacts. We'll use the tools available in LEIP Design to perform this workflow step-by-step.
Overview¶
In this guide, we will:
- Load a recipe from the Classifier GRDB.
- Train the selected model with a dataset.
- Evaluate the trained model's performance.
- Optimize and compile the model for a hardware target.
- Export the deployable model and stub code.
To begin, let's load a recipe with ID 4163 from the Classifier GRDB. The API for this differs slightly from what you've encountered with detectors.
from pathlib import Path
import leip_recipe_designer as rd
workspace = Path('./workspace')
pantry = rd.Pantry.build(workspace / "my_combined_pantry", force_rebuild=False)
recipe = rd.create.from_recipe_id("4163", pantry=pantry, task="vision.classification.2d", volume="xval_cls", default=True)
For this guide, we will use the EuroSAT dataset, a public dataset of satellite images. The following code looks up the EuroSAT data generator in the pantry and assigns it to the recipe:
data = rd.helpers.data.get_data_generator_by_name(pantry=pantry, regex_ingredient_name="EuroSAT", category="classification")
rd.helpers.data.replace_data_generator(recipe, data)
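The `regex_ingredient_name` argument selects a dataset by matching ingredient names against a regular expression. A minimal sketch of that lookup pattern in plain Python (the ingredient names below are hypothetical; the actual pantry contents depend on your installation):

```python
import re

# Hypothetical ingredient names standing in for the pantry's catalog
ingredient_names = ["EuroSAT RGB", "CIFAR10", "ImageNet subset", "EuroSAT multispectral"]

# Keep every name the pattern matches, mirroring how a regex-based lookup
# such as regex_ingredient_name="EuroSAT" narrows the candidates
pattern = re.compile("EuroSAT")
matches = [name for name in ingredient_names if pattern.search(name)]
print(matches)  # ['EuroSAT RGB', 'EuroSAT multispectral']
```

Because the pattern is a regex rather than an exact string, a partial name like `"EuroSAT"` is enough to find the full ingredient.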
With the dataset in place, set the number of training epochs, fill in any remaining recipe defaults, and train the model for a single epoch.
recipe["train.num_epochs"] = 1
recipe.fill_empty_recursively()
train_output = rd.tasks.train(recipe)
After training, it is essential to evaluate the model to measure its performance. This provides insight into its accuracy and other metrics.
# Assign a checkpoint ingredient to the recipe for later use (e.g., evaluation)
recipe.assign_ingredients("checkpoint", "Local ckpt file")
recipe['checkpoint.path'] = str(train_output["best_model_path"])
# Evaluate the trained recipe
eval_output = rd.tasks.evaluate(recipe)
from pprint import pprint
print("Evaluation Metrics:")
pprint(eval_output['evaluate.metric_dict'], indent=2, width=80, compact=True)
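The metric dictionary is an ordinary mapping from metric names to values, so individual metrics can be extracted and formatted directly. A sketch with hypothetical keys and values (the actual keys depend on the recipe's evaluation configuration):

```python
from pprint import pprint

# Hypothetical metric dict; the real one comes from eval_output['evaluate.metric_dict']
metric_dict = {"val_accuracy": 0.9312, "val_loss": 0.2147, "val_f1": 0.9288}

# pprint renders the mapping readably, just as in the tutorial cell above
pprint(metric_dict, indent=2, width=80, compact=True)

# Individual metrics can be pulled out and formatted directly
print(f"Validation accuracy: {metric_dict['val_accuracy']:.2%}")
```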
To deploy the model on specific hardware, it must be optimized and compiled. For example, you can target a CUDA-capable GPU. First, inspect the current compiler configuration:
print(recipe['compiler'])
compiler: TensorRT Compiler (slot:compiler, id:f71957, version:1.0.1)
compiler.set_channel_layout = None (slot:set_channel_layout)
compiler.export_metadata = False (slot:export_metadata)
compiler.force_overwrite = False (slot:force_overwrite)
compiler.export_relay = False (slot:export_relay)
compiler.output_path = ./compile_output (slot:output_path)
compiler.set_float16 = False (slot:set_float16)
compiler.opt_level = 3 (slot:opt_level)
compiler.target = nvidia/rtx-a4500 (slot:target)
compiler.host = intel/cascadelake (slot:host)
quantizer: No Quantizer (slot:quantizer, id:c00837, version:1.0.0)
recipe['compiler.output_path'] = './rd_forge_output'
recipe['compiler.force_overwrite'] = True
compiled_model = rd.tasks.compile(recipe)
Next, evaluate the compiled model (on just a single batch), as shown in the Getting Started tutorial.
recipe.assign_ingredients("checkpoint", "compiled")
recipe['checkpoint.path'] = str(compiled_model['compiler.output_file'])
# Set the validation batch size to match the compiled batch size
recipe["train.batch_size_val"] = recipe["export.batch_size"]
# Limit the number of evaluation batches to speed up the example evaluation
recipe['evaluation.max_batches'] = 1
# Evaluate the compiled recipe
compiled_eval_output = rd.tasks.evaluate(recipe)
# Print accuracy after compilation
print(f"Accuracy after compilation: {compiled_eval_output['evaluate.metric_single']:.4f}")
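To sanity-check that compilation did not degrade the model, one common pattern is to compare the pre- and post-compilation accuracy against a small tolerance. A sketch with illustrative values (in practice, read them from the two evaluation outputs, e.g. `eval_output` and `compiled_eval_output`):

```python
# Illustrative accuracy values; substitute the metrics from your own runs
accuracy_before = 0.9312
accuracy_after = 0.9301

# Allow a small absolute drop; float16 or quantization can shift results slightly
tolerance = 0.01
drop = accuracy_before - accuracy_after
if drop > tolerance:
    print(f"Warning: accuracy dropped by {drop:.4f} after compilation")
else:
    print(f"Accuracy within tolerance (drop: {drop:.4f})")
```

Note that evaluating on a single batch, as above, gives only a rough signal; for a real regression check, evaluate on the full validation set.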
Finally, we will generate the necessary stub code for inference, along with the compiled model artifact. This package will include everything required to deploy the model on your target edge hardware.
recipe['model.use_pretrained'] = False
demo_dir = rd.tasks.generate_stub_code(recipe)
Now, you can use the generated files to deploy your model on the desired hardware platform, such as a CUDA-enabled GPU, based on your specified target configuration.
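A quick way to see what was produced is to walk the output directory returned by the stub-code generation step. The sketch below uses a throwaway directory with hypothetical file names standing in for `demo_dir`:

```python
from pathlib import Path
import tempfile

# Stand-in for the directory returned by the stub-code generation step;
# the file names below are hypothetical
demo_dir = Path(tempfile.mkdtemp())
(demo_dir / "model.trt").touch()
(demo_dir / "inference_stub.py").touch()

# List every generated artifact relative to the output directory
for artifact in sorted(demo_dir.rglob("*")):
    print(artifact.relative_to(demo_dir))
```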