Building awesome machine learning models can be an onerous task. Once all the blood, sweat, and tears have been expended creating this magical (and ethical) model, it can feel like getting the thing deployed will require severing another limb in the name of releasing software. This post is designed to help you, dear reader, navigate the land mines surrounding this complicated task and hopefully make this part of the development cycle painless.
Here’s the rundown:
- I made a machine learning model in PyTorch that classifies taco vs. burrito images
- I exported the model to ONNX
- I want to deploy the inference code to Azure Functions using Python
- I’ve been able to get the function running locally (this could be its own post, but I thought the docs were excellent).
What follows is my journey getting this to work (with screenshots to boot). Let’s dig in!