MLOps
AI Model Optimization Has Never Been Easier
Compress a model for your edge device without losing accuracy
As the popular cliché has it, data scientists spend 80% of their time preparing data and only 20% developing models. While there might be some truth to this, we should never underestimate the effort required by the remaining 20%. Choosing the architecture, training, fine-tuning, and evaluating the model is no mean feat, especially when developing models for edge devices, where criteria other than performance metrics need to be considered. I recently got to use NetsPresso, a platform that promises to take care of all the model optimization in an automated manner. Let me show you how it works.
Optimizing machine learning models
The typical machine learning pipeline has become a more or less established process these days. We query or download the raw data, parse and clean it, and extract and engineer features to obtain a dataset ready for training. Then we iterate over model architectures and a multitude of training and data-processing hyperparameters, hoping to arrive at a model that satisfies the relevant performance metrics.
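To make that iteration concrete, here is a minimal sketch of such a pipeline using scikit-learn. Everything in it (the digits dataset, the random-forest model, and the hyperparameter grid) is a placeholder chosen for illustration, not something prescribed by the article or by NetsPresso.

```python
# A minimal sketch of the generic pipeline described above:
# data -> preprocessing -> hyperparameter search -> evaluation.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Raw data, already parsed into features and labels
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Preprocessing and model bundled into a single pipeline
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("model", RandomForestClassifier(random_state=42)),
])

# Iterate over a (placeholder) grid of hyperparameters to find a model
# that satisfies the chosen performance metric
search = GridSearchCV(
    pipe,
    param_grid={
        "model__n_estimators": [100, 300],
        "model__max_depth": [None, 10],
    },
    scoring="accuracy",
    cv=3,
)
search.fit(X_train, y_train)

print("Best hyperparameters:", search.best_params_)
print("Test accuracy:", search.best_estimator_.score(X_test, y_test))
```

For edge deployment the loop rarely ends there: accuracy is only one of the criteria, and model size, latency, and memory footprint on the target device also have to be weighed in.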