Five Steps to Error-Free Monitoring of ML Models
Monitoring a machine learning model means tracking the health and performance of the model as it encounters new data, and making sure that it does not make bad decisions or start developing a bias toward certain sets of data. In many domains, such as finance and healthcare, biased machine learning predictions can be illegal, so tracking the health of a machine learning model is of the utmost importance.
To monitor your ML model, you first need to deploy it. After deployment, you need to set up a way for the model to calculate its most important bias and prediction metrics in the form of easy-to-understand statistics, and to track these statistics at regular intervals. Next, you need to set up an API that can query the model and extract these statistics, feeding them to a dashboard backend that can visualize them. Such a dashboard should allow the user to perform A/B testing and help them keep the model within optimal bounds. Here's a step-by-step process for tracking machine learning models appropriately:
- Choose good engineering infrastructure for the deployment of the model: You cannot track the bias in your model if it's deployed on a free Heroku server with limited access and memory. Models need to be deployed in proper virtual environments (a space within the machine that is dedicated to the model and its dependencies) or in containers such as Docker, orchestrated with a tool like Kubernetes, to allow for smooth management of the model and its dependencies.
Good infrastructure also keeps your model safe from engineering irregularities and conflicts with other parts of the system. Sometimes a different version of a package used by your model can interfere with the model's operations and bias its predictions.
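As an illustration, a minimal Dockerfile for such a containerized deployment might look like the sketch below. The file names app.py and requirements.txt are hypothetical stand-ins for your model service and its pinned dependencies.

```dockerfile
# Minimal image for serving a model in an isolated environment.
FROM python:3.11-slim

WORKDIR /app

# Pinning dependency versions in requirements.txt ensures a conflicting
# package elsewhere in the system cannot alter the model's behavior.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the model service code and serialized model artifacts.
COPY . .

# The port the monitoring API will be queried on.
EXPOSE 8000

CMD ["python", "app.py"]
```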
A good engineering framework also allows easy access to the model's internals, where the model statistics are usually stored. A model built using object-oriented principles in Python, with an integrated script to store the model's metrics, makes it very easy to query the model's statistics and track them on a graph in real time.
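A minimal sketch of such a wrapper is shown below. The class and attribute names are hypothetical, and it assumes a scikit-learn-style model whose predict method returns a sequence of predictions.

```python
from collections import deque
from statistics import mean

class MonitoredModel:
    """Wraps a trained model and stores prediction metrics as attributes."""

    def __init__(self, model, window=1000):
        self.model = model
        # Keep a rolling window of recent predictions for cheap statistics.
        self.recent_predictions = deque(maxlen=window)
        self.prediction_count = 0

    def predict(self, features):
        predictions = self.model.predict(features)
        self.recent_predictions.extend(predictions)
        self.prediction_count += len(predictions)
        return predictions

    @property
    def stats(self):
        """Summary statistics that a monitoring API can query directly."""
        recent = list(self.recent_predictions)
        return {
            "prediction_count": self.prediction_count,
            "mean_prediction": mean(recent) if recent else None,
        }

# Toy usage with a stand-in model.
class DummyModel:
    def predict(self, features):
        return [sum(row) for row in features]

wrapped = MonitoredModel(DummyModel())
wrapped.predict([[1, 2], [3, 4]])
print(wrapped.stats)  # {'prediction_count': 2, 'mean_prediction': 5}
```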
- Use fast and reliable methods for querying your model: The choice of a good API is very important because its capabilities determine the speed and reliability with which you receive data about your model. A Perl API is a poor choice, whereas a REST API built with Flask is fairly reliable. An API built with Node.js and its Express framework is also among the most widely used options for querying machine learning models and files inside a system.
The framework for querying your data should also be able to handle multiple queries at the same time; its ability to serve requests concurrently is equally important. For example, if multiple data scientists want to track real-time data simultaneously, the API should be able to query and render data quickly and efficiently for all of them.
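Below is a minimal sketch of such an endpoint using Flask; the route, the stub model, and the port are illustrative. Flask's built-in server is meant for development only, so a production deployment would sit behind a WSGI server such as gunicorn to serve many users concurrently.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for the MonitoredModel instance from the previous sketch;
# in a real service this would be loaded once at startup.
class StubModel:
    stats = {"prediction_count": 0, "mean_prediction": None}

monitored_model = StubModel()

@app.route("/model/stats", methods=["GET"])
def model_stats():
    # The statistics are precomputed at prediction time (see the next
    # step), so this endpoint only serializes attributes and stays fast.
    return jsonify(monitored_model.stats)

if __name__ == "__main__":
    # For concurrent users, run behind a WSGI server instead, e.g.:
    #   gunicorn --workers 4 app:app
    app.run(port=8000)
```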
- Use reliable scripts to calculate statistics beforehand so that you minimize the time taken to process your data: One of the most widely used techniques for reporting statistics efficiently is to precompute them in real time, so that accessing and visualizing the health of the ML model becomes an easier task later on. In other words, the model should not wait for the API to calculate the statistics. It should collect the data, make predictions, calculate bias statistics in real time, and then store them as model attributes that can be easily queried.
This part is extremely important because it determines the accuracy and usefulness of your visualization dashboard. Presenting non-intuitive statistics and numbers that are difficult to understand can render the entire monitoring process useless. Thus, selecting the right statistics and calculating them accurately is one of the most important tasks of model monitoring.
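As an illustration, the sketch below tracks a simple group-level bias statistic, the gap in positive-prediction rates between two groups (often called the demographic parity difference), updating it as predictions stream in rather than when the API asks for it. The class and group names are hypothetical.

```python
from collections import defaultdict

class BiasTracker:
    """Tracks positive-prediction rates per group as predictions stream in."""

    def __init__(self):
        self.counts = defaultdict(int)     # predictions seen per group
        self.positives = defaultdict(int)  # positive predictions per group

    def update(self, group, prediction):
        # Called inside the model's predict path, not by the API.
        self.counts[group] += 1
        self.positives[group] += int(prediction == 1)

    def positive_rate(self, group):
        n = self.counts[group]
        return self.positives[group] / n if n else 0.0

    def parity_gap(self, group_a, group_b):
        """Demographic parity difference: rate(a) - rate(b)."""
        return self.positive_rate(group_a) - self.positive_rate(group_b)

# Toy usage: a large gap would be an immediate red flag on the dashboard.
tracker = BiasTracker()
tracker.update("group_a", 1)
tracker.update("group_b", 0)
print(tracker.parity_gap("group_a", "group_b"))  # 1.0 on this toy data
```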
- Build a light and reliable dashboard that can handle multiple users: The engineering skills of the team become increasingly important at this step. The data scientist must build a dashboard that is light, easy to understand, and presents the right amount of detail, along with the ability to benchmark the statistics against industry standards and legal requirements. A statistic is not helpful if you don't know what it should look like. For example, drawing red horizontal lines to define the boundaries within which a statistic must remain is crucial. More importantly, statistics should also be presented with 3-day and 7-day averages to help the user understand the overall trend rather than fixate on small anomalies that are not representative of the actual quality of the model.
The implementation of good design principles is equally important here. Choosing the right colors, along with a dynamic scheme to showcase the state of the model, is crucial. Without an effective design, a user is likely to misinterpret key statistics and ignore warning signs that can lead to problems with the model later on.
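Here is a small sketch, assuming pandas, of how the 3-day and 7-day averages and a fixed lower bound could be precomputed for the dashboard; the metric values and the bound are made up.

```python
import pandas as pd

# Hypothetical daily values of a monitored metric, keyed by date.
daily = pd.Series(
    [0.91, 0.90, 0.88, 0.87, 0.89, 0.85, 0.84],
    index=pd.date_range("2023-01-01", periods=7, freq="D"),
    name="accuracy",
)

# Rolling means smooth out single-day anomalies for the dashboard.
trend = pd.DataFrame({
    "daily": daily,
    "avg_3d": daily.rolling(window=3).mean(),
    "avg_7d": daily.rolling(window=7).mean(),
})

# A fixed lower bound the dashboard can draw as a red horizontal line.
LOWER_BOUND = 0.86
trend["below_bound"] = trend["avg_3d"] < LOWER_BOUND
print(trend)
```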
- Allow two-way interaction between the user and the model: A model monitoring system isn't just about reporting statistics; it's also about being able to take action and improve the model when there is a problem. Monitoring dashboards should allow the user to implement small fixes and to change certain parameters of the model to adjust its quality. For example, models that use clustering algorithms should let the user change the resolution of the clustering process in order to recluster groups as new data becomes available. Sometimes a big cluster starts to look like two distinct clusters; at that point, the user should be able to increase the resolution and allow the algorithm to separate them. In other cases, groups can become similar and should be labeled the same rather than differently.
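As one concrete interpretation, with scikit-learn's KMeans the "resolution" can simply be the number of clusters, exposed as a parameter the user adjusts before re-running the clustering. The function and toy data below are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def recluster(data, n_clusters):
    """Re-run clustering at a user-chosen resolution (number of clusters)."""
    model = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    return model.fit_predict(data)

# Toy data: two blobs that a coarse setting lumps into one group.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])

coarse = recluster(data, n_clusters=1)   # everything in one cluster
fine = recluster(data, n_clusters=2)     # the two blobs separate out
print(len(set(coarse)), len(set(fine)))  # 1 2
```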
The ability to A/B test your model and tweak parameters to obtain optimal results is one of the biggest advantages of a robust model monitoring framework. It keeps model results consistent and ensures the model evolves with time; dynamic models tend to perform better than static models that never change.
A model monitoring system is highly productive when it delivers model statistics quickly, efficiently, and accurately, letting the user run A/B tests swiftly and push the model toward its best possible performance.
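As a rough sketch, an A/B comparison between the current parameters and a candidate tweak might route a small fraction of traffic to the candidate and compare the tracked metrics per variant. The toy models, the threshold parameter, and the traffic split below are all hypothetical.

```python
import random

class ToyModel:
    """Stand-in for a deployed model variant with its own parameters."""
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, x):
        return int(x > self.threshold)

model_a = ToyModel(threshold=0.5)  # current production parameters
model_b = ToyModel(threshold=0.4)  # candidate tweak under test

def route_request(x, b_fraction=0.1):
    """Route a small fraction of traffic to the candidate variant."""
    if random.random() < b_fraction:
        return "B", model_b.predict(x)
    return "A", model_a.predict(x)

# Compare positive-prediction rates per variant over simulated traffic.
counts = {"A": [0, 0], "B": [0, 0]}  # [requests, positives]
random.seed(0)
for _ in range(10_000):
    variant, prediction = route_request(random.random())
    counts[variant][0] += 1
    counts[variant][1] += prediction

for variant, (n, positives) in counts.items():
    print(variant, round(positives / n, 3))
```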
Here at Datatron, we offer a platform to govern and manage all of your Machine Learning, Artificial Intelligence, and Data Science Models in Production. Additionally, we help you automate, optimize, and accelerate your ML models to ensure they are running smoothly and efficiently in production. To learn more about our services, be sure to Book a Demo.