Datatron CEO Honored with Intercon Award for Track Record of Success and Innovation
Harish Doddi, founder & CEO of Datatron, continues to lead the pack as a technology trailblazer.
We are proud to announce that Doddi has won the 2021 Intercon Excellence in Technology award with top scores in both the “innovation” and “future readiness” categories.
Intercon recognizes Doddi for bringing a new solution to market and breaking the status quo in Machine Learning, Model Operations and governance for enterprise. Datatron is overjoyed to add this accolade to our list of accomplishments, which includes awards from CIO, Gartner and O’Reilly, among others. In light of this latest accomplishment, we’d like to share some of the insights our CEO has developed throughout his career.
A Pre-Datatron History
As an early employee at several Silicon Valley unicorns (Lyft, Snap, Twitter), Doddi is no stranger to identifying and developing new markets. He helped design and manage technological feats that led to these companies’ early success.
Throughout his career, Doddi has overseen data science projects that rapidly expanded in both size and complexity. Several of his past employers now run numerous machine learning (ML) models across different business functions. This success didn’t happen overnight; in each role, Doddi grappled with growing pains and operational challenges before coming out on top. That experience helped him identify the challenges, and some valuable lessons, within this rapidly growing field, especially when it comes to managing MLOps at scale.
Model Validation
Doddi used to assume that the results he saw in development would hold once a model was deployed to production, but he quickly realized that models behave very differently in these two environments.
Development-environment assumptions don’t always hold true in production. Data scientists make assumptions to constrain their models, but often must wait months after deployment before performance results and analysis can put those assumptions to the test.
Model validation often involves a great deal of trial and error, frustration, and wasted time. Without a structure behind validation testing, data scientists tend to build their own custom tools and scripts, which means model validation is handled differently across teams and business units.
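To make the idea of a shared validation structure concrete, here is a minimal Python sketch of a common acceptance gate that every team could run in place of per-team scripts. The function names and the accuracy threshold are purely illustrative assumptions, not Datatron’s actual API.

```python
# A minimal sketch of a standardized validation gate. It assumes a model is
# any callable that maps a feature value to a prediction, and checks its
# accuracy on a holdout set against a shared threshold.
# MIN_ACCURACY and validate_model are hypothetical names for illustration.

MIN_ACCURACY = 0.80  # hypothetical acceptance threshold shared by all teams

def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def validate_model(predict, holdout_features, holdout_labels,
                   threshold=MIN_ACCURACY):
    """Run one shared validation check instead of per-team custom scripts."""
    preds = [predict(x) for x in holdout_features]
    score = accuracy(preds, holdout_labels)
    return {"accuracy": score, "passed": score >= threshold}

# Example: a toy rule-based "model" validated against a tiny holdout set.
report = validate_model(lambda x: x >= 0,
                        [-2, -1, 1, 3],
                        [False, False, True, True])
```

Because every model passes through the same gate, validation results become comparable across teams and business units.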
Model Monitoring
Even when data scientists use the best algorithms and tools to validate models, the reality is that you can never definitively know whether an approach is right until analysis is complete after the model is deployed to production. It’s an iterative and sometimes frustrating process.
When errors do happen, the damage has often already occurred by the time they are caught. Without an ML model operations and governance solution that provides model validation capabilities, you often experience:
- Offline analysis completed ad hoc with manual A/B testing, canaries and traffic testing
- Inconsistent coordination and communication happening between teams (mobile developers, data science, DevOps)
- A lack of documentation and reporting that could lead to improper audit trails or compliance risk
For these reasons, it is critical to monitor models once they are operationalized and running in production, and to hold them accountable for the results they were expected to deliver.
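As a rough illustration of what holding a production model accountable can look like, the sketch below tracks the mean of a model’s production scores against its validation baseline and raises an alert on drift. The function name and tolerance value are illustrative assumptions, not a description of any specific product.

```python
# A minimal monitoring sketch: compare the average production score of a
# model against the baseline observed during validation, and alert when
# the shift exceeds a tolerance. drift_alert and the tolerance value are
# hypothetical, for illustration only.

def mean(values):
    return sum(values) / len(values)

def drift_alert(baseline_scores, production_scores, tolerance=0.1):
    """Flag the model when production scores drift beyond the tolerance."""
    shift = abs(mean(production_scores) - mean(baseline_scores))
    return {"shift": shift, "alert": shift > tolerance}

# Example: production scores have crept well above the validation baseline.
status = drift_alert(baseline_scores=[0.5, 0.6, 0.55],
                     production_scores=[0.9, 0.85, 0.95])
```

A check like this, run continuously and logged, also produces the audit trail that ad hoc offline analysis tends to miss.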
Model Management
Over time, data scientists learn to improve their assumptions based on the segmented model group and its variables. Model iteration is a continuous improvement process that should be anticipated as an organization refines its assumptions after monitoring model performance in production. Especially for models that require multiple versions, model management capabilities greatly improve the efficiency of machine learning operations.
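To picture what model management capabilities cover, here is a toy registry that records multiple versions of a model along with their production metrics and notes. The class and method names are assumptions made for illustration, not Datatron’s interface.

```python
# A minimal sketch of a model registry supporting multiple versions per
# model, so lessons captured from prior production models can inform
# later iterations. ModelRegistry and its methods are hypothetical names.

class ModelRegistry:
    def __init__(self):
        self._versions = {}  # model name -> list of version records

    def register(self, name, version, metrics, notes=""):
        """Record a new version together with its production metrics."""
        record = {"version": version, "metrics": metrics, "notes": notes}
        self._versions.setdefault(name, []).append(record)
        return record

    def latest(self, name):
        """Return the most recently registered version of a model."""
        return self._versions[name][-1]

    def history(self, name):
        """All versions, so teams can compare assumptions across iterations."""
        return list(self._versions[name])

# Example: two iterations of a hypothetical churn model.
registry = ModelRegistry()
registry.register("churn", "v1", {"auc": 0.71}, notes="baseline features")
registry.register("churn", "v2", {"auc": 0.78}, notes="added tenure buckets")
```

Keeping the history alongside each version is what lets a team see which assumption changes actually moved the metrics.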
Looking back, Doddi believes that with a solution like Datatron in his past jobs, he could have accelerated his data science teams’ learning curve in terms of capturing and applying knowledge from prior models in production for future model development and assumption refinement.
Model Deployment
The velocity at which models are developed often surpasses the speed at which they reach production. With multiple new models and version updates going live, it is challenging to manage models that sit at different stages of the machine learning lifecycle.
Additionally, as the data science team grows and its list of preferred model development tools expands, model deployment can become cumbersome. Inefficiencies, time delays, or technical limitations in deployment can easily become a limiting factor and a risk to an organization’s overall ML program.
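One simple way to reason about many models at different lifecycle stages is a staged promotion pipeline. The sketch below is an illustrative assumption, with made-up stage and model names, not a description of any specific platform.

```python
# A minimal sketch of tracking many models across lifecycle stages.
# STAGES and promote() are hypothetical; real pipelines would add
# validation gates and audit logging at each transition.

STAGES = ["development", "validation", "staging", "production"]

def promote(deployments, model_name):
    """Advance a model to the next lifecycle stage, if one exists."""
    current = deployments[model_name]
    idx = STAGES.index(current)
    if idx + 1 < len(STAGES):
        deployments[model_name] = STAGES[idx + 1]
    return deployments[model_name]

# Example: two models, each at a different point in the lifecycle.
deployments = {"fraud-v3": "validation", "churn-v2": "staging"}
promote(deployments, "fraud-v3")
```

With every model's stage recorded in one place, it becomes straightforward to see at a glance which versions are still in validation and which are live.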
Doddi’s personal experience of managing MLOps at scale inspired him to create Datatron, the Reliable AI platform that simplifies and standardizes machine learning operations and governance for enterprise. He believes it is the fastest and most scalable way to get an organization’s AI projects running in production and to experience the expected ROI of AI/ML.