Datatron Blog


Datatron: MLOps Platform – Self-Guided In-Product Demo Series (7 Mini-Videos)

Explore Datatron’s FLEXIBLE MLOps and AI Governance features at your own pace in this self-guided, in-product demo, delivered as a series of 7 short mini-videos. Each feature is accessible via API and can be easily integrated into your existing ML solution to shore up gaps and help your enterprise deliver on the promise of AI.

 
Part I: MLOps Explained – A Perspective
Part II: MLOps Platform Introduction – UI, API, & User Access Controls
Part III: MLOps Platform – Model Catalog (“DATA SCIENTIST” Use Case)
Part IV: MLOps Platform – Model Deployment & Inferencing (“ML ENGINEER” Use Case)
Part V: MLOps Platform – Model Monitoring, Alerts, Routing, & Fallback
Part VI: MLOps Platform – Real-time Inferencing & Post-Processing
Part VII: MLOps Platform – AI Governance Dashboard and Model/BU “Health Score” (“AI EXECUTIVE” Use Case)

Part I: MLOps Explained – A Perspective

MLOps, or “AI Operationalization,” is the process of deploying an artificial intelligence/machine learning (AI/ML) model into production by an ML Engineer (IT) after the model is created in the lab by a Data Scientist.

Beyond deployment, it can also include monitoring for bias and drift, scaling, governance, and more. This process is widely misunderstood by new entrants to the space and young AI programs, which lean on traditional software-development DevOps engineers and processes that simply don’t work for AI.

This is why Gartner and many other sources state that upwards of 85% of AI/ML models never make it into production. Datatron has solved this.

In this video, Datatron VP of Operations & Customer Success, Victor Thu, explains what MLOps is and how it relates to the AI/ML model lifecycle. This is part I of a seven-part series introducing “The Datatron” MLOps & AI Governance platform and how it fits into the AI/ML ecosystem.

While this series does focus on “The Datatron,” it is intended to educate AI/ML practitioners, including those who use open source, on MLOps processes, pitfalls, features, and best practices; Datatron can be integrated into or alongside open-source solutions via API to fill gaps with its keystone features (e.g., Model Catalog, Monitoring, Health Dashboard).

Part II: MLOps Platform Introduction – UI, API, & User Access Controls

MLOps platforms can vary by brand, but they each contain fundamental features that qualify them as “enterprise-grade.”

In this video, you’ll see a User Interface (UI) to understand key MLOps platform functionalities, which are also available via API.

Fundamental to an enterprise AI platform are Role-Based Access Controls (RBAC), which segment data, features, and views by group, be it lines of business (LOBs), business units (BUs), or functional silos.

While primarily a security feature that restricts each user’s access to pre-approved areas within the tool, RBAC also simplifies the view of information, presenting the right information to the right member of the right team.
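
To make the idea concrete, here is a minimal sketch of role-based permission checks. The role names, permission strings, and `can` function below are illustrative assumptions for this post, not Datatron’s actual API.

```python
# Illustrative RBAC sketch -- roles and permission strings are invented,
# not Datatron's actual API.
ROLE_PERMISSIONS = {
    "credit": {"view:credit-models", "deploy:credit-models"},
    "fraud": {"view:fraud-models"},
    "chief_data_office": {"view:credit-models", "view:fraud-models",
                          "view:governance-dashboard"},
}

def can(role: str, permission: str) -> bool:
    """True only if the role has been granted the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(can("fraud", "view:fraud-models"))   # True
print(can("fraud", "view:credit-models"))  # False
```

Because every check funnels through one function, each team (Credit, Fraud, Mortgage, and so on) sees only its own pre-approved slice of the platform.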

In this video, Datatron VP of Operations & Customer Success, Victor Thu, gives a high-level tour of an MLOps platform’s User Interface, explains that features are also accessible via API, and demonstrates granular role-based user access controls for several teams (Credit, Fraud, Mortgage, Chief Data Office, & Global Markets). This is part II of a seven-part series introducing “The Datatron” MLOps & AI Governance platform and how it fits into the AI/ML ecosystem.

Part III: MLOps Platform – Model Catalog (“DATA SCIENTIST” Use Case)

In this “Data Scientist” Use Case, you’ll learn how a Data Scientist leverages an MLOps & AI Data Governance platform to monitor models for bias, drift, & performance anomalies, and how the model “Health” Dashboard lets the Data Scientist catch issues before they become problems and quickly drill down to kick-start root cause analysis.

Starting in the “Model Catalog,” all models are registered with metadata and proper versioning. Because the MLOps platform is model/library/stack agnostic, you can register any model built on any stack (e.g., PyTorch, XGBoost, Jupyter Notebook, TensorFlow, scikit-learn, Seldon Core, H2O.ai, raw Python).

Within the Model Catalog, deep metadata is captured, including who worked on the model, when it was registered, the feature input/output (and optional feedback) contracts, and which training data set was used. While the Data Scientist may not be as concerned with audit reports as AI executives are, the “Reference Dataset” (i.e., the training data) is critical for satisfying compliance requirements when the Governance, Risk, & Compliance (GRC) team comes a’knocking.

With hundreds to thousands of models, management becomes cumbersome. Early AI programs employ manual tools (e.g., Google Sheets) but eventually realize that approach is unsustainable. In the video, you’ll see how to leverage tags and search to quickly find a specific model, even when working with thousands of models. Just another benefit of an MLOps platform.
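
As a rough sketch of what a catalog entry with metadata, a reference-dataset pointer, and tag-based search might look like under the hood (the record fields, the in-memory `catalog`, and the helper functions are assumptions for illustration, not Datatron’s actual schema):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelRecord:
    """Illustrative catalog entry; fields mirror the metadata described above."""
    name: str
    version: str
    framework: str
    owner: str
    reference_dataset: str  # pointer to the training data, kept for GRC audits
    tags: list = field(default_factory=list)
    registered_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

catalog = {}  # "name:version" -> ModelRecord

def register(record: ModelRecord) -> str:
    key = f"{record.name}:{record.version}"
    catalog[key] = record
    return key

def find_by_tag(tag: str) -> list:
    """Tag search: the practical way to find one model among thousands."""
    return [r for r in catalog.values() if tag in r.tags]

register(ModelRecord("credit-default", "2.1.0", "xgboost",
                     "ds-team@bank.example",
                     "s3://datasets/credit/train.parquet",
                     tags=["credit", "risk"]))
register(ModelRecord("fraud-detect", "1.0.0", "pytorch",
                     "ds-team@bank.example",
                     "s3://datasets/fraud/train.parquet",
                     tags=["fraud"]))

print([r.name for r in find_by_tag("credit")])  # ['credit-default']
```

Proper versioning falls out of keying entries by name *and* version, so re-registering an updated model never overwrites its audit trail.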

In this video, Datatron VP of Operations & Customer Success, Victor Thu, channels his inner Data Scientist to demonstrate that workflow within an MLOps and AI governance platform. This is part III of a seven-part series introducing “The Datatron” MLOps & AI Governance platform and how it fits into the AI/ML ecosystem.



Part IV: MLOps Platform – Model Deployment & Inferencing (“ML ENGINEER” Use Case)

In this “ML Engineer” (Machine Learning Engineer) Use Case, you’ll learn how ML Engineers leverage an MLOps & AI Data Governance platform to deploy models quickly via real-time inferencing (i.e., an API) or batch-mode processing.

Starting with the real-time inferencing workspace (i.e., the “Gateway”), you’ll see an abstraction of your models. In general, the Data Scientist must manually update the “hash key” every time the model is updated. This platform, however, creates a single static hash key for the application team, so the Data Scientist never has to remember to update the endpoint, a common point of failure in a traditional MLOps workflow.
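
The static-key idea can be sketched as a tiny indirection layer: the application team calls one fixed key, and the gateway resolves it to whatever model version is currently published. Everything below (the `gateway` dict, `publish`, `score`) is a hypothetical illustration, not Datatron’s actual API.

```python
# Hypothetical sketch of a gateway abstraction; not Datatron's actual API.
gateway = {}  # static hash key -> currently published model version

def publish(static_key: str, model_version: str) -> None:
    """Point the static key at a new model version; callers are unaffected."""
    gateway[static_key] = model_version

def score(static_key: str, payload: dict) -> dict:
    """Resolve the static key to the live version at call time."""
    version = gateway[static_key]
    # A real platform would dispatch to the deployed model container here.
    return {"model_version": version, "input": payload}

publish("credit-risk", "v1")
publish("credit-risk", "v2")  # Data Scientist ships an update...
print(score("credit-risk", {"fico": 700})["model_version"])  # v2
```

The application team’s integration never changes; only the mapping behind the key does, which is exactly what removes the “forgot to update the endpoint” failure mode.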

When comparing a new model’s performance to an existing model’s, the ML Engineer can use A/B-testing “Challenger” mode or “Shadow” mode to run the models side by side and determine whether the new model is worth promoting to production, before exposing your program to the risk associated with an unproven model.
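
The two comparison modes boil down to a routing decision, which can be sketched as follows. The `route` function and its arguments are illustrative assumptions, not Datatron’s actual API.

```python
import random

def route(payload, champion, challenger, mode="shadow", split=0.1, log=None):
    """Champion/challenger routing sketch (illustrative only).

    "shadow": the challenger scores every request, but only for logging;
    "challenger" (A/B): a `split` fraction of traffic gets the challenger.
    """
    if mode == "shadow":
        if log is not None:
            log.append(challenger(payload))  # recorded, never returned
        return champion(payload)
    if mode == "challenger" and random.random() < split:
        return challenger(payload)
    return champion(payload)

log = []
champion = lambda p: {"served_by": "champion", "score": 0.42}
challenger = lambda p: {"served_by": "challenger", "score": 0.57}
result = route({"x": 1}, champion, challenger, mode="shadow", log=log)
# In shadow mode the caller always gets the champion's answer; the
# challenger's output is only collected for side-by-side comparison.
```

Shadow mode is the zero-risk option (no live traffic is ever answered by the unproven model), while Challenger/A-B mode accepts a small, controlled exposure in exchange for real user feedback.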

In this video, Datatron VP of Operations & Customer Success, Victor Thu, walks through an ML Engineer’s workflow to deploy a model within an MLOps and AI governance platform. This is part IV of a seven-part series introducing “The Datatron” MLOps & AI Governance platform and how it fits into the AI/ML ecosystem.

Part V: MLOps Platform – Model Monitoring, Alerts, Routing, & Fallback

MLOps & AI Data Governance platforms are critical when it comes to monitoring your AI/ML models and sending alerts for issues such as bias or data drift, and they should integrate with the popular alerting tools in the marketplace (e.g., PagerDuty).

Routing and “Fallback” are additional features Datatron employs to ensure your models operate in alignment with your SLA: if one model performs beneath your defined SLA, it will automatically “fall back” to another model.
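
The fallback pattern can be sketched as a wrapper around the primary model’s call path. The function below treats latency as the SLA metric purely for illustration; the names and the SLA check are assumptions, not Datatron’s actual implementation.

```python
import time

def score_with_fallback(payload, primary, fallback, sla_ms=200.0):
    """Illustrative sketch: if the primary model errors out or breaches
    the latency SLA, automatically fall back to a secondary model."""
    start = time.perf_counter()
    try:
        result = primary(payload)
        if (time.perf_counter() - start) * 1000.0 <= sla_ms:
            return result, "primary"
    except Exception:
        pass  # a real platform would also fire an alert here (e.g., PagerDuty)
    return fallback(payload), "fallback"

def broken_model(payload):
    raise RuntimeError("model container unavailable")

healthy_model = lambda payload: {"score": 0.9}
result, served_by = score_with_fallback({}, broken_model, healthy_model)
# served_by == "fallback": the caller still gets an answer despite the failure
```

The point of the pattern is that the calling application never sees the outage; it only ever sees a scored response, from whichever model met the SLA.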

In this video, VP of Operations & Customer Success, Victor Thu, highlights some of the monitoring, alerts, and routing features within the Datatron MLOps and AI governance platform. This is part V of a seven-part series introducing “The Datatron” MLOps & AI Governance platform and how it fits into the AI/ML ecosystem.

Part VI: MLOps Platform – Real-time Inferencing & Post-Processing

There is no single way to employ your AI/ML model. Real-time inferencing takes data input as it happens in production and scores it immediately (e.g., determining the ideal route for a pizza delivery). Post-processing, by contrast, can score a stored data set “offline,” when resources are more readily available (after hours), say for a pharmaceutical company scoring millions of molecular permutations as part of a drug-discovery process.

A robust MLOps platform will support both scenarios and multiple workflows, while allowing you to pick different models from your model catalog against which to apply your data.
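
The offline half of that story is essentially a scheduled loop over a stored data set, applying whichever catalog model you choose. A minimal sketch (the function and the stand-in model are illustrative assumptions, not Datatron’s actual batch API):

```python
def batch_score(records, model, chunk_size=1000):
    """Offline batch-scoring sketch: walk a stored data set in chunks
    and collect scores, e.g. on an after-hours schedule."""
    scores = []
    for i in range(0, len(records), chunk_size):
        scores.extend(model(r) for r in records[i:i + chunk_size])
    return scores

double = lambda x: x * 2  # stand-in for a model picked from the catalog
out = batch_score(list(range(5)), double, chunk_size=2)
print(out)  # [0, 2, 4, 6, 8]
```

Chunking matters at the scale described above (millions of molecular permutations): it keeps memory bounded and gives the scheduler natural checkpoints between chunks.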

In this video, VP of Operations & Customer Success, Victor Thu, once again demonstrates the FLEXIBILITY of “The Datatron” MLOps platform, which not only supports real-time inferencing and offline batch processing but also lets ML practitioners create their own custom workflows. This is part VI of a seven-part series introducing “The Datatron” MLOps & AI Governance platform and how it fits into the AI/ML ecosystem.

Part VII: MLOps Platform – AI Governance Dashboard and Model/BU “Health Score” (“AI EXECUTIVE” Use Case)

In this “AI Business Executive” Use Case, you’ll learn how an AI/ML Executive leverages an MLOps & AI Data Governance platform to see the overall “Health” of their entire AI program, as well as which models demonstrate bias, drift, & performance anomalies.

The Datatron AI model “Health” Dashboard is a mission-critical model risk management tool for Data & Analytics Heads, Center of Excellence leads, and Governance, Risk, & Compliance (GRC) executives to mitigate fines for compliance issues.

In this video, Datatron VP of Operations & Customer Success, Victor Thu, walks you through the “Health” dashboard available only on “The Datatron” MLOps and AI governance platform. This is part VII of a seven-part series introducing “The Datatron” MLOps & AI Governance platform and how it fits into the AI/ML ecosystem.

 

whitepaper

Datatron 3.0 Product Release – Enterprise Feature Enhancements

Streamlined features that improve operational workflows, enforce enterprise-grade security, and simplify troubleshooting.

Get Whitepaper

whitepaper

Datatron 3.0 Product Release – Simplified Kubernetes Management

Eliminate the complexities of Kubernetes management and deploy new virtual private cloud environments in just a few clicks.

Get Whitepaper

whitepaper

Datatron 3.0 Product Release – JupyterHub Integration

Datatron continues to lead the way with simplifying data scientist workflows and delivering value from AI/ML with the new JupyterHub integration as part of the “Datatron 3.0” product release.

Get Whitepaper

whitepaper

Success Story: Global Bank Monitors 1,000s of Models On Datatron

A top global bank was looking for an AI Governance platform and discovered so much more. With Datatron, executives can now easily monitor the “Health” of thousands of models, data scientists decreased the time required to identify issues with models and uncover the root cause by 65%, and each BU decreased their audit reporting time by 65%.

Get Whitepaper

whitepaper

Success Story: Domino’s 10x Model Deployment Velocity

Domino’s was looking for an AI Governance platform and discovered so much more. With Datatron, Domino’s accelerated model deployment 10x and achieved 80% more risk-free model deployments, all while giving executives a global view of models and helping them understand the KPIs achieved, increasing ROI.

Get Whitepaper

whitepaper

5 Reasons Your AI/ML Models are Stuck in the Lab

AI/ML Executive who needs more ROI from AI/ML? Data Scientist who wants to get more models into production? ML DevOps Engineer/IT who wants an easier way to manage multiple models? Learn how enterprises with mature AI/ML programs overcome obstacles to operationalize more models with greater ease and less manpower.

Get Whitepaper