
AI Governance: Artificial Intelligence Model Governance


What is AI governance? At the federal level (e.g., the United States government), artificial intelligence governance means having a framework in place to ensure machine learning technologies are researched and developed with the goal of making AI adoption fair for everyone. While we’ll focus on the corporate implementation of AI governance in this article, it is essential to understand AI governance from a regulatory perspective as well, since laws at the governmental level influence and shape corporate AI governance protocols.

AI governance deals with issues such as the right to be informed and the violations that may occur when AI technology is misused. The need for AI governance is a direct result of the rise of artificial intelligence use across all industries. The healthcare, banking, transportation, business, and public safety sectors already rely heavily on artificial intelligence.

The primary focus areas of AI governance are justice, data quality, and autonomy. Navigating these areas requires a close look at which sectors are and are not appropriate for artificial intelligence, and what legal structures should be involved. AI governance also addresses who controls and has access to personal data, and the role morals and ethics play when using artificial intelligence.

Ultimately, AI governance determines how much of our daily lives can be shaped and influenced by AI algorithms and who is in control of monitoring it.

In 2016, the Obama Administration announced the White House Future of Artificial Intelligence Initiative. It was a series of public engagement activities, policy analysis, and expert convenings led by the Office of Science and Technology Policy to examine the potential impact of artificial intelligence.

The next five years or so represent a vital juncture in technical and policy advancement concerning the future of AI governance. The decisions that government and the technical community make will steer the development and deployment of machine intelligence and have a distinct impact on how AI technology is created.

In this article, we’re going to focus on AI governance from the corporate perspective and see where we are today with the latest AI governance frameworks.



Why Do We Need AI Governance?

To fully understand why we need AI governance, you must understand the AI lifecycle. The AI lifecycle includes roles performed by people with different specialized skills that, when combined, produce an AI service.

Each role contributes uniquely and relies on different tools. From origination to deployment, four roles are generally involved.


  • Business Owner

    The process starts with a business owner who defines a business goal and requirements to meet the goal. Their request will include the purpose of the AI model or service, how to measure its success, and other constraints such as bias thresholds, appropriate datasets, and levels of explainability and robustness required.


  • Data Scientist

    Working closely with data engineers, the data scientist takes the business owner’s requirements and uses data to train AI models that meet them. The data scientist, an expert in computer science, constructs a model using a machine learning process that includes selecting and transforming the dataset, choosing the best machine learning algorithm, tuning the algorithm’s parameters, and so on. The data scientist’s goal is to produce a model that best satisfies the business owner’s requirements.


  • Model Validator

    The model validator is an independent third party. This role falls within the scope of model risk management and is similar to a testing role in traditional software development. A person or company in this role applies a different dataset to the model and independently measures the metrics defined by the business owner. If the validator approves the model, it can be deployed (a minimal sketch of this check follows this list).


  • AI Operations Engineer

    The AI operations engineer is responsible for deploying and monitoring the model in production to ensure it operates as designed. This includes monitoring the performance metrics defined by the business owner. If some metrics are not meeting expectations, the operations engineer is responsible for informing the appropriate roles.

With so many roles involved in the AI lifecycle, we need AI governance to protect both the companies deploying AI solutions and the consumers using AI technologies across the global community.
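
To make the validator’s check concrete, here is a minimal sketch, assuming a scikit-learn-style binary classifier. The requirement names, threshold values, and variable names are illustrative assumptions, not a specific Datatron workflow.

```python
# A minimal sketch of the model-validator role: score a candidate model on
# holdout data the data scientist never saw, then check the results against
# the business owner's requirements. All thresholds here are hypothetical.

from sklearn.metrics import accuracy_score, roc_auc_score

# Requirements as a business owner might define them (illustrative numbers).
REQUIREMENTS = {"min_accuracy": 0.90, "min_auc": 0.85}

def validate(model, X_holdout, y_holdout) -> bool:
    """Return True only if the model meets every requirement on independent data."""
    preds = model.predict(X_holdout)               # hard labels
    scores = model.predict_proba(X_holdout)[:, 1]  # positive-class scores
    accuracy = accuracy_score(y_holdout, preds)
    auc = roc_auc_score(y_holdout, scores)
    print(f"accuracy={accuracy:.3f}  auc={auc:.3f}")
    return (accuracy >= REQUIREMENTS["min_accuracy"]
            and auc >= REQUIREMENTS["min_auc"])

# If validate(candidate_model, X_holdout, y_holdout) returns True,
# the model can be handed off to the AI operations engineer for deployment.
```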

Who is Responsible for Ensuring AI is Used Ethically?


With the number of roles involved in the AI lifecycle, a question arises: who should be responsible for AI governance?

First, CEOs and senior leadership in corporate institutions are ultimately responsible for ethical AI governance. Second in line is the organization’s board, which is responsible for audits.

The general counsel should have the responsibility for legal and risk aspects. The CFO should be aware of the cost and financial risk elements. The chief data officer (CDO) should take responsibility for maintaining and coordinating an ongoing evolution of the organization’s AI governance.

With data critical to all business functions, customer engagement, products, and supply chains, every leader needs to be knowledgeable about artificial intelligence governance. Without clear responsibilities, no one is accountable.


What are the 4 Key Principles of Responsible AI?


Because AI governance is about protecting both organizations and the customers they serve, defining the key ethical principles of responsible AI is a helpful way to guide policy. In addition, because as a society we’re still in the early stages of AI development, and new AI projects emerge daily, we must learn from past mistakes and adjust accordingly.

Sometimes machine learning algorithms produce unintended results, as seen in these high-profile cases.

Microsoft Tay

After its launch, Microsoft’s Tay Twitter bot quickly gained 50,000 followers and produced more than 100,000 tweets. But after only 24 hours of machine learning, Tay had turned into a PR nightmare, posting offensive tweets, and had to be taken offline.

COMPAS Recidivism Algorithm

COMPAS is software commonly used in the US to guide criminal sentencing. ProPublica exposed it as racially biased: Black defendants were nearly twice as likely as white defendants to be incorrectly flagged as high risk.

Apple Card Bias

Tech magnates David Heinemeier Hansson and Steve Wozniak called out Apple for discrimination when their spouses were offered substantially lower credit limits despite having a shared banking and tax history.

Facebook Campaign Ads

Facebook sparked a public outcry, with critics charging the company with putting profits before people and democracy, after it refused to police political ads on its AI-driven network.

These are just a few examples of many. None of these companies set out with bad intentions, but these cases show the importance of AI governance and active monitoring.

The Four Key Principles of Responsible AI
Have Empathy

In the Microsoft example, it was Tay’s lack of empathy that caused the issue. The bot was not engineered to understand the societal implications of how it was responding. There were no guardrails in place to define the boundaries of what was acceptable and what might be hurtful to the audience interacting with the bot. The natural language processing failure turned into a big headache for the company.

Control Bias

AI algorithms make all decisions based on the data at their disposal. In the case of COMPAS, although the developers had no intention of creating a racist AI, the bias it uncovered reflected the bias that exists in the real-world justice and sentencing system. Companies need to regulate machine learning training data and evaluate its impact to catch bias that might have been unintentionally introduced; one common check is sketched below.
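
As one concrete example of such an evaluation, the sketch below computes the disparate impact ratio, a common fairness check: the rate of favorable outcomes for an unprivileged group divided by the rate for a privileged group. The sample data and the four-fifths (0.8) review threshold are illustrative and not tied to any of the cases above.

```python
# A minimal sketch of one common bias check: the disparate impact ratio.
# The 0.8 ("four-fifths") review threshold is a widely cited rule of thumb,
# not a universal legal standard. The data below is made up for illustration.

import numpy as np

def disparate_impact(y_pred: np.ndarray, group: np.ndarray) -> float:
    """y_pred: 1 = favorable outcome; group: 1 = privileged, 0 = unprivileged."""
    rate_unprivileged = y_pred[group == 0].mean()
    rate_privileged = y_pred[group == 1].mean()
    return rate_unprivileged / rate_privileged

y_pred = np.array([1, 0, 0, 1, 1, 1, 1, 0])  # model decisions
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # group membership
ratio = disparate_impact(y_pred, group)
print(f"disparate impact: {ratio:.2f}")      # 0.67 here: flag for review (< 0.8)
```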

Provide Transparency

With negative publicity, it can be a challenge to convince consumers that AI is being applied responsibly. The Apple Card issue wasn’t really that Apple’s decision-making was biased; it was that Apple’s customer service was unsure how to answer customers’ concerns. Companies must be proactive about certifying their algorithms, clearly communicating their policies on bias, and providing a clear, transparent explanation when a problem occurs.

Establish Accountability

Facebook took a lot of heat for its refusal to hold itself accountable for the quality and accuracy of the information being shown in its ads. Regulation around technology issues is always a few years behind the problem, so regulatory compliance isn’t enough. Companies must proactively establish and hold themselves accountable to high standards to balance the great power AI brings.


How Should AI Governance be Measured?


You can’t manage what you don’t measure. Failing to measure AI models properly puts organizations at risk. So the question becomes: which measures are important? To answer that, an organization must be clear on its definition of AI governance, who in the organization is accountable, and what their responsibilities are.

Many measures or metrics for AI governance will be standardized for all organizations through government regulations and market forces. Organizations also need to consider other measures that will support their strategic direction and how the company operates on a daily basis. Some essential facts and data-driven KPIs organizations should consider include:

Data

Measure the lineage, provenance, and quality of the data.

Security

Track data feeds around model security and usage. Understanding when AI environments are tampered with or improperly used is critical.

Cost/Value

Define and measure KPIs for the cost of data and the value created by the data and algorithm.

Bias

KPIs that can reveal selection bias or measurement bias are a must. Organizations need to monitor bias continuously through direct or derived data. It is also possible to create KPIs that track adherence to the organization’s ethics policies.

Accountability

Get clarity on individual responsibilities: who used the system, when, and for what decisions.

Audit

The continuous collection of data could form the basis for audit trails and allow third parties, or the software itself, to perform continuous audits.

Time

Measurements of time should be a part of all KPIs, allowing for a better understanding of the model over specific time periods.

These are just some of the KPIs for organizations to consider. The sooner measurements are in place, the better they can evolve for a particular organization’s environment and goals and be incorporated into software. AI governance should be, and likely will be, a mandatory part of all AI strategy and machine learning environments.
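
As a sketch of how these measures could be captured consistently, the snippet below records each KPI with a timestamp (Time) and an accountable actor (Accountability), appending it to an append-only log (Audit). The field names and the JSON-lines storage are assumptions for illustration, not a prescribed schema.

```python
# A minimal sketch of a governance KPI record combining the measures above:
# every metric carries a timestamp (Time) and a responsible actor
# (Accountability), and entries accumulate in an append-only log (Audit).
# Field names and the JSON-lines format are illustrative assumptions.

import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class GovernanceMetric:
    model_id: str      # which model/version the measurement describes
    metric_name: str   # e.g., "selection_bias", "data_quality", "cost_usd"
    value: float
    recorded_by: str   # individual or service accountable for the entry
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def append_to_audit_log(metric: GovernanceMetric, path: str = "audit.jsonl") -> None:
    """Append-only log: the raw material for internal or third-party audits."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(metric)) + "\n")

append_to_audit_log(GovernanceMetric(
    model_id="credit-risk-v3", metric_name="selection_bias",
    value=0.04, recorded_by="jane.doe"))
```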


What are the Different Levels of AI Governance?

Level Zero: No AI Governance

At level zero, each AI development team uses its own tools, and there are no documented centralized policies for AI development or deployment. This approach can provide a lot of flexibility and is common for organizations just getting started with AI.

However, it comes with potential risks once models are deployed into production. Because there is no framework, it’s impossible to evaluate risk. Companies working at level zero have a difficult time scaling their AI practices: hiring more data scientists does not lead to a ten-fold increase in AI productivity because there are too many inconsistencies across teams.

Level One: AI Governance Policies Available

Many organizations have already established some level of AI governance but have not developed a fully mature AI governance framework. Most companies sit around level two or three and, with a little help, could reach a fully mature, fully automated AI governance system, saving their organization a substantial amount of resources.

Level Two: Create a Common Set of Metrics for AI Governance

This level builds upon level one by defining a standard set of acceptable metrics and monitoring tools to evaluate models. This brings consistency across all AI teams and enables metrics to be compared across different development lifecycles.

A common monitoring framework is introduced that allows everyone in the organization to interpret the metrics the same way. This reduces risk and improves transparency, making it easier to set policy or troubleshoot reliability issues when they arise. Companies operating at level two usually have a central model validation team upholding the policies laid out by the enterprise during the validation process. Level two is where organizations start to see productivity gains.

Level Three: Enterprise Data & AI Catalog

Level three leverages the metadata from level two to ensure all assets in a model’s lifecycle are available in an enterprise catalog with data quality insights and provenance. With a single data and AI catalog, the enterprise can trace the full lineage of data, models, lifecycle metrics, code pipelines, and more.

Level three also lays the foundation for making connections between the numerous versions of models to enable a full audit, and it provides the CDO/CRO a single view for a comprehensive AI risk assessment. Organizations at this level can clearly articulate AI-related risks and have a comprehensive view of the success of their AI strategy.

Level Four: Automated Validation and Monitoring

Level four introduces automation into the process to automatically capture information from the AI lifecycle. This information significantly reduces the burden on data scientists and other role players, freeing them from manually documenting their actions, measurements, and decisions.

This information also enables model validation teams to make decisions about an AI model and lets them leverage AI-based suggestions. At this level, an enterprise can significantly reduce the operational effort of documenting data and model lifecycles, and it removes the risk of lifecycle mistakes such as metrics, metadata, or versions of data and models being omitted.

Companies at level four start to see an exponential increase in productivity as they’re able to consistently and quickly put AI models into production.
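
One way to picture this level of automation is a decorator that records each lifecycle step as a side effect, so no one documents actions by hand. This is a minimal sketch under assumed names and a local log file, not Datatron’s implementation.

```python
# A minimal sketch of level-four automation: a decorator that automatically
# records what each lifecycle step did and how long it took, instead of
# relying on manual documentation. Log destination and fields are illustrative.

import functools
import json
import time
from datetime import datetime, timezone

def record_lifecycle_step(step_name: str):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.time()
            result = fn(*args, **kwargs)  # run the real lifecycle step
            entry = {
                "step": step_name,
                "function": fn.__name__,
                "started_at": datetime.fromtimestamp(start, timezone.utc).isoformat(),
                "duration_s": round(time.time() - start, 3),
            }
            with open("lifecycle_log.jsonl", "a") as f:
                f.write(json.dumps(entry) + "\n")  # captured without manual effort
            return result
        return wrapper
    return decorator

@record_lifecycle_step("training")
def train_model(dataset_version: str):
    ...  # training code would go here
```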

Level Five: Fully Automated AI Governance

Level five builds on the automation from level four to automatically enforce enterprise-wide policies on AI models. This framework ensures that enterprise policies are enforced consistently throughout every model’s entire lifecycle. At this level, an organization’s AI documentation is produced automatically, with the right level of transparency throughout the organization for regulators and customers.

This level enables the team to prioritize the riskiest areas for manual intervention. Companies here can be highly efficient in their AI strategy while maintaining confidence in their risk exposure.


Why Should You Care About AI Model Governance?


As demonstrated by the cases in this article, AI models often simply do not make the right decisions. Even if a model is trained correctly, over time it will experience drift: its behavior changes as the data it sees in production diverges from the data it was trained on. Because drift is inevitable, you need a way to monitor and capture that change so you can see what changed and adjust the model.
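
One standard way to quantify drift is the Population Stability Index (PSI), which compares a feature’s (or score’s) production distribution to its training distribution. The sketch below is illustrative; the reading thresholds (~0.1 investigate, ~0.2 act) are common rules of thumb, not universal standards.

```python
# A minimal sketch of drift detection with the Population Stability Index.
# Rule-of-thumb reading: PSI < 0.1 stable, 0.1-0.2 investigate, > 0.2 act.

import numpy as np

def psi(train: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    edges = np.histogram_bin_edges(train, bins=bins)  # bin on training data
    expected, _ = np.histogram(train, bins=edges)
    actual, _ = np.histogram(live, bins=edges)
    eps = 1e-6                                        # avoid log(0) / division by 0
    expected = expected / expected.sum() + eps
    actual = actual / actual.sum() + eps
    return float(np.sum((actual - expected) * np.log(actual / expected)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)  # distribution the model trained on
live_scores = rng.normal(0.4, 1.0, 10_000)   # shifted production distribution
print(f"PSI = {psi(train_scores, live_scores):.3f}")  # > 0.2 suggests drift
```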

Although most model governance today focuses on model risk management around compliance, there is an emerging trend toward social responsibility. Even when issues do not violate any government regulation, the fallout from poorly behaving models generates bad press and significantly impacts an organization’s bottom line.

If these companies had been using an advanced production risk management and governance tool like Datatron, they could have avoided costly AI errors, public embarrassment, and financial loss.

What Should You Look For in an AI Governance Solution?

The best AI model governance solutions focus on simplifying the entire process of monitoring and compliance assurance without significantly disrupting existing workflows. They give key stakeholders in the organization visibility into what the models are doing at all times.

The first line of defense starts with the data scientist, whose job is to validate the model using some kind of explainability tool; this is necessary from a development perspective but not sufficient to establish validity in production. The second line of defense covers the production side by monitoring the model’s behavior while it is working. The third and most sophisticated line of defense can produce a detailed audit report showing what the models are doing using only the output data.

Model Governance Checklist

When choosing an AI governance solution, choose a platform that goes beyond just compliance control. All enterprise-level businesses need to be confident about their model inventory, model development, and model management practices.

Make sure your AI monitoring and governance solution offers the following:

  • A dashboard that gives you a real-time overview of your AI models’ health
  • An overall health score that provides intuitive, easy-to-understand measures of all models operating in your system, at the model and BU/LOB level
  • Bias, drift, performance, and anomaly detection
  • Alerts for when a model varies from its pre-defined performance thresholds (a minimal sketch follows this list)
  • Custom metrics that align with your organization’s KPIs
  • An activity log and audit trail
  • An agnostic tool that works with the various popular ML development platforms
  • Integration with existing infrastructure, including databases, software, and more
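
To illustrate the alerting item above, here is a minimal sketch that compares live metrics against pre-defined thresholds and raises an alert on any breach. The threshold values and the notify() stub are hypothetical stand-ins for a real dashboard or paging integration.

```python
# A minimal sketch of threshold-based alerting: compare live model metrics
# against pre-defined bounds and alert on any breach. All values are
# hypothetical; notify() stands in for email/Slack/pager integration.

THRESHOLDS = {
    "accuracy": {"min": 0.88},
    "psi": {"max": 0.20},
    "disparate_impact": {"min": 0.80},
}

def notify(message: str) -> None:
    print(f"[ALERT] {message}")

def check_metrics(model_id: str, metrics: dict) -> None:
    for name, value in metrics.items():
        bounds = THRESHOLDS.get(name, {})
        if "min" in bounds and value < bounds["min"]:
            notify(f"{model_id}: {name}={value:.2f} below min {bounds['min']}")
        if "max" in bounds and value > bounds["max"]:
            notify(f"{model_id}: {name}={value:.2f} above max {bounds['max']}")

check_metrics("credit-risk-v3",
              {"accuracy": 0.85, "psi": 0.27, "disparate_impact": 0.91})
# -> alerts on accuracy (too low) and psi (too high)
```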

