
AI Governance as Part of the Data Science Lifecycle

By Chris Luiz

AI is everywhere. It is embedded in virtually every new product, from toasters to shoes and beyond. Gone are the days when we found AI only in future-forward software and tech products.

AI is being leveraged far beyond the big tech companies. The AI we interact with today is being developed by teams in widely varied companies and industries. Across the broad range of problems that AI is deployed to solve, the potential impact on individuals is correspondingly broad.

In some cases, the impact on consumers is low (e.g., the toaster). In others, potential consequences can be shockingly high. Insurance, finance and lending services, autonomous vehicles, and medicine are areas where we can't afford to have our models go wrong.

There will always be a risk in deploying AI in sensitive application areas, but the risk is not a reason to abandon the use of models – quite the opposite. The places where AI may have the most significant positive impact on humanity are also the higher-risk use cases. Perfect toast is much less interesting than a cure for cancer.

With higher-risk models, data scientists face the responsibility to not only follow best practices for building responsible and ethical AI but also to develop a common understanding with non-technical stakeholders to support overall business needs.

What Do We Want? Streamlined Model Governance

And therein lies the problem. The current state of model governance is best described as an organizational disaster. The data scientists I’ve worked with are not playing fast and loose from an ethical standpoint – and malicious intent isn’t the problem. Data scientists are adept at identifying and raising ethical concerns and ensuring their models meet the organization’s standards. The problem stems from transferring that context from the mind of the data scientist to a place where nontechnical stakeholders can view and understand their existing good work.

Data scientists are highly motivated by solving challenging problems and driving business value, and we must view any discussion of model oversight through this lens. How can we provide the information our business users need without slowing down the model development process and thus compromising our value to the business?

How do we, as data scientists, ensure our models are working as expected?

We know that high-quality data and good corporate policy are necessary ingredients for ethical AI. How can you build effective models without trust in your data or clear expectations at the corporate level?

Real-time model monitoring, consistent processes for model project signoffs, and uniform/discoverable documentation all play a part. In my experience, Data Science teams are doing this work now, but the process is bespoke, disorganized, and time-consuming. The velocity of model development has changed, and verbal approvals in one-off meetings and emails, policy tracking in spreadsheets, and homegrown one-off monitoring systems are no longer sufficient.
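To make the contrast concrete, here is a minimal sketch (not any particular vendor's tooling) of what moving signoff tracking out of spreadsheets and emails and into a machine-readable record might look like. The field names, roles, and thresholds are hypothetical; the point is that approvals become data a pipeline can check automatically before a model ships.

```python
# Hypothetical governance record kept alongside the model code, instead of
# scattering approvals across emails and spreadsheets. All fields are invented
# for illustration.
GOVERNANCE_RECORD = {
    "model": "claims-risk-v3",
    "owner": "data-science-underwriting",
    "approvals": [
        {"role": "legal", "approved_on": "2024-01-12"},
        {"role": "model-risk", "approved_on": "2024-01-15"},
    ],
    "required_roles": ["legal", "model-risk", "business-sponsor"],
    "monitoring": {"max_psi_drift": 0.2, "alert_channel": "#model-alerts"},
}

def missing_signoffs(record: dict) -> list[str]:
    """Return the required roles that have not yet signed off."""
    approved = {a["role"] for a in record["approvals"]}
    return [role for role in record["required_roles"] if role not in approved]

if __name__ == "__main__":
    gaps = missing_signoffs(GOVERNANCE_RECORD)
    if gaps:
        # A CI step could block the deployment here, rather than relying on
        # someone remembering a verbal approval from a one-off meeting.
        print(f"Blocked: missing signoffs from {', '.join(gaps)}")
    else:
        print("All required stakeholders have signed off.")
```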

Without a way to evidence our work, we can't effectively verify that our machine learning decisions are sound. We conduct code reviews; ethical reviews should be equally important.

What We Must Avoid

As data scientists and machine learning engineers, we have a choice: Get ahead of the problem or prepare to have a less optimal solution for AI governance imposed on us.

Several recent articles recommend implementing an AI review board as the answer. Doing so will certainly reduce the number of risky models moving into production – but probably not as intended. The first unintentional effect will happen almost immediately. Data scientists will choose to work on less risky problems, as those will be harder to get through the review board. This will significantly reduce the business value of machine learning for the organization and stifle growth and innovation.

Next, great data scientists will look for work elsewhere. Injecting bureaucratic slowdown into the model development and deployment lifecycle is a sure way to shake up your data science org.

I’ve worked at big companies and navigated enterprise IT security. There must be a better path than advocating for another bureaucratic department of “no.” We should be actively seeking ways to empower our partners across the business, even in the more bureaucratic areas, to say “yes!” instead.

How We Can Get Ahead of AI Governance

A better solution than a top-down, post hoc, and draconian executive review board is a combination of sound governance principles, software products that match the Data Science lifecycle, and strong stakeholder alignment across the governance process. The tooling we adopt must:

  • Seamlessly fit the data science lifecycle
  • Maintain (and preferably increase) the speed of innovation
  • Meet stakeholder needs of today and into the future
  • Provide a self-service experience for nontechnical stakeholders

In operationalizing the above, we are effectively creating a business-level system for continuous innovation. There are staged checks and tests to complete before deployment to production. Each step has been pre-negotiated with stakeholders and built into the Data Science lifecycle, and it is 100% clear to the data scientists what is required to drive business value with their model.
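As an illustration only (the specific gates and thresholds below are invented), those pre-negotiated staged checks might be expressed as a small set of gate functions that run automatically before a model is promoted, so the requirements are explicit to the data scientist rather than discovered in a late review meeting.

```python
from typing import Callable

# Hypothetical pre-deployment gates, each agreed with stakeholders up front.
# A gate takes a dict of model metadata/metrics and returns (passed, reason).
Gate = Callable[[dict], tuple[bool, str]]

def documentation_gate(model: dict) -> tuple[bool, str]:
    ok = bool(model.get("model_card_url"))
    return ok, "model card published" if ok else "model card missing"

def performance_gate(model: dict) -> tuple[bool, str]:
    auc = model.get("validation_auc", 0.0)
    return auc >= 0.75, f"validation AUC = {auc:.2f} (threshold 0.75)"

def fairness_gate(model: dict) -> tuple[bool, str]:
    # Example metric: a demographic parity gap agreed with stakeholders.
    gap = model.get("demographic_parity_gap", 1.0)
    return gap <= 0.05, f"demographic parity gap = {gap:.3f} (threshold 0.05)"

GATES: list[Gate] = [documentation_gate, performance_gate, fairness_gate]

def ready_for_production(model: dict) -> bool:
    """Run every gate, report results, and require all to pass."""
    results = [(gate.__name__, *gate(model)) for gate in GATES]
    for name, passed, reason in results:
        print(f"{'PASS' if passed else 'FAIL'} {name}: {reason}")
    return all(passed for _, passed, _ in results)

if __name__ == "__main__":
    candidate = {
        "model_card_url": "https://wiki.example.com/claims-risk-v3",
        "validation_auc": 0.81,
        "demographic_parity_gap": 0.03,
    }
    print("Promote:", ready_for_production(candidate))
```

Because the gates are code rather than meeting notes, they run the same way for every model, and a failed check points to the exact requirement that still needs attention.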

Including AI governance as part of the Data Science lifecycle is enabling for developers. Ask any data scientist who has spent months on a project, only to have it never see the light of day because of a counterintuitive result and "feelings."

With governance software and principles in place, when a model is ready to move into production, stakeholder questions are already answered and the model is already approved. No more meetings, emails, or last-minute one-off approvals.

Organizations that adopt and operationalize sound AI governance principles (and the software that enables them) for their data scientists will realize a substantial advantage over their competitors: an advantage measured in models in production, cost savings, and incremental revenue.

Remember: The Gross Value of All Models Not in Production Is Zero

Enabling data scientists drives business value, and intelligently operationalized governance can play a part. But much like responsible and ethical AI, it won’t happen by accident.
