
Forthcoming AI Regulation Makes Data Management Imperative

By Ramprakash Ramamoorthy

Although algorithmic decision-making has become increasingly vital for many businesses, there are growing concerns related to transparency and fairness. To put it mildly, the concern is warranted. Not only has there been documentation of racial bias in facial recognition systems, but algorithmic decision-making has also played a role in denying minorities home loans, prioritizing men during hiring, and discriminating against the elderly. The adage “garbage in, garbage out” is as relevant as ever, but forthcoming AI regulation is raising the stakes for corporate Data Management. 

Given that AI is being used to make decisions related to self-driving cars, cancer diagnoses, loan approvals, and insurance underwriting, it is no surprise that AI regulation is coming down the pike. Wary of stifling innovation, the United States will likely drag its feet, leaving the European Union to lead the way.

AI regulation is coming. The White House Office of Science and Technology Policy published an Algorithmic Bill of Rights in November; however, in all likelihood AI regulation will come from the EU. Just as the EU’s GDPR set the bar for data privacy across the globe, its recent Proposal for a Regulation on Artificial Intelligence (AI Act) will likely do the same for algorithmic decision-making. The AI Act is not expected to be finalized and implemented until 2023; nevertheless, businesses should take a proactive approach to how they handle the data in their AI systems.

The AI Act 

Just like data privacy legislation, AI regulation is ultimately about human rights and the respect for human autonomy.  

The AI Act takes a risk-based approach: AI systems will be classified as unacceptable risk, high risk, limited risk, or minimal/no risk. “Unacceptable” AI systems are considered a danger to the public, such as the use of biometric identification by police in public spaces, and will be prohibited outright. “High-risk” systems will be allowed to operate on a case-by-case basis, with the caveat that they meet certain requirements. “Limited-risk” systems will be subject to transparency obligations, meaning that users must be notified whenever they are interacting with an AI. And lastly, systems deemed “minimal/no risk” will be permitted to function without restriction.

Much like GDPR, the proposed fines are consequential: corporate violations can result in penalties of up to 30 million euros or 6% of global annual turnover – whichever is greater.

Maximizing Transparency 

The AI Act is intended to not only minimize harm, but also to maximize transparency.   

For many organizations, the proposed AI restrictions should not come as a surprise. After all, GDPR (implemented May 25, 2018) and CPRA (takes effect January 1, 2023) already provide users with “the right … to obtain an explanation of the decision reached” by algorithms. Although open to legal interpretation, such language suggests that legislators are moving toward an approach that prioritizes algorithmic accountability. Put simply, all users, employees, customers, and job applicants should have the right to an explanation as to why AI has made a given decision.  

That said, when an AI system has thousands of data inputs, such as Ant Group’s credit-risk models, it can be rather difficult to explain why an individual’s loan was denied. Moreover, transparency can be inherently problematic for companies that view their AI systems as confidential or industry trade secrets. Nevertheless, despite the challenges for legislators and regulators, the fact remains: AI regulation is coming, and systems will eventually need to be explainable.  
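To make the idea of a per-decision explanation concrete, here is a minimal sketch for a simple linear loan-scoring model, where each feature’s contribution to a single decision can be read straight from the coefficients. The feature names are hypothetical, and a model with thousands of inputs will generally need dedicated explainability tooling rather than this shortcut.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def explain_decision(model: LogisticRegression, x: np.ndarray, feature_names: list) -> list:
    """Rank features by how strongly they pushed this one applicant's score."""
    contributions = model.coef_[0] * x  # per-feature contribution to the log-odds
    ranked = sorted(zip(feature_names, contributions),
                    key=lambda pair: abs(pair[1]), reverse=True)
    return ranked[:3]  # the three factors that most influenced the decision
```

An explanation along the lines of “your debt-to-income ratio and recent missed payments drove the denial” is far easier to defend to a regulator than a bare score.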

Getting User Consent, Conducting Data Reviews, and Keeping PII to a Minimum 

Companies using algorithmic decision-making should take a proactive approach, ensuring that their systems are transparent, explainable, and auditable. Companies should not only inform users whenever their data is used in algorithmic decision-making but also obtain their consent. Once consent is given, all user data feeding machine learning-based algorithms needs to be protected and anonymized.
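As a rough illustration of what protecting and anonymizing that data can look like in practice, the sketch below pseudonymizes a direct identifier with a salted one-way hash and drops fields the model never needs. The column names and the salt handling are assumptions for the example, not a prescribed scheme.

```python
import hashlib
import pandas as pd

# Hypothetical salt; in practice it would come from a secrets manager,
# never be hard-coded next to the data.
SALT = "replace-with-a-managed-secret"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()

def prepare_training_frame(df: pd.DataFrame) -> pd.DataFrame:
    """Pseudonymize identifiers and drop columns the model has no use for."""
    out = df.copy()
    out["user_id"] = out["user_id"].astype(str).map(pseudonymize)
    return out.drop(columns=["email", "full_name"], errors="ignore")
```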

AI developers should treat data much like they would treat code in a version control system. As developers integrate and deploy AI models into production, they should conduct frequent data reviews to ensure the models are accurate and error-free. 
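One lightweight way to make data reviewable in the same spirit as a code review is to fingerprint every training dataset and log who reviewed it against which model version. The manifest format below is an assumption for illustration, not an established standard.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def dataset_fingerprint(path: str) -> str:
    """Content hash of a dataset file, so any silent change is detectable."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_data_review(dataset_path: str, model_version: str, reviewer: str) -> None:
    """Append a data-review entry to a simple JSON-lines audit log."""
    entry = {
        "dataset": dataset_path,
        "sha256": dataset_fingerprint(dataset_path),
        "model_version": model_version,
        "reviewer": reviewer,
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
    }
    with Path("data_reviews.jsonl").open("a") as log:
        log.write(json.dumps(entry) + "\n")
```

If the fingerprint recorded at review time no longer matches the data a model was trained on, the deployment can be held back until someone signs off on the change.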

Unless personally identifiable information (PII) is absolutely necessary, AI developers should keep it out of the system. If an AI model can operate well without PII, it is best to remove it, ensuring that decisions are not biased by data points such as gender, race, or zip code.
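One simple way to test whether a model can, in fact, operate well without PII is to score it with and without the sensitive columns and compare the results. The snippet below assumes a pandas feature matrix and hypothetical column names.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical sensitive columns; adjust to the dataset at hand.
SENSITIVE = ["gender", "race", "zip_code"]

def score_with_and_without(X, y):
    """Compare cross-validated accuracy with and without sensitive features."""
    model = RandomForestClassifier(random_state=0)
    full = cross_val_score(model, X, y, cv=5).mean()
    reduced = cross_val_score(model, X.drop(columns=SENSITIVE), y, cv=5).mean()
    return full, reduced
```

If the reduced score is close to the full score, the sensitive columns can be dropped at little cost to accuracy; if it is not, that gap itself is worth investigating, since it may mean the model leans on those attributes.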

Frequently Audit AI Systems 

Additionally, as much as possible, efforts should be made to minimize harm to users. This can be done by frequently auditing AI models to ensure that the decisions are equitable, unbiased, and accurate.  

Frequent audits are vital. Although the initial version of an AI system might be well tested for bias, the system can begin to behave differently as new data flows through it. Measures to identify and mitigate concept drift should be put in place from the moment the model launches. Of course, it is important that AI developers track model performance without compromising the privacy of users.
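As an illustration of what a recurring audit might check, the sketch below computes a simple demographic parity gap on recent decisions and a crude drift signal by comparing recent feature distributions against the data the model launched with. The thresholds are placeholders, not regulatory guidance, and the column names are assumptions.

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, decision_col: str) -> float:
    """Largest difference in positive-decision rate between any two groups."""
    rates = df.groupby(group_col)[decision_col].mean()
    return float(rates.max() - rates.min())

def drift_score(baseline: pd.DataFrame, recent: pd.DataFrame, features) -> float:
    """Crude drift signal: average shift in feature means, in baseline standard deviations."""
    shifts = []
    for col in features:
        std = baseline[col].std() or 1.0
        shifts.append(abs(recent[col].mean() - baseline[col].mean()) / std)
    return sum(shifts) / len(shifts)

def audit(baseline, recent, features, group_col="group", decision_col="approved"):
    """Flag the model for human review if the fairness or drift checks trip."""
    gap = demographic_parity_gap(recent, group_col, decision_col)
    drift = drift_score(baseline, recent, features)
    # Placeholder thresholds; real limits belong to policy and legal review.
    return {"parity_gap": gap, "drift": drift,
            "needs_review": gap > 0.10 or drift > 0.25}
```

Both checks run on aggregates rather than individual records, which helps keep the monitoring itself from becoming another privacy risk.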

It’s best to audit one’s systems today, before AI regulation takes effect; that way, there won’t be a need to revamp processes down the line. Depending on where an organization does business, failure to protect user data can result in reputational damage, expensive fines, and class action lawsuits – not to mention that protecting it is simply the right thing to do.
