AI Bill of Rights: A Step Toward Truly Ethical AI?

By Andrei Papancea

In early October, the White House published its Blueprint for an AI Bill of Rights, a set of guidelines encouraging tech companies to deploy artificial intelligence more responsibly. The guidelines are not binding, but they were conceived to persuade tech companies to better protect consumers, including by explaining why an automated system is in use.

Many have felt this was long overdue, with AI now effectively endemic across multiple business sectors. The Bill of Rights is, in a sense, a step toward ethical AI, ensuring that people are treated properly and their privacy is respected.

Although my eye isn’t normally trained toward the political space, the AI Bill of Rights caught my attention. Not only does it touch on AI, but it also outlines principles and protections in line with many of the things we believe in as a conversational AI company, and which we help clients think through when designing and building automated conversations.

The document was created “to help guide the design, development, and deployment of artificial intelligence (AI) and other automated systems so that they protect the rights of the American public,” according to the accompanying press release. There are five core protections mentioned within the blueprint, as follows:

  • Safe and Effective Systems: You should be protected from unsafe or ineffective systems.
  • Algorithmic Discrimination Protections: You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.
  • Data Privacy: You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.
  • Notice and Explanation: You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
  • Alternative Options: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

These principles are, in my view, straightforward, non-partisan, and good for the end-user, something that we focus on relentlessly while building out conversations.

Let’s explore a few of these in a little more detail:

Safe and Effective Systems: I strongly agree with this. Customers should be protected from unsafe and ineffective systems. There’s nothing more irritating than interacting with AI that doesn’t do the job it’s supposed to do, or worse, that puts you at risk. Brands offering automation need to ensure that customers get the help they need and aren’t left waiting without assistance.

Alternative Options: We always want to do what’s right for the end-user. Not every customer inquiry should be automated, and businesses must offer alternative paths where the end-user can opt out and get the help they need to resolve their inquiry. This may mean that “containment” metrics take a dip, but the trade-off is increased customer satisfaction, because the end-user is receiving the help they need.

Notice and Explanation: While conversational AI can be designed to be human-like, it is not human, and that is an important distinction. We agree that end-users should be made aware when an automated system is being used; failing to inform the customer that a virtual assistant is in play can deepen mistrust of AI and create a poor customer experience.

But are guidelines alone going to be enough?

As far as I can see, the general response to the AI Bill of Rights has been largely positive, though some have expressed reservations that the guidelines are not legally binding.

Writing in Campaign US, Aaron Kwittken described the proposed AI Bill of Rights as a “toothless tiger,” while Alexandra Reeve Givens, president and CEO of the Center for Democracy and Technology, was quoted as calling the proposals valuable, “but they would be even more effective if they were built on a foundation set up by a comprehensive federal privacy law.”

For what it’s worth, my view is that we are where we are. We now, at least, have a blueprint in place for more effective AI governance, one that we hope will lead more businesses and more consumers to embrace AI, have confidence in it, and recognize the positive contribution it can make. Unquestionably, the blueprint could have gone further in promoting the benefits that AI can deliver and is delivering, but it is nonetheless a welcome starting point. Provided it is just that, a start, and provided that we, as practitioners of AI for the public good, embrace it and make it work, we could find ourselves looking back on this moment as something of a milestone.
