Is your AI Fair, Transparent and Accountable?


Fueled by the cloud and the commoditization of computational power, we have opened new frontiers in research, leading to unprecedented gains in the field of artificial intelligence. We have wrapped these scientific advancements into APIs and libraries that deliver enormous power into the hands of full-stack developers. We are able to realize this power in practical applications such as chat bots (NLP/NLU), predictive modeling / machine learning, and image recognition (to name a few). The opportunities for automation as well as autonomous and intelligent systems are growing, but today we are applying the current capabilities for short-term competitive advantage. As the field of AI progresses and these techniques become more commoditized, leveraging them will be table stakes for business. It is easy to focus on the enormous benefits, but we need to ensure our application of the tooling and techniques adheres to the same ethical and moral guidelines we run our businesses on.

I have been socializing the need for guidance in this space. We need to discuss the challenges of artificial intelligence and the importance of minimizing the effect of unintended consequences. The long-term implications of these systems will challenge much of what we know about society. However, there are immediate short-term concerns, where even “innocuous intent” might pose real risks to our brand and our business. There is momentum in this space (guidance on AI), mostly academic, but industry organizations, such as the IEEE, have started to weigh in. The notes that follow come from my research and conversations. This is an attempt to start the conversation and to solicit peers who are fascinated by the opportunities but are also conscientious practitioners.

Please read, comment and share.

Warm up

This brief, eight-minute video covers some of the risks, particularly inclusivity and bias. It’s a solid warm-up.

A Snapshot of Concerns

Bias has crept into a number of applications and algorithms actively in use. Here are four that span gender, racial, and neighborhood bias, and that are fairly representative of systems behaving differently than one would expect during the ideation phase of product creation. Perhaps the impact of the bias could have been avoided; at minimum, users should be made aware of the bias so they can adjust their interpretation appropriately.

Amazon resume screening with gender bias

Google speech recognition has gender bias

COMPAS, a recidivism risk-assessment tool used in judicial sentencing, has racial bias

PredPol, a predictive policing tool, has neighborhood bias (redlining)

Two valuable lessons stem from these instances:

  1. How the behavior experienced differed from the behavior expected when the product was being designed.
  2. How the system learned that unintended behavior, and what could have been done to prevent it (a sketch of one preventive audit follows below).
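
As a concrete illustration of such an audit, below is a minimal sketch that checks whether a screening model’s selection rate differs across a protected attribute, using the “four-fifths” rule of thumb from US hiring guidance. The data, column names, and the 0.8 threshold are assumptions for illustration, not a prescription.

    import pandas as pd

    def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
        # Rate of positive outcomes (e.g., "advance to interview") per group.
        return df.groupby(group_col)[outcome_col].mean()

    def four_fifths_check(rates: pd.Series) -> bool:
        # Rule of thumb: the lowest group's selection rate should be
        # at least 80% of the highest group's rate.
        return rates.min() / rates.max() >= 0.8

    # Hypothetical screening results; 1 = candidate advanced, 0 = rejected.
    results = pd.DataFrame({
        "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
        "advanced": [0, 0, 1, 0, 1, 1, 0, 1],
    })

    rates = selection_rates(results, "gender", "advanced")
    print(rates)                     # F: 0.25, M: 0.75
    print(four_fifths_check(rates))  # False: flag for review before launch

Running a check like this against held-out data before launch is exactly the kind of step that turns “behavior expected” into something testable.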

The Evolving Guidance

The evolving guidance seems to focus on three key areas: Fairness, Accountability, and Transparency. I argue that a fourth component, Data, helps complete the discussion, although the counterargument is that data cuts across the original three. It is important that we coalesce on an opinion, informed by regulation and industry best practices, across these four topic areas:

Fairness

In defining algorithmic “fairness”, http://fairness-measures.org/Pages/Definitions differentiates fairness, bias, and discrimination (a sketch showing how two of these definitions translate into code follows the list):

  • Group Fairness: Also referred to as statistical parity; a requirement that protected groups be treated similarly to the advantaged group or to the population as a whole.
  • Individual Fairness: A requirement that individuals be treated consistently.
  • Comparison between Group & Individual Fairness: Group fairness does not consider individual merit and may result in choosing less-qualified members of a group, whereas individual fairness assumes a similarity metric over individuals for the classification task at hand, which is generally hard to find.
  • User Bias: Appears when different users receive different content based on user attributes that should be protected, such as gender, race, ethnicity, or religion.
  • Content Bias: Refers to biases in the information received by any user; for example, when some aspect is disproportionately represented in a query result or in news feeds.
  • Direct Discrimination: Rules or procedures that explicitly mention minority or disadvantaged groups based on sensitive discriminatory attributes related to group membership.
  • Indirect Discrimination: Rules or procedures that, while not explicitly mentioning discriminatory attributes, could intentionally or unintentionally generate discriminatory decisions. It exists due to the correlation of non-discriminatory attributes with discriminatory ones.
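
These definitions translate directly into measurable checks. The sketch below contrasts the two fairness notions: statistical parity compares positive-prediction rates between groups, while a simple consistency score for individual fairness asks whether similar individuals receive similar predictions. The data, feature values, and the nearest-neighbor similarity metric are all assumptions for illustration.

    import numpy as np

    def statistical_parity_difference(y_pred, protected):
        # Group fairness: difference in positive-prediction rates between
        # the protected group (flag == 1) and everyone else.
        y_pred, protected = np.asarray(y_pred), np.asarray(protected)
        return y_pred[protected == 1].mean() - y_pred[protected == 0].mean()

    def consistency(X, y_pred, k=2):
        # Individual fairness: do the k most similar individuals (here, by
        # Euclidean distance on features) receive similar predictions?
        # A score of 1.0 means perfectly consistent.
        X, y_pred = np.asarray(X, dtype=float), np.asarray(y_pred, dtype=float)
        diffs = []
        for i in range(len(X)):
            distances = np.linalg.norm(X - X[i], axis=1)
            neighbors = np.argsort(distances)[1:k + 1]  # skip self
            diffs.append(abs(y_pred[i] - y_pred[neighbors].mean()))
        return 1.0 - float(np.mean(diffs))

    # Hypothetical scored applicants: two features, a protected flag, model outputs.
    X = [[0.9, 0.2], [0.8, 0.3], [0.4, 0.7], [0.5, 0.6]]
    protected = [1, 0, 1, 0]
    y_pred = [1, 1, 0, 1]

    print(statistical_parity_difference(y_pred, protected))  # -0.5: protected group favored less
    print(consistency(X, y_pred))                            # < 1.0: similar applicants, different outcomes

Toolkits such as IBM’s AI Fairness 360 package these and many more metrics, but even hand-rolled checks like these can run in a test suite and catch regressions.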

Accountability

Neither by omission nor by commission should we accept systematic bias or inappropriate behavior, whether coded into or learned by our systems. Providing guidance requires that we understand where these systems are deployed so that their behavior can be documented and monitored. Educating engineers and product owners is essential to understanding how bias may impact the software we build.

As intelligent systems increase in capacity, we will likely face another question: “Can we hold the software accountable for the decisions it makes?” While the software itself and its author/provider might be parties that eventually hold some accountability, the immediately accountable party might best be the deployer who commissions its usage.

The guiding thought is to place consumers first: by protecting our consumers, we protect our brand. Forbes weighs in (https://www.forbes.com/sites/forbescommunicationscouncil/2018/05/31/accountability-the-one-thing-you-cant-outsource-to-ai/#3b59627b7e18):

it is imperative to remember that we built the machines, and that accountability is the one thing we can’t outsource to AI

Transparency

Transparency requires us to understand and document, in an auditable manner, the intent of our software systems, how they behave (including how they may have been trained), and how these systems are maintained. The HP video draws a line: “If you cannot explain what this program has done, it just should not be released”. It further explains transparency as:

Part of transparency is to be able to explain the logical steps that it went through to arrive at a certain recommendation

An engineer needs to understand the algorithms at play. The quote implies that blind use of black-box algorithms should be avoided. Some organizations have moved away from leveraging black-box algorithms entirely, while some vendors (such as IBM) have pivoted to provide visibility into their black boxes (https://www.computerworlduk.com/data/how-ibm-is-taking-lead-in-fight-against-black-box-algorithms-3684042/):

IBM is open sourcing software intended to help AI developers to see inside their creations via a set of dashboards, and dig into why they make decisions. …It promises real-time insight into algorithmic decision making and detects any suspicion of baked-in bias, even recommending new data parameters which could help mitigate any bias it has detected. Importantly, the insights are presented in dashboards and natural language, “showing which factors weighted the decision in one direction vs. another, the confidence in the recommendation, and the factors behind that confidence,” the vendor explained in a press release. “Also, the records of the model’s accuracy, performance and fairness, and the lineage of the AI systems, are easily traced and recalled for customer service, regulatory or compliance reasons — such as GDPR compliance.”

Explanation is more about the factors behind a decision than about the mechanics of the algorithm, though not to the exclusion of the latter (https://www.technologyreview.com/s/609495/ai-can-be-made-legally-accountable-for-its-decisions/):

“When we talk about an explanation for a decision, we generally mean the reasons or justifications for that particular outcome, rather than a description of the decision-making process in general,” they say.
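
For a simple model, this style of explanation is easy to make concrete. The sketch below uses a hypothetical credit model (the weights and feature names are invented for illustration) to report which factors pushed one particular decision up or down, rather than describing how the algorithm works in general.

    import numpy as np

    # Hypothetical credit model: a linear scorer with weights learned elsewhere.
    feature_names = ["income", "debt_ratio", "years_employed"]
    weights = np.array([0.8, -1.5, 0.4])
    bias = -0.2

    def explain(x):
        # Per-feature contribution to this decision: weight * feature value.
        # Positive contributions pushed toward approval, negative toward denial.
        contributions = weights * np.asarray(x, dtype=float)
        score = contributions.sum() + bias
        for name, c in sorted(zip(feature_names, contributions),
                              key=lambda pair: -abs(pair[1])):
            print(f"{name:>15}: {c:+.2f}")
        print(f"{'decision':>15}: {'approve' if score > 0 else 'deny'} (score {score:+.2f})")

    # One applicant's (standardized) feature values; debt_ratio dominates the denial.
    explain([0.5, 0.9, 1.2])

For genuinely black-box models, the same question is typically answered with model-agnostic techniques such as LIME or SHAP, which fit a local, interpretable approximation around the individual decision being explained.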

Selecting partners who provide visibility into the black box is probably preferable to avoiding effective algorithms altogether, and should be factored into vendor-selection criteria. There is also the consideration of being transparent with end users when they may be interacting with autonomous systems; witness this forthcoming California law (http://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180SB1001):

(a) It shall be unlawful for any person to use a bot to communicate or interact with another person in California online, with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election. A person using a bot shall not be liable under this section if the person discloses that it is a bot.

While some formal governance might help provide fair and equal guidance, the UC Davis Law Review implies that any such governance, if it comes, may not go unchallenged:

Indeed, the unfolding development of a professional ethics of AI, while at one level welcome and even necessary, merits ongoing attention. History is replete with examples of new industries forming ethical codes of conduct, only to have those codes invalidated by the federal government.

Call To Action

  • What are your thoughts?
  • Is this an appropriate time to take action or are we early?
  • Do we even need to take action, whether to form guidelines or otherwise?
  • Is there interest in studying the space?
  • What actions should we take?
  • Should we as engineers recommend guidance and governance?
  • What might that cover?

Selected References

The first two resources (the IEEE guidance and the UC Davis Law Review) are fairly dense but interesting reads. The Humane AI newsletter is a collection of links, many of which are more accessible.

IEEE Guidance

UC Davis Law Review (on AI Policy)

Humane AI Newsletter

A 25-year software industry veteran with a passion for functional programming, architecture, mentoring / team development, XP/agile, and doing the right thing.
