“After several years of trying—and ultimately failing—to live up to boundless hype, artificial intelligence (AI) was, in 2021, deeply humbled in the court of public opinion. Fortunately, data scientists are waking up to the fragility of AI’s decisioning powers and have created compensating controls. These include Auditable AI and Humble AI, and I predict we will see them formally join Explainable AI and Ethical AI under the umbrella of Responsible AI, the gold standard that should be used in developing artificial intelligence systems that are trustworthy and safe.
“Let’s take a look at the concepts of Auditable AI and Humble AI, which I predict will become widely used in 2022.
Artificial Intelligence Is Now a Real Problem
“But first, let’s recap a couple of points about what happened with AI in 2021. AI adoption skyrocketed over the pandemic; one PwC survey found that 86% of companies said AI was becoming a “mainstream technology” at their organization in 2021. Some of AI’s ills became painfully apparent in 2021, too, in endless negative news articles, podcasts and conversations. Futurist and my Twitter contemporary Bernard Marr put together a concise discussion of the negative impacts of AI here, including:
- AI bias
- Loss of certain jobs
- A shift in the human experience
- Accelerated hacking
- AI terrorism
The Way Forward: Ethics by Design
“On September 15, 2021, the world took a giant step closer to achieving Responsible AI with the release of IEEE 7000, the first standard that, as described by Politico AI, “will show tech companies in a very practical way to build technology that will bring human and social value, instead of just money.”
Sarah Spiekermann of the Vienna University of Economics and Business, who was involved in the IEEE 7000 effort, told Politico that the standard will establish that a “good” AI is one that gives companies control over:
- The quality of the data used in the AI system
- The selection processes feeding the AI
- Algorithm design
- The evolution of the AI’s logic
- “The best available techniques for a sufficient level of transparency of how the AI is learning and reaching its conclusions,” to quote Ms. Spiekermann.
“I believe that transparency and ethics by design are the only way forward. I’ve written and spoken extensively about this belief since 2020, which I predicted would be the year that AI grows up. With IEEE 7000, that goal now looks achievable in the foreseeable future.
“Auditable AI is artificial intelligence that produces an audit trail of every detail about itself, covering all of the points referenced above: the data, variables, transformations and model processes, including machine learning, algorithm design and model logic. Auditable AI must be backed by firmly established and adhered-to model development governance frameworks, such as the blockchain-based frameworks FICO uses for this purpose. Auditable AI essentially addresses the transparency requirement of the IEEE 7000 standard.
“Here, I predict that aspirations to achieve Responsible AI will move beyond hollow pledges, to model development governance steps that are formal, programmatically transcribed and immutably backed by blockchain technology. This talk and this article contain more of my thoughts on how blockchain can be used to achieve Auditable AI and, ultimately, Responsible AI.
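To make the idea of an immutable, blockchain-backed audit trail concrete, here is a minimal sketch of what such a model-development record could look like. All class names, event types and field names below are illustrative assumptions, not FICO’s actual framework; the key property shown is that each entry embeds the hash of its predecessor, so later tampering is detectable.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    """Append-only, hash-chained log of model-development events.

    Each entry embeds the hash of its predecessor, so any later
    tampering breaks the chain -- the same tamper-evidence property a
    blockchain backing provides at larger scale."""

    def __init__(self):
        self.entries = []

    def record(self, event: str, detail: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "detail": detail,
            "prev_hash": prev_hash,
        }
        # Hash the entry's own content plus the previous hash.
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute every hash; any edit to any entry returns False."""
        prev_hash = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev_hash:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev_hash = e["hash"]
        return True

# Hypothetical development events for a fraud model.
trail = AuditTrail()
trail.record("data_selected", {"dataset": "transactions_v3", "rows": 1_200_000})
trail.record("variables_chosen", {"features": ["txn_amount", "merchant_category"]})
trail.record("model_trained", {"algorithm": "gradient_boosting", "auc": 0.91})
print(trail.verify())  # True -- chain intact
```

Because every entry is chained to the last, an auditor can replay the record end to end; a production framework would anchor these hashes on a blockchain rather than keep them in memory.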
“Importantly, Auditable AI provides a proof of work demonstrating adherence to the model development standards of Responsible AI, and directly supports its companion function, Humble AI.
“Humble AI is artificial intelligence that knows when it is not sure of the right answer. It addresses the uncertainty inherent in AI decisioning, using uncertainty measures (such as a numeric uncertainty score) to quantify the model’s confidence in its own decisioning, with granularity down to the individual decision and data element.
“Put another way, Humble AI gives us confidence in each individual score a model produces, and in the outcome for the consumer that score impacts, not just in the model’s aggregate accuracy across an entire corpus of consumers.
“I have blogged about Humble AI in the past; now I see a movement away from entirely abandoning complex models toward being more surgical about when to use a complex model and when to use a less complex model with better supporting evidence. I believe that in 2022, model outputs will be dissected and assessed per transaction, with assigned confidence scores. This will give humans explicit direction on how, when and whether to use a specific model’s output in decisioning, particularly when analysis reveals the model is more uncertain of decisions made about a class of transactions or customers.
“Whether a model is appropriate will vary across scored examples: for some customers the model is very certain and has strong statistical confidence; for others it is not. This is a critical component of Responsible AI: when model confidence is insufficient, those customers can be scored by a fallback model with better statistical confidence, and thus receive a fairer outcome. This aligns directly with IEEE 7000 and reflects the realization that an AI model’s decisioning may be more or less certain for different individuals. Responsible AI requires that this uncertainty be reflected in decisioning.
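The per-decision switch described above can be sketched in a few lines. This is a toy illustration under stated assumptions: the “ensemble spread” used here is just one of many possible uncertainty measures, and the models, threshold and field names are all hypothetical, not any production system.

```python
from dataclasses import dataclass
from statistics import mean, pstdev

# Toy "complex model": an ensemble of slightly different scorers.
# The spread of their outputs serves as a per-decision uncertainty
# measure (one possible choice among many).
ensemble = [
    lambda txn: 0.80 * txn["amount_ratio"],
    lambda txn: 0.78 * txn["amount_ratio"],
    lambda txn: 0.83 * txn["amount_ratio"],
]

def fallback_model(txn):
    # Simpler, scorecard-style rule with better supporting evidence.
    return 0.5 if txn["amount_ratio"] > 1.0 else 0.1

@dataclass
class Decision:
    score: float
    model_used: str
    uncertainty: float

def humble_score(txn, threshold=0.05):
    """Use the complex model only when it is confident about THIS
    transaction; otherwise fall back to the simpler model."""
    members = [m(txn) for m in ensemble]
    u = pstdev(members)  # per-decision uncertainty score
    if u > threshold:
        return Decision(fallback_model(txn), "fallback", u)
    return Decision(mean(members), "primary", u)

# Ensemble disagreement grows with extreme inputs, so unusual
# customers are routed to the fallback model.
print(humble_score({"amount_ratio": 1.0}).model_used)  # primary
print(humble_score({"amount_ratio": 3.0}).model_used)  # fallback
```

The design point is that the routing decision is made per scored example, not per model: the same ensemble serves one customer and defers on another, and each `Decision` records which model was used and how uncertain it was, which is exactly the material an Auditable AI record would capture.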
A Fluid, Complementary Relationship
“Auditable AI and Humble AI are symbiotic: when constructed properly, Auditable AI establishes the very criteria and uncertainty measures that switch decisioning to fallback models, a move consistent with the Humble AI approach. Criteria for using Humble AI models and their fallbacks should be established during model development, based on the strengths, weaknesses and transparency derived during the model build and transcribed in the Auditable AI record.
“Ultimately, Auditable AI and Humble AI will help build more trust in the decisions generated by artificial intelligence systems—the ultimate catalyst to help this humbled technology bounce back to achieve mainstream trust.”
Scott Zoldi, Chief Analytics Officer, FICO