Explainable AI: trust, transparency, accountability

Artificial intelligence (AI) interprets medical outcomes, predicts financial transactions, and drives cars and trucks. Yet for most of us, AI remains a mystery of complex algorithmic calculations. We can’t help wondering: is it trustworthy?

Industry is asking the same question, says Bipin Thomas, founder and president of ICURO, a Santa Clara-based company that provides an integrated hardware and software AI solution. The lack of transparency has slowed adoption.

Thomas, who is spearheading expanded AI offerings at Extension, is teaching a new one-day introductory course, AI-Led Enterprise Transformation: Technologies and Use Cases, which offers an overview of AI concepts, technologies, applications, and what lies ahead.

Slowed AI adoption

Business leaders want to know: How much accuracy can AI deliver? How can we monitor it? Can we audit it?

“Business needs to see inside the black box,” Thomas says. “They need to be 100 percent sure that the data and combinations training the AI system are the right ones.”

In a recent white paper on challenges to AI adoption, PricewaterhouseCoopers describes AI as “a transformational $15 trillion opportunity” whose adoption is hampered by a lack of visibility into the data behind it. The black box needs to be a glass box. “How many people would trust an AI algorithm giving a diagnosis rather than a doctor without having some form of clarity over how the algorithm came up with the conclusion?” PwC asks.

The question people are asking these days is: how do we build a window into an evolving AI system?

Explainable AI (XAI)

“As industries migrate to automation, this becomes an important framework,” Thomas says. “To be explainable, it has to be interpretable, transparent, and auditable. It’s very important that decisions can be audited. To be audited, it has to be transparent.”

Thomas plans to integrate deep reasoning skills into the new UCSC Extension curriculum along with real industry use cases for machine learning and deep learning.

The opportunity for engineers is this emerging field of AI explainability—the creation of machine learning applications that can be effectively understood.

An AI system contains many complex algorithms and is constantly being fed more data. Engineers need to be able to show what data is being used to train the system and to explain the whys.
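As a rough illustration of what “explaining the whys” can look like in practice (a minimal sketch, not a technique described in the article or the course), the snippet below trains an ordinary model and then reports which input features most influence its predictions using permutation importance. The dataset, model choice, and scoring setup are assumptions made purely for illustration.

# Illustrative sketch only: surface which features drive a model's predictions.
# The dataset, model, and parameters are assumptions, not taken from the article.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a small, well-known medical dataset and hold out a test split.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a conventional "black box" ensemble model.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# held-out accuracy drops. Large drops indicate features the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the most influential features -- a first step toward answering "why".
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
for name, score in ranked[:5]:
    print(f"{name}: mean accuracy drop {score:.3f}")

Permutation importance is only one of many possible approaches; the point is that a model’s behavior can be summarized in terms a domain expert can inspect and audit.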

Thomas cites three factors as driving the adoption of AI across industry:

  • Increased trust,
  • Increased accountability, and
  • Increased efficiency.

“A framework should have all these elements. If you bring that to the industry users, then they can see what you’re doing and they will trust you.”

Deep reasoning

Explainable AI is fundamentally different from deep learning. It delivers greater accuracy, but that accuracy is not easy to come by, Thomas says. Deep learning is one of the first steps into the AI framework; in explainable AI, deep learning becomes deep reasoning.

Extension currently offers several deep learning courses.

Engineers are developing new tools to advance XAI; the U.S. Department of Defense, for one, launched an XAI project in 2018. Not every industry will need a glass-box system.

“New machine-learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future,” writes David Gunning, a program manager in the Defense Advanced Research Projects Agency (DARPA) Information Innovation Office.

“As we build out our AI and machine learning curriculum at Extension, we are keeping this in mind,” Thomas says. “We’re updating our deep learning courses to help people understand the XAI framework. It will take them to another level. Students will explore deep reasoning. It’s the next stage of going into XAI.”

Industry use cases in the classroom

The healthcare industry already has many case studies and use cases involving FDA approval. Manufacturing is also coming along, particularly in the area of supply chain.

“When a doctor collaborates with another doctor, they both explain why they arrived at a diagnosis,” Thomas says. The machine also has to be able to provide its reasoning.

“It should be able to interact with new data and explain how it got there. Manufacturing is another place where there is huge machine-human interaction and explainable AI becomes huge value.”

Thomas is working with experts in bioinformatics and healthcare to develop curriculum that brings real industry use cases for XAI into the classroom. It’s important that students have the opportunity to work with actual explainable AI algorithm examples, he says.

“We’re at the very beginning now,” Thomas says. “Our program is very job-oriented. It has long-term potential. Having the hands-on skills is going to be a great factor for jobs in many industries.”
