Research News

When should someone trust an AI assistant's predictions?

Researchers help workers collaborate with artificial intelligence systems

In a busy hospital, a radiologist uses an artificial intelligence system to help her diagnose medical conditions based on patients' X-ray images. Using the AI system can help her make faster diagnoses, but how does she know when to trust the AI's predictions?

Traditionally, she doesn't. Instead, she may rely on her own expertise, on a confidence level reported by the system itself, or on an explanation of how the algorithm made its prediction, which may look convincing but still be wrong, to gauge whether the AI should be trusted.

To help people better understand when to trust an AI "teammate," Massachusetts Institute of Technology researchers created a technique that guides humans to a more accurate understanding of when a machine makes correct predictions and when it makes incorrect ones. The research is supported by the U.S. National Science Foundation.

By showing people how the AI complements their abilities, the new technique could help humans make better decisions or come to conclusions faster when working with AI agents.

"We propose a teaching phase where we gradually introduce humans to this AI model so they can see its weaknesses and strengths," says Hussein Mozannar of MIT. "We do this by mimicking the way people will interact with AI in practice, but we intervene to give them feedback to help them understand each interaction."

Mozannar conducted the research with Arvind Satyanarayan and David Sontag, also of MIT. The findings will be presented at the Association for the Advancement of Artificial Intelligence Conference in February.

The work focuses on the mental models humans build of their collaborators. If the radiologist, for example, is not sure about a case, she may ask a colleague who is an expert in a certain area. From experience and her knowledge of this colleague, she has a mental model of the colleague's strengths and weaknesses that she uses to assess the advice.

Humans build the same kinds of mental models when they interact with AI, so it's important that those models are accurate, Mozannar says. Cognitive science suggests that humans make decisions about complex tasks by remembering past interactions and experiences. So the researchers designed a teaching process that provides examples of the human and the AI working together, which serve as reference points the person can draw on in the future.

"This work is an ideal example of how mathematical results can be brought to bear on solving real-world problems in AI," said Rance Cleaveland, director of NSF's Division of Computing and Communication Foundations. "The interplay between basic research and its applications is a hallmark of the kinds of impactful results NSF is looking for."