Human values and expertise improve AI reliability, study finds


LAWRENCE — Both optimism and fear have accompanied the rise of artificial intelligence, and much of that sentiment centers on how reliant humans will ultimately become on such machines.

Michael Lash

But according to new research, machines also need humans.

“We should be considering human input when we’re making machine learning models,” said Michael Lash, assistant professor of business at the University of Kansas.

“Moving forward, if we’re going to do this right — this being data science, machine learning, predictive analytics — we’ve got to take human feedback into account. Humans have to be part of the design process.” 

His paper titled “HEX: Human-in-the-loop explainability via deep reinforcement learning” examines machine learning explainability (MLX), which promises to provide decision-makers with prediction-specific rationale. This assures people that predictions are made for the right reasons and are thus reliable. Lash's proposal incorporates decider-specific preferences and expectations into the process. 

It appears in Decision Support Systems. 
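
HEX itself generates explanations with deep reinforcement learning, and the paper's exact method is not reproduced here. As a rough, hypothetical sketch of the general idea of pairing a model's prediction-specific rationale with a decider's stated priorities, consider the Python snippet below; the linear scorer, feature names, coefficients and preference weights are all invented for illustration.

# Illustrative sketch only: a linear model's per-prediction feature
# contributions, re-ranked by a human decider's stated preferences.
# This is NOT the HEX method (which uses deep reinforcement learning);
# it only conveys the idea of folding human input into the explanation
# that accompanies a prediction.
import numpy as np

# Hypothetical model: a fixed linear scorer over three clinical features.
feature_names = ["blood_pressure", "cholesterol", "age"]
weights = np.array([0.8, 0.5, 0.2])   # assumed learned coefficients
bias = -1.0

def predict_and_explain(x, human_prefs):
    """Return a risk score plus feature contributions, ordered by how much
    the decider says they care about each feature (human_prefs sums to 1)."""
    contributions = weights * x          # each feature's share of the score
    score = contributions.sum() + bias
    # Blend the model's attribution with the decider's stated priorities,
    # so the explanation leads with the features the expert cares about.
    emphasis = contributions * human_prefs
    order = np.argsort(-np.abs(emphasis))
    explanation = [(feature_names[i], float(contributions[i])) for i in order]
    return score, explanation

patient = np.array([1.4, 0.9, 0.3])    # standardized inputs (made up)
prefs = np.array([0.6, 0.3, 0.1])      # expert's stated priorities (made up)
score, explanation = predict_and_explain(patient, prefs)
print(f"risk score: {score:.2f}")
for name, c in explanation:
    print(f"  {name}: contribution {c:+.2f}")

In this toy version the model's output is unchanged; only the explanation is reordered around what the human decider values, which is one simple way to read the human-in-the-loop idea described above.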

The paper finds that incorporating humans into the explanation-generating process increased users' reliance on, trust in and sense-making of the explanations returned by the system. The proposed method also outperformed competing methods in empirical assessments.

“Typical AI systems are dumb … in the sense that they just learn from whatever data is fed to them,” Lash said.

“This is often biased data or incomplete data. On the other hand, with people, our learning system in our brains is very good at recognizing patterns in simple ways. We can extract rules that are easily explainable to one another to help make sense of things.”

By adding that human component to the AI system, people who use such systems can be assured the AI is making decisions for the right reasons, because the explanations it produces are given in terms people understand, according to Lash.

Much of Lash’s work focuses on machine learning explainability, which is the problem of explaining to humans why machines make the predictions they do.

“A major gap I noticed is a lot of works don’t consider the decider, the humans themselves,” he said. “A lot of the emphasis has been paid to how humans can learn from the machines, which is great. And a lot of the examples I’ve seen are like physicians in training, for instance, and how machines can help such physicians better recognize diseases and things.”

But he then asked himself, “What about expert physicians?”

“If we can take the expert physician’s mental model, if we can distill that somehow and pair it with the machine, then we can maybe help other folks who aren’t as experienced make decisions like an expert would,” he said.

To test the approach, Lash conducted a controlled lab study in which individuals assessed AI-generated explanations. The study incorporated a randomized controlled trial: some subjects were shown explanations produced with the human-in-the-loop method, while others were shown explanations from a competing method that did not take human input into account.

“I was surprised at how uniform the results were in terms of increasing liking, sense-making and trusting of the explanations and predictions that were given using our method. Adding in the human component drastically increased all these things,” he said.

He envisions this method being used effectively for AI integration involving everything from finance to vehicles to shipping.

Lash’s research focuses broadly on machine learning, data mining and business analytics. His previous papers include “Predicting mobility using limited data during early stages of a pandemic” and “Impact of the COVID-19 Pandemic on the Stock Market and Investor Online Word of Mouth.”

Lash described the future of AI as “revolutionary.”

With HEX, that revolution doesn’t have to be exclusively machine-driven.

“Humans paired with machines can make better decisions,” Lash said. “But by adding in this method, by considering a human’s values and expertise, the human can get feedback on the predictions or decisions made by the AI to see whether it is making these for the right reasons.”

Mon, 09/09/2024

Author: Jon Niccum

Media contact: Jon Niccum, KU News Service, 785-864-7633