People underestimate AI capabilities due to ‘exponential growth bias,’ study finds

Among the potential existential threats facing humanity, many scientists consider artificial intelligence (AI) a top contender. Yet very little is currently being done to ensure its safety.

“We are, on average, going to be surprised at how quickly AI progresses and potentially surpasses human capability,” said Nathan Meikle, an assistant professor of business at the University of Kansas.

His new paper, titled “Unaware and Unaccepting: Human Biases and the Advent of Artificial Intelligence,” examines the human biases that impede accurate assessment of AI. His experiments find that people are prone to underestimate AI capabilities because of exponential growth bias, and that they reject the aversive implications of rapid technological progress even when they themselves predict the growth rate.

The new work is published in Technology, Mind, and Behavior.

“We’re motivated to believe things we want to have happen,” said Meikle, who co-wrote the paper with Bryan Bonner of the University of Utah.

“Most of us don’t want to live in a world where AI is smarter than humans. And because we want humans to be superior to AI, there’s a chance that we are sticking our head in the sand. We don’t want AI to surpass human intelligence. Therefore, we think it’s not going to happen.”

Motivated reasoning emerges most often when the facts are ambiguous.

“For instance, I don’t want to get cancer. Say my odds of getting cancer in a lifetime are 40%. But because I don’t want cancer, and because I can look to my past and say, ‘I’m reasonably healthy, and I’ve never had cancer,’ I’m prone to underestimate my odds of getting cancer, and I might think the probability is only like 20%,” he said. 

But exponential growth bias (our inability to accurately estimate exponential growth curves) becomes even more pronounced as a concept grows more abstract.

“A simple example is: would you rather have a billion dollars, or would you rather have the money from doubling a penny 64 times?” he said. “Our intuition tells us to take the billion. But from doubling a penny, you’re actually looking at more than 184 quadrillion dollars. And this example is especially relevant to AI because AI has been progressing at an exponential rate, in tandem with computing speed.”
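The penny arithmetic is easy to check directly. The short sketch below (the figures are computed here, not taken from the study) shows why intuition fails: one penny doubled 64 times is worth 2⁶⁴ cents, vastly more than a billion dollars.

```python
def penny_doubled(times: int) -> int:
    """Dollar value (rounded down) of one penny after `times` doublings."""
    cents = 1 * (2 ** times)  # exact integer arithmetic, in cents
    return cents // 100

value = penny_doubled(64)
print(f"${value:,}")  # roughly $184 quadrillion -- far beyond $1 billion
print(value > 1_000_000_000)  # prints True
```

Because the value doubles at every step, the first few dozen doublings look unremarkable, and our linear intuition anchors on those early numbers.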

To test this theory as it relates to underestimating AI, Meikle recruited several hundred participants in the U.S. and conducted two experiments that examined the effects of motivated reasoning and exponential growth bias on human judgment. The questions tested how participants might envision the interaction between AIs and humans decades from now. (Sample: “Imagine 20 years into the future and AIs are equal in intelligence to humans. How positive do you feel about the future you just imagined?”)

“An AI doesn’t need to be way smarter than us to pose an existential risk,” Meikle said.

“Genetically, we share about 99% of our DNA with chimpanzees. But it’s just that little bit of extra intelligence which allows us to be at the top of the food chain. And so if an AI were to become more intelligent than humans — which I think there’s a reasonable probability of happening very soon — then maybe the AI adopts a goal that is not consistent with human flourishing … and we’re in trouble. Or, even more believably now, people use AI to manipulate other humans.”

An Idaho native, Meikle came to KU in 2021. He is a former receiver with the BYU Cougars. (He caught a dozen passes in the 2005 Las Vegas Bowl.) He also hosts a podcast titled “Meikles and Dimes,” where he interviews guests about leadership, including Kansas City Chiefs head coach Andy Reid. Meikle teaches courses in leadership and ethics at KU.

Meikle said he personally employs AI all the time.

“I’m getting to the point now where I use ChatGPT every day. It’s one of my most commonly opened apps — just asking it questions about what happened here, what happened there,” he said.

Is he fearful it might eventually replace him?

“Does it bother me that a calculator can run calculations better than me? No. And so in some ways, we don’t care. But I think we’re especially concerned about if artificial intelligence takes our jobs,” Meikle said. “I don’t mind if a calculator can calculate faster than me. But if it’s collecting my paycheck, there are going to be problems.”