
This is when AI’s top researchers think artificial general intelligence will be achieved


Short answer: maybe within our lifetimes, but don’t hold your breath


Illustration by Alex Castro / The Verge

At the heart of the discipline of artificial intelligence is the idea that one day we’ll be able to build a machine that’s as smart as a human. Such a system is often referred to as an artificial general intelligence, or AGI, a name that distinguishes the concept from the broader field of study and makes clear that true AI would possess intelligence that is both broad and adaptable. To date, we’ve built countless systems that are superhuman at specific tasks, but none that can match a rat when it comes to general brain power.

But despite the centrality of this idea to the field of AI, there’s little agreement among researchers as to when this feat might actually be achievable.

Researchers guess by 2099, there’s a 50 percent chance we’ll have built AGI

In a new book published this week titled Architects of Intelligence, writer and futurist Martin Ford interviewed 23 of the most prominent men and women who are working in AI today, including DeepMind CEO Demis Hassabis, Google AI Chief Jeff Dean, and Stanford AI director Fei-Fei Li. In an informal survey, Ford asked each of them to guess by which year there will be at least a 50 percent chance of AGI being built.

Of the 23 people Ford interviewed, only 18 answered, and of those, only two went on the record. Interestingly, those two individuals provided the most extreme answers: Ray Kurzweil, a futurist and director of engineering at Google, suggested that by 2029, there would be a 50 percent chance of AGI being built, and Rodney Brooks, roboticist and co-founder of iRobot, went for 2200. The rest of the guesses were scattered between these two extremes, with the average estimate being 2099 — 81 years from now.

In other words: AGI is a comfortable distance away, though you might live to see it happen.

AGI means AI with broad intelligence, but we’re missing a lot of key components to make it happen.
Photo by Matt Winkelmeyer / Getty Images for WIRED25

This is far from the first survey of AI researchers on this topic, but it offers a rare snapshot of elite opinion in a field that is currently reshaping the world. Speaking to The Verge, Ford says it’s particularly interesting that the estimates he gathered skew toward longer time frames than those in earlier surveys, which tend to fall closer to the 30-year mark.

“I think there’s probably a rough correlation between how aggressive or optimistic you are and how young you are,” says Ford, noting that several of the researchers he spoke to were in their 70s and have experienced the field’s ups and downs. “Once you’ve been working on it for decades and decades, perhaps you do tend to become a bit more pessimistic.”  

Ford says that his interviews also revealed an interesting divide in expert opinion — not regarding when AGI might be built, but whether it was even possible using current methods.

Some of the researchers Ford spoke to said we have most of the basic tools we need, and building an AGI will just require time and effort. Others said we’re still missing a great number of the fundamental breakthroughs needed to reach this goal. Notably, says Ford, researchers whose work was grounded in deep learning (the subfield of AI that’s fueled this recent boom) tended to think that future progress would be made using neural networks, the workhorse of contemporary AI. Those with a background in other parts of artificial intelligence felt that additional approaches, like symbolic logic, would be needed to build AGI. Either way, there’s quite a bit of polite disagreement.

“Some people in the deep learning camp are very disparaging of trying to directly engineer something like common sense in an AI,” says Ford. “They think it’s a silly idea. One of them said it was like trying to stick bits of information directly into a brain.”

Many experts say we’re missing key building blocks to create AGI

All of Ford’s interviewees noted the limitations of current AI systems and mentioned key skills they’ve yet to master. These include transfer learning, where knowledge in one domain is applied to another, and unsupervised learning, where systems learn without human direction. (The vast majority of machine learning methods currently rely on data that has been labeled by humans, which is a serious bottleneck for development.)
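To make that distinction concrete, here is a minimal Python sketch (ours, not from the book) using scikit-learn on synthetic data. The supervised classifier cannot be trained at all without human-provided labels, while the clustering algorithm recovers similar structure from the raw data alone, which is the kind of learning the interviewees say machines have yet to master at scale.

```python
# A minimal illustration, not from the book: supervised learning needs
# human-provided labels; unsupervised learning does not.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Synthetic data: 300 points drawn from 3 clusters. Here `y` stands in for
# the human-annotated labels that most current methods depend on.
X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Supervised: training is impossible without the labels `y`.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised: KMeans finds the cluster structure from `X` alone, no labels.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("clusters found without labels:", np.unique(km.labels_))
```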

Interviewees also stressed the sheer difficulty of making predictions in a field like artificial intelligence, where research has come in fits and starts and where key technologies have only reached their full potential decades after they were first discovered.

Stuart Russell, a professor at the University of California, Berkeley, who wrote one of the foundational textbooks on AI, said that the sort of breakthroughs needed to create AGI have “nothing to do with bigger datasets or faster machines,” so they can’t be easily mapped out.

“I always tell the story of what happened in nuclear physics,” Russell said in his interview. “The consensus view as expressed by Ernest Rutherford on September 11th, 1933, was that it would never be possible to extract atomic energy from atoms. So, his prediction was ‘never,’ but what turned out to be the case was that the next morning Leo Szilard read Rutherford’s speech, became annoyed by it, and invented a nuclear chain reaction mediated by neutrons! Rutherford’s prediction was ‘never’ and the truth was about 16 hours later. In a similar way, it feels quite futile for me to make a quantitative prediction about when these breakthroughs in AGI will arrive.”

Ford says this basic unknowability is probably one of the reasons the people he talked to were so reluctant to put their names next to their guesses. “Those that did choose shorter time frames are probably concerned about being held to it,” he says.

Many researchers said economic problems were more pressing than the threat from superintelligence.
Photo by Bill Pugliano / Getty Images

Opinions were also mixed on the dangers posed by AGI. Nick Bostrom, the Oxford philosopher and author of the book Superintelligence (a favorite of Elon Musk’s), had some of the strongest words about the potential danger, saying AI is a greater threat to the existence of the human race than climate change. He and others said that one of the biggest problems in this domain is value alignment — teaching an AGI system to have the same values as humans (famously illustrated in the “paperclip problem”).

“The concern is not that [AGI] would hate or resent us for enslaving it, or that suddenly a spark of consciousness would arise and it would rebel,” said Bostrom, “but rather that it would be very competently pursuing an objective that differs from what we really want.”

Most interviewees said the question of existential threat was extremely distant compared to problems like economic disruption and the use of advanced automation in war. Barbara Grosz, a Harvard AI professor who’s made seminal contributions to the field of language processing, said issues of AGI ethics were mostly “a distraction.” “The real point is we have any number of ethical issues right now, with the AI systems we have,” said Grosz. “I think it’s unfortunate to distract attention from those because of scary futuristic scenarios.”

This sort of back-and-forth, says Ford, is perhaps the most important takeaway from Architects of Intelligence: there really are no easy answers in a field as complex as artificial intelligence. Even the most elite scientists disagree about the fundamental questions and challenges facing the world.

“The main takeaway people don’t get is how much disagreement there is,” says Ford. “The whole field is so unpredictable. People don’t agree on how fast it’s moving, what the next breakthroughs will be, how fast we’ll get to AGI, or what the most important risks are.”

So what hard truths can we cling to? Only one, says Ford. Whatever happens next with AI, “it’s going to be very disruptive.”