‘Sapiens’ author says AI is an alien threat that could wipe us out: ‘Instead of coming from outer space, it’s coming from California’

Billions of dollars are being poured into the development of AI, with the technology being hailed as a “revolution”—but famed historian and philosopher Yuval Noah Harari sees it as an “alien species” that could trigger humanity’s extinction.

“AI is fundamentally different from anything we’ve seen in history, from any other invention, whether it’s nuclear weapons or the printing press,” Harari—the bestselling author of Homo Deus and Sapiens: A Brief History of Humankind—told an audience at CogX Festival in London on Tuesday.

“It’s the first tool in history that can make decisions by itself. Atom bombs could not make decisions. The decision to bomb Hiroshima was taken by a human.”

The risk that comes with this ability to think for itself, Harari said, is that superintelligent machines could ultimately end up usurping the human race as the world’s dominant power.

“Potentially we are talking about the end of human history—the end of the period dominated by human beings,” he warned. “It’s very likely that in the next few years, it will eat up all of human culture, [everything we’ve achieved] since the Stone Age, and start spewing out a new culture coming from an alien intelligence.”

This raises questions, according to Harari, about what the technology will do not just to the physical world around us, but also to things like psychology and religion.

“In certain ways, AI can be more creative [than people],” he argued. “In the end, our creativity is limited by organic biology. This is a nonorganic intelligence. It’s really like an alien intelligence.

“If I said an alien species is coming in five years, maybe they will be nice, maybe they will cure cancer, but they will take our power to control the world from us, people would be terrified.

“This is the situation we’re in, but instead of coming from outer space, [the threat is] coming from California.”

AI evolution

The phenomenal rise of OpenAI’s generative AI chatbot, ChatGPT, over the past year has been a catalyst for major investment in the space, with Big Tech entering a race to develop the world’s most cutting-edge artificial intelligence systems.

But it’s the pace of development in the AI space, according to Harari—whose written works have examined humanity’s past and future—that “makes it so scary.”

“If you compare it to organic evolution, AI now is like [an amoeba]—in organic evolution, it took them hundreds of thousands of years to become dinosaurs,” he told the crowd at CogX Festival. “With AI, the amoeba could become a T. rex within 10 or 20 years. Part of the problem is we don’t have time to adapt. Humans are amazingly adaptable beings…but it takes time, and we don’t have this time.”

Humanity’s next ‘huge and terrible experiment’?

Conceding that previous technological innovations, such as the steam engine and the airplane, sparked similar warnings about human safety and that “in the end it was okay,” Harari insisted that with AI, “in the end is not good enough.”

“We are not good with new technology, we tend to make big mistakes, we experiment,” he said.

During the Industrial Revolution, for example, mankind had made “some terrible mistakes,” Harari noted, while European imperialism, 20th-century communism, and Nazism had also been “huge and terrible experiments that cost the lives of billions of people.”

“It took us a century, a century and a half, of all these failed experiments to somehow get it right,” he argued. “Maybe we don’t survive it this time. Even if we do, think about how many hundreds of millions of lives will be destroyed in the process.”

Divisive technology

As AI becomes increasingly ubiquitous, experts are divided on whether the tech will deliver a renaissance or doomsday.

At the invitation-only Yale CEO Summit this summer, almost half of the chief executives surveyed said they believed AI has the potential to destroy humanity within the next five to 10 years.

Back in March, 1,100 prominent technologists and AI researchers—including Elon Musk and Apple cofounder Steve Wozniak—signed an open letter calling for a six-month pause on the development of powerful AI systems. They pointed to the possibility that such systems are already on a path to a superintelligence that could threaten human civilization.

Musk, the Tesla CEO and SpaceX founder, has separately said the tech will hit people “like an asteroid” and warned there is a chance it will “go Terminator.” He has since launched his own AI firm, xAI, in what he says is a bid to “understand the universe” and prevent the extinction of mankind.

Not everyone is on board with Musk’s view that superintelligent machines could wipe out humanity, however.

Last month, more than 1,300 experts came together to calm anxiety around AI creating a horde of “evil robot overlords,” while one of the three so-called Godfathers of AI has labeled concerns around the tech becoming an existential threat “preposterously ridiculous.”

Top Meta executive Nick Clegg also attempted to quell concerns about the technology in a recent interview, insisting that large language models in their current form are “quite stupid” and certainly not smart enough yet to save or destroy civilization.

‘Time is of the essence’

Despite his own dire warnings about AI, Harari said there was still time for something to be done to prevent the worst predictions from becoming a reality.

“We have a few years, I don’t know how many—five, 10, 30—where we are still in the driver’s seat before AI pushes us to the back seat,” he said. “We should use these years very carefully.”

He suggested three practical steps that could be taken to mitigate the risks around AI: Don’t give bots freedom of speech, don’t let artificial intelligence masquerade as humans, and tax major investments into AI to fund regulation and institutions that can keep the technology under control.

“There are a lot of people trying to push these and other initiatives forward,” he said. “I hope we do [implement them] as soon as possible, because time is of the essence.”

He also urged those working in the AI space to consider whether unleashing their innovations on the world was really in the planet’s best interests.

“We can’t just stop the development of technology, but we need to make the distinction between development and deployment,” he said. “Just because you develop it, doesn’t mean you have to deploy it.”
