Experts warn that AI is an extinction-level threat, and I wish they'd stop scaring us

The people building the world's most advanced AI just signed an urgent and brief statement warning that mitigating the risk of extinction from AI is now one of our most pressing problems.

No, I'm not overstating this. Here's the statement from the Center for AI Safety, which says it was cosigned by, among others, OpenAI CEO Sam Altman, Google DeepMind's Demis Hassabis, and Turing Award winners Yoshua Bengio and Geoffrey Hinton:

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Imagine your car manufacturer yelling at you as you drive off the lot: "The car may kill you, all of your friends, and everyone you know!"

With 22 words, the Center paints a dire picture of a runaway artificial intelligence that is already plotting our demise.

While none of the cosigners have added any color to their signatures, Altman was quite clear a few weeks back when he spoke in front of Congress about the need to regulate AI. "We think that regulatory intervention by governments will be critical to mitigating the risks of increasingly powerful models," Altman told lawmakers. So, it's not hard to believe that he signed a statement that calls for AI mitigation. Still, the "risk of extinction" is another level – maybe even a level of hysteria.

If you're wondering what all the fuss is about, you probably haven't had your first conversation with a chatbot. No, not the silly ones that could answer one or two questions before running out of fresh conversation. I mean the generative AI bots built on Large Language Models (LLMs) that use their training and their ability to predict the next most likely word in a sentence to generate eerily lifelike responses to virtually any query. OpenAI's ChatGPT is the best known and most popular among them, but Google's Bard and Microsoft's Bing AI are close behind.
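
If you want a feel for that "predict the next most likely word" trick, here's a toy sketch in Python that I've cooked up purely for illustration. It counts which word follows which in a tiny sample text, then greedily chains the likeliest next word. To be clear, this is not how ChatGPT actually works – real LLMs use enormous transformer networks trained on vast text corpora – but the generate-one-word-at-a-time loop is the same basic idea.

```python
from collections import Counter, defaultdict

# A toy "predict the next most likely word" generator. Real LLMs use
# transformer neural networks over tokens; this bigram counter is only
# meant to illustrate the append-the-likeliest-next-word loop.

corpus = "the cat sat on the mat and the cat chased the dog".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    """Greedily append the most likely next word, starting from `start`."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # dead end: this word never had a successor in the corpus
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the cat sat"
```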

ChatGPT-like capabilities are also spreading like weeds thanks to OpenAI's dead-simple plugin tools.

And for every "AI develops novel cancer treatment in 30 days" story, there's another one about an AI doing its best to find a way to destroy humanity.

Things are moving fast. AI development no longer feels like a slow funicular ride up a steep mountainside; instead, it resembles a Crisco-coated sled careening out of control down a hill, mowing down everything and everyone in its path.

The reality of AI in 2023, though, is probably somewhere in the middle.

We've been scared for a long time

Here's another headline for you: "Scientists Worry Machines May Outsmart Man." It's from The New York Times – in 2009. Back then, a group of concerned scientists met in Monterey Bay, California, to discuss what they saw as the very real risk posed by AI. "Their concern is that further advances could create profound social disruptions and even have dangerous consequences," wrote Times journalist John Markoff.

Their chief concerns were AI-engineered, polymorphic computer viruses that could defy tracking, blocking, and eradication, drones that could kill autonomously, and systems that could, when interacting with humans, simulate empathy.

They foresaw the potential for massive job loss as AIs took on many of our most repetitive and boring jobs and were especially concerned about criminals hijacking systems that could masquerade as humans.

And just as the most recent alarm bells are being rung by those who've brought us our most exciting AI advancements, 2009's oracles of doom were organized by the Association for the Advancement of Artificial Intelligence.

Back then I was somewhat stunned that so many smart people would seek to hobble this still very new technology just as it was advancing from a gestating fetus to a crawling toddler.

Now, though, AI is a surly teen, often too smart for its own good, high on extraordinary levels of engagement, and giddily hallucinating because it doesn't know any better.

Regulation, please

Teens need rules, and so I'm 100% in agreement with the need for AI regulation (preferably at a global level). However, I can't see how this hysterical language, with its talk of "extinction-level" events, is helping anyone. It's the kind of fearmongering that could short-circuit AI development as consumers who fundamentally don't understand AI storm Dr. Frankenstein's castle and burn it to the ground.

AI is not a monster, and neither are the people developing it. The concerns are well founded, but the real-time danger, well, it's just not there yet. Our systems are still kind of dumb and often get things wrong. We run a higher risk of loads of garbage data and information than we do of a catastrophic event.

That, too, is no small concern. Consumers now have unfettered access to these powerful tools, but what they don't seem to understand is that AI is still just as likely to get something wrong as it is to find that once-in-an-AI-lifetime cure.

We can't ignore the risks, but inflammatory rhetoric won't help us mitigate them. There is a big difference between warning that AI poses an existential threat and saying it could lead to our extinction. One is fuzzy and, to be honest, hard for people to visualize; the other conjures images of an asteroid slamming into the Earth and ending us all.

What we need is more smart discussion and real action on regulation – guardrails, not roadblocks. We only get guardrails if we stop building walls of AI terror.

Lance Ulanoff
US Editor in Chief

A 35-year industry veteran and award-winning journalist, Lance has covered technology since PCs were the size of suitcases and "on line" meant "waiting." He's a former Lifewire Editor in Chief, Mashable Editor in Chief, and, before that, Editor in Chief of PCMag.com and Senior Vice President of Content for Ziff Davis, Inc. He also wrote a popular weekly tech column for Medium called The Upgrade.


Lance Ulanoff makes frequent appearances on national, international, and local news programs including Live with Kelly and Ryan, Fox News, Fox Business, the Today Show, Good Morning America, CNBC, CNN, and the BBC.