AI Investor Says You Should Be Scared of Super-Intelligent AI: Here’s Why

Artificial general intelligence (AGI) investor Arram Sabeti has warned that the AI arms race is reaching “dangerous” territory and humans should be scared of super-intelligent AI.

“I’m scared of AGI,” Sabeti wrote in a long thread on Twitter, as corporate AI rivalry reaches fever pitch. “It’s confusing how people can be so dismissive of the risks.”

“I’m an investor in two AGI companies and friends with dozens of researchers working at DeepMind, OpenAI, Anthropic, and Google Brain. Almost all of them are worried.”

Also read: GPT-4 May Already Be as Smart as Humans in Math, Medicine, Law – Microsoft

48% of experts worried about AGI

AGI is the holy grail of artificial intelligence – the point at which AI-powered systems like ChatGPT gain the ability to perform any intellectual task a human can, and perhaps perform it better. Microsoft claimed in a recent report that GPT-4 may already have reached that threshold.

Autonomous robots with near or above human-level intelligence are the stuff of nightmares, according to Sabeti, founder of U.S. food company Zerocater. He was also the lead investor in a $6.5 million seed round for Fathom, a startup that is building an AI notetaker for Zoom.

Drawing a parallel with a hypothetical nuclear reactor that would generate free power, Sabeti cautioned that the prospect should be feared because of the risks posed by AI that is smarter than its human creators.

He said that while people may be excited about such a prospect, “half of nuclear engineers think there’s at least a 10% chance of an ‘extremely bad’ catastrophe, with safety engineers putting it over 30%.” Sabeti then warned, “that’s the situation with AGI.”

To support his argument, the entrepreneur looked to data from the 2022 Expert Survey on Progress in AI. Per the survey, 48% of the 738 machine learning researchers polled gave at least a 10% chance of an extremely bad outcome, like human extinction, from advanced AI.

Respondents were asked questions such as: “What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?”

Some 69% of respondents said society should prioritize AI safety research more than it currently does, though most also believe that AGI is still more than 30 years away.

Sabeti also pointed to another survey, published on the Effective Altruism Forum under the title “Existential Risk From AI.” That poll of 44 people working in AI safety produced an average probability of about 30% for an existential catastrophe, with some individual estimates well above 50%.

Tech leaders speak on AI risks

Sabeti said he has lost confidence in the current trajectory of AI development and called for governmental intervention.

“My trust in the large AI labs has decreased over time. They’re starting to engage in exactly the kind of dangerous arms race dynamics they explicitly warned us against from the start,” he wrote on Twitter.

“It seems clear to me that we will see superintelligence in our lifetimes, and not at all clear that we have any reason to be confident that it will go well.”

He said the greatest uncertainty is when AGI will arrive, though timelines have accelerated. Geoffrey Hinton, a pioneer of machine learning, recently said he cannot rule out AGI within the next five years, and that AI wiping out humanity is not inconceivable.

As MetaNews reported this week, Microsoft researchers claim supersmart AI is already here, pointing to GPT-4 completing difficult human-level tasks in math, medicine, law, and other fields. Other tech leaders have also spoken openly about existential AI risks.

OpenAI CEO Sam Altman on the risk of AGI killing everyone: “So first of all, I will say, I think that there’s some chance of that. And it’s really important to acknowledge it.”

Twitter owner and billionaire Elon Musk: “With artificial intelligence we are summoning the demon. Mark my words – AI is far more dangerous than nukes.”

Late physicist Stephen Hawking: “The development of full artificial intelligence could spell the end of the human race.”

Top alignment researcher Paul Christiano: “Without AI alignment, AI systems are reasonably likely to cause an irreversible catastrophe like human extinction.”

While AI adoption is on the rise and its benefits are clear, concerns about the potential dangers of super-intelligent AI persist. Organizations need to weigh the impact of AI on humanity and ensure that AI systems are designed to be safe.

Image credits: Shutterstock, CC images, Midjourney, Unsplash.
