AI is Far Worse Than Nuclear War, Says Prominent Researcher

Artificial General Intelligence (AGI) researcher Eliezer Yudkowsky says AI innovation is far worse than the nuclear bomb and could lead to the death of everyone on Earth. But that may not be entirely accurate, according to some of his peers, who believe the risks are overstated.

Yudkowsky spoke in the wake of an open letter recently signed by several luminaries, including Apple co-founder Steve Wozniak, billionaire Elon Musk, Gary Marcus, and others, calling for a worldwide six-month moratorium on training large AI language models.

“If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter,” he warned in a recent article published in Time magazine.

Shutting down AI

Yudkowsky is an American computer scientist best known for popularizing the idea of Friendly AI, a term referring specifically to AIs that produce “good, beneficial outcomes rather than harmful ones.” He spent 20 years researching AGI – essentially the point at which AI can reason like humans – and is considered a pioneer in the field.

In his article, Yudkowsky argues that the risks of artificial intelligence cannot be managed through regulation alone. He believes that the development of AI poses an existential threat to humanity and that the only way to deal with the threat is to shut it all down completely.

“Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs,” he suggested.

“Put a ceiling on how much computing power anyone is allowed to use in training an AI system…” No government or military in the world should be exempted from adhering to these exacting standards, says the Machine Intelligence Research Institute co-founder.

One of the core issues Yudkowsky raised is what he described as the “alignment problem.” Essentially, the problem refers to the difficulty of ensuring that an AI system’s goals and objectives are aligned with those of its human creators.

Critics say the danger is that an AI system may develop its own goals and objectives that conflict with those of its creators, leading to disastrous outcomes. Developing AI without solving the alignment problem is akin to building a skyscraper on a shaky foundation, he said.

Nuclear hot debate

Yudkowsky fears the unintended dangers of rapidly scaling the development of super-smart technologies without proper safety measures. He proposes the creation of specialized organizations and institutions dedicated to addressing these safety concerns.

However, the decision theorist’s argument is not new. Many experts have been warning about the dangers of AI for years. In 2018, a report by the RAND Corporation warned that the development of AI could increase the risk of nuclear war.

The report said that integrating AI, machine learning, and big-data analytics could dramatically improve militaries’ ability to locate, track, target, and destroy a rival’s nuclear-deterrent forces.

Others have also waded into the debate. For former Google product lead Bilawal Sidhu, AI can either be treated like nuclear technology and locked down, or it can be left open. He says open-source AI would allow good actors to police the bad ones, minimizing the harm the technology could do.

In a series of tweets, Sidhu likened AI technology’s potency to that of an improved nuclear weapon, adding that data was the new crude oil in a digitized world.

“The big data era digitized everything, creating both open & closed treasure troves. Data is the new oil, and with AI, extracting this crude resource is easier than ever,” he wrote.

“Unlike in the past where government used to wield power exclusively, this influence is also held by individuals. Wielding this power isn’t limited to big tech giants or nation-states. Anyone can use it for good or evil. People have already shown wild capabilities.”

More AI capabilities

Now, even geolocation, previously the preserve of sophisticated government spy agencies, can be done by individuals. As a case in point, Sidhu cited an AI artist who geolocated top influencers using Instagram posts and publicly available camera feeds.

What is known as “pattern of life analysis” has traditionally been the domain of the intelligence community, but it too can now be done by individuals. When a far more proficient large language model such as GPT-4, the latest in the GPT family, is added to the mix, the potential of such individuals becomes terrifying, he says.

Owing to this scary prospect, Sidhu says it is little wonder that OpenAI, the company that created the GPT technology, has been selective about opening up multi-modality. Combined with “detection and tracking algorithms” on media and the neural link-based “reasoning” ability of GPT-4-class models in parsing social feeds, a powerful Big Brother is born.

He called for greater regulation of model types such as image generation, insisting this has far less downside than general-purpose models like GPT-4.

“But it isn’t exactly rainbows and sunshine either. VFX and photoshop has always been a thing, but image models enable disinformation at an unfathomable scale,” he tweeted.

Sidhu cited how his clearly labeled “VFX videos” fooled millions. He said it was unimaginable what such tech would do in the hands of what he called “bad actors.”

In a previous interview with Newsweek, former U.S. Secretary of State Henry Kissinger argued that AI is “as consequential” as nuclear weapons but “less predictable.” Kissinger believes that the risks of AI can be managed through international cooperation and regulation.

Image credits: Shutterstock, CC images, Midjourney, Unsplash.
