Artificial Intelligence (AI) researchers have warned of the danger of superhuman AI and told MPs it should be regulated in the same way as nuclear weapons. Oxford University researchers told the House of Commons Science and Technology Committee that the technology could risk human extinction.
Doctoral student Michael Cohen said: “With superhuman AI, there is a particular risk that is of a different sort of class, which is … it could kill everyone.”
“If you imagine training a dog with treats, it will learn to pick actions that lead to it getting treats, but if the dog finds the treat cupboard, it can get the treats itself without doing what we wanted it to do,” added Cohen.
The Independent reported that the researchers also warned the AI industry had already become a “literal arms race” as competition mounts to produce both commercial and military applications of the technology.
“I think the bleak scenario is realistic because AI is attempting to bottle what makes humans special, that has led to humans completely changing the face of the earth,” said University of Oxford professor of machine learning Michael Osborne.
“Artificial systems could become as good at outfoxing us geopolitically as they are in the simple environments of games,” added Osborne.
AI as dangerous as nukes
This is not the first time AI scientists have warned of the risk posed by the technology. The latest warning echoes a thought experiment put forward by philosopher Nick Bostrom about 20 years ago.
The Paperclip Maximizer problem hypothesizes that a superintelligent AI would ultimately destroy humanity even if its initial goal – producing as many paperclips as possible – was not explicitly malicious.
“There are some reasons for hope in that we have been pretty good at regulating the use of nuclear weapons. AI is as comparable a danger as nuclear weapons,” said Osborne.
Elon Musk has also previously called for more government oversight, saying the technology has the potential to cause more harm than nuclear weapons.
Self-authored Bill to regulate self
On Wednesday, Congressman Ted Lieu introduced the first ever piece of federal legislation written by artificial intelligence.
Using the AI language model ChatGPT, Lieu offered the following prompt: “You are Congressman Ted Lieu. Write a comprehensive congressional resolution generally expressing support for Congress to focus on AI.”
The resulting resolution is the first in the history of Congress to have been written by AI.
Earlier in the week, Lieu authored an op-ed in the New York Times calling for Congress to take action to regulate AI.
“We can harness and regulate AI to create a more utopian society or risk having an unchecked, unregulated AI push us toward a more dystopian future,” wrote Lieu.
Recognizing the potential positive impacts of artificial intelligence, Lieu’s resolution outlines Congress’ “responsibility to ensure that the development and deployment of AI is done in a way that is safe, ethical, and respects the rights and privacy of all Americans.”
Lieu, who is one of the few members of Congress with a technology background, also said that some of the harm associated with AI could be “deadly.”
“As one of just three members of Congress with a computer science degree, I am enthralled by AI and excited about the incredible ways it will continue to advance society.”
“And as a member of Congress, I am freaked out by AI, specifically AI that is left unchecked and unregulated,” he wrote.
The rise of AI and tools like ChatGPT has raised concerns about cheating in the classroom.
Earlier this month, New York City’s Department of Education announced a ban on ChatGPT from its schools’ devices and networks over fears the tool could encourage more cheating by students.
Although not an entirely new technology, ChatGPT – a chatbot created by OpenAI – rose to fame on social media late last year, with some describing the bot as a better search engine than Google due to its conversational style and coherent, topical responses.