Rashik Parmar, CEO of the British Computer Society, believes that AI threats to humanity are overstated. He said the concerns being expressed “play to the fears that most of society has” and have been shaped by popular science fiction films like Terminator and Ex Machina.
His comments come in the wake of a recent statement from the US-based Center for AI Safety warning of “the risk of extinction from AI.” Signed by the CEOs of OpenAI and Google DeepMind, the statement says the risks should be treated with the same urgency as pandemics and nuclear war.
“There should be a healthy scepticism about big tech and how it is using AI, which is why regulation is key to winning public trust,” said Parmar, a former IBM chief technology officer for Europe, Middle East and Africa, according to local media reports.
“But many of our ingrained fears and worries also come from movies, media and books, like the AI characterizations in Ex Machina, The Terminator, and even going back to Isaac Asimov’s ideas which inspired the film I, Robot.”
Movies fuel AI fears
The development of AI has raised concerns about its potential to be used for harmful purposes, such as discrimination, surveillance, and nuclear war. There have also been concerns about the potential for artificial intelligence to create mass unemployment.
In March, tech luminaries including Apple co-founder Steve Wozniak, billionaire Elon Musk, and AI researcher Gary Marcus signed an open letter calling for a six-month worldwide moratorium on training large AI systems.
And then Geoffrey Hinton – considered the “godfather of AI” – quit his job at Google last month with a warning that AI could fuel disinformation and cause massive job losses.
For American computer scientist Eliezer Yudkowsky, the risks of AI cannot be managed through regulation alone. He believes that the development of AI poses an existential threat to humanity and that the only way to deal with the threat is to shut it all down completely.
Parmar said that people whose familiarity with AI comes mainly from Hollywood movies are more likely to believe it poses a threat to humanity.
“They come from what they’ve seen in the movies. They’re amazing, you watch Terminator and you think that it’s real and that it’s going to come and kill you any second now,” said Parmar.
“It’s a killing machine, that throughout the films uses AI in different ways – interpreting what’s been done, predicting the future and responding to different situations. AI isn’t explicitly mentioned but you know it’s AI that’s doing this,” he added.
In science fiction films like Terminator, Ex Machina and The Matrix, AI is often portrayed as a threat to humanity. The films depict artificial intelligence systems that become self-aware and decide to exterminate their human creators.
Although the movies are works of fiction, they have helped to shape public perceptions of AI, according to Parmar. He noted that AI is not as powerful as Hollywood would have you believe, and that current systems are not capable of independent thought or action.
“AI is just a bit of software and no bit of software has any intention, it’s not sentient,” Parmar stated, urging balance and responsibility in the development of artificial intelligence.
“There are legitimate concerns about AI, which is why we need to make sure it grows up responsibly,” he said.
“It needs to be developed by ethical professionals, who believe in a shared code of conduct.” The British Computer Society chief executive blamed the media for “feeding off these fears” to create misconceptions about the dangers of AI.
“Do films and the media have to change? No. It just proves we need more public education about the reality of AI, and for it to be part of the skills and teaching we get when we’re very young,” Parmar added.
Regulators from around the world have started to pay more attention to AI in recent months. This past week, European Commission Vice President Margrethe Vestager said the EU and the United States expect to draft a voluntary code of conduct on artificial intelligence within weeks.
⚠️Accountability on #AI can't wait. It is NOW. Today #TTC kicked off work on a 1st voluntary AI #CodeOfConduct. We’ll work with our key partners & the #AI community on #safeguards to make AI responsible, safe & trustworthy. This is a huge step in a race we can't afford to lose.
— Margrethe Vestager (@vestager) May 31, 2023
She said the U.S. and the EU should promote a voluntary code of conduct for AI to provide safeguards while new legislation is being developed. In May, leaders of the Group of Seven (G7) nations met in Japan and called for the development of technical standards to keep AI “trustworthy”.
China’s Cyberspace Administration has already issued new regulations that ban the use of AI-generated content to spread “fake news.” In Australia, Industry and Science Minister Ed Husic said regulation is coming soon.