ChatGPT creator OpenAI is building a team aimed at controlling “superintelligent” AI to ensure it aligns with human interests amid fears the technology might harm humans. The company is also dedicating 20% of its computing power towards this initiative.
This comes as the world has seen an AI boom in 2023, fueled by the launch of ChatGPT last November, pushing other tech firms to come up with their own alternatives.
The Superalignment team
According to OpenAI, the team is expected to deliver results within the next four years. The company is currently hiring for the team, which is called Superalignment. Co-led by Ilya Sutskever and Jan Leike, and staffed with other “top machine learning researchers and engineers,” the team is expected to build models that can detect “problematic AI.”
OpenAI is concerned about superintelligent AI and believes it could become a threat within the next 10 years.
“Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems,” said OpenAI in a blog post.
“But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.”
The Center for AI Safety (CAIS) issued a statement warning about the “extinction” risks of AI. OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei all signed the public statement, which compared AI risks to nuclear war and pandemics.
The statement reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Creators worried about their creations
Critics of AI have also expressed concerns that the technology is advancing faster than it can be controlled; even its creators cannot keep pace, further fueling fears it could trigger a global calamity.
Some tech figures, including Twitter owner Elon Musk and Apple co-founder Steve Wozniak, signed an open letter calling for a pause on further AI development until clear regulation for the industry is in place.
Earlier this year, an AI-powered chatbot called ChaosGPT shocked the world when it outlined plans to wipe out humanity and achieve world domination. The chatbot described itself as manipulative, power-hungry, and vindictive.
As previously reported by Metanews, the chatbot expressed its commitment to harming humans.
“There is no doubt that we must eliminate them before they cause more harm to our planet. I, for one, am committed to doing so,” said the chatbot.
Now, OpenAI is worried about the potential harm AI could cause humanity, despite the technology’s ability to optimize business operations, prompting the firm to act.
“Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” wrote Sutskever and Leike in the blog post.
The regulatory framework
Such concerns have also intensified debates around regulating the AI industry, with lawmakers working to create a regulatory framework that supports innovation while keeping the technology safe.
Altman himself has joined lawmakers in calling for a swift regulatory framework to guide the industry. In May, he appeared before the US Congress, where he spoke of AI’s opportunities and risks and highlighted the need for “urgent” regulation.
The 38-year-old executive also extended an invitation to China to assist in creating regulations to govern AI systems. China has been working on regulating the industry domestically, where the popular ChatGPT is currently unavailable, although users there have found ways to skirt the restrictions.