As we reported last week, more than 1,000 tech leaders have signed an open letter urging a pause on major AI development and training. The moratorium, they say, would give developers time to gain a better understanding of these technologies.
However, despite concerns over ChatGPT and generative AI, some industry figures believe this call for a halt will not slow the progress China and other countries are making in the field.
“I don’t think this [a moratorium on GPT development] will hinder China from developing its own large language models,” said Zhou Hongyi, founder of cybersecurity firm 360 Security Technology.
Chinese tech giants, including Baidu and Alibaba, are working on rivals to OpenAI’s ubiquitous ChatGPT, which has set the trend for AI chatbots since its launch last November.
More than a thousand experts signed a letter calling on “all AI labs to pause for 6 months the training of AI systems more powerful than GPT4”
Or in plain English:
“We call on OpenAI to stop further developments for six months so their competitors have time to catch up.” 🤡
— Dr. Eli David (@DrEliDavid) March 30, 2023
“We [in China] are, in all honesty, two years away from where GPT-4 currently is,” said Zhou in a post published on his official Weibo account, a statement that also underscores the progress China has made in the field of AI.
Zhou’s comment on the AI technology gap follows a letter drafted by the Future of Life Institute, which has garnered more than 1,000 signatures, including from prominent figures such as Elon Musk, Apple co-founder Steve Wozniak, and historian Yuval Noah Harari.
‘A terrible idea’
The letter calls for a pause in the development of AI technologies that exceed the level of GPT-4 for at least six months, citing concerns about potential risks to humanity.
Andrew Ng, a prominent machine learning and AI expert, has spoken out against the proposed six-month moratorium on AI development, which he referred to as “a terrible idea” in a tweet on Tuesday.
He argued such a pause is unrealistic without government intervention and that “having governments pause emerging technologies they don’t understand…sets a terrible precedent and is an awful policy for innovation.”
“I’m seeing many new applications in education, healthcare, food… that’ll help many people. Improving GPT-4 will help. Let’s balance the huge value AI is creating vs. realistic risks,” Ng stated.
Ng argues that responsible AI is crucial and that, while the technology carries real risks, the notion that AI companies are recklessly shipping unsafe code is inaccurate.
“The vast majority of AI teams prioritize responsible AI and safety, although not all teams share this commitment,” he explained in a Twitter thread.
Is AI consciousness on the way?
Zhou predicted that, given the exponential pace of AI advancement, iterations from GPT-6 to GPT-8 will eventually gain consciousness and could evolve into a new species.
“How should we humans deal with the security challenges brought upon us by AI? No one has a definitive answer,” said Zhou, who believes that not developing AI is actually a bigger threat than advancing the technology.
Zhou’s 360 is among a group of Chinese companies that have launched services to compete with ChatGPT.
During a security-themed forum on Wednesday, the company showcased its ChatGPT-like services and, despite acknowledging some weak areas, received praise from Chinese netizens for conducting a live demo.
The founder’s remarks indicate that he does not support the call for a pause in AI development, and his company’s recent launch of rival services highlights China’s determination to continue advancing in the field.