China Wants AI Chatbots To Follow Communist Party Censorship Rules

Chinese regulators have released draft regulations that would compel generative artificial intelligence – software systems like the one underpinning ChatGPT – to follow the Chinese Communist Party’s (CCP) strict censorship rules, The New York Times reported on Monday.

Generative AI is a set of machine learning techniques that allow computers to generate text, draw pictures, and create other media that resemble human output. The draft rules were released by the powerful Cyberspace Administration of China (CAC) and target local AI firms.

Chinese technology giants have been racing to play catch-up since American startup OpenAI launched its hugely popular chatbot ChatGPT in November. Alibaba launched its generative AI Tongyi Qianwen earlier this month, which the firm plans to integrate into several of its services.

Baidu unveiled its ChatGPT rival, Ernie Bot, in March. SenseTime, which makes facial recognition software, has also released an AI product. At least two start-ups developing Chinese alternatives to OpenAI’s technology have raised millions of dollars, according to reports.

Toeing the Chinese Communist Party line

According to the New York Times report, the CAC’s draft rules will require content produced by AI systems to reflect “socialist core values” and avoid information that undermines “state power” or national unity.

Companies will be expected to follow the CCP’s strict censorship rules, just as websites and apps have to avoid content that is deemed politically sensitive, the CAC said. The regulations also prohibit profiling users and impose strict controls on AI-generated content.

The move by China is part of a broader effort by governments worldwide to regulate AI development and to curb the technology’s tendency to produce falsehoods, half-truths, and outright fabrications. The tech industry refers to this sort of AI behavior as “hallucinations”.

Hallucinations remain a major concern for AI developers across the industry. Even companies like Microsoft, which invested $13 billion in OpenAI, are still fine-tuning their chatbots to eliminate harmful responses.

Per the report, the CAC’s draft rules highlight this concern and lay out a framework for how Chinese firms will need to approach the development of the tech.

Experts say the regulations could also have implications for foreign companies operating in China, as they will be required to comply with the CCP’s censorship rules if they want to do business in the country.

“Generative artificial intelligence puts into tension two of the top goals of the party: the control of information and leadership in artificial intelligence,” said Matt Sheehan, an expert on Chinese AI, as quoted by The Times.

The measures will come into effect later this year, CNBC reports, and will work alongside various other Chinese regulations covering data protection and algorithm development.

Governments tighten AI regulation

China has a long history of censoring the internet. Its sweeping censorship rules have driven global tech giants such as Google and Facebook out of the vast Asian country. Chinese authorities have also banned ChatGPT, though it continues to flourish underground.

The CCP’s rules are so far-reaching that they “even require censorship of specific words and dates like June 4, 1989, the day of the Tiananmen Square massacre.” However, China is not alone in trying to regulate generative AI.

Governments around the world are grappling with how to regulate the development of the technology, which has the potential to transform the way humans live and do business in both good and bad ways.

In March, Italy banned ChatGPT, citing concerns about the technology’s potential to generate fake news and other harmful content. As MetaNews previously reported, AI chatbots are entering a period of regulatory pain in Europe and Canada over issues of privacy.

Canada has launched a probe into ChatGPT’s privacy practices. In the European Union, OpenAI has faced similar scrutiny after the bloc tightened data protection and privacy rules, specifically over the risk that ChatGPT could violate Europe’s privacy rulebook, the General Data Protection Regulation (GDPR).

Image credits: Shutterstock, CC images, Midjourney, Unsplash.
