Japan Won’t Penalize Developers in Upcoming Generative AI Guidelines

Japan will not penalize companies that fail to comply with its upcoming generative AI guidelines, a list of 10 principles on the use and development of AI, according to government and ruling coalition officials, as reported by The Japan Times.

Instead, the guidelines will focus on encouraging AI developers to be more responsible. The idea is to help speed up the development of AI rather than impede innovation and economic growth by imposing stiff penalties and rules on noncompliant startups, the report said.

Also read: Japan Takes Aim at AI Overreliance and Bias with Proposed Rules

Japan considers AI certification system

According to officials, the guidelines will “list 10 principles, including compliance with the Constitution, respect for human dignity, the protection of privacy, and the need to ensure transparency in data learning.”

Japan also wants to prevent personal data from being leaked during AI training. The government plans to ask businesses to keep users from becoming overly dependent on AI and to refrain from sharing personal user information with third parties without permission.

To prevent privacy breaches, Japanese authorities are considering “introducing a certification system” to protect user data and improve transparency from AI developers, The Japan Times wrote. The regulation will also cover eight industries considered high-risk areas for AI use, including finance, medical care, and broadcasting, it added.

The guidelines are expected to be finalized by year-end and will apply only to companies that are building generative AI systems, such as OpenAI’s ChatGPT, not ordinary users.

Japan is looking to AI to boost economic growth, cope with labor shortages, and position the country as a leader in advanced chips. The government is reportedly backing a company called Rapidus to manufacture high-tech chips as part of an industrial policy aimed at reclaiming Japan’s lead in technology.

Not following the EU’s ‘strict’ example

Developments in generative AI by companies such as OpenAI and Anthropic have fueled both fear and excitement over the impact the technology could have on economies and society. Japan is mostly playing catch-up with the likes of the U.S. and the European Union (EU).

This might explain the country’s more relaxed approach to regulating AI. The stance contrasts sharply with the EU’s more stringent AI Act, which the bloc hoped would serve as a blueprint for other countries to follow.

Europe’s draft AI regulations have been criticized by the U.S. State Department, which warned that they could hamper investment in the emerging technology and favor large AI companies over smaller rivals. Some rules in the Act are based on terms that are “vague or undefined,” it said, according to Bloomberg.

Yutaka Matsuo, a professor at the University of Tokyo who also chairs the Japanese government’s AI strategy council, previously described the EU’s draft AI Act as “too strict,” saying it is “almost impossible” to specify copyrighted material used for deep learning.

“With the EU, the issue is less about how to promote innovation and more about making already large companies take responsibility,” Matsuo said, according to Reuters.

Japan’s computing power, defined as the availability of graphics processing units (GPUs) used to train AI, is far behind that of the U.S., Matsuo said.

“If you increased the GPUs in Japan by 10 times, it would probably still be less than what OpenAI has available,” he added.

Image credits: Shutterstock, CC images, Midjourney, Unsplash.