UN Advisor Wants Human Rights-Centric AI Regulation
As debates on generative AI regulation continue following this year's boom in the technology, a UN advisor is pushing for human rights to be placed at the center of the rules that governments adopt.

Markus Krebsz, a UN advisor and an expert in risk management, AI, neurotechnology, and robotics, believes special emphasis must be placed on safeguards to protect children while promoting sustainability and reducing the global digital divide.

Although there is consensus on AI's transformative abilities, a proposed way forward for governments holds that machines should not trump humans when it comes to products with embedded AI and other digital technology.

AI-enabled products

The proposal covers a wide range of AI-enabled products and other embedded digital technologies, including internet-enabled toys, consumer drones, industrial robots, sophisticated medical diagnostic tools, smartphones, cameras, and smart speakers.

This comes as AI gadgets have faced criticism over potential privacy infringement, with privacy campaigners last month identifying several children's toys and gadgets that secretly listen to their conversations.

Other products that are also covered in the proposal include autonomous weapons, self-driving cars, and stand-alone AI that is not embedded.

“Smart devices—including home gaming, mobile phones, robots, and drones—increasingly have AI systems embedded within them and therefore have the potential to be impacted by these new regulation proposals,” said Krebsz, an honorary professor at the University of Stirling Management School.

“It is imperative that our fundamental rights as citizens, from privacy and personal and data protection to potential harm to the environment, are addressed when governments draw up their regulatory frameworks.”

Concerns have also been raised about personal data privacy breaches by big firms such as Meta, which in 2019 was fined $5 billion by the Federal Trade Commission for violating users' privacy.

The company was also accused of failing to safeguard children's privacy in its Facebook Messenger Kids app.

A tough call

Also a founding director of the Human-AI Institute, Krebsz has spearheaded efforts for the United Nations Economic Commission for Europe Working Party 6 (UNECE WP.6) to create global guidance for member states since 2020.

He acknowledged, however, that drawing up global regulations for products with embedded AI is a tall order.

“Regulating products with embedded AI has its challenges; for example, the products and technology are continuously evolving, and that brings with it questions of whether these products remain compliant following firmware updates,” said Krebsz.

“But I hope the proposals provide a solid basis for governments of member states to develop their own horizontal digital regulatory framework for products with embedded AI and to provide greater certainty as well as reducing risks for all stakeholders—policymakers, industry, and the public.”

Regulation for transparency

Heidi Lund, chair of UNECE WP.6, has acknowledged the ongoing debates around the world on regulating digital innovation and AI. While approaches may differ by country and market, she said there is a need to embrace "the multitude of legitimate regulatory concerns related to a digital product, such as product safety, cybersecurity, resilience, and privacy."

Regulation, she added, should help improve transparency and boost innovation.

“Proper regulation shouldn’t be seen as stifling AI development,” said Krebsz.

“In fact, it will do the opposite—it will encourage sustainable innovation with the aim of ensuring safe infrastructure and helping to reduce trade barriers for products with embedded AI.”

Image credits: Shutterstock, CC images, Midjourney, Unsplash.