Canada Outlines Voluntary Code of Conduct for AI Developers

The Canadian government has outlined a voluntary code of conduct for AI developers to prevent their systems from being used to create harmful or malicious content.

Consultations are currently underway between Innovation Canada and stakeholders such as civil society groups and industry experts over the code, which is expected to be in place before Bill C-27, legislation that includes the Artificial Intelligence and Data Act (AIDA).

The consultations will also help a group of lawmakers better understand AI technology ahead of Parliament’s return in September.

Mandatory safeguards

According to the Calgary Herald, the code will require AI developers to put safeguards in place to ensure their technology is not used for malicious purposes such as cyberattacks, impersonating real people, duping people into revealing their personal data, or dispensing legal or medical advice.

Criminals have used generative AI to clone people’s voices and trick their friends and family out of cash by pretending to be in trouble.

In Canada, experts have been pushing the government to regulate the AI industry following the launch of generative AI platforms like ChatGPT, systems that can compose prose, lyrics, and other text, and generate realistic images, videos, and audio.

The technology has the potential to transform industries but can also be abused by bad actors.

Now the Canadian government wants the voluntary code to be “sufficiently robust to ensure that developers, deployers, and operators of generative AI systems are able to avoid harmful impacts, build trust in their systems, and transition smoothly to compliance with Canada’s forthcoming regulatory regime.”

Innovation Canada has also indicated that the government “intends to prioritize the regulation of generative AI systems.” Plans for the code are contained in a document that was released on Wednesday, August 16.

Distinction between original and AI content

According to the document, AI developers will be compelled to put in place systems that allow people to distinguish AI-generated content from human-made creations, and to ensure human oversight.

This is not unique to Canada: the EU has also asked online platforms like Meta to label AI-generated content as part of efforts to combat misinformation.

In the US, tech firms including Google, Meta, OpenAI, Anthropic, Amazon, Microsoft, and Inflection voluntarily agreed with the White House to pursue responsible and safe AI development. They committed to watermarking AI content and to safeguards against cyberattacks and discrimination.

The code of conduct also touches on user safety, calling on AI companies to ensure their systems are safe and secure.

In addition, the voluntary code obliges developers to critically examine their systems to avoid “low-quality data and non-representative data sets or biases.”

This comes as generative AI has been accused of perpetuating biases against minorities.

Regulation headaches

Meanwhile, the same document released on Wednesday, August 16, also discusses the bill containing the AIDA, saying it was designed “to be adaptable to new developments in AI technologies” and will provide the “legal foundation” to regulate generative AI.

However, the bill has been criticized as outdated. According to the National Post, the bill was introduced in June 2022, before ChatGPT and competitors like Google Bard and Microsoft Bing were launched, “meaning it actually pre-dates the emergence of generative AI systems.”

The bill has also been criticized as rushed and drafted without enough input from stakeholders, and therefore in need of extensive revision to clarify which AI technologies it will govern and how.

Since the launch of ChatGPT and the subsequent boom in generative AI, lawmakers around the world have been scrambling to come up with a regulatory framework to govern the technology.

From China to the US and Europe, AI regulation has been a hot topic, with leaders trying to strike a balance between promoting innovation and protecting users.
