As debates around AI safety continue, Australia has vowed to impose mandatory safeguards for “high-risk” AI in a move meant to promote responsible innovation and use of the technology.
This comes amid growing concerns over the downsides of AI, particularly around privacy and safety. At the same time, governments worldwide agree that AI can transform economies across all sectors, creating scope to support further AI innovation.
Australians don’t trust AI
The pledge by the Australian government follows a consultation it carried out last year into responsible AI development. The consultation received over 500 responses, according to a TechFinitive report.
According to the report, the responses highlighted that only about a third of Australians believe the country has adequate guardrails to ensure the safe development, deployment, and use of AI.
“While AI is forecast to grow our economy, there is low public trust that AI systems are being designed, developed, deployed, and used safely and responsibly,” the paper said.
“This acts as a handbrake on business adoption and public acceptance.”
Government steps in
Although no concrete steps have been laid out yet, the government has acknowledged the concerns raised by Australians. While it considers what the specific mandatory guardrails should be, and whether they will require amending existing laws, it says its aim is to protect citizens from irresponsible AI development.
The guardrails will aim to “promote the safe design, development, and deployment of AI systems,” and this will relate to testing and accountability.
“We have heard loud and clear that Australians want stronger guardrails to manage higher-risk AI,” Australian Minister for Industry and Science Ed Husic said off the back of the federal government’s interim response to the industry consultation.
While plans for the mandatory guardrails are underway, the government also indicated that what it termed "low-risk" AI would be allowed to continue largely unimpeded.
Possible course of action
The government of Australia has also pledged to work with industry players to devise suitable safeguards. These include labeling and watermarking AI content, measures also being adopted in other countries to limit AI-driven misinformation and to help users distinguish AI-generated content from original human-made material.
Additionally, an advisory board will be set up to help develop the practical guardrails.
“We want safe and responsible thinking baked in early as AI is designed, developed and deployed,” Husic said.
According to TechFinitive, Australia will also closely watch how others, including the US, Canada, and the EU, are tackling the challenges posed by AI. The EU's Digital Services Act seeks to create a safer digital space "where the fundamental rights of users are protected."
The bloc's AI Act, meanwhile, bans the use of AI for "high-risk activities such as biometric surveillance, social scoring, and untargeted scraping of facial images."
The EU has also called on tech firms to watermark AI content, and several of these companies have made voluntary commitments at the White House on responsible AI development, deployment, and use. Among those pledges was an agreement to watermark AI-generated content.