Australia Sets Up an Advisory Body to Address AI Risk Concerns

In a move to tackle the challenges posed by artificial intelligence, Australia has announced the formation of a new advisory body.

This initiative, as disclosed by the government, marks Australia’s intensified efforts to regulate AI technologies. The establishment of this body underlines the nation’s focus on navigating the complex landscape of AI, balancing its economic benefits against the potential risks.

Understanding the AI landscape

Artificial intelligence is rapidly transforming industries, signaling significant potential for economic growth. Minister for Industry and Science Ed Husic emphasized AI's capacity to grow the economy, but he also pointed to its uneven application across businesses, signaling a need for broader and more consistent adoption. Despite that potential, skepticism surrounds the technology: low trust in AI has become a barrier to its uptake. Husic identified this lack of trust as a key reason beneficial AI technologies have been slow to spread.

“There’s also a trust issue around the technology itself, and that low trust is becoming a handbrake against the uptake of technology, and that’s something we’ve got to confront.”

Australia's proactivity on technology regulation isn't entirely new. The nation was an early mover, establishing the world's first eSafety Commissioner in 2015. On AI, however, it has moved more slowly than many of its peers. The establishment of this advisory body brings Australia closer in line with other jurisdictions, notably the European Union, which has already adopted mandatory AI rules for technology companies. Australia's first set of guidelines will nonetheless be voluntary, reflecting a cautious approach to AI governance.

Towards a safer AI future

In response to the growing importance of AI, Australia opened a consultation last year, which received over 500 responses, indicating strong interest and concern among the public and industry stakeholders. The government intends to distinguish between "low risk" and "high risk" AI applications. For example, using AI to filter spam emails is considered low-risk, whereas creating manipulated content such as "deep fakes" is considered high-risk. The government's complete response to the consultation, due later this year, is highly anticipated and will likely shape the future direction of AI policy in Australia.

Furthermore, the Australian government plans to work with industry bodies to put forward a set of AI guidelines. These guidelines will encourage technology firms to adopt practices such as labeling and watermarking AI-generated content. Such steps are critical to increasing transparency and trust in AI applications, making it easier to distinguish AI-generated "deep fake" content from material made by humans.

The balance of innovation and safety

The creation of the advisory body and the proposed guidelines reflect an attempt to strike a delicate balance between fostering innovation and ensuring safety. The voluntary nature of the initial guidelines suggests a desire to encourage compliance without stifling innovation. This approach might evolve as the AI landscape changes, requiring the government to adapt its strategies to new developments and challenges in AI.

As the government prepares to release its full response to the AI consultation later this year, there is much anticipation about the future of AI regulation in Australia. This response will likely define how Australia navigates the complex interplay between technological advancement and ethical considerations.
