Wisconsin legislators have approved proposals that criminalize the possession and production of AI-generated child pornography, among a series of bills brought before the state Assembly.
They also gave a nod to several other proposals to regulate the use of AI during the upcoming November election.
Tougher penalties
According to Scnow, the Republican-penned proposal makes it a felony to produce AI images of child sexual abuse as well as possess such content.
Under the proposals, offenders face up to 25 years in prison. State law already makes the possession and production of such material a felony carrying a 25-year maximum sentence, but the existing statutes “don’t address the digital representation of children.”
Scnow indicates that no groups have yet registered against the bill. The approvals come amid a worldwide spike in AI-generated child sexual abuse material as bad actors misuse the technology.
Election in AI era
The legislators also approved a bill to regulate the use of AI during the election scheduled for November this year, adding Wisconsin to the list of states seeking to govern the fast-growing technology.
According to Wisconsin Watch, the lawmakers approved bipartisan measures that compel political candidates and groups to add disclaimers to ads that contain AI content.
A violation of this requirement will attract a fine of $1,000. The bill’s sponsor, Republican Rep. Adam Neylon, said voters have a right to disclosures and disclaimers whenever AI is used, so they can differentiate between fact and fiction.
“With artificial intelligence, it’s getting harder and harder to know what is true,” Neylon said.
He said the measure was an “important first step that gives clarity to voters,” although more action would be required as the technology evolves.
Watchful eye on AI
AI has many use cases, from recommending a movie on Netflix to generating text, lyrics, prose, and poems, as ChatGPT does.
The same technology can also be used to spread misinformation or enable fraud. With elections approaching in many countries, there are fears of an influx of AI-generated election deepfakes, which has prompted regulators to keep a watchful eye on the technology.
Over the past two years, several US states have taken steps to govern the technology. Last year alone, 25 states introduced AI-related bills.
Lawmakers from the likes of Texas, West Virginia, North Dakota, and Puerto Rico have created advisory bodies to monitor AI technology and its usage.
Bills get backing
According to Wisconsin Watch, over a dozen organizations, including the League of Women Voters and the state’s newspaper and broadcaster associations, all put their weight behind the bill.
Still on AI regulation, the Assembly also approved a bill that directs auditors to review how state agencies use AI. Under the measure, agencies would have until 2030 to develop plans to reduce their positions.
The agencies must report back to legislators by 2026, identifying which positions AI could help make more efficient and outlining their progress.
While the bill does not set a specific workforce-reduction target or call for AI to replace state employees, Republican Rep. Nate Gustafson stressed that the goal is to improve efficiency at a time of worker shortages, not to replace humans.
“That’s flat out false,” Gustafson said, responding to claims that the bills were meant to replace humans with AI technology.