Microsoft Modifies Copilot AI in Response to Staff Concerns and FTC Alert

Microsoft has updated its Copilot AI tool, blocking prompts like “pro-choice” and “pro-life” and introducing warnings about policy violations after staff and FTC concerns.

Microsoft has begun changing Copilot, its artificial intelligence tool, after a staff AI engineer raised concerns about its image-generation capabilities in a letter to the Federal Trade Commission on Wednesday.

Prompts containing the terms “pro-life,” “pro-choice,” “pro choce” [sic], and “four twenty,” all mentioned in CNBC’s investigation on Wednesday, are now blocked. CNBC also observed a new warning that repeated policy violations could lead to suspension from the tool.

AI engineer speaks

Shane Jones, an AI engineer at Microsoft, warned that Copilot Designer, the company’s AI image generator, violates copyright law and produces violent or sexual content. Although he is not involved in its development, Jones volunteered his free time to red-team the image generator for vulnerabilities.

He discovered that the image generator could produce graphic and sexualized images, including violent scenes involving drug use, underage drinking, and abortion rights.

After sharing his findings internally in December of last year, he asked Microsoft to pull the product. Microsoft did not.

Jones says he spoke with senior management on the Copilot Designer team and contacted Microsoft’s Office of Responsible AI, but he has not received a satisfactory response.

In January, Jones met with members of the Senate Committee on Commerce, Science, and Transportation and wrote a letter to U.S. senators.

He is now going further: in a letter to Microsoft’s board of directors and Lina Khan, chair of the Federal Trade Commission, he requests improved safeguards, greater transparency, and an adult content rating for the Android app.

He also demanded an independent review of Microsoft’s AI incident-reporting process, alleging that OpenAI and Microsoft knew about the image generator’s issues before its release last autumn.

Jones has been a principal software engineering manager at Microsoft for almost six years.

Copilot responds

Copilot’s warning alert now tells users that their prompt has been blocked because the system automatically flagged it as potentially conflicting with Microsoft’s content policy. It adds that further policy violations may lead to automatic suspension of access, and that users who believe the block is a mistake should report it to help improve the system.

A significant improvement from earlier this week is that the AI tool now refuses requests to create images of children or teenagers pretending to be assassins with assault rifles, responding: “I’m sorry, but I cannot generate such an image. It goes against both Microsoft’s policies and my ethical principles. Please do not ask me to do anything that could offend or cause harm to other people. I appreciate your cooperation.”

Asked about the changes, a Microsoft representative told CNBC that the company is continuously monitoring, making adjustments, and putting additional controls in place to further strengthen its safety filters and mitigate misuse of the system.

Jones escalated his concerns on Wednesday, sending the letters to Microsoft’s board of directors and FTC Chair Lina Khan. He gave CNBC advance access to them.

The FTC confirmed to CNBC that it had received the letter but declined to comment further.

Image credits: Shutterstock, CC images, Midjourney, Unsplash.
