An unhinged article published by MSN, the online news portal owned by Microsoft, described deceased former NBA player Brandon Hunter as “useless” in its headline, sparking outrage on social media. The post, which appears to have been written by an AI, has been pulled down.
Hunter, 42, collapsed during a hot yoga session at an Orlando studio and died on Sept. 12, according to several reports. Between 2003 and 2005, Hunter played a combined 67 games for the Boston Celtics and the Orlando Magic and went on to have a long career in Greece and Italy.
‘AI should not write obituaries’
The headline of the garbled MSN article reads: “Brandon Hunter useless at 42.” The rest of the short article is complete hogwash, telling readers that the athlete “handed away” after achieving “vital success as a ahead [sic] for the Bobcats” and “performed in 67 video games.”
Here is the introduction to the MSN post, which was modeled after a TMZ Sports story about the ex-NBA player’s death, only less intelligible.
“Former NBA participant Brandon Hunter, who beforehand performed for the Boston Celtics and Orlando Magic, has handed away on the age of 42, as introduced by Ohio males’s basketball coach Jeff Boals on Tuesday.”
Social media users swiftly criticized MSN for being insensitive, sloppy, and “embarassing [sic] on so many levels,” as one user on X, formerly Twitter, put it. Another fumed, “AI should not be writing obituaries. Pay your damn writers, MSN.”
Posting on Reddit, one individual wrote:
“The most dystopian part of this is that AI which replaces us will be as obtuse and stupid as this translation – but for the money men, it’s enough.”
Another redditor complained about MSN’s carelessness, accusing the network of caring only about “making money.”
“Who cares if those words are accurate, they make money!” the user quipped sarcastically. “It’s gonna kill the internet. Just a sea of bad content that is tailored to game the algorithm and not make humans happy or educated.”
This is not the first time MSN has published erroneous AI-generated content on its portal. In August, the platform published a bizarre AI-written travel guide to Ottawa, Canada, that advised tourists to visit a local food bank. The article was deleted after criticism, Futurism reported.
Microsoft senior director Jeff Jones told The Verge that “the [Ottawa] article was not published by an unsupervised AI. In this case, the content was generated through a combination of algorithmic techniques with human review, not a large language model or AI system.”
In 2020, MSN reportedly sacked its entire team of human journalists responsible for moderating content on the platform. Since then, according to some reports, it has published ridiculous and sloppy stories, including pieces about “mermaids.”
Generative AI chatbots such as OpenAI’s ChatGPT and Google’s Bard are impressively capable, able to generate text and code and even solve complex mathematical problems. But the models are also prone to producing falsehoods, often stated with complete confidence.
The technology industry refers to these confident falsehoods as “hallucinations.” The weakness has become a major focus for regulators around the world.
In July, for example, the U.S. Federal Trade Commission (FTC) began an investigation into OpenAI over potential violations of consumer protection laws related to ChatGPT, which is accused of spreading false information and flouting data privacy rules.
The FTC is investigating whether ChatGPT caused people harm by providing the wrong answers to their questions. It wants to know if the company “engaged in unfair or deceptive privacy or data security practices” that caused users reputational harm, the Washington Post reported.