Threads, the social media app created by Meta to rival Twitter, is blocking searches for words it considers “potentially sensitive,” including “COVID,” “vaccines,” and other keywords the company says have been linked to what it calls “misinformation” in the past.
The restrictions, first reported by The Washington Post, came barely 24 hours after Threads introduced its new search function. Users who searched for COVID- and vaccine-related terms such as “coronavirus,” “COVID-19,” “vaccines,” and “COVID vaccines” found their queries blocked.
Instead of seeing results related to their search queries, users encountered a blank page and a pop-up redirecting them to the website for the Centers for Disease Control and Prevention (CDC), the Post reports. Threads also blocked words such as “gore,” “sex,” and “nude.”
Meta says blockade ‘temporary’
In a statement, Meta, the company previously known as Facebook, confirmed that it blocked several keywords on Threads, saying the measure was temporary. Meta typically blocks search terms that it says are linked to content that breaks its platform rules, like QAnon hashtags.
“The search functionality temporarily doesn’t provide results for keywords that may show potentially sensitive content,” a company spokesperson told the Post.
Adam Mosseri, the head of Instagram who also heads Threads, tweeted that the company was “working to support more searches quickly… [and] trying to learn from last [sic] mistakes and believe it’s better to bias towards being careful as we roll out search.”
Blocking every search that contains a keyword it considers “sensitive,” even when the results would include posts that break no rules, is unprecedented for the firm. The move has drawn criticism on social media, with users accusing Meta of colluding with governments to stifle free speech.
“The government is not a trustworthy source on many things. That’s why it’s crucial not to censor searches or link to generic government websites that contain out-of-date or factually wrong information,” tech journalist and author Taylor Lorenz wrote on Twitter, now X.
“We need scientists, experts, doctors, and journalists to be able to disseminate reliable information,” she added.
Speaking to MetaNews, Onai Mushava, a cloud noir poet who writes about the intersection between society and the internet, said Meta’s latest limits “remind us that we are conducting our online lives under the paternal gaze of social media companies.”
“When you add words like that to the inquisitor’s index, it’s a vote of no confidence in your community, whom you cannot trust with processing information and making their own judgements, and a vote of no confidence in your range and nuance as a platform,” he added.
Policing free speech on Threads
Meta launched Threads on July 5 to rival X, formerly Twitter. The firm amassed 10 million sign-ups within seven hours of its launch, becoming the fastest growing app in history. It took Meta’s engineers five months to build the app, which is integrated with Instagram.
The speed with which it was built meant the app lacked many of the basic features found in social media apps, such as search. But as soon as the feature was added, it revived past concerns about so-called “misinformation” on Meta’s platforms.
Instagram search, for instance, has been criticized for spreading “misinformation” about COVID and amplifying what are regarded as conspiracy theories around vaccination. This history may have influenced Threads’ blanket block on some “potentially sensitive” search terms.
However, social media firms have been politically weaponized in the service of U.S. interests. In an investigative article from 2022, The Intercept detailed how the U.S. government is secretly working with leading tech companies to monitor and moderate content. The firms include Twitter, Facebook, Reddit, Discord, Wikipedia, Microsoft, LinkedIn, and Verizon Media.
The plan is to filter out content the U.S. Department of Homeland Security (DHS) considers “dangerous speech.” The report claimed that the DHS is targeting what it calls “inaccurate information on the origins of the COVID-19 pandemic and the efficacy of COVID-19 vaccines.”
It is also looking at “racial justice, U.S. withdrawal from Afghanistan, and the nature of US support for Ukraine.” The monitoring extends to elections. Facebook, for example, reportedly “created a special portal for DHS and government partners to report disinformation directly.”