Featured January 3, 2023
Andrew Tate Arrest: Trust Twitter and TikTok Content at Your Own Risk
Days after the arrest of Andrew Tate, Twitter and TikTok began amplifying deliberate lies by fans of the popular misogynist claiming that Tate had been released, further discrediting the platforms. The posts fooled many, but they were completely false: Tate remains in custody in Romania.
The top Twitter post in searches for “Andrew Tate released” reshared an old video of the 36-year-old’s August interview with Tucker Carlson discussing his arrest, packaged as though it were new. The video garnered 2.8 million views and the post itself drew 3,000 likes.
Also read: Not Everyone Loves ChatGPT: Here’s Why
“Update: Andrew Tate released without charges after swatting incident. Handled it with class and dignity,” tweeted Tate fan @KaladinFree. “In other news, liberals and one semi-retarded tree hugger are suddenly suffering from mass depression.”
UPDATE- Andrew Tate released without charges after swatting incident. Handled it with class and dignity.
In other news, liberals and one semi-retarded tree hugger are suddenly suffering from mass depression. pic.twitter.com/5rzPOHOQ6i
— Free (@KaladinFree) December 30, 2022
Andrew Tate arrested for rape, human trafficking
Andrew Tate is a popular online influencer best known for his misogynistic opinions. The British-American was arrested with his brother Tristan in Romania on Dec. 29, as part of an investigation into human trafficking, rape and organized crime, the Guardian reports.
Prosecutors said in a statement last week that four individuals, including two British citizens, are believed to be members of a criminal gang that exploited women for sex after enticing them with promises of a relationship.
The women were allegedly subjected to “physical violence and mental coercion through intimidation, constant surveillance, control and invoking alleged debts.” They were forced to make pornographic videos. Prosecutors said they found six women exploited by the suspects.
Additional details of the specific allegations against Tate have not been made public. Tate and his brother will spend the next 30 days in jail after a judge denied them bail on Friday. This should give Romania’s Directorate for Investigating Organized Crime and Terrorism time to continue its investigations.
Andrew Tate freed, claim Twitter fans
Following Tate’s arrest, thousands of his supporters have taken to social media to share posts claiming that the ex-Big Brother contestant had been freed from jail in Romania, without providing evidence to back their claims.
On Twitter, Tate’s 4.2 million followers used hashtags such as #FreeTopG and #tateinnocent to spread theories that Tate had been falsely accused and framed by state actors.
One of the top three posts on Twitter in searches for “Andrew Tate released” simply states: “Andrew Tate released from jail.” The old video amassed more than 408,000 views and 4,000 likes within days of being posted online.
Andrew Tate Released From Jail pic.twitter.com/lWltW4n70x
— CrossedChains (@CrossedChains) December 30, 2022
In it, the narrator speaks about Tate’s arrest and how security agents confiscated many of the brothers’ assets, including iPads, a YouTube studio, and $400,000 in cash. Andrew Tate nods in agreement while sitting casually in a chair, his head resting nonchalantly in his hand.
Another clip with 323K views purports to show “Andrew and Tristan celebrating after being released from custody.” The video shows them taking shots of whiskey and talking about sex in a room packed with scantily dressed women. Loud music is playing in the background.
Andrew Tate and Tristan Tate celebrating after being released from custody 👑 pic.twitter.com/EJjvcT8nTv
— Evan Clarke (@EvanClarke_) December 30, 2022
Twitter is hardly the only site spreading misinformation about the alleged release of Andrew Tate. Users of other social media platforms such as TikTok have concocted elaborate schemes to fool people into believing the former kickboxer had been released.
On Dec. 30, the most-liked new video about Tate was one that “appeared at first to be a factual report,” the Guardian reports. But it ended by asking users: “What do you guys think is the real reason Tate was arrested? Was it the Matrix, or did Tate actually commit a serious crime?”
The Guardian reports an investigation by the Observer in August into Tate “revealed that TikTok was aggressively promoting his content to users including boys and young men.” Members of Tate’s online academy “had been instructed to post deliberately controversial clips in an effort to boost engagement and manipulate the algorithm,” it added.
‘The Matrix got me’
Andrew Tate may have carefully shaped opinions regarding his arrest. On Friday, a day after his arrest, he or someone logging into his account tweeted that “The Matrix sent their agents.” He was referring to the 1999 sci-fi film of the same name, in which people unknowingly live in a simulation.
In this context, “The Matrix” represents Big Brother: state agents who police free speech and punish thought crime. Many of Tate’s followers have echoed his views, blaming “The Matrix” for setting him up.
On Jan. 1, Tate doubled down on his claims. “My unmatched perspicacity coupled with sheer indefatigability makes me a feared opponent in any realm of human endeavor,” he wrote on Twitter. “For every domain the Matrix shuts down, we have dozens ready to replace it.”
My unmatched perspicacity coupled with sheer indefatigability makes me a feared opponent in any realm of human endeavor.
For every domain the Matrix shuts down, we have dozens ready to replace it.
Find The Real World here:https://t.co/0wFTUaN1cv
— Andrew Tate (@Cobratate) January 1, 2023
Social media networks have struggled to contain content that spreads falsehoods, promotes hate speech, or fans violence. However, the likes of Facebook and YouTube have responded with more aggressive moderation policies and practices. Andrew Tate was banned from Twitter in 2017 for similar offenses but was reinstated recently after the Elon Musk takeover.
It is incredible that the platform has allowed fans of Tate to spread misinformation unchecked, even after it was reported that the former kickboxer remained in detention.
At this scale, users trust content on Twitter and other social media networks at their own risk. Most people already distrust social media: only 24% believe social media does a good job of separating fact from fiction, according to a Reuters Institute survey.
AI
Baidu Is Rolling Out a $145M Venture Capital AI Fund
Chinese tech giant Baidu is setting up a venture capital fund of $145 million (1 billion yuan) to back AI-focused startups. Baidu co-founder and CEO Robin Li announced the launch of the fund at a JP Morgan summit in China this week.
The move could signal China’s push towards self-reliance in the cut-throat generative AI sector. The fund will support the development and innovation of AI-based content creation, such as chatbots, video and audio synthesis, and natural language processing.
The fund is targeting early-stage AI applications, an area in which Chinese generative AI startups have so far struggled to reach widespread adoption.
Also read: AI Code of Conduct Coming ‘Within Weeks’ Says US and Europe
Tailing the US’s OpenAI
OpenAI recently created an investment fund valued at more than $175 million, according to a Securities and Exchange Commission filing. The company has been investing in startups through its OpenAI Startup Fund, which backs companies “pushing the boundaries of how powerful AI can positively impact the world.”
Baidu is also planning to launch a competition for developers to build applications using its Ernie large language model (LLM) or integrate the model into their existing products, similar to how other tech firms are using OpenAI’s ChatGPT technology.
Ernie Bot is Baidu’s own AI-powered LLM, capable of generating natural and coherent text based on user input.
“American developers are building new applications based on ChatGPT or other language models. In China, there will be an increasing number of developers building AI applications using Ernie as their foundation,” said Li.
Baidu unveiled the chatbot in March this year and claimed that it outperformed other LLMs on several benchmarks.
Battle for AI supremacy
The success of ChatGPT has put Chinese tech companies under pressure to fast-track the release of their own LLMs and bring them to market.
According to Reuters, more than 75 Chinese companies have released their own LLMs since 2020. Baidu and e-commerce giant Alibaba are among them.
A report by a state-run research firm says over 79 LLMs have been launched in the past three years.
The Baidu boss predicts that, in the generative AI age, Chinese companies will catch up and even lead the way in discovering commercial applications for AI.
“I am very bullish on China AI development. Over the past few decades, China has warmly embraced new technologies,” said Li.
“Even though we didn’t invent Android, iOS or Windows, we developed a host of very innovative applications like WeChat, Douyin and Didi. Many of them are popular and useful. The same trend is playing out in the AI age. Technology ushers in a myriad of possibilities and we are good at capturing them to build applications,” explained Li.
LLMs, a vital tech
Because they can produce realistic and varied material across a range of subjects and formats, LLMs are seen as a vital technology for expanding AI applications and services. They do, however, also present ethical and legal difficulties, such as possible abuse, plagiarism, and bias. In response to the surge in LLMs, China released draft regulations on the use of generative AI in April, requiring developers to obtain approval and explicitly label such products.
Baidu’s venture capital fund and developer competition are expected to accelerate the growth and adoption of AI-based content production in China and beyond.
AI
AI Code of Conduct Coming ‘Within Weeks’ Says US and Europe
On Wednesday a top EU official said the European Union and United States expect to draft a voluntary code of conduct on artificial intelligence within weeks. The move comes amid concerns about the potential risks of AI on humanity, and as calls for regulation intensify.
European Commission Vice President Margrethe Vestager said that the United States and the European Union should promote a voluntary code of conduct for AI to provide safeguards as new legislation is being developed.
She was speaking at a meeting of the EU-U.S. Trade and Technology Council (TTC), which is jointly led by American and European officials. Any new rules on AI will not take effect for at least three years, she said. The code is therefore expected to bridge that gap.
Also read: EU Antitrust Chief Steps up Rhetoric on Metaverse, AI Regulation
Game-changing AI technology
“We need accountable artificial intelligence. Generative AI is a complete game changer,” Vestager said after the council’s meeting in Sweden, AP reported.
“Everyone knows this is the next powerful thing. So within the next weeks, we will advance a draft of an AI code of conduct.”
She said officials will gather feedback from companies developing and using AI, as well as other industry players. Vestager hopes there will be a final proposal “very, very soon for industry to commit to voluntarily.”
US Secretary of State Antony Blinken said he had an “intensive and productive” discussion on AI with his European counterparts at the TTC forum.
“[The council has] an important role to play in helping establish voluntary codes of conduct that would be open to all like-minded countries,” Blinken said.
AI could end human race
The development of AI has raised concerns about its potential to be used for harmful purposes, such as discrimination, surveillance, and nuclear war. There have also been concerns about the potential for AI to create mass unemployment.
As MetaNews previously reported, one of the core issues is what experts described as the “alignment problem.” Essentially, the problem refers to the difficulty of ensuring that an AI system’s goals and objectives are aligned with those of its human creators.
Critics say the danger is that an AI system may develop its own goals and objectives that conflict with those of its creators, leading to disastrous outcomes. On Tuesday, about 350 scientists and experts signed a statement calling for AI regulation to be a global priority.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement read.
The statement was issued by the Center for AI Safety, a San Francisco-based non-profit. It was signed by the chief executives of Google DeepMind and ChatGPT creator OpenAI, along with other major figures in artificial intelligence research.
In May, leaders of the so-called G7 nations met in Japan and called for the development of technical standards to keep AI “trustworthy”. They urged international dialogue on the governance of AI, copyright, transparency, and the threat of disinformation.
⚠️Accountability on #AI can't wait. It is NOW. Today #TTC kicked off work on a 1st voluntary AI #CodeOfConduct. We’ll work with our key partners & the #AI community on #safeguards to make AI responsible, safe & trustworthy. This is a huge step in a race we can't afford to lose. pic.twitter.com/WBcazIysiK
— Margrethe Vestager (@vestager) May 31, 2023
According to Vestager, specific agreements, not just general statements, are needed. She suggested that the 27-nation EU and the US could help drive the process.
“If the two of us take the lead with close friends, I think we can push something that will make us all much more comfortable with the fact that generative AI is now in the world and is developing at amazing speeds,” she said.
Worldwide concern
The U.S. and the European Union are not the only jurisdictions working on AI regulation. China’s Cyberspace Administration has already issued new regulations that ban the use of AI-generated content to spread “fake news.”
In Australia, Industry and Science Minister Ed Husic said regulation is coming soon.
“There’s a sort of feeling in the community that they want to have the assurance … that the technology isn’t getting ahead of itself and it’s not being used in a way that creates disadvantage or risk for people,” he said, according to local media reports.
“That’s why the [federal government] wants to set up the next reforms that can give people confidence that we are curbing the risks and maximising the benefits.”
AI
ChatGPT’s Bogus Citations Land US Lawyer in Hot Water
A lawyer in the United States is facing disciplinary action after his law firm used popular AI chatbot ChatGPT for legal research and cited fake cases in a lawsuit.
Steven A. Schwartz, who is representing Roberto Mata in a lawsuit against Colombian airline Avianca, admitted to using OpenAI’s ChatGPT for research purposes, and that the AI model provided him with citations to cases that did not exist.
Mata is suing Avianca for a personal injury caused by a serving cart in 2019, claiming negligence by an employee.
Also read: Opera Unveils GPT-Powered AI Chatbot Aria
Bogus all the way
According to a BBC report, the matter came to light after Schwartz, a lawyer with 30 years’ experience, used these cases as precedent to support Mata’s case.
But the opposing counsel flagged the ChatGPT-generated citations as fake. US District Court Judge Kevin Castel confirmed that six of them were non-existent. He demanded an explanation from Schwartz, an attorney with New York-based law firm Levidow, Levidow & Oberman.
“Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” Judge Castel wrote in a May 4 order.
“The court is presented with an unprecedented circumstance.”
The supposed cases include Varghese v. China South Airlines, Martinez v. Delta Airlines, Shaboon v. EgyptAir, Petersen v. Iran Air, Miller v. United Airlines, and Estate of Durden v. KLM Royal Dutch Airlines, none of which appeared to exist to either the judge or the defense.
Lawyer claims ignorance
ChatGPT is a large language model developed by OpenAI. Launched in November, the AI is trained on billions of data points from the internet and can perform a variety of tasks, such as generating text, translating languages, writing poetry, and even solving difficult math problems.
But ChatGPT is prone to “hallucinations” – tech industry speak for when AI chatbots produce false or misleading information, often with confidence.
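For readers curious how such a tool is queried programmatically, the snippet below is a minimal sketch using OpenAI’s Python SDK. The model name and prompt are illustrative assumptions, not details from the case; the point is that the reply arrives as plain text with no built-in guarantee of accuracy, so any citation it produces must still be checked against a real legal database.

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Ask the model about a case. The response may be fluent and confident
# yet entirely fabricated ("hallucinated"), so it must be independently
# verified before being relied upon.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Is Varghese v. China Southern Airlines a real case?"}
    ],
)

print(response.choices[0].message.content)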
In an affidavit last week, Schwartz said he was “unaware of the possibility that its [ChatGPT] content could be false.” He claimed to have never used ChatGPT prior to this case and said he “greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity.”
A lawyer used ChatGPT to do "legal research" and cited a number of nonexistent cases in a filing, and is now in a lot of trouble with the judge 🤣 pic.twitter.com/AJSE7Ts7W7
— Daniel Feldman (@d_feldman) May 27, 2023
The career attorney now faces a court hearing on June 8 after accepting responsibility for not confirming the authenticity of the ChatGPT sources. Schwartz was asked to show cause why he shouldn’t be sanctioned “for the use of a false and fraudulent notarization.”
ChatGPT’s confident lies
According to the BBC report, Schwartz’s affidavit included screenshots confirming his conversations with ChatGPT.
Schwartz asked the chatbot, “is varghese a real case?”, to which ChatGPT responded “yes, [it] is a real case.” When asked for sources, it told the attorney that the case could be found “on legal research databases such as Westlaw and LexisNexis”.
Again, the attorney asked: “Are the other cases you provided fake?” ChatGPT responded “No”, adding that the cases could be found on other legal databases. “I apologize for the confusion earlier,” ChatGPT said.
“Upon double-checking, I found the case Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019), does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis. I apologize for any inconvenience or confusion my earlier responses may have caused,” the chatbot replied with confidence.