
Featured February 15, 2023

Chat Fishing: How Artificial Intelligence Could Affect Online Dating



iStock / Getty Images

Scared of Cat-fishing, Kitten-fishing, Woke-fishing? Time for Chat-fishing. The new frontiers of Conversational AI will mean fresh use-cases – and new dangers, possibilities and questions, too. The clock has struck 12.

In Spike Jonze’s 2013 movie ‘Her’, the protagonist, played by Joaquin Phoenix, falls in love with an AI virtual assistant. Theodore is enthralled by Samantha’s voice, her empathy, her unerring ability to actually ‘get’ him, as well as her talent for holding both silly and deep conversations the way only a lover can. The plot is predictable: you feel in your gut that he is doomed to despair.

And so it proves – Theodore’s heart is eventually broken when Samantha admits she has been deep-learning so fast that she has outgrown him.

That movie is already ten years old, and recent advancements in AI suggest the world of online dating will soon change. The question is, for better or worse?

Also read: Men Turn to Chatbots to Pen Love Letters

AI in Dating – Swipe Right?

‘Her’ could be happening right now. AI has entered our lives so quickly and viscerally that we didn’t even blink when our banks replaced human customer support staff with bots. After all, they are smart, personable, prompt, good at listening, and they never argue back – unlike humans.

Little did we know that these bots would soon trickle into our love-lives too. How do you know you are not chatting with an algorithm when you are dating online?

Ask Kwame Ferreira, co-founder of Impossible and Bond Touch (makers of long-distance wristbands for lovers to stay in touch), and you get some sense of how relationships could change in the AI world, particularly as humans get used to interacting with dating algorithms.

“As the technology of AI deepens, the way we interact and form relationships is likely to change. One of the most significant ways this could happen is through the increased use of AI-powered avatars or chat-bots for dating or social interactions.

“In some ways, this could be seen as a positive development, as it could make it easier for people to connect with others, regardless of their physical location or social status. For example, people with social anxiety or mobility issues might find it easier to form relationships through AI-mediated interactions.”

Indranil Bandyopadhyay, principal analyst at Forrester, dismisses the possibility of humans being completely supplanted by AI here. “AI is very intelligent today but it is mimicking intelligence of natural neural architecture. In that, it has only reached the level of a brain of an insect. The human brain is far more complex. We ourselves do not know about the depth and intricacies of our brain. How can AI simulate it yet?”

As Jui Ramaprasad, Associate Professor of Information Systems at the University of Maryland’s Robert H. Smith School of Business, observes, platforms have been using AI to improve user experience for some time. Ramaprasad has recently published research into a feature that reveals ‘who likes you’ (WLY) on dating apps. So it pays to listen when she says bots on dating apps could be a cause for concern. She explains:

“Pushing the recommendation-algorithm path further is using AI for the steps after an initial match on a dating site or a real estate listing site, i.e. chat-bots ‘making the first move’? This seems scary.”

AI for Dating – Swipe Left?

The potential negative consequences of AI in online dating cannot be overlooked. Deception is one potential danger, warns Professor Andrea Stevenson Won, who heads the Virtual Embodiment Lab at Cornell University.

“Just like with cat-fishing on other social applications, people could be hurt or disappointed if they interacted with another ‘person’ who was not what they seemed. And, similar to cat-fishing schemes, a bot could be a front for a bad actor looking to take financial or other advantage.”

The spectrum is even broader than that: people could wield AI not just for financial gain but for their own enjoyment.

“It’s important to realize that a lot of people use virtual worlds to play with alternate identities for fun – i.e., there is a long tradition of using discrepant avatars playfully,” Stevenson Won notes.

Kwame Ferreira believes one of the major issues with AI-mediated relationships is the lack of transparency. “If people are unaware they are interacting with an AI rather than a human, they may form false impressions or misunderstandings about the other person. This can be especially problematic in dating or romantic contexts, where people may become emotionally invested in a relationship only to discover later that the other person was not real.

“Additionally, AI-mediated relationships may lack the emotional depth and nuance of real human interactions, which could lead to dissatisfaction or disappointment in the long run.”

So much of our society is based on trust between people, or between people and institutions, and so many of the transactions based on that trust assume it is relatively easy for a person to prove who they are, opines Kentaro Toyama, W. K. Kellogg Professor of Community Information at the University of Michigan School of Information.

Professor Toyama, author of Geek Heresy: Rescuing Social Change from the Cult of Technology, gives the example of how many of us have the experience of relaying a password or other secret information to a spouse or relative over FaceTime or Zoom.

“We trusted the other party because they looked and spoke like the person we know. But existing technology is very close to being able to fake such online interactions. When we can no longer tell when we are interacting with a person or a computer, we will have to rethink much of everything we believe about establishing trust.”

Jenny Fu, a doctoral student in Information Science in the Robots in Groups (RIG) lab at Cornell University, notes how AI can help people fulfil their communicative and relational goals. She says:

“In online dating, people are motivated to present themselves as attractive and authentic. AI systems have the potential to support people with their self-presentation by assisting them in showcasing their attractiveness yet expressing their ‘real self’.”

However, since current AI-suggested messages tend to be more positive than negative, AI-mediated communication could have a downstream effect on people’s conversation dynamics, priming their perceptions of themselves and others and potentially altering their behavior to be more positive.

AI in Dating – How About Benching?

Ferreira argues that as the technology advances, we could see the emergence of AI that can mimic human behaviour and emotions in such a way that it’s almost indistinguishable.

“The ethical questions around this are certainly complex and nuanced, with many experts suggesting the need for strict regulations and guidelines to be put in place to ensure the safe use of these systems and to make sure that users are aware of the nature of their interactions.”

One can find both relief and caution in how Professor Ramaprasad looks ahead. “Though I cannot predict the future, I always remind myself that technology has automated the pieces that can be automated but has not diminished the importance of the human interaction: the first date, the job interview, the tour of a potential home to purchase, and so on.” She adds:

“So, if the past can predict the future (as AI believes it can!), my optimistic view is that AI will perhaps allow us to focus more on the important pieces of creating and maintaining social relationships.”

Hyper-connection, convenience, predictability, super-human responsiveness, algorithm-kissed, instant gratification, always-on love?

Or love which is messy, unpredictable, beautifully complex, unique and quintessentially human? The choice, as always, is ours – Bandyopadhyay says wistfully and wisely.

In ‘Her’, the protagonist Theodore Twombly taught us an important lesson, not in the final scene when the AI assistant Samantha departs – but rather, in the first. We learn that Theodore’s job is to write beautiful, heartfelt letters. Because people needed them. And they had forgotten how to write them themselves.

That’s the irony. That’s the choice.

Image credits: Shutterstock, CC images, Midjourney, Unsplash.


Biden Administration Demands China’s ByteDance Sell TikTok to Avoid Ban




The U.S. government has demanded that TikTok’s Chinese owners ByteDance sell their shareholding in the widely popular video sharing app – or else face a ban, according to the Wall Street Journal.

America cites “national security” as the reason for the ban. The government has for a long time raised concerns about the social media site, voicing fears that China could use the app as a tool for espionage, and to possibly influence political outcomes in the U.S.

Former U.S. president Donald Trump threatened the exact same action in 2020, but the courts struck the executive order down. Now President Biden is taking a tougher stance after Democrats were criticized for being weak on ByteDance, Quartz reported.

TikTok rubbishes security concerns

The demand for ByteDance founders and owners to sell their 20% stake came from the Committee on Foreign Investment in the United States, or CFIUS, a multi-agency federal task force responsible for national security risks in cross-border investments, per WSJ.

While the combined stake of Zhang Yiming, ByteDance CEO Liang Rubo, and others who helped found the company in Beijing in 2012 is a minority holding, it reportedly carries an “outsized” share of voting rights.

Global investors own 60% of ByteDance’s shares, and the other 20% is owned by employees. In a statement shared with Reuters news agency, TikTok spokeswoman Brooke Oberwetter explained that banning the video-sharing app on national security grounds would be a farce.

“If protecting national security is the objective, divestment doesn’t solve the problem: a change in ownership would not impose any new restrictions on data flows or access,” she said.

“The best way to address concerns about national security is with the transparent, US-based protection of US user data and systems.”

A Mar. 16 report by The Information suggested that the Chinese government will not take the U.S. threats lying down. It said Beijing will “oppose any attempt by the Biden administration to force TikTok’s Chinese shareholders to sell their stakes.”

Banning the app everywhere

The U.S. is a key market for ByteDance, with over 100 million people using TikTok in the country. That is partly why the company has been fighting tooth and nail to remain operational in America, even as the working environment looks increasingly difficult.

TikTok pledged to spend $1.5 billion on a program to protect U.S. user data and content from Chinese government access or influence, according to industry media. The plan would seal off U.S. operations, with all data stored within the country at Oracle, the U.S. tech firm. Oracle would have access to TikTok’s algorithmic code and would “flag issues for government inspectors.”

But as MetaNews previously reported, the issue has become highly politicized. While TikTok CEO Shou Zi Chew is expected to testify on security issues before a House Committee on Energy and Commerce on Mar. 23, local lawmakers have already tightened their chokehold on foreign-owned tech companies.

Also read: TikTok Manipulates Own Algorithm to Promote Certain Landmarks

Earlier this month, the Biden administration endorsed proposed new legislation from about 12 senators that gives the secretary of commerce power to restrict tech firms based in six countries, namely Russia, Iran, North Korea, Venezuela, Cuba, and China. The U.S. considers all these countries “adversaries.”

It isn’t only Biden who is suspicious of TikTok. The app is also facing scrutiny in Canada, the UK, New Zealand, and the European Union. On Thursday, the UK banned legislators and other public officials from installing the app on their work devices, and over 30 U.S. states have banned TikTok from being downloaded on state devices.



Google Says AI Can’t Replace Human Ingenuity, Integrates It Into Applications




Google has integrated generative artificial intelligence (AI) into its Workspace applications such as Docs, Gmail, and Slides, according to a recent statement.

Per the announcement, the Workspace applications would offer features similar to OpenAI’s ChatGPT and DALL-E, as well as Stability AI’s Stable Diffusion.

Users of Google Docs could use AI to generate, summarize, and brainstorm text. AI could also be leveraged to generate full emails from brief bullet points. Slides users, meanwhile, can tap into AI to generate images, audio, and video to enhance presentations.

Google’s Workspace vice president of products, Johanna Voolich Wright, said the product would allow:

“Workspace users to harness the power of generative AI to create, connect, and collaborate like never before.”

Google’s AI product to be available to some US testers

The product will be available to select US-based “trusted testers” later this month, while it will be rolled out for public use later this year. The firm did not provide further information on its launch timeline.

Meanwhile, Google added that the new features would help increase users’ productivity while saving them time.

“Workspace saves you the time and effort of writing that first version. Simply type a topic you’d like to write about, and a draft will instantly be generated for you. With your collaborative AI partner, you can continue to refine and edit, getting more suggestions as needed.”

Google plans to extend the product into its other applications.

Google says AI can’t replace the ingenuity of “real people”

Although it has talked up the advanced capabilities of its AI product, Google says generative AI cannot be a “replacement for the ingenuity, creativity, and smarts of real people.”

The company wrote that AI sometimes gets things wrong — an allusion to the tech’s costly error that shaved the company’s stock value by around $100 billion.

Due to this, the firm said its focus is on building responsible AI that keeps the users in control. Google added that the AI would only make suggestions that users can accept, edit, and change.

Also Read: Microsoft Eliminates AI Ethics and Society Team

Who is winning the AI tech race?

While Google’s previous AI strides had been fraught with issues, its technological rival Microsoft has made giant strides by deploying OpenAI’s ChatGPT.

Previously, Microsoft revealed plans to integrate ChatGPT into its Office suite. The firm aims to revolutionize human-machine interactions by integrating AI into its products and tools.

Microsoft has already integrated AI technology into developer tools like Azure and other products like GitHub Copilot, Microsoft Teams, and Microsoft Viva Sales. The firm also credited the chatbot for the astronomic growth of its search engine, Bing.

Concerns have, however, been raised about Microsoft’s apparently waning commitment to responsible AI development: recent layoffs mean the company no longer has a dedicated team to ensure its ethical AI principles are integrated into product design.

Of course, some will argue that AI should be an ideology-free zone, after ChatGPT’s political leanings were exposed.



Bing Removes Waiting List For All AI Chatbot Users




Microsoft’s Bing Chat waitlist is gone, allowing new users who sign up to access the AI-powered chatbot immediately, without waiting.

The tech giant’s search engine Bing has been in the spotlight since the release of its ChatGPT-powered chatbot, Bing Chat, which was previously available only to a select few who joined a waitlist.

Also read: GPT-4 is Here: What You Need to Know About OpenAI’s New ChatGPT

This doesn’t seem to be the case anymore. Microsoft released the Bing Chat feature early last month, and even added an icon to its Edge browser. However, access to the chatbot still required signing in and joining a waiting list.

Only once accepted could one try out the AI-powered chatbot. That has not been the smoothest onboarding experience, so it seems logical for Microsoft to make it easier for the market to use its new Bing.

As of Thursday, many people claimed that they were able to gain access to the chatbot soon after signing up. Windows Central reported that multiple members of its team were able to use the chat feature instantly.

TechCrunch also tested with multiple email accounts and gained access with some of them. However, one still has to request to join the waitlist after signing up, and if you’re lucky you might get access instantly.

“I joined the waiting list yesterday, it was less than 2 sec long,” tweeted one user identified as Khatarina.

While the company did not specify if the waitlist changes were permanent or not, Microsoft said in a statement that it is running various experiments to onboard more users.

“During this preview period, we are running various tests, which may accelerate access to the new Bing for some users. We remain in preview and you can sign up at,” said the company.

Bing gives users a taste of GPT-4

The changes in the waitlist came after Microsoft confirmed its Bing AI chatbot has been running on OpenAI’s next generation AI language model, GPT-4.

The announcement generated a lot of interest and excitement in the market at a time when GPT-4 was not yet publicly available.

Microsoft invested $10 billion in OpenAI towards ChatGPT research, and the bet has hit the jackpot: the GPT-4-powered chatbot has put Bing on the map. According to Jacob Roach, Bing Chat saw about 1 million users sign up for the waiting list soon after the initial announcement.

While OpenAI is only offering its latest model, GPT-4, to Plus subscribers, having access to Bing Chat gives users a taste of the new model that powers it.

However, unlike OpenAI’s chatbot, Bing Chat does not let users combine text and images as input. On the other hand, unlike OpenAI’s chatbot, Bing has access to the internet, which widens the results pool.

A downside to Bing Chat is that a conversation is limited to 15 interactions before the chat must be cleared and started again, whereas OpenAI lets users save conversations even on the free version.

Tech firms haven’t been sitting by

Microsoft endured criticism when Bing was launched last month as users felt the company had rushed to release the product. But to the company’s credit, many of the challenges the first users encountered have been fixed.

Microsoft has been working non-stop to improve service delivery by integrating AI into most of its products. It recently added the ChatGPT-powered AI bot to the Windows 11 taskbar, according to TechCrunch.

Microsoft was scheduled to hold an event, ‘Reinventing Productivity With AI’, on Thursday, at which the company was expected to show off more AI features in its Office programs like Word, PowerPoint, and Outlook.

The popularity of ChatGPT has also put a lot of focus on Microsoft, while other tech companies are also busy incorporating the technology into their products and services.

Google is not just sitting by watching from the terraces. The search giant announced its Bard AI chatbot in February. Google also released AI-powered tools for its suite of online apps on Tuesday, ahead of Microsoft’s announcements later in the week.

Last month, Snapchat also released its AI chatbot ‘My AI’ powered by OpenAI’s ChatGPT technology.



Copyright © 1997 – 2023 MetaNews All Rights Reserved
