
December 19, 2022

AI Selfie App Revealed as Male Chauvinist


If you have been on social media this week, you have probably come across AI-generated images of your friends as fairies, anime characters, avatars, and other magical beings.

This is in part due to Lensa, an AI app that synthesizes digital portraits from photos users upload. The app’s portraits practically took over the internet, with Lensa becoming the most downloaded app in Apple’s App Store.

Lensa, like many AI applications that digitally render images, has courted both admiration and controversy, notably for its seemingly blatant sexualization of women’s images. Other users noted that the app made their skin paler or their bodies thinner.

Also read: AI Tech That Creates Life-Wrecking Deep Fake Images

How to get your own ‘magic avatar’

The images doing the rounds are a proud creation of Lensa’s magic avatars feature. To get a feel for it, one has to download the Lensa app on a phone. An annual subscription is around $35.99, but its services are also available for free, albeit with limitations, on a week-long trial if one wants to check it out first.

However, generating the popular magic avatars requires an additional fee, owing to what the app describes as the “tremendous computational power” needed to produce them.

On the free trial, one can get 50 avatars for $3.99 and 200 avatars for $7.99. To get the best results, the app encourages users to upload at least 20 close-up images.

Ideally, these should be close-ups of one’s face with an array of different backgrounds, facial expressions, and angles. The app insists users must be 13 or older. Lensa is not an entirely new application.

A product of Prisma, the app first became popular in 2016 thanks to a feature that let users restyle their selfies in the manner of famous artists.

How does Lensa work?

According to the company, Lensa makes use of what it calls “TrueDepth API technology”: a user provides photos, or “face data,” which is then used to train the AI to perform better and return better results. Training is when the AI processes the data, then validates and tests the resulting models.

To use the app, one can curate 20 selfies with a variety of expressions and angles and choose the 100-avatar option.
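Lensa’s actual training pipeline is proprietary, but the train/validate/test cycle the company alludes to can be sketched in a few lines. In this illustration the file names and split sizes are assumptions, not Lensa’s real values:

```python
# Illustrative train/validate/test split over a user's uploaded selfies.
# File names and split sizes are hypothetical; Lensa's pipeline is not public.
import random

selfies = [f"selfie_{i:02d}.jpg" for i in range(20)]  # the ~20 uploaded close-ups
random.shuffle(selfies)

# Hold some images back so the fine-tuned model can be validated and tested
# on faces it never saw during training.
train, validate, test = selfies[:14], selfies[14:17], selfies[17:]
print(f"train on {len(train)}, validate on {len(validate)}, test on {len(test)}")
```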


It takes about 20 minutes to do the job. Once done, the AI returns avatars in 10 categories: fantasy, fairy princess, focus, pop, stylish, anime, light, kawaii, iridescent, and cosmic.

“In general, I felt like the app did a decent job producing artistic images based on my selfies. I couldn’t quite recognize myself in most of the portraits, but I could see where they were coming from,” Zoe Sottile of CNN wrote.

“It seemed to recognize and repeat certain features, like my pale skin or my round nose, more than others. Some of them were in a more realistic style, and were close enough I might think they were actually photos of me if I saw them from afar. Others were significantly more stylized and artistic, so they felt less specific to me.”

Sottile noticed the AI also made her skin lighter.

As for me, I also noticed that it automatically lightened my skin. An image from my gallery of me and a slightly darker-skinned friend returned a much lighter version of us both, a clear exaggeration that exposed an inclination to lighten Black skin tones.

Sexualizing women

Others who used it had more or less similar concerns.

Women say the AI is quick to sexualize their images. But in an earlier article, we explained this has been made possible by the huge number of sexualized images found in data sets used in AI training.

In other words, the AI is all too familiar with how to generate such images and can easily veer into the pornographic. With a bit of trickery, a user can prompt it to produce porn from those images as well.

In other stories, we have covered how AIs can be fooled into providing information on how to make a bomb, for instance. Strangely, the issue of sexualization didn’t appear in images of men uploaded to the magic avatars feature. For MIT Technology Review, Melissa Heikkilä wrote,

“My avatars were cartoonishly pornified, while my male colleagues got to be astronauts, explorers, and inventors.”

Sottile, on the other hand, noticed that in “one of the most disorienting images” the AI had made her look “like a version of my face was on a naked body.”

“In several photos, it looked like I was naked but with a blanket strategically placed, or the image just cut off to hide anything explicit,” she said.

“And many of the images, even where I was fully clothed, featured a sultry facial expression, significant cleavage, and skimpy clothing which did not match the photos I had submitted,” Sottile added.

Others expressed fears that AI technology like Lensa would be used to turn them into porn stars.

Body shaming

For full-figured women, the experience was somewhat different, and in some instances even worse: the AI made them thinner and sexualized them.

“Lmfao if you got Body Dysmorphia don’t use that Lensa app for the AI generated pics. This is your warning,” a user wrote.

Another said the app had made her look Asian.

Another user took to Twitter to complain that he had paid $8 only to experience body dysmorphia when he used the AI.

Body dysmorphia is a mental health condition where a person spends a lot of time worrying about flaws in their appearance. These flaws are often unnoticeable to others.

Another complained that the AI automatically shed significant weight from her otherwise full-figured images.

“One complaint I have about Lensa AI is it will make you skinny in some images. As a fat person those images really bothered me. So careful not to get triggered if you’re a fellow fatty with no interest in becoming skinny,” Mariah Successful (@Shlatz) wrote on December 5, 2022.

Psychological time bomb

Psychologists concur with her statement that the AI could trigger full-figured women.

Dr. Toni Pikoos, an Australia-based clinical psychologist who researches and specializes in treating body dysmorphic disorder, believes the application could do more harm than good, calling it nothing but a “photo-filtering tool” that alters one’s self-perception.

“When there’s a bigger discrepancy between ideal and perceived appearance, it can fuel body dissatisfaction, distress, and a desire to fix or change one’s appearance with potentially unhealthy or unsafe means” like disordered eating or unnecessary cosmetic procedures, Pikoos says.

She expressed concern that the images erase “intricate details” such as “freckles and lines,” something that could heighten worries about one’s skin and psychologically trigger a vulnerable person.

“To see an external image reflect their insecurity back at them only reinforces the idea ‘See, this is wrong with me! And I’m not the only one that can see it!’” says Pikoos.

Because the AI introduces features of its own that don’t reflect a user’s real-life appearance, the app could create new anxieties, she says.

She says the AI’s “magic avatars” were “particularly interesting because it seems more objective — as if some external, all-knowing being has generated this image of what you look like.”

This, she feels, could actually be “useful” for people with body dysmorphic disorder, helping to shed light on the “mismatch” between an “individual’s negative view of themselves and how others see them.”

She noted, however, that the AI isn’t objective, because it attempts to depict a flawless, more “enhanced and perfected version” of one’s face.

For instance, someone experiencing body dysmorphic disorder, or BDD, “may experience a brief confidence boost when they view their image, and want to share this version of themselves with the world,” she says, but will be hit hard by reality when they see themselves off screen and unfiltered, in the mirror or in a photo they take of themselves.

“Seeing this reflected in the app would be very confronting and provide a kind of ‘confirmation’ for the way that they see themselves,” Pikoos adds, leading such a person to become “more entrenched in the disorder.”

Defending its own

Andrey Usoltsev, CEO of Prisma Labs, says his company is currently “overwhelmed” with inquiries about Lensa and offered a link to an FAQ page that addresses questions about sexualized imagery, though not the kind of user reactions Pikoos describes.

Stable Diffusion

Lensa also uses Stable Diffusion, a deep-learning image synthesis model that creates new images from text descriptions and can run on a Windows or Linux PC, a Mac, or in the cloud on rented hardware.

Through intensive training, Stable Diffusion’s neural network has learned to associate words with images, along with the statistical relationships between the positions of pixels in its training images.

We covered in another story how the technology could have life-wrecking consequences by rendering images that depict people as criminals or as engaged in less-than-flattering practices such as theft.

For instance, one can give the open-source Stable Diffusion a prompt such as “Tom Hanks in a classroom,” and it will generate a new image of Tom Hanks in a classroom. In Tom Hanks’s case, this is a walk in the park because hundreds of his photos are already in the data set used to train Stable Diffusion.
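As a rough illustration (not Lensa’s production code), that prompt can be run against the open-source Stable Diffusion weights with Hugging Face’s diffusers library; the sketch below assumes the library is installed and a CUDA-capable GPU is available:

```python
# Minimal text-to-image sketch using open-source Stable Diffusion weights.
# Assumes the Hugging Face diffusers library and a CUDA GPU; illustrative only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("Tom Hanks in a classroom").images[0]  # one generated PIL image
image.save("hanks_classroom.png")
```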

Artists getting a raw deal too

On the art front, some artists are unhappy.

They are concerned the AI could threaten their livelihoods, because artists, including digital ones, cannot produce a digital portrait as fast as AI can.

Lensa’s parent company, Prisma, has attempted to assuage concerns that its technology will eliminate work for digital artists.

“Whilst both humans and AI learn about artistic styles in semi-similar ways, there are some fundamental differences: AI is capable of rapidly analyzing and learning from large sets of data, but it does not have the same level of attention and appreciation for art as a human being,” wrote the company on Twitter on December 6.

It says “the outputs can’t be described as exact replicas of any particular artwork.”

Altering self-image

Kerry Bowman, a bioethicist at the University of Toronto, says the AI has the potential to negatively affect one’s self-image, among other ethical issues.

“In some ways, it can be a lot of fun but these idealized images are being driven by social expectations which can be very cruel and very narrow,” Bowman said on Monday.

Bowman said these AI programs draw on data sets sourced from the internet, scanning different art styles to create the portraits. The downside is that the artists are rarely remunerated or credited for the use of their work.

“What happens with emerging AI is that the laws have not been able to really keep up with this in terms of copyright law. It’s very difficult and very murky and ethics is even further behind the laws because I would argue that this is fundamentally unfair,” Bowman said.

Personal Data Concerns

Bowman also raised concerns about how people’s personal data is stored.

“Do you really want your face in a large database? People need to decide for themselves about this but it’s not benign, there’s not nothing to this, it’s not just fun,” he said.

Lensa says photos are kept on its servers and in the app for no more than 24 hours. But through its deep-learning and machine-learning algorithms, the AI still extracts what it needs to deliver better results in the future, even from since-deleted data. This, Bowman says, raises safety concerns about facial recognition, as such data could be used by police illegally.

Image credits: Shutterstock, CC images, Midjourney.


Italy Bans AI Chatbot Replika – EU Hastening AI Regulation


Replika, a US-based artificial intelligence chatbot, has been banned in Italy. The country’s Data Protection Agency (GPDP) prohibited the app on Friday, citing its risk to minors and emotionally fragile people, Reuters reported.

Replika was launched in 2017 as a San Francisco startup offering customers personalized avatars that talk and listen to them.

Also Read: ‘AI Will Affect White Collar Jobs too’

It is quite popular among English speakers and free to use, though it generates about $2 million per month through the sale of bonus features such as voice chats.

Replika is marketed as a “virtual friend,” with abilities to improve the emotional well-being of the user.

During account creation, the platform merely requests a user’s name, email address, and gender, GPDP claimed.

“And the ‘replies’ served by the chatbot are often clearly in conflict with the enhanced safeguards children and vulnerable individuals are entitled to. Several reviews on the two main App Stores include comments by users flagging sexually inappropriate contents,” said GPDP.

Moreover, the Italian regulator stated that the AI chatbot’s intervention in the user’s mood “may increase the risk for individuals still in a development stage or in a state of emotional fragility.”

The Italian watchdog cited the lack of an age verification system and the failure to filter users who do not provide their true age.

Replika breaches European privacy regulations and processes personal data unlawfully, as that processing cannot be based, even implicitly, on a contract that a minor is unable to sign, GPDP said.

Replika’s developer, the U.S. company Luka Inc., must notify the Italian authority within 20 days of the measures taken to implement its requirements, or face a fine of up to 20 million euros ($21.80 million) or up to 4 percent of its global annual turnover, GPDP said in its statement.

EU Rushing AI Regulation

The launch of OpenAI’s ChatGPT has raised concerns globally. EU industry chief Thierry Breton has announced new AI regulations to address the risks posed by the chatbot and AI technology.

OpenAI has recently launched a premium version of ChatGPT after being rated the fastest-growing consumer platform in history.

Breton said the risks posed by ChatGPT and AI systems underscored the urgent need for the rules he proposed last year in a bid to set a global standard for the technology; those rules are now under discussion in Brussels.

“As showcased by ChatGPT, AI solutions can offer great opportunities for businesses and citizens, but can also pose risks. This is why we need a solid regulatory framework to ensure trustworthy AI based on high-quality data,” said Breton.

Breton is seeking co-operation from OpenAI and developers of high-risk AI systems to ensure compliance with the proposed AI Act.

“Just the fact that generative AI has been newly included in the definition shows the speed at which technology develops and that regulators are struggling to keep up with this pace,” a partner at a U.S. law firm said.

AI has become a hot topic, consistently making headlines due to its rapid development in recent years and attracting the attention of lawmakers and regulators.

ChatGPT, in particular, has emerged as the cheating partner of choice for college students and a major headache for academic institutions.

 


ChatGPT Makes History With Fastest 100M Users as StackOverflow Stumbles


ChatGPT, the popular chatbot from OpenAI, is now the fastest-growing internet application in history, with a record 100 million monthly active users in January, according to a new report by Swiss investment bank UBS.

This comes as web traffic at developer knowledge-sharing platform StackOverflow slumped 11.5% to 247 million since the launch of ChatGPT. OpenAI, a private company backed by Microsoft Corp., launched ChatGPT as a free web interface in late November.

Also read: OpenAI Develops Tool to Spot AI-Written Texts

The AI-powered chatbot became incredibly popular with users thanks to its ability to perform a variety of tasks within seconds. In the U.S. tech hub of Silicon Valley, the conversational tool has sparked a veritable artificial intelligence (AI) ‘arms race.’

13M ChatGPT users per day

Citing data from analytics firm Similarweb, UBS said in its report that the number of people using ChatGPT averaged about 13 million per day in January – double the number recorded in December. These are all distinct individuals, or unique users, it stated.

“In 20 years following the internet space, we cannot recall a faster ramp in a consumer internet app,” UBS analysts noted, according to Reuters.

By comparison, TikTok took about nine months after its global launch to reach 100 million users, and Meta’s Instagram took two and a half years.

ChatGPT is a large language model developed by OpenAI. It has been trained on a diverse range of internet text to generate human-like responses to user prompts. The tool is based on OpenAI’s GPT-3.5 language technology, an upgrade of the GPT-3 model released in 2020.

The chatbot was trained on vast amounts of data, from books and articles to conversations. It can understand a variety of topics and contexts, and can be fine-tuned for multiple language tasks such as question answering, language translation, and text summarization.
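As an aside for developers, the GPT-3.5 family behind ChatGPT is also reachable through OpenAI’s public API. A minimal sketch of one such language task, translation, might look like the following; the prompt, model choice, and placeholder key here are illustrative assumptions:

```python
# Minimal sketch of a translation request against OpenAI's completion API.
# Requires the openai package and a real API key; all values are illustrative.
import openai

openai.api_key = "sk-..."  # replace with your own key

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3.5-family completion model
    prompt="Translate to French: The chatbot reached 100 million users.",
    max_tokens=60,
    temperature=0,  # deterministic output for a fixed task
)
print(response.choices[0].text.strip())
```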

ChatGPT accumulated more than one million users within its first week, and the bot has transformed several industries since it was released just over two months ago. It can write complex essays, poetry, and code, and even pass MBA exams at the Wharton School of Business.

While ChatGPT is popular with some, others are concerned about its impact on education and other industries. Gmail creator Paul Buchheit recently said that ChatGPT-like tech could destroy Google’s $149 billion search engine business and dominance within two years.

StackOverflow traffic declines

Meanwhile, the rise of ChatGPT has started to affect rival StackOverflow, the Naspers-owned question and answer website for professional and enthusiast programmers.

According to Similarweb, visits to the StackOverflow website fell 11.5% in the weeks after the launch of ChatGPT, dropping from almost 279 million in November 2022 to 247.4 million by the end of December 2022.

StackOverflow also dropped 27 positions in the “Programming and Developer Software” websites category, falling from 202nd in October 2022 to 229th by the end of December 2022. It is the company’s lowest ranking in that category since October 2019, the data shows.

The website serves as a platform for users to ask and answer questions and, through membership and active participation, to vote questions and answers up or down, similar to Reddit. Users can also edit questions and answers, wiki-style.

In December, StackOverflow banned the use of answers generated using ChatGPT, citing concerns about the accuracy of such responses.

“…because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers,” the company said in a statement.

As MetaNews previously reported, the computational resources required to run ChatGPT over large scales are very costly. OpenAI has now launched a paid subscription service known as ChatGPT Plus for $20 per month, hoping that will help to cover costs.


Microsoft Warns Employees Not to Share Sensitive Data with ChatGPT


Microsoft has warned its employees not to share sensitive data with ChatGPT, the artificially intelligent (AI) chatbot from OpenAI. Employees of the American multinational tech giant had asked in an internal forum whether ChatGPT or any other OpenAI tools were appropriate to use at work, Business Insider reported.

Also read: 30% of College Students Use ChatGPT

In response to that inquiry, a senior engineer from Microsoft’s CTO office said employees may use ChatGPT, provided they do not share confidential information with the AI chatbot.

“Please don’t send sensitive data to an OpenAI endpoint, as they may use it for training future models,” the senior engineer wrote in an internal post, per Insider.

ChatGPT, available for only two months, is already raising concerns in the academic sector. Microsoft has become a partner of OpenAI, ChatGPT’s parent company, and has confirmed an investment of $10 billion.

Microsoft is planning to integrate OpenAI’s technology into its products, including the Bing search engine and other software, to enhance their capabilities, as reported previously.

Microsoft’s major concern regarding “sensitive information” likely covers employees sharing internal software code and seeking checks and advice from the chatbot.

The Same Concern at Amazon

ChatGPT has continuously made headlines since its launch last November, but it has also faced bans, especially in the academic sector, where it has become a cheating partner for students’ schoolwork. Recently, tech giants have also raised concerns over its use.

Amazon warned its employees to beware of ChatGPT last week, Insider reported, with an Amazon lawyer urging employees via an internal communication forum not to share code with the chatbot.

“This is important because your inputs may be used as training data for a further iteration of ChatGPT, and we wouldn’t want its output to include or resemble our confidential information (and I’ve already seen instances where its output closely matches existing material),” the lawyer wrote.

Writing on Slack, the lawyer placed particular emphasis on requesting that employees not share “any Amazon confidential information,” including Amazon code they are working on, with ChatGPT.

Personal Data Concern

As concerns about data privacy grow among large corporations, an OpenAI representative has directed questions about the company’s data and privacy policy to ChatGPT’s FAQ page. OpenAI’s terms of service grant the company the right to use all input and output generated by ChatGPT users, with the stipulation that personally identifiable information (PII) is removed from the data it uses.

However, it is nearly impossible for OpenAI to identify and remove all personal information from the data provided to ChatGPT, says Emily Bender, who teaches computational linguistics at the University of Washington.

“OpenAI is far from transparent about how they use the data, but if it’s being folded into training data, I would expect corporations to wonder: After a few months of widespread use of ChatGPT, will it become possible to extract private corporate information with cleverly crafted prompts?” said Bender.
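Bender’s skepticism is easy to demonstrate. Pattern-based scrubbing catches formatted identifiers such as email addresses and phone numbers but is blind to names and contextual details; the deliberately naive sketch below illustrates that gap and is not OpenAI’s actual pipeline:

```python
# Deliberately naive PII scrubber: regexes catch formatted identifiers,
# while names and confidential context pass straight through.
import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def scrub(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

print(scrub("Contact Jane Doe at jane@example.com about Project Falcon."))
# -> Contact Jane Doe at [email removed] about Project Falcon.
# The name and the confidential project name remain untouched.
```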

Vincent Conitzer, a computer science professor and director of an AI lab at Carnegie Mellon University, said, “All of us together are going to have to figure out what should be expected of everyone in these situations. Is the responsibility on employees to not share sensitive information, or is the responsibility on OpenAI to use information carefully, or some combination?”
