
AI December 19, 2022

AI Selfie App Revealed as Male Chauvinist


If you have been on social media this week, you have probably come across AI-generated images of your friends as fairies, animé characters, avatars, and magical beings.

This is in part due to Lensa, an AI app that synthesizes digital portraits based on photos users upload. The app’s portraits practically took over the internet, with Lensa becoming the most downloaded app in Apple’s App Store.

Lensa, like all AI applications that digitally render images, has courted both admiration and controversy for its seemingly blatant sexualization of female images. Other users noted that the app made their skin paler or their bodies thinner.

Also read: AI Tech That Creates Life-Wrecking Deep Fake Images

How to get your own ‘magic avatar’

The images doing the rounds are the proud creation of Lensa’s magic avatars function. To get a feel for it, one has to download the Lensa app on a phone. An annual subscription costs around $35.99, but the service is also available for free, albeit with limitations, on a week-long trial for anyone who wants to check it out first.

However, generating the popular magic avatars requires an additional fee because, the app says, they take “tremendous computational power” to produce.

On a free trial, one can get 50 avatars for $3.99, or 200 avatars for $7.99. To get the best results, the app encourages users to upload at least 20 close-up images.

Ideally, these images should be close-ups of one’s face with an array of different backgrounds, facial expressions, and angles. The application requires users to be 13 or older. Lensa is not an entirely new application.

A product of Prisma, the application first became popular in 2016 thanks to a function that let users transform their selfies into images in the style of famous artists.

How does Lensa work?

According to the company, Lensa makes use of what it calls “TrueDepth API technology”: a user provides photos, or “face data,” on which the AI is then trained so that its algorithms perform better and return better results. Training is when the AI processes the data, then validates and tests the resulting models.
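For readers who want a concrete picture of what “processes, validates and tests” means, here is a minimal, generic sketch of the standard train/validate/test workflow in Python. It is illustrative only, using placeholder data and a simple model; it does not represent Lensa’s actual pipeline.

```python
# A minimal sketch of the generic train/validate/test workflow.
# Illustrative only: placeholder data and model, not Lensa's pipeline.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # placeholder data, not face photos

# Hold out 20% for final testing, then carve a validation set
# out of what remains for tuning decisions during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
X_train, X_val, y_train, y_val = train_test_split(
    X_train, y_train, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # training
print("validation accuracy:", model.score(X_val, y_val))         # validating
print("test accuracy:", model.score(X_test, y_test))             # testing
```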

To use the app, one can curate 20 selfies with a variety of expressions and angles and choose the 100-avatar option.


It takes about 20 minutes to do the job. Once done, the AI returns avatars that fall into 10 categories: fantasy, fairy princess, focus, pop, stylish, animé, light, kawaii, iridescent, and cosmic.

“In general, I felt like the app did a decent job producing artistic images based on my selfies. I couldn’t quite recognize myself in most of the portraits, but I could see where they were coming from,” Zoe Sottile of CNN wrote.

“It seemed to recognize and repeat certain features, like my pale skin or my round nose, more than others. Some of them were in a more realistic style, and were close enough I might think they were actually photos of me if I saw them from afar. Others were significantly more stylized and artistic, so they felt less specific to me.”

Sottile noticed the AI had also lightened her skin.

As for me, I also found that it automatically made me lighter. An image I tried from my gallery of me and a slightly darker-skinned friend returned a much lighter version of us both, clearly an exaggeration, and exposed an inclination to lighten Black skin tones.

Sexualizing women

Others who used it had more or less similar concerns.

Women say the AI is quick to sexualize their images. But in an earlier article, we explained this has been made possible by the huge number of sexualized images found in data sets used in AI training.

In other words, the AI is all too familiar with how to generate those images and can easily veer pornographic. With a bit of trickery, a user can prompt it to produce porn from those images as well.

In other stories, we have covered how AIs can be fooled into providing information on how to make a bomb, for instance. Strangely, the issue of sexualization didn’t appear in images of men uploaded to the Magic Avatar feature. For MIT Technology Review, Melissa Heikkilä wrote,

“My avatars were cartoonishly pornified, while my male colleagues got to be astronauts, explorers, and inventors.”

Sottile, on the other hand, noticed that the AI had, “in one of the most disorienting images,” made her look “like a version of my face was on a naked body.”

“In several photos, it looked like I was naked but with a blanket strategically placed, or the image just cut off to hide anything explicit,” she said.

“And many of the images, even where I was fully clothed, featured a sultry facial expression, significant cleavage, and skimpy clothing which did not match the photos I had submitted,” Sottile adds.

Others expressed fears they would be made porn stars by AI technology like Lensa.

Body shaming

For full-figured women, the experience was somewhat different, and in some instances even worse: the AI made them look thinner and sexualized their images.

“For Lmfao if you got Body Dysmorphia don’t use that Lensa app for the AI generated pics. This is your warning,” a user wrote.

Another said the app had made her look Asian.

Another user took to Twitter to complain that he had paid $8 to experience body dysmorphia when he used the AI.

Body dysmorphia is a mental health condition where a person spends a lot of time worrying about flaws in their appearance. These flaws are often unnoticeable to others.

Another complained that the AI automatically shaved significant weight off her otherwise full-figured images.

“One complaint I have about Lensa AI is it will make you skinny in some images. As a fat person those images really bothered me. So careful not to get triggered if you’re a fellow fatty with no interest in becoming skinny,” Mariah Successful (@Shlatz) wrote on December 5, 2022.

Psychological time bomb

And psychologists concur that the AI could trigger full-figured women.

Dr. Toni Pikoos, an Australia-based clinical psychologist who researches and specializes in treating body dysmorphic disorder, believes the application could do more harm than good, and that it is nothing but a “photo-filtering tool” for altering one’s self-perception.

“When there’s a bigger discrepancy between ideal and perceived appearance, it can fuel body dissatisfaction, distress, and a desire to fix or change one’s appearance with potentially unhealthy or unsafe means” like disordered eating or unnecessary cosmetic procedures, Pikoos says.

She expressed concern the images erased “intricate details” such as “freckles and lines”, something that could heighten worries about one’s skin. This could also psychologically trigger a vulnerable person, she says.

“To see an external image reflect their insecurity back at them only reinforces the idea ‘See, this is wrong with me! And I’m not the only one that can see it!’” says Pikoos.

Because the AI introduces its own features that don’t depict a user’s real-life appearance, the app could create new anxieties, she says.

She says the AI’s “magic avatars” were “particularly interesting because it seems more objective — as if some external, all-knowing being has generated this image of what you look like.”

This, she feels, could actually be “useful” for people with body dysmorphic disorder and help shed light on a “mismatch” between an “individual’s negative view of themselves and how others see them.”

She noted however that the AI wasn’t objective because of its attempt to depict a flawless and more “enhanced and perfected version” of one’s face.

For instance, someone experiencing body dysmorphic disorder, or BDD, “may experience a brief confidence boost when they view their image, and want to share this version of themselves with the world,” she says, but will be hard hit by reality when they see themselves off screen and unfiltered, in the mirror or in a photo they take of themselves.

“Seeing this reflected in the app would be very confronting and provide a kind of ‘confirmation’ for the way that they see themselves,” Pikoos says, leading them to become “more entrenched in the disorder.”

Defending its own

Andrey Usoltsev, CEO of Prisma Labs, says his company is currently “overwhelmed” with inquiries about Lensa and offered a link to an FAQ page that addresses questions about sexualized imagery, though not the kind of user reactions Pikoos describes.

Stable Diffusion

Lensa also uses Stable Diffusion, a deep-learning image synthesis model that can create new images from text descriptions and can run on a Windows or Linux PC, on a Mac, or in the cloud on rented computer hardware.

Through intensive training, Stable Diffusion’s neural network has learned to associate words with images, capturing the statistical relationships between pixel positions that correspond to a given description.

We covered in another story how the technology could have life-wrecking consequences by rendering images that depict people as criminals or as involved in less-than-flattering practices such as theft.

For instance, one can give the open-sourced Stable Diffusion a prompt, such as “Tom Hanks in a classroom,” and it will generate a new image of Tom Hanks in a classroom. In Hanks’s case, it is a walk in the park, because hundreds of his photos are already in the data set used to train Stable Diffusion.
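To illustrate how accessible this is, the sketch below generates an image from that exact prompt using the open-source Hugging Face diffusers library. It assumes a CUDA-capable GPU and a publicly released Stable Diffusion checkpoint; it shows the open-source model in general, not Lensa’s internal setup.

```python
# Illustrative sketch: text-to-image with open-source Stable Diffusion.
# Assumes the Hugging Face diffusers library and a CUDA-capable GPU;
# this is not a description of Lensa's internal pipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a public checkpoint (assumption)
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# The prompt from the article: the model composes a new image because
# many photos of the subject appear in its training data.
image = pipe("Tom Hanks in a classroom").images[0]
image.save("tom_hanks_in_a_classroom.png")
```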

Artists getting raw deal too

On the art front, some artists are unhappy.

They are concerned the AI could threaten their livelihoods, since no artist, digital or otherwise, can produce a portrait as fast as an AI can.

Lensa’s parent company, Prisma, has attempted to assuage concerns that its technology will eliminate work for digital artists.

“Whilst both humans and AI learn about artistic styles in semi-similar ways, there are some fundamental differences: AI is capable of rapidly analyzing and learning from large sets of data, but it does not have the same level of attention and appreciation for art as a human being,” wrote the company on Twitter on December 6.

It says “the outputs can’t be described as exact replicas of any particular artwork.”

Altering self image

Kerry Bowman, a bioethicist at the University of Toronto, says the AI has the potential to negatively affect one’s self-image, among other ethical issues.

“In some ways, it can be a lot of fun but these idealized images are being driven by social expectations which can be very cruel and very narrow,” Bowman said on Monday.

Bowman said these AI programs draw on data sets scraped from sources such as the internet, searching different art styles to create the portraits. The downside is that when the AI does this, artists are rarely remunerated or credited for the use of their work.

“What happens with emerging AI is that the laws have not been able to really keep up with this in terms of copyright law. It’s very difficult and very murky and ethics is even further behind the laws because I would argue that this is fundamentally unfair,” Bowman said.

Personal Data Concerns

Bowman also raised concerns about how people’s personal data is stored.

“Do you really want your face in a large database? People need to decide for themselves about this but it’s not benign, there’s not nothing to this, it’s not just fun,” he said.

Lensa says photos are kept on its servers and in the app for no more than 24 hours. But through its deep-learning and machine-learning algorithms, the AI has already extracted what it needs to deliver better results in the future, including from the deleted data. This, Bowman says, raises safety concerns about facial recognition, as this type of data could be used illegally by police.



Japan Leads the Way by Adapting Copyright Laws to the Rise of AI


In a groundbreaking move, the Japanese government announced that copyrighted materials used in artificial intelligence (A.I.) training would not be protected under intellectual property laws, according to local media reports.

The Minister for Education, Culture, Sports, Science, and Technology, Keiko Nagaoka, confirmed this decision. Nagaoka stated that it applies to A.I. datasets regardless of their purpose or source.

The policy shift was a response to the increasing significance of A.I. across various industries, including robotics, machine learning, and natural language processing. 

Japan aims to foster an open and collaborative environment by exempting A.I. training data from copyright restrictions to stimulate innovation and progress.

This move has sparked a global conversation about the evolving relationship between artificial intelligence and intellectual property rights, raising important questions about balancing innovation and copyright protection. 

A.I. training, copyright laws, and fair use policy

Japan’s decision to exempt A.I. training data from copyright laws has sparked global discussions on the delicate balance between intellectual property protection and A.I. advancements.

The Japanese copyright strategy is similar to the fair use doctrine in the United States, which promotes freedom of expression by permitting the unlicensed use of copyright-protected works in certain circumstances. Most European countries also have an open policy toward using copyrighted materials in A.I. training.

Over the past months, several high-profile cases have involved A.I. training and copyright law. The U.S. House Judiciary Committee recently held a hearing examining the intersection of generative A.I. and copyright law.

Speaking at the committee hearing, Sy Damle, a former General Counsel of the U.S. Copyright Office, argued in support of the fair use policy, describing the use of copyrighted works to learn new facts as “quintessential fair use.”

How does this impact the A.I. industry?

Several experts have aligned with Japan’s notion that removing copyright barriers in A.I. training will expedite the development of innovative solutions, ultimately driving economic growth in AI-dependent sectors.

Additionally, the move could prompt a reassessment of copyright laws in other nations as governments grapple with the challenges presented by A.I. technology.

While its long-term impact remains uncertain, Japan’s bold step signifies a significant milestone in the global conversation surrounding A.I., copyright, and the necessary legal frameworks to support these emerging technologies reshaping our world.

Japan warns OpenAI about collecting sensitive data

Reuters reported that Japanese regulators had warned OpenAI against collecting sensitive information without people’s consent.

Japan’s Personal Information Protection Commission told the ChatGPT-creator to minimize its collection of sensitive data for machine learning, adding that it may take action against the firm if its concerns persist.

The warning comes amid reports that over half of Japan’s population wants more stringent control of the A.I. sector. According to the report, there is widespread public concern about the general use of such tools.

Meanwhile, Japan is not the only country concerned about OpenAI’s data collection methods. Earlier in the year, Italy temporarily banned ChatGPT over privacy concerns.



Metaverse Experiences Must Be Worth Having, Says Stephenson


The success of the metaverse depends on the ability of developers to build quality experiences that “millions or billions” want to have. To do that the sector must attract, find, and financially incentivize the very top talent from related industries. 

This is the verdict of Neal Stephenson, the man credited with coining the word metaverse in his 1992 novel Snow Crash.

Source the best developers

Famed author and futurist Neal Stephenson says the metaverse must find and attract the most talented people to make the sector a success. Stephenson’s comments came during an appearance at the AWE USA 2023 convention on Wednesday.

“If we’re going to have a metaverse that’s being used all the time by millions or billions of people, then there have to be experiences in the metaverse that are worth having,” Stephenson said.

“That seems like an obvious statement but for me, there’s a glaring and frustrating lack of support for the kinds of people who make those experiences,” added the author. “Right now the skill set that is needed to create the metaverse is basically what you see in the game industry. People who know how to use game engines and how to create the assets that feed into those game engines. Those people by and large have jobs and other things they could be doing. 

“We need to create the economic basis for [developers] to get rewarded if they succeed in creating metaverse experiences that a lot of people enjoy.”

Stephenson cited a number of ways that developers may be rewarded, but his personal vision is for a tokenized metaverse owned and controlled by its citizens.

In June last year, Stephenson announced Lamina1, a layer 1 blockchain infrastructure and open metaverse company. Stephenson co-founded the “batteries-included blockchain” with Peter Vessenes, reflecting their vision for an incentivized metaverse that, according to its website, could “empower a new generation of interconnected, immersive experiences across gaming, entertainment, fashion, music, and beyond.”


Seeing double: Ori Inbar and Ori Inbar introduce Neal Stephenson

A tale of two metaverses

Ori Inbar, the CEO of AWE, hosted the conversation with Stephenson on what marked the opening of the 14th annual AWE convention. The latest event is running from May 31 to June 2 in Santa Clara, California. Those who can’t attend in person are invited to participate online.

In an entertaining introduction, a virtual facsimile of Inbar initially addressed conference attendees, only for the real Inbar to interrupt and reveal the entire monologue was written by ChatGPT. 

Inbar then asserted that AI makes “original voices… even more valuable than before.”

Once Inbar sat down with Stephenson, the pair discussed just how far technology is developing across multiple fields. Inbar asked Stephenson where he believed the metaverse was headed: toward something dystopian or something utopian.

“I think it would be very early and premature to make announcements today about whether it is going in one particular direction,” said Stephenson.

To Stephenson, both the positives and the negatives of the metaverse can co-exist within just one reality – something he explored in his 1992 novel.

“Our initial exposure to the metaverse is a kind of very vast market, a lowest common denominator to include … the worst of television,” Stephenson said as he described the surface level of the metaverse. “But later on, as we get farther into the book, we see that people have used it to make beautiful works of art. There are some people … who lavished a lot of time and attention on making homes in the metaverse that are exquisite works of art, both visually and in this sonic environment.”

That ambition of a deeper metaverse is worth striving for, especially as a driver of its long-term success.



Baidu Is Rolling Out a $145M Venture Capital AI Fund


Chinese tech giant Baidu is setting up a venture capital fund of $145 million (1 billion yuan) to back AI-focused startups. Baidu co-founder and CEO Robin Li announced the launch of the fund at a JP Morgan summit in China this week.

The move could signal China’s push towards self-reliance in the cut-throat generative AI sector. The fund will support the development and innovation of AI-based content creation, such as chatbots, video and audio synthesis, and natural language processing.

The fund is targeting early-stage AI applications, an area in which Chinese generative AI startups have so far struggled to achieve widespread adoption.

Also read: AI Code of Conduct Coming ‘Within Weeks’ Says US and Europe

Tailing the US’s OpenAI

OpenAI recently created an investment fund valued at more than $175 million, according to a Securities and Exchange Commission filing. The company has been investing in startups through its OpenAI Startup Fund, backing companies “pushing the boundaries of how powerful AI can positively impact the world.”

Baidu is also planning to launch a competition for developers to build applications using its Ernie large language model (LLM) or integrate the model into their existing products, much as other tech firms are doing with OpenAI’s ChatGPT technology.

Ernie bot is Baidu’s own AI-powered LLM that can generate natural and coherent texts based on user inputs.

“American developers are building new applications based on ChatGPT or other language models. In China, there will be an increasing number of developers building AI applications using Ernie as their foundation,” said Li.

Baidu unveiled the chatbot in March this year and claimed that it outperformed other LLMs in several benchmarks.

Battle for AI supremacy

The success of ChatGPT has put Chinese tech companies under pressure to fast-track the release of their own LLMs and bring them to market.

According to Reuters, over 75 Chinese companies have released their own LLMs since 2020. Baidu and e-commerce giant Alibaba are among them.

A report by a state-run research firm says over 79 LLMs have been launched in the past three years.

And the Baidu boss predicts that in the generative AI age, Chinese companies will catch up, and even lead the way in discovering commercial applications for AI.

“I am very bullish on China AI development. Over the past few decades, China has warmly embraced new technologies,” said Li.

“Even though we didn’t invent Android, iOS or Windows, we developed a host of very innovative applications like WeChat, Douyin and Didi. Many of them are popular and useful. The same trend is playing out in the AI age. Technology ushers in a myriad of possibilities and we are good at capturing them to build applications,” explained Li.

LLMs, a vital tech

Since they can produce realistic and varied material across a range of subjects and formats, LLMs are seen as a vital technology for expanding AI applications and services. They do, however, also present ethical and legal difficulties, such as potential abuse, plagiarism, and bias. In response to the spike in LLMs, China released draft regulations on the use of generative AI in April, requiring developers to obtain approval and explicitly label such products.

Baidu’s venture capital fund and developer competition are anticipated to accelerate the growth and adoption of AI-based content production in China and elsewhere.

