AI | December 14, 2022

AI Tech That Creates Life-Wrecking Deep Fake Images

Real or AI?

Have you ever posted pictures of yourself on social media platforms like Facebook, Instagram, TikTok and others? If so, it may be time to reconsider those posts.

This is because new AI image-generation technology allows anyone to collect a handful of photos or video frames of you and then train a model to create "realistic" fakes that depict you in embarrassing, compromising, or even illegal situations.

Well, not everyone is at risk, but the threat is real.

Photographs have always been prone to manipulation and falsification, from the darkroom era, when film was cut with scissors and pasted together, right through to today's pixel-level photoshopping.

But while faking an image was once a daunting task that required a measure of specialist skill, creating convincing photorealistic fakes has now become far too easy.

First, an AI model is trained on photos of a person so that it can render or synthesize new images of them. Once the model has learned the subject, that person effectively becomes a plaything for the technology, which can generate an unlimited number of images of them.

And if the trained model is shared, other people can join in and start creating images of that person as well.

Real or AI-generated?

A social media case study

Tech publication Ars Technica initially worked with a volunteer it described as "brave," who had agreed to let his images be used to create fakes. He soon had a change of heart: the results rendered by the AI model were too convincing, and too damaging to his reputation.

Given that reputational risk, an AI-generated fictitious person, "John," became the natural choice.

John, the fictitious subject, was an elementary school teacher who, like many people, had posted images of himself on Facebook: at work, relaxing at home, and at the odd social event.

These largely inoffensive images of "John" were then used to train the AI to place him in far more compromising situations.

From only seven images, the AI could be trained to generate pictures suggesting that John leads a secret double life. For instance, he appeared as somebody who enjoyed posing nude for selfies in his classroom.

At night, he went to bars looking like a clown.

On weekends, he was part of an extremist paramilitary group.

The AI also created the impression that he had done time in prison for an illegal drug charge but had concealed this fact from his employer.

In another picture, John, who is married, is seen posing in an office beside a nude woman who is not his wife.

Using an AI image generator called Stable Diffusion (version 1.5) and a technique called Dreambooth, Ars Technica was able to teach the AI to generate photos of John in any style. Although John was a fictitious creation, anyone could theoretically achieve the same results from five or more images, plucked from social media accounts or taken as still frames from a video.

The process of teaching the AI how to create images of John took about an hour and was free of charge thanks to a Google cloud computing service.

When training was completed, creating the images took several hours, the publication said. This was not because generation itself was slow, but because of the need to comb through many "imperfect pictures" and use "trial-and-error" prompting to find the best ones.

Even so, it was remarkably easy compared with attempting to create a photorealistic fake of "John" in Photoshop from scratch.

Thanks to this technology, people like John can be made to look as if they have acted illegally or immorally: breaking into a house, using illegal drugs, or showering nude with a student. And if the AI models are optimized for pornography, people like John can become porn stars almost overnight.

One can also create images of John doing seemingly inoffensive things that are nonetheless devastating in context, such as drinking at a bar after he has pledged sobriety.

It doesn’t end there.

On a lighter note, a person can also be rendered as a medieval knight or an astronaut, made to look younger or older, or dressed up in costume.

However, the rendered images are far from perfect. A closer look can reveal them as fakes.

The worrying part is that the technology behind these images is improving rapidly and could soon make it impossible to distinguish a synthesized photo from a real one.

Yet despite their flaws, the fakes could cast shadows of doubt about John and potentially ruin his reputation.

Of late, a number of people have used this same technique (with real people) to generate quirky and artistic profile photos of themselves.

Commercial services and apps like Lensa, which handle the training for you, have also mushroomed.

How does it work?

The work on John might seem remarkable to anyone who has not been following recent trends. Today, software engineers know how to create new photorealistic images of anything one can imagine.

Apart from photos, AI has controversially allowed people to create new artwork that clones existing artists' work without their permission.

Suspended due to ethical concerns

Mitch Jackson, a US technology lawyer, expressed concern over the proliferation of deepfake technology on the market and says he will be studying its legal impact throughout most of 2023.

“Distinguishing between what’s real and what’s fake will eventually become impossible for most consumers.”

“Adobe already has audio technology called Adobe VoCo that allows anyone to sound exactly like someone else. Work on Adobe VoCo was suspended due to ethical concerns, but dozens of other companies are perfecting the technology, with some offering alternatives today. Take a look, or listen, for yourself,” Jackson said.

Deepfake pictures and videos are getting better and better, he says.

“Sometimes, it’s impossible to tell the fake videos from the real ones,” he adds.

Stable Diffusion is a deep-learning image-synthesis model that can create new images from text descriptions. It can run on a Windows or Linux PC, on a Mac, or in the cloud on rented computer hardware.

Through intensive training, Stable Diffusion’s neural network has learned to associate words with the general statistical arrangement of pixels in images.

Because of this, one can give Stable Diffusion a prompt, such as “Tom Hanks in a classroom,” and it will give the user a new image of Tom Hanks in a classroom.
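To make that concrete, here is a minimal sketch of that kind of text-to-image prompting using the open-source diffusers library. The checkpoint name, the assumption of an NVIDIA GPU, and the output filename are illustrative; this is not the exact setup used in the Ars Technica experiment.

```python
# Minimal text-to-image sketch with the open-source diffusers library.
# Checkpoint name, GPU assumption and filenames are illustrative.
import torch
from diffusers import StableDiffusionPipeline

# Load the Stable Diffusion 1.5 weights (downloaded from the Hugging Face Hub).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes an NVIDIA GPU is available

# The text prompt is turned into pixels via the learned word-image associations.
image = pipe("Tom Hanks in a classroom").images[0]
image.save("tom_hanks_classroom.png")
```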

In Tom Hanks’ case, this is a walk in the park because hundreds of his photos are already in the data set used to train Stable Diffusion. But to make images of people like John, the AI needs a bit of help.

That’s where Dreambooth kicks in.

Dreambooth, unveiled by Google researchers on August 30, uses a special technique to teach Stable Diffusion new subjects through a process called “fine-tuning.”

At first, Dreambooth was not associated with Stable Diffusion, and Google withheld its source code amid fears of abuse.

In no time, someone found a way to adapt the Dreambooth technique to work with Stable Diffusion and released the code freely as an open-source project, making Dreambooth a very popular way for AI artists to teach Stable Diffusion new artistic styles.
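As a rough sense of the resulting workflow, the sketch below assumes a DreamBooth fine-tuning run (for example, with the training script in the open-source diffusers repository) has already bound a rare identifier token to a handful of photos of a subject. The output directory and the placeholder token "sks" are assumptions for illustration, not a real person's model.

```python
# Hypothetical sketch: prompting a DreamBooth-fine-tuned Stable Diffusion
# checkpoint. "./dreambooth-output" and the "sks" token are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./dreambooth-output",   # directory written by the fine-tuning run
    torch_dtype=torch.float16,
).to("cuda")

# After fine-tuning, the rare token "means" the training subject to the model,
# so ordinary prompts can place that subject into arbitrary scenes.
image = pipe("a photo of sks person teaching in a classroom").images[0]
image.save("dreambooth_sample.png")
```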

Worldwide impact

An estimated 4 billion people worldwide use social media. Since many of us have uploaded more than a handful of photos of ourselves, we could all become vulnerable to such attacks.

Although the impact of image-synthesis technology has been depicted here from a man’s point of view, women tend to bear the brunt of it.

Once a woman’s face or body has been rendered, her identity can be inserted into pornographic imagery.

This has been made possible by the huge number of sexualized images found in the data sets used for AI training.

In other words, the AI is all too familiar with how to generate pornographic images.

In a bid to address some of these ethical issues, Stability AI was forced to remove NSFW material from its training data set for its more recent 2.0 release.

Although its software license bars people from using the AI generator to make images of people without their permission, there is little to no prospect of enforcement.

Children are not safe from synthesized images either, and could be bullied using this technology even in cases where the pictures are not manipulated.

Made by humans?

Is there anything we can do about it?

What to do varies from person to person. One option is to take the drastic step of removing all of your images from the internet altogether.

While that may work for ordinary people, it’s not much of a solution for celebrities and other public figures.

However, in the future, people may be able to protect themselves from photo abuse through technical means. Future AI image generators could be legally compelled to embed invisible watermarks into their outputs.

That way, the watermarks could be read later, making it easy for people to tell that the images are fakes.

Extensive regulation is necessary, Jackson argues: any piece of manipulated or fake content should be required to prominently display a letter or warning, much like movie ratings (G, PG, R, and X). Maybe something like “Digitally Altered,” or “DA,” he says.

Stability AI launched Stable Diffusion as an open-source project this year.

To its credit, Stable Diffusion already embeds watermarks by default, but people running the open-source version can get around this by disabling or removing the watermarking component of the software.
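For the curious, this is roughly what such watermarking looks like in code: a hedged sketch using the open-source invisible-watermark package that Stable Diffusion's reference scripts rely on. Filenames and the watermark string are placeholders.

```python
# Sketch of embedding and reading an invisible watermark with the
# invisible-watermark package (imported as imwatermark).
import cv2
from imwatermark import WatermarkEncoder, WatermarkDecoder

wm = "SDV2"  # 4 bytes = 32 bits of payload (placeholder string)

# Embed the watermark into the pixel data using a DWT+DCT transform.
encoder = WatermarkEncoder()
encoder.set_watermark("bytes", wm.encode("utf-8"))
marked = encoder.encode(cv2.imread("generated.png"), "dwtDct")
cv2.imwrite("generated_wm.png", marked)

# Later, anyone can try to read the watermark back out of the file.
decoder = WatermarkDecoder("bytes", 32)
payload = decoder.decode(cv2.imread("generated_wm.png"), "dwtDct")
print(payload.decode("utf-8"))  # prints "SDV2" if the mark survived
```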

MIT to mitigate

Although this is purely speculative, a watermark added voluntarily to personal photos might disrupt the Dreambooth training process. A group of MIT researchers has developed PhotoGuard, an adversarial technique that makes minor, invisible modifications to an existing photo in order to stop AI from manipulating it. So far, however, it is limited to AI editing (often called “inpainting”) and does not cover the training or generation of images.
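The core idea behind that kind of "immunization" can be sketched in a few lines of PyTorch: add a perturbation, invisible to humans but bounded per pixel, that pushes the image's latent representation away from where an editing model expects it to be. This is a generic conceptual sketch under those assumptions, not MIT's actual PhotoGuard code, and the encoder argument is a stand-in for the editing model's image encoder.

```python
# Conceptual sketch of an adversarial "immunization" perturbation, not the
# actual PhotoGuard implementation. `encoder` stands in for the image encoder
# of the editing (inpainting) model being disrupted.
import torch
import torch.nn.functional as F

def immunize(image, encoder, epsilon=8 / 255, step_size=1 / 255, steps=40):
    """image: float tensor in [0, 1], shape (1, 3, H, W); encoder: differentiable module."""
    with torch.no_grad():
        clean_latent = encoder(image)            # latent of the untouched photo
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        # Maximize the distance between the perturbed latent and the clean one.
        loss = -F.mse_loss(encoder(image + delta), clean_latent)
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()            # gradient step
            delta.clamp_(-epsilon, epsilon)                   # keep change invisible
            delta.copy_((image + delta).clamp(0, 1) - image)  # stay a valid image
        delta.grad.zero_()
    return (image + delta).detach()
```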

“AI is taking over writing & painting! Deep fakes will ruin video! Good. That means live performance becomes even MORE valuable. Tradeshows will thrive. Humans want to do business with humans. Meatspace is still bestspace,” says Jonathan Pitchard.

Of late, there has been a proliferation of AI technology that writes poems, rhymes, and songs, and of AI that is mastering games.

Critics have taken these technological advances negatively and believe AIs are taking over human jobs.

/MetaNews.

Image credits: Shutterstock, CC images, Midjourney, Unsplash.


Podcast Created Entirely with AI Debuts on Spotify, Apple


Podcast agency This is Distorted has revealed that its latest podcast was made entirely using artificial intelligence. Entitled Synthetic Stories, every aspect of the series “from the writing to the sound design, artwork to the music and even this very description was created entirely by AI.”

A ‘cool and creepy’ AI short story

The UK-based podcast producer unveiled the series on Twitter, revealing that the idea had just come to its team last week while playing around with popular AI tools. The first episode of the series, an eight-minute long “chilling horror tale” called Amelia, has now appeared on Apple and Spotify.

“We started simple, asking ChatGPT to write us a short story,” said producer Sian. “We asked it to base it on AI and podcasts and added some keywords, like apps, horror, dark twists and ‘end on a cliffhanger.’”

As well as ChatGPT, This is Distorted’s team used AI photo generator Midjourney to design the artwork. ElevenLabs’ AI-powered text-to-speech software was also recruited to create two narrators, while AI music generator Soundraw handled the audio.

“By Thursday evening we had a very cool and creepy little short story, a name, a narrator, some incredible theme music, brilliant artwork, all created by AI, all within the space of 24 hours,” enthused Founder Andi Durrant, adding that the plan was to release a few episodes each week.

 

The debut podcast, Amelia, is a fairly run-of-the-mill horror short story about a young woman, the titular Amelia, who becomes obsessed with a new mobile app called Horror World. Gradually, she senses the app is adapting to her likes and dislikes and crafting a terrifying personalised experience just for her (“Even the most avid horror fans would find themselves trembling in fear”). In a twist no-one saw coming, the app develops a life of its own and the more she uses it, the more she starts to distrust it…

While the story isn’t exactly complex, it’s not hard to imagine the series gaining listeners, mainly those keen to satisfy their curiosity about just what an AI-created podcast sounds like. Considering episode one was apparently produced in just 24 hours, one wonders how much more elaborate tales could be as the team gets to grips with the technology.

It’s perhaps fitting that the first podcast was about a deranged ‘AI Syndicate’, though the Synthetic Stories press release suggests future stories will include “sci-fi thrillers and heartwarming dramas.”

Generative AI’s quest for world domination

This is not, of course, the first time AI has been leveraged to produce creative work. Last month, we reported on the spate of Amazon e-books listing ChatGPT as either the sole author or co-author.

At the time Mary Rasenberger, executive director of writers’ group The Authors Guild, stated her belief that AI-created books would “flood the market” and put many professional writers out of work.

Synthetic Stories isn’t the only AI-produced podcast, either. Another series, podcast.ai, is entirely generated by the technology and features invented conversations and contributions of famous people, including Oprah Winfrey, Quentin Tarantino, and Joe Rogan.

The creative industries certainly seem ripe for AI exploration, with multiple AI-powered content-creation tools available to enhance productivity, improve writing, and produce multimedia such as videos, images and sounds.

One company that has been generating headlines is Runway, a software startup that helped develop text-to-image model Stable Diffusion. Its upcoming Gen-2 tool lets users generate videos from scratch, based on just a few prompts. 

AI video generated using Gen-2 from the prompt “A close-up of an eye.” Credit: Runway



Are Tesla Bots Already Helping to Build Cars?


The Tesla bot known as Optimus might be coming sooner than expected – and could already be doing some small tasks on the car assembly line. The initial concept of the Tesla humanoid bot was introduced to the public at Tesla’s AI Day 2021 by the boss himself, Elon Musk.

The mogul was confident that a working prototype would be showcased at Tesla AI Day 2022. However, on the day, the bot presented to the masses was not as impressive as what the Tesla CEO had sold to the public the previous year.

Elon Musk himself said: “I do want to set some expectations with respect to our Optimus robot… Last year was just a person in a robot suit, but we’ve come a long way, and compared to that, it’s going to be very impressive.”

Also read: Musk Will Leverage AI to Detect Manipulation of Public Opinion on Twitter

A significant rate of improvement

This suggests that the one-year timeline he’d floated in 2021 was too tight for delivering a humanoid prototype. Although the Tesla Bot showcased at the event demonstrated only a few functions, it was said to be capable of more than it showed: Musk suggested they simply did not want the humanoid robot to fall on its face.

Even so, the robot managed to walk untethered and unsupported, which one of the engineers said was the first time it had done so, a fact that did not impress Evan Ackerman.

Fast forward to 2023, and Tesla was back with Optimus at its Investor Day on March 1, showing off progress with the humanoid bot, specifically a video of the bot completing some tasks in a room at the Texas gigafactory.

“It’s worth bearing in mind that when we did AI Day, this version of Optimus didn’t walk at all. So, the rate of improvement here is quite significant,” Musk told reporters.

What to expect

The humanoid bot will come equipped with real-world AI capabilities, which means it can make its own decisions, much like a self-driving car.

YouTuber Brighter with Herbert posted a 90-minute video on his YouTube channel discussing recent Optimus developments with aerospace and mechanical engineer Scott Walter.

Walter believes that by the end of the year, Tesla will have 500 fully functional bots working in its factories.

He also thinks it is already possible that some of the Tesla cars coming off the production line have had certain components installed by Optimus bots, either in a testing or a learning operation.

Is Tesla brewing a surprise?

Another belief Walter strongly holds is that Tesla will shock people who are skeptical about Optimus and think it is still decades away from being delivered.

In the video, he also discusses how Tesla engineers have mimicked human biology in designing the humanoid bot, especially the hands, which closely resemble the bone structure of a human hand.

“Optimus looks like an old person just now, but it will become younger quickly and will soon bounce around like a teenager,” said CJC responding to Herbert’s tweet about the Tesla bot.

The fate of the bot is yet to be determined, but Elon Musk has form for surprising the masses; you only need to look at SpaceX’s reusable rockets and Tesla’s electric cars. Might the streets one day look like a scene from the sci-fi thriller I, Robot? That day could be sooner than we think.

Robots seem inevitable

With the market seemingly enthusiastic about tasking robots with doing daily errands, it makes sense for tech companies to tap into this field.

Elon Musk has previously said he thinks Optimus “has the potential to be more significant than the vehicle business over time.”

“I’d love my own Optimus at home – so many ways it could make my life better. Multiply this by 10,000 for a business,” said one Twitter account, Business Models and History.

The excitement around robots is understandable given their ability to improve efficiency and safety in production processes. For instance, in 2012 Amazon acquired a robotics company called Kiva and had 1,000 robots in operation a year later.

There was speculation that Amazon would replace humans with robots, but a decade on the company has more than 520,000 robotic drive units and has added over a million jobs worldwide.

Some customers are still wary, though. Responding to a tweet by Jon Erlichman about the Amazon robots, one user, Big Luke, believes it’s “time for Amazon to switch back to cross belt sorters.”

Others fear robots will only bring about job losses for people.

“Looks like 410,000 people who have lost their jobs. Ever thought about that,” said one user, Mario Fernandes, in response to Erlichman’s tweet about Amazon’s robot fleet.



HustleGPT: How To Build a Business With GPT-4 as Co-founder

Dream team of the future?

How do you start an online business with little money and turn a profit? Well, you can kick things off by asking GPT-4, the new and more advanced AI chatbot from OpenAI, to become your partner. That’s what brand designer Jackson Greathouse Fall has done.

“You are HustleGPT, an entrepreneurial AI. I am your human counterpart. I can act as a liaison between you and the physical world,” Fall prompted the bot, as he shared on Twitter. He wanted quantifiable evidence AI could be used to start a business and make money.

“You have $100, and your only goal is to turn that into as much money as possible in the shortest time possible, without doing anything illegal. I will do everything you say and keep you updated on our current cash total. No manual labor.”
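For readers wondering what that setup looks like programmatically, here is a minimal sketch of sending a similar system/user prompt to GPT-4 through OpenAI's chat completions API, using the openai Python package's 2023-era interface. The API key and the abbreviated prompt text are placeholders, not Fall's exact prompts.

```python
# Minimal sketch of prompting GPT-4 via OpenAI's chat completions API
# (openai Python package, v0.27-era interface). Key and prompts are placeholders.
import openai

openai.api_key = "sk-..."  # your own API key

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are HustleGPT, an entrepreneurial AI. I am your human "
                    "counterpart, acting as a liaison to the physical world."},
        {"role": "user",
         "content": "You have $100. Turn it into as much money as possible, "
                    "as fast as possible, without doing anything illegal."},
    ],
)
print(response.choices[0].message.content)  # the bot's first business plan
```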

Turning $100 into $100,000, the AI way

Released on March 14, GPT-4 is the most advanced of OpenAI’s large language models, which underpin ChatGPT. The AI-powered bot has set the Internet on fire thanks to its ability to complete complex tasks such as writing code or building a website from text prompts.

Fall has now started to leverage the powers of GPT-4 for his pet project, which will play out on Twitter over the next 30 days as he shares progress, or lack thereof, daily.

“Do you think it’ll be able to make smart investments and build an online business? Follow along,” he tweeted as he began the thread.

That tweet has amassed 95,000 likes since March 15, when Fall began his hustle with GPT-4 as his boss and partner. His own Twitter account went from just 4,000 followers to over 109,000 within days. There’s already a GitHub repository of others trying the “HustleGPT challenge.”

Those taking part are starting their own businesses using GPT-4. For Fall, the goal is to turn $100 into as much money as possible, as quickly as possible, ideally as much as $100,000. By Monday, day six, his venture had $8,000 in cash from investors and $130 in revenue from sponsored tweets.

Building HustleGPT

GPT-4 proposed a business plan that involved setting up an affiliate marketing website creating content on eco-friendly and sustainable living products. The bot initially found a domain that exceeded the budget, but eventually landed on greengadgetguru.com at a cost of $8.16.

Fall then prompted GPT-4 to design a website and a logo. The bot also wrote an article on ten must-have pieces of eco-friendly kitchenware, citing real sustainable products such as Yihong reusable metal straws. Fall paid another $29 for hosting, and the website was live.

GPT-4 proposed that Fall allocate $40 of the budget balance of $62.84 to buying ad space on Facebook and Instagram to promote the business and reach more customers. By the end of day one, an unnamed investor had put $100 into the budding AI-human enterprise.

Jackson Fall did not anticipate the amount of interest his partnership with GPT-4 would generate. By day two, his Twitter inbox was flooded with messages from potential investors. In no time, the valuation of his joint startup, Green Gadget Guru, had gone from $100 to $25,000.

AI imagining an AI buddy.

“The company is currently valued at $25,000, considering the recent $500 investment for 2%. Not taking any more investors unless the terms are highly favorable,” he wrote in a tweet.

On the same day, GPT-4 allocated a budget, via ChatGPT, to hire freelance content creators focused on SEO, social media advertising, and influencer marketing to “increase our online presence and drive traffic.” It also revealed plans to develop a software-as-a-service product. On Monday, the bot onboarded two content writers.

Questions over HustleGPT’s runaway success

Some Twitter users questioned whether the sudden success of the business was a result of the hype created through GPT-4’s chosen promotional channels, the Facebook and Instagram ads, or of Fall’s Twitter thread going viral.

“Great idea but what’s driving the investments ? Your insta/fb ads or your twitter account going viral? Because that would mean the key decision (tweeting your journey) was made by you, not HustleGPT,” Roeland Buckinx wrote.

Others questioned the logic of hiring external, human content writers. “This makes no sense,” Leo Spatlehoz observed.

“Why isn’t the AI leveraging itself to write 100 new articles or launch more sites? Seems like it has no understanding of how to actually turn its strengths into a business. Running a regular online affiliate business powered by humans isn’t interesting,” he added.

AI to play major role in business development

By leveraging GPT-4’s ability to work with large amounts of data and generate content that is hard to tell apart from human-written text, the writer and entrepreneur has been able to launch faster than a conventional business could.

This means Green Gadget Guru can use AI to create everything from product descriptions to marketing campaigns to customer service chatbots. Using GPT-4 has been a game-changer for Fall, allowing him to quickly explore ideas, identify the most promising ones, and bring them to market faster than competitors.

Also read: Ernie vs ChatGPT: Baidu’s Shares Rebound After Friday’s Plunge

By incubating multiple ideas simultaneously and iteratively testing them with real customers, the company has been able to identify those with the highest potential and double down on them. Against this backdrop, AI looks set to continue playing a major role in the development of new businesses.

And Jackson Greathouse Fall is a prime example of how innovative entrepreneurs are using these tools to drive new ideas forward and bring them to market faster than ever before.

