
AI November 30, 2022

Crypto Seed Phrases Are Being Transformed Into AI-Generated Quiz Art




Ethereum wallet seed phrases are being transformed into AI-generated art, creating mind-boggling picture puzzles for crypto prize hunters.

The bizarre competition is the brainchild of Nick Johnson, lead developer of the Ethereum Naming Service, and an alum of the Ethereum Foundation. According to Johnson, he was inspired by an earlier post from Jameson Lopp.

Lopp started using random seed phrases to create AI-generated artwork as a ‘hobby.’ When Johnson saw the posts, he almost immediately had notions of his own about where to take the concept. Johnson’s innovation is to fund the wallet behind the seed phrase and then ask users to guess the 12-word phrase from the generated image or images.

If at first you don’t succeed

Nick Johnson announced his first AI-generated seed phrase on Monday, but not all went to plan the first time around.

Taking to Twitter, the puzzle pioneer described the process he had used and precisely how to win the crypto prize.

“I have generated a random 12-word seedphrase, and sent 0.5 ETH to it,” said Johnson of his game. “Then, I used the seedphrase to generate these images with Midjourney v4. If you can guess the seedphrase, you can have the ETH. If nobody does in a week, I’ll claim it back and reveal the phrase.”
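The mechanics Johnson describes (draw a random 12-word phrase, fund the wallet it derives, publish images generated from the phrase) can be sketched in miniature. The snippet below is a toy illustration only: real wallets draw their words from the standard 2048-word BIP-39 list, which also encodes a checksum, and the short sample vocabulary here is hypothetical, padded out with words from the solved first puzzle.

```python
import secrets

# Hypothetical sample vocabulary. Real seed phrases use the 2048-word
# BIP-39 list with a built-in checksum, which this toy sketch omits.
SAMPLE_WORDS = [
    "potato", "label", "dinner", "also", "vague", "metal",
    "royal", "gun", "foster", "basket", "rich", "flight",
    "apple", "river", "stone", "cloud",
]

def toy_seed_phrase(n_words=12):
    """Pick n_words uniformly at random, as a stand-in for a real mnemonic."""
    return " ".join(secrets.choice(SAMPLE_WORDS) for _ in range(n_words))

# The resulting phrase would then be used verbatim as the image prompt.
phrase = toy_seed_phrase()
```

With the real 2048-word list, a 12-word phrase has 2048 to the 12th power possible combinations, which is one reason Johnson expected blind guessing to be “outright impossible.”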

Midjourney is an AI-powered imaging platform that opened to the public in March. Users are able to set up an account with the service and generate artwork by description alone.

Having used the software to create the puzzle, Johnson went on to add that “I fully expect this first one to be outright impossible,” but he was swiftly forced to eat humble pie. The crypto was gone within 48 hours of the wallet being funded. Johnson had left his Midjourney profile public, allowing one user to simply look up the phrase on the site.

The solution to the first puzzle was, “potato label dinner also vague metal royal gun foster basket rich flight.”

Having apparently learned from his mistake, Johnson remains undeterred and has launched a further picture puzzle for his Twitter audience. The latest puzzle remains unsolved.

Seed phrases fit for parody

The AI-generated puzzle has already captured the imagination of Twitter users, and while the number of puzzles has not significantly increased, parodies of the idea have already begun to appear.

A parody of Johnson’s puzzle game

While the parody puzzles may amuse, Johnson is offering 0.5 ETH to anyone who can solve his current AI-generated riddle.

With the current dollar value of the prize at close to $700, that’s no joke.


Image credits: Shutterstock, CC images, Midjourney.


CNET Suspends AI After Publishing a Series of Bad Articles



CNET chose to trust the AI.

From November to January, technology website CNET published a series of financial advice articles written by artificial intelligence (AI), but an audit confirmed that the majority of these pieces contained factual errors, serious omissions, and plagiarised content.

The CNET editorial team failed to catch and correct these errors prior to publication.

CNET suspends disastrous ‘test’

Technology site CNET has admitted to publishing a string of low-quality articles written by artificial intelligence. All of the articles were misleadingly published under the byline “CNET Money Staff,” implying to casual observers that they were the work of a human hand.

On Tuesday, CNET confirmed it would end the practice in what it now calls a “test” of an “internally designed AI engine.”

The publication offered no apology to its audience for the long list of inaccuracies and wrong financial advice it had printed since November.

“We stand by the integrity and quality of the information we provide our readers,” said CNET.

CNET’s AI engine was tasked with creating financial services explainers, and over a three-month period it proceeded to serve up wrong financial information to readers. The articles covered topics such as “What Factors Determine Your Credit Score?” and “What Is a Credit Card Number?”

Under the guise of providing expert information, CNET published a total of 77 AI-written pieces without adequate safeguards. According to the technology outlet, the articles were outlined by human staff members before being written by the AI and were subsequently checked by their editors before final publication. 

Somewhere CNET’s system fell apart. When one of the articles was recently cited for obvious falsehoods, CNET was forced to audit the AI’s body of work. It found that 41 of the 77 pieces required significant corrections. It appears that absolutely everybody took their eye off the ball.

A compendium of nonsense

Included in the errors which made their way to curious CNET readers was the following explanation of how compound interest works.

According to the CNET AI, “if you deposit $10,000 into a savings account that earns 3% interest compounding annually, you’ll earn $10,300 at the end of the first year.”

This certainly sounds exciting, but in reality, an investor would receive the rather smaller sum of $300 of interest over the first year. 
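The arithmetic is easy to verify. A minimal sketch in Python, using the article’s own figures:

```python
def compound_balance(principal, rate, years):
    """Balance after a number of years of annual compounding."""
    return principal * (1 + rate) ** years

balance = compound_balance(10_000, 0.03, 1)
interest = balance - 10_000
# balance is $10,300; the interest *earned* is only $300, not $10,300.
```

The AI conflated the end-of-year balance with the interest earned, overstating the return by a factor of more than thirty.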

In the same piece, the following explanation of loan repayments is offered: “For example, if you take out a car loan for $25,000, and your interest rate is 4%, you’ll pay a flat $1,000 in interest per year.”

This, again, is completely wrong. Once the person who has taken the loan starts to pay back on a monthly basis, they would only owe interest on the remaining sum. In reality, there is never a year in which the person repaying the loan would ever pay back “a flat $1,000.”
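On a standard amortizing loan, each monthly payment covers that month’s interest on the outstanding balance plus some principal, so the interest charge shrinks every month. A rough sketch of why the “flat $1,000” figure fails (the 60-month term here is an assumption for illustration; the article does not state one):

```python
def monthly_payment(principal, annual_rate, months):
    """Standard amortization formula: P * r / (1 - (1 + r)^-n)."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

def first_year_interest(principal, annual_rate, months):
    """Total interest charged over the first 12 payments."""
    r = annual_rate / 12
    payment = monthly_payment(principal, annual_rate, months)
    balance, total = principal, 0.0
    for _ in range(12):
        interest = balance * r            # interest only on what remains
        total += interest
        balance -= payment - interest     # the rest of the payment is principal
    return total

# For a $25,000 loan at 4% over an assumed 60 months, first-year
# interest comes to roughly $900: already under $1,000, and it keeps
# falling in every subsequent year as the balance shrinks.
```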

Other basic mistakes and errors are littered across the 41 inaccurate AI articles. For instance, an article on credit card penalties stated the wrong dollar value of a late fee, while a report on certificates of deposit failed to include key information and facts, and an article about prequalification for credit cards was found to contain plagiarised content.

As one former CNET staffer put it, “this is so incredibly disappointing and disheartening, but it’s not surprising. What other choice do you have when you lay off all your talented and loyal writers?”

Humans make ‘mistakes’ too

The editorial line taken by CNET is that AIs, like humans, are prone to making mistakes. Even so, CNET has sought to minimize the scandal by stating that only a small number of their error-strewn articles required “substantial correction.”

It is clear then that the failure at CNET was not one simply of artificial intelligence but of human bosses failing to exercise their better judgment. The heart of the matter lies in why those mistakes were allowed to be made in the first place. 

What motivated CNET to churn out large numbers of partially inaccurate and partially plagiarised articles, and why did the company only reveal its ‘test’ after it had been caught in the act?

It’s worth noting that CNET’s parent company, Red Ventures, profits through affiliate advertising programs. When a visitor to one of its sites ultimately signs up for a credit card, CNET makes money. Besides CNET, the company owns numerous other publications, including Bankrate and The Points Guy.

Given that the AI articles performed well as SEO bait, CNET’s biggest mistake may have been placing affiliate dollars before the integrity of their own journalistic teams. That, sadly, is a very human mistake indeed.

Continue Reading


ChatGPT Pricing and Professional Features Revealed




The professional edition of ChatGPT, which is said to be faster and confer additional benefits to users, is set to cost $42 a month. OpenAI is rolling out the ‘Professional Plan’ of ChatGPT to some users with a number of benefits. Professional plan members will receive faster response times, and priority access to new features. 

Earlier this month, MetaNews reported that OpenAI had plans to move to a subscription model to “continue improving and maintaining the service.” The service is now being rolled out, and some users have already received an invitation to upgrade.

ChatGPT Professional

The professional service should also be more reliable than the standard service which throttles at peak times and frequently exceeds its available capacity. Screen captures taken by users who were offered the service show that the professional plan will be “available even when demand is high.”

In an unusual step, OpenAI has refrained from making an official announcement regarding the launch. The last official reference to ChatGPT Professional appears to have come from company President Greg Brockman earlier this month.

“Working on a professional version of ChatGPT; will offer higher limits & faster performance,” said Brockman on January 11.

Since then there has been no additional word from either Brockman, company CEO Sam Altman, the company website, or its social media channels.

The lack of an official announcement might leave room for skepticism about whether the rollout has actually occurred, but this would seem unwarranted given the volume and quality of evidence from user accounts.

One user named Zahid Khawaja went as far as to post a video screen-capture of his experience on ChatGPT Professional.

Given the hype wave that ChatGPT now seems to be surfing, not making an official announcement and inviting further speculation from an excited public may be the wiser marketing strategy.

Microsoft partnership

The rollout of a subscription model for ChatGPT comes hotly on the heels of further investment from Microsoft.

On Monday Microsoft announced the ‘third phase’ of its long-term partnership with OpenAI in a ‘multibillion dollar’ deal. Microsoft did not put a specific figure on the terms of the third-phase contract, but it has been widely reported to be in the region of $10 billion.

As MetaNews reported last week, the terms of that deal will include an aggressive cost recovery program. With such a huge investment in the company, the need for recouping that money swiftly becomes self-evident. That program will see 75% of company profits funneled back to Microsoft until their capital injection is recovered.

According to OpenAI CEO Sam Altman, the cost of maintaining ChatGPT on an ongoing basis is “eye-watering.”

Microsoft plans to fold AI solutions into its long-running services. The software firm is planning to incorporate ChatGPT functionality into the Bing search engine as it seeks to steal market share from Google.

Free rivals are still available

A free version of ChatGPT is still available to users, but for those who wish to look further afield, alternatives exist. YouChat is one of the major competitors in the market, combining the features of Google and ChatGPT in one platform, and it got a jump on the market by being one of the first chatbots to launch in the wake of ChatGPT.

One of the main differentiators to ChatGPT is that YouChat offers links to supporting evidence and research papers. On the negative side, YouChat occasionally cites ghost papers that do not actually exist. 

Another AI competitor in the field is Claude. Its parent company, Anthropic, was founded by former members of the OpenAI team, and it claims to offer better answers and analysis thanks to its Constitutional AI model.

Continue Reading


How Google Is Joining the Text-to-Image AI Race



Think up your pictures.

Two of the AI technologies gaining traction are text-to-image and text-to-video systems, and Google has joined the race to provide them. These systems are trained on large datasets of images and videos paired with text descriptions.

They can be used for a variety of applications, such as creating photorealistic images from written descriptions, generating images for product listings, or creating illustrations for books and other documents.

At the forefront of development has been OpenAI’s DALL-E text-to-image AI system, which has dominated the field for a while. However, tech giants like Microsoft, Meta, and Google have joined the race to bring similar AI tools to market. Midjourney has also been garnering plenty of attention.


According to Joseph Foley, Google announced and showcased a glimpse of its text-to-image AI system, Imagen, in May 2022 and officially released parts of the working system to the public in November of the same year. One of the key features of Imagen is its ability to generate photorealistic images.

This means that the images generated by the system are so realistic, they could be mistaken for actual photographs.

However, this is not always the case with AI systems that generate images from text: some images come out blurry, and the AI often produces an image that does not match the user’s prompt.

DALL-E is one system that has been found to misunderstand text prompts and produce different results than intended.

Google beats its chest

Google Research claims that its text-to-image AI is the best on the market, beating other systems like VQ-GAN+CLIP, latent diffusion models, and DALL-E 2 in terms of both sample quality and image-text alignment.

According to Joseph Foley’s article, Google has also released sample images suggesting that Imagen is a high-end AI tool that could take on the competition. However, the images have raised concerns over copyright abuse and the security of artists’ jobs.

Although Google’s Imagen AI seems to be claiming its place within the AI space, Google has not yet released the entire system to the public. It has only given a glimpse, with limited functionality, through its AI Test Kitchen app, which it uses for beta testing AI systems under development.

AI systems reinforcing stereotypes?

Whilst these AI systems have proved that they can be used to create beautiful artwork, concerns have been raised over the social biases and stereotypes they reproduce when generating images. According to James Vincent, researchers have found that OpenAI’s DALL-E can create images that reflect social biases around gender, race, and sexuality.

Both Google and OpenAI have decided not to give full public access to their text-to-image AI systems, with Google saying its system is not yet ready for full public use while it works on ways to address social and cultural bias ahead of a future release.

Google also noted that there were limitations to this, including “several ethical challenges facing text-to-image research broadly.”

The company admitted this could impact “society in complex ways,” and there is a risk of misuse of such models.

Continue Reading


Copyright © 1997 – 2023 MetaNews All Rights Reserved
