AI | January 11, 2023

Next, AI-Designed Proteins for Medicine

Proteins designed by AI

The same technology that underpins AI-generative art is being leveraged to create new proteins for medicine, fueling hopes of new treatments for a range of diseases.

It is hoped that the creation of new proteins will help our bodies to fight illness in ways that were previously impossible.

Speeding up the process

Proteins are an important part of how our bodies accomplish everyday tasks, from digesting food to transporting oxygen molecules through our cardiovascular system.

The roughly 20,000 different proteins that handle these tasks are a known quantity, but new proteins could unlock new abilities and better healthcare outcomes.

Dr. David Baker, director of the Institute for Protein Design at the University of Washington, is one of the key figures working on new “artisanal” protein designs. In 2017 his team proved that it was possible to design and create new proteins that would take on the shapes they wanted.

The work and research continue but are now turbocharged by artificial intelligence.

“What we need are new proteins that can solve modern-day problems, like cancer and viral pandemics,” Dr. Baker told the New York Times on Monday.

Dr. Baker went on to add that with the aid of AI “we can design these proteins much faster, and with much higher success rates, and create much more sophisticated molecules that can help solve these problems.”

At this stage, the design of protein blueprints has been shortened from years to weeks. Just last year Dr. Baker published a paper detailing how AI could accelerate protein design. These papers and the technologies that underpin them have already been superseded by further advancements in AI.

The neural network technology behind AI-generative art models such as DALL-E and Midjourney goes further than what scientists previously thought possible.

Google’s DeepMind helped solve the protein-folding problem.

Neural networks advance medicine

Neural networks learn by analyzing huge amounts of data. For example, by analyzing thousands of pictures of bicycles the system can eventually learn to recognize a bicycle. DALL-E was trained by analyzing millions of images and the text captions that described them. In this way, the system learns to recognize the connections between the images and words.

When a user provides a description to DALL-E, the neural network generates a set of key features that the image may contain. A second neural network, known as a diffusion model, generates the pixels needed to create these features.
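To make that idea concrete, here is a heavily simplified sketch of the iterative denoising loop a diffusion model runs at generation time. It is illustrative only and is not DALL-E’s actual code: predict_noise is a hypothetical stand-in for the trained neural network, and the noise-schedule values are arbitrary placeholders.

```python
import numpy as np

# Hypothetical stand-in for the trained network that estimates the noise left in a
# partially denoised image, conditioned on features derived from the text prompt.
def predict_noise(x, prompt_features, t):
    return np.zeros_like(x)  # a real model learns this from millions of image-caption pairs

def sample(prompt_features, steps=1000, shape=(64, 64)):
    """Diffusion-style sampling: start from pure noise, then repeatedly subtract
    the noise the model predicts until only the generated image remains."""
    betas = np.linspace(1e-4, 0.02, steps)   # placeholder noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = np.random.randn(*shape)              # begin with pure Gaussian noise
    for t in reversed(range(steps)):
        eps = predict_noise(x, prompt_features, t)
        x = (x - (1 - alphas[t]) / np.sqrt(1 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:                             # inject a little fresh noise except on the final step
            x = x + np.sqrt(betas[t]) * np.random.randn(*shape)
    return x
```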

A similar technique is used to create new protein models. Proteins are composed of chains of amino acids that fold and twist into three-dimensional shapes, and those shapes determine their behavior. AI labs such as Google’s DeepMind have proven that neural networks can predict the shape of a protein from its amino acid sequence alone.

Nate Bennett, a researcher at the University of Washington, told the NYT how these AI assistants can speed up the process in previously impossible ways.

“One of the most powerful things about this technology is that, like DALL-E, it does what you tell it to do,” said Bennett. “From a single prompt, it can generate an endless number of designs.”

Namrata Anand, an entrepreneur and former Stanford University researcher, agrees with Bennett on the benefits.

“With DALL-E, you can ask for an image of a panda eating a shoot of bamboo,” said Anand. “Equivalently, protein engineers can ask for a protein that binds to another in a particular way, or some other design constraint, and the generative model can build it.”

The future beckons

For now, AI-generated medicine has focused on speeding up processes, shortening development times by orders of magnitude.

The next stage is to open up new fields of research that human minds cannot accomplish or perhaps even conceive of on their own.

Image credits: Shutterstock, CC images, Midjourney.

AI Could Kill Everyone, Researchers Warn MPs

Artificial Intelligence (AI) researchers have warned of the danger of superhuman AI and told MPs it should be regulated in the same way as nuclear weapons. Oxford University researchers told the House of Commons Science and Technology Committee that such technology could risk human extinction.

Doctoral student Michael Cohen said: “With superhuman AI, there is a particular risk that is of a different sort of class, which is … it could kill everyone.”

“If you imagine training a dog with treats, it will learn to pick actions that lead to getting it treats, but if the dog finds the treat cupboard, it can get the treats itself without doing what we wanted it to do,” added Cohen.

The Independent reported that the researchers also warned the AI industry has become a “literal arms race” as competition mounts to produce both commercial and military applications of the technology.

“I think the bleak scenario is realistic because AI is attempting to bottle what makes humans special, that has led to humans completely changing the face of the earth,” said University of Oxford professor of machine learning Michael Osborne.

“Artificial systems could become as good at outfoxing us geopolitically as they are in the simple environments of games,” added Osborne.

AI as dangerous as nukes

This is not the first time AI scientists have warned of the risk posed by the technology. The latest warning echoes a thought experiment put forward by philosopher Nick Bostrom about 20 years ago.

The Paperclip Maximizer problem hypothesizes that a superintelligent AI would ultimately destroy humanity even if its initial goal, producing as many paperclips as possible, was not explicitly malicious.

“There are some reasons for hope in that we have been pretty good at regulating the use of nuclear weapons. AI is as comparable a danger as nuclear weapons,” said Osborne.

Elon Musk has also previously called for more government oversight, saying the technology has the potential to cause more harm than nuclear weapons.

Self-authored Bill to regulate self

On Wednesday, Congressman Ted Lieu introduced the first ever piece of federal legislation written by artificial intelligence.

Using the AI language model ChatGPT, Lieu offered the following prompt: “You are Congressman Ted Lieu. Write a comprehensive congressional resolution generally expressing support for Congress to focus on AI.”

The resulting resolution was the first in the history of Congress to have been written by AI.

Earlier in the week, Lieu authored an op-ed in the New York Times calling for Congress to take action to regulate AI.

“We can harness and regulate AI to create a more utopian society or risk having an unchecked, unregulated AI push us toward a more dystopian future,” wrote Lieu.

Recognizing the potential positive impacts of artificial intelligence, Lieu’s resolution outlines Congress’ “responsibility to ensure that the development and deployment of AI is done in a way that is safe, ethical, and respects the rights and privacy of all Americans.”

Lieu, who is one of the few members of Congress with a technology background, also said that some of the harm associated with AI could be “deadly.”

“As one of just three members of Congress with a computer science degree, I am enthralled by AI and excited about the incredible ways it will continue to advance society.”

“And as a member of Congress, I am freaked out by AI, specifically AI that is left unchecked and unregulated,” he wrote.

The rise of AI and tools like ChatGPT has raised a lot of concerns about cheating in the classroom.

Earlier this month, New York City’s Department of Education announced a ban on ChatGPT on its schools’ devices and networks over fears it could inspire more cheating by students.

Although not an entirely new technology, ChatGPT, a chatbot created by OpenAI, rose to fame on social media late last year, with some describing the bot as a better search engine than Google due to its conversational style and coherent, topical responses.

CNET Suspends AI After Publishing a Series of Bad Articles

CNET chose to trust the AI.

From November to January, technology website CNET published a series of financial advice articles written by artificial intelligence (AI), but an audit confirmed that the majority of these pieces contained factual errors, serious omissions, and plagiarised content.

The CNET editorial team failed to catch and correct these errors prior to publication.

CNET suspends disastrous ‘test’

Technology site CNET has admitted to publishing a string of low-quality articles written by artificial intelligence. All of the articles were misleadingly published under the byline “CNET Money Staff,” implying to casual observers that they were the work of a human hand.

On Tuesday CNET confirmed it would end the practice in what it now calls a “test” of an “internally designed AI engine.”

The publication offered no apology to its audience for the long list of inaccuracies and wrong financial advice it had printed since November.

“We stand by the integrity and quality of the information we provide our readers,” said CNET.

CNET’s AI engine was tasked with creating financial services explainers and over a three-month period proceeded to serve up wrong financial information to its readers. The articles covered topics such as “What Factors Determine Your Credit Score?” and “What Is a Credit Card Number?”

Under the guise of providing expert information, CNET published a total of 77 AI-written pieces without adequate safeguards. According to the technology outlet, the articles were outlined by human staff members before being written by the AI and were subsequently checked by their editors before final publication. 

Somewhere CNET’s system fell apart. When one of the articles was recently cited for obvious falsehoods, CNET was forced to audit the AI’s body of work. It found that 41 of the 77 pieces required significant corrections. It appears that absolutely everybody took their eye off the ball.

A compendium of nonsense

Among the errors that made their way to curious CNET readers was the following explanation of how compound interest works.

According to the CNET AI, “if you deposit $10,000 into a savings account that earns 3% interest compounding annually, you’ll earn $10,300 at the end of the first year.”

This certainly sounds exciting, but in reality, an investor would receive the rather smaller sum of $300 of interest over the first year. 

In the same piece, the following explanation of loan repayments is offered: “For example, if you take out a car loan for $25,000, and your interest rate is 4%, you’ll pay a flat $1,000 in interest per year.”

This, again, is completely wrong. Once the borrower starts making monthly repayments, they only owe interest on the remaining balance, so there is never a year in which they would pay back “a flat $1,000” in interest.
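For readers who want to check the corrected figures, here is a quick sketch of the standard calculations (an illustration only, not CNET’s methodology; the five-year loan term is an assumption, since the article does not state one).

```python
# Savings: $10,000 at 3% compounded annually earns $300 in year one,
# leaving a balance of $10,300 -- not "$10,300 earned".
principal, rate = 10_000, 0.03
print(principal * rate)            # 300.0 interest in the first year
print(principal * (1 + rate))      # 10300.0 balance after the first year

# Car loan: $25,000 at 4% APR, repaid monthly over an assumed 5-year term.
# Interest is charged only on the remaining balance, so it falls every year.
def yearly_interest(principal, annual_rate, years):
    r = annual_rate / 12
    months = years * 12
    payment = principal * r / (1 - (1 + r) ** -months)   # fixed monthly payment
    balance, per_year, year_total = principal, [], 0.0
    for month in range(1, months + 1):
        interest = balance * r
        balance -= payment - interest
        year_total += interest
        if month % 12 == 0:
            per_year.append(round(year_total, 2))
            year_total = 0.0
    return per_year

print(yearly_interest(25_000, 0.04, 5))
# roughly [916, 728, 533, 330, 118] -- never "a flat $1,000" in any year
```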

Other basic errors are littered across the 41 inaccurate AI articles. For instance, an article on credit card penalties stated the wrong dollar value of a late fee, a report on certificates of deposit failed to include key information and facts, and an article about prequalification for credit cards was found to contain plagiarised content.

As one former CNET staffer put it, “this is so incredibly disappointing and disheartening, but it’s not surprising. What other choice do you have when you lay off all your talented and loyal writers?”

Humans make ‘mistakes’ too

The editorial line taken by CNET is that AIs, like humans, are prone to making mistakes. Even so, CNET has sought to minimize the scandal by stating that only a small number of its error-strewn articles required “substantial correction.”

It is clear then that the failure at CNET was not one simply of artificial intelligence but of human bosses failing to exercise their better judgment. The heart of the matter lies in why those mistakes were allowed to be made in the first place. 

What motivated CNET to churn out large numbers of partially inaccurate and partially plagiarised articles, and why did the company only reveal its ‘test’ after it had been caught in the act?

It’s worth noting that CNET’s parent company, Red Ventures, profits through affiliate advertising programs. When a visitor to one of its sites ultimately purchases a credit card, CNET makes money. Besides CNET, the company owns numerous other publications including Bankrate, The Points Guy, and CreditCards.com.

Given that the AI articles performed well as SEO bait, CNET’s biggest mistake may have been placing affiliate dollars before the integrity of its own journalistic teams. That, sadly, is a very human mistake indeed.

ChatGPT Pricing and Professional Features Revealed

The professional edition of ChatGPT, which is said to be faster and to confer additional benefits to users, is set to cost $42 a month. OpenAI is rolling out the ‘Professional Plan’ of ChatGPT to some users. Professional plan members will receive faster response times and priority access to new features.

Earlier this month, MetaNews reported that OpenAI has plans to move to a subscription model to “continue improving and maintaining the service.” The service is now being rolled out as some users have already received an invitation to upgrade.

ChatGPT Professional

The professional service should also be more reliable than the standard service which throttles at peak times and frequently exceeds its available capacity. Screen captures taken by users who were offered the service show that the professional plan will be “available even when demand is high.”

In an unusual step, OpenAI has refrained from making an official announcement regarding the launch. The last official reference to ChatGPT Professional appears to have come from company President Greg Brockman earlier this month.

“Working on a professional version of ChatGPT; will offer higher limits & faster performance,” said Brockman on January 11.

Since then there has been no additional word from either Brockman, company CEO Sam Altman, the company website, or its social media channels.

The lack of an official announcement might leave room for skepticism about whether the rollout has actually occurred, but this would seem unwarranted given the volume and quality of evidence from user accounts.

One user named Zahid Khawaja went as far as to post a video screen-capture of his experience on ChatGPT Professional.

Given the hype wave that ChatGPT now seems to be surfing, not making an official announcement and inviting further speculation from an excited public may be the wiser marketing strategy.

Microsoft partnership

The rollout of a subscription model for ChatGPT comes hot on the heels of further investment from Microsoft.

On Monday Microsoft announced the ‘third phase’ of its long-term partnership with OpenAI in a ‘multibillion dollar’ deal. Microsoft did not put a specific figure on the terms of the third-phase contract, but it has been widely reported to be in the region of $10 billion.

As MetaNews reported last week, the terms of that deal will include an aggressive cost recovery program. With such a huge investment in the company, the need for recouping that money swiftly becomes self-evident. That program will see 75% of company profits funneled back to Microsoft until their capital injection is recovered.

According to OpenAI CEO Sam Altman, the cost of maintaining ChatGPT on an ongoing basis is “eye-watering.”

Microsoft plans to fold AI solutions into its long-running services. The software firm is planning to incorporate ChatGPT functionality into the Bing search engine as it seeks to steal market share from Google.

Free rivals are still available

A free version of ChatGPT is still available to users, but for those who wish to look further afield, alternatives are available.

You.com is one of the major competitors in the market, combining the features of Google and ChatGPT in one platform. You.com and its YouChat bot got a jump on the market by being among the first chatbots to launch in the wake of ChatGPT.

One of the main differentiators from ChatGPT is that YouChat offers links to supporting evidence and research papers. On the negative side, YouChat occasionally cites ghost papers that do not actually exist.

Another AI competitor in the field is Claude. Its parent company, Anthropic, was founded by former members of the OpenAI team, and it claims to offer better answers and analysis thanks to its Constitutional AI model.
