Google Claims Its AI Model Acquired an Unexpected Skill by Itself

Google executives have claimed that one of the company’s artificial intelligence programs taught itself a new language that it was not trained on.

In a recent interview with CBS’s Scott Pelley, James Manyika, Google’s senior vice president of technology and society, said the AI program was somehow able to learn Bengali without training after being prompted in the language.

“We discovered that with very few amounts of prompting in Bengali, it can now translate all of Bengali,” said the Zimbabwe-born Manyika.

“So now, all of a sudden, we now have a research effort where we’re now trying to get to a thousand languages.”

Bengali is widely spoken in South Asia, particularly in Bangladesh and eastern India. The AI program in question was trained on more than 400 languages – reportedly the largest language coverage of any speech model today.
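For readers unfamiliar with the technique, “very few amounts of prompting” refers to few-shot prompting: showing a model a handful of worked examples and letting it generalize from them. The Python sketch below is a hypothetical illustration of how such a prompt might be assembled; the example pairs, the prompt format, and the overall setup are assumptions for illustration, not Google’s actual method or API.

```python
# A minimal sketch of few-shot prompting for translation. The example pairs
# and prompt format are hypothetical illustrations, not Google's actual setup.

FEW_SHOT_EXAMPLES = [
    ("Hello", "হ্যালো"),
    ("Thank you", "ধন্যবাদ"),
    ("How are you?", "আপনি কেমন আছেন?"),
]

def build_prompt(english_text: str) -> str:
    """Assemble a few-shot prompt: a handful of worked English-to-Bengali
    pairs, followed by the new input the model is asked to complete."""
    lines = ["Translate English to Bengali."]
    for en, bn in FEW_SHOT_EXAMPLES:
        lines.append(f"English: {en}\nBengali: {bn}")
    lines.append(f"English: {english_text}\nBengali:")
    return "\n\n".join(lines)

if __name__ == "__main__":
    # In practice this prompt would be sent to the language model; here we
    # simply print it to show the structure a few-shot prompt takes.
    print(build_prompt("Where is the library?"))
```

What Google describes as surprising is that a handful of demonstrations like these was reportedly enough for the model to go on to translate Bengali in general, a capability it was never explicitly trained for.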

The AI black box

Google CEO Sundar Pichai backed Manyika’s claims, admitting that the California-based tech giant did not fully understand the capabilities of its AI technology. One of the major concerns he raised about Google’s Bard was what he called “emergent properties.”

Pichai was referring to artificial intelligence systems that may now be capable of learning skills on their own. According to Google’s research, language models can acquire new abilities independently, without being explicitly programmed to do so.

“There is an aspect of this which we call – all of us in the field call it as a ‘black box.’ You know, you don’t fully understand,” Pichai said during the same interview.

“And you can’t quite tell why it said this, or why it got it wrong. We have some ideas, and our ability to understand this gets better over time. But that’s where the state of the art is.”

The CEO emphasized the importance of humility in developing AI, given how rapidly the technology continues to evolve. He said AI was advancing faster than societal institutions could adapt, raising concerns about whether society is prepared for what is coming.

Google’s dominance was challenged when Microsoft unveiled its new AI-powered Bing search in February, prompting Google to release its own chatbot, Bard. The chatbot is designed to help users brainstorm ideas and generate content such as speeches, blog posts, and emails.

Bard’s program is mostly self-taught, which can be unsettling. According to Google executives, it appears to possess the sum of human knowledge and runs on microchips more than 100,000 times faster than the human brain.

Google AI raises ethical questions

Google’s announcement has raised concerns about AI developing skills independently of its programmers’ intentions, which has long been a topic of discussion among scientists, ethicists, and science fiction writers.

There are real concerns that AI could develop skills that are harmful, unethical, or uncontrollable, and that it could ultimately pose a threat to humanity.

However, Google executives explained that the AI program’s ability to learn a new language was not the result of unintended consequences or programming errors. Instead, it was a deliberate feature of the program’s design.

Pichai was forthright about the risks of rushing the new technology. He said Google has “the urgency to work and deploy it in a beneficial way, but at the same time it can be very harmful if deployed wrongly.”

“We don’t have all the answers there yet, and the technology is moving fast. So does that keep me up at night? Absolutely,” he added.

Pichai remains optimistic that society is becoming more aware of the implications of AI and that serious conversations about the technology are beginning. As with any technology, he said, AI will be as good or as evil as human nature allows.

Not sentient

James Manyika, the Google SVP, said the fact that the artificial intelligence could learn a new skill without training does not mean the program is sentient – that is, capable of having feelings, emotions, ideas, thoughts, and perspectives just as human beings do.

“They’re not sentient. They’re not aware of themselves. They can exhibit behaviors that look like that. Because keep in mind, they’ve learned from us. We’re sentient beings,” he stated.

AI sentience remains a hotly debated topic. A former Google engineer once claimed the company’s AI models had become sentient, a claim the firm denied. More recently, Microsoft researchers claimed that OpenAI’s GPT-4 may already be as smart as humans across a variety of fields.

The company claimed the language model, which it integrated into its Bing AI search engine, is an early form of artificial general intelligence (AGI) – meaning that it has gained the ability to perform any intellectual task a human can, and perhaps perform it even better.

Image credits: Shutterstock, CC images, Midjourney, Unsplash.
