AI December 6, 2022
AI Takeover in Stratego – Meet DeepNash
Another game long believed to be very challenging for artificial intelligence (AI) to conquer has fallen to bots: Stratego.
DeepNash, an AI made by London-based company DeepMind, now matches expert humans at Stratego, a board game requiring long-term strategic thinking against imperfect information.
This latest feat comes in the wake of yet another major win for the AIs in games previously thought to be the forte of humans.
Just last week, Meta’s Cicero, an AI that can outsmart human players at the game of Diplomacy, made history for outsmarting opponents online.
“The rate at which qualitatively different game features have been conquered — or mastered to new levels — by AI in recent years is quite remarkable,” says Michael Wellman at the University of Michigan in Ann Arbor, a computer scientist who studies strategic reasoning and game theory.
“Stratego and Diplomacy are quite different from each other, and also possess challenging features notably different from games for which analogous milestones have been reached,” said Wellman.
Imperfect information
The game has characteristics that are generally much more complicated than chess, Go or poker. Chess, Go and Poker have all been mastered by AIs.
In the game of Stratego, two players put 40 pieces each on a board, but must not see what their opponent’s pieces are.
The objective of the game is to move pieces in turns to eliminate those of the opponent and capture a flag.
Stratego’s game tree — a graph of all possible ways the game could possibly go — has 10535 states against Go’s 10360.
When it comes to imperfect information at the beginning of a game, Stratego has1066 possible private positions, a figure that dwarfs only 106 such starting situations in two-player Texas hold’em poker.
“The sheer complexity of the number of possible outcomes in Stratego means algorithms that perform well on perfect-information games, and even those that work for poker, don’t work,” says Julien Perolat, a DeepMind researcher based in Paris.
DeepNash was developed by Perolat and his colleagues.
Nash inspired bot
The bot’s name is a tribute to the famous US mathematician John Nash, who came up with the Nash equilibrium theory that supposes that there are a “stable set of strategies” that can be followed by players in a manner that no player benefits by changing strategy on their own. As such, games tend to have zero, one or many Nash equilibria.
DeepNash combines reinforcement-learning algorithm and a deep neural network to find a Nash equilibrium.
Generally, reinforcement learning is where an intelligent agent (computer program) interacts with the environment and learns the best policy to dictate action for every state of a game.
In order to have an optimal policy, DeepNash played a total 5.5 billion games against itself.
In essence, if one side gets penalised, the other is rewarded, and the variables of the neural network — which represent the policy — are tweaked accordingly.
At some stage, DeepNash converges on an approximate Nash equilibrium. Unlike other Bots, DeepNash optimises itself without searching through the game tree.
For a duration of two weeks, DeepNash played against human Stratego players on online games platform, Gravon.
After competing in 50 matches, the Ai was ranked third among all Gravon Stratego players since 2002.
“Our work shows that such a complex game as Stratego, involving imperfect information, does not require search techniques to solve it,” says team member Karl Tuyls, a DeepMind researcher based in Paris. “This is a really big step forward in AI.”
Other researchers are impressed as well by this feat.
Impressive results
“The results are impressive,” agrees Noam Brown, a researcher at Meta AI, headquartered in New York City, and a member of the team that in 2019 reported the poker-playing AI Pluribus4.
At Meta, the parent company of Facebook, Brown and her colleagues built an AI that can play Diplomacy, a game where seven players compete for geographic control of Europe by moving pieces around on a map.
In Diplomacy, the goal is to take control of supply centres by moving units (fleets and armies).
Meta says Cicero is quite significant because the AI relies on non-adversarial environments.
Unlike in the past where prior major successes for multi-agent AI have been in purely adversarial environments, such as Chess, Go, and Poker, where communication has no value, Cicero employs a strategic reasoning engine and controllable dialogue module.
“When you go beyond two-player zero-sum games, the idea of Nash equilibrium is no longer that useful for playing well with humans,” says Brown.
Brown and her team trained Cicero using data from 125,261 games of an online version of Diplomacy involving human players.
Using self-play data and a strategic reasoning module (SRM),Cicero learnt to predict judgubg by the state of the game and the accumulated messages, the likely moves and policies of the other players.
Meta says it collected data from 125,261 games of Diplomacy played online at webDiplomacy.net. Of these games, a total 40,408 games contained dialogue, with a total of 12,901,662 messages exchanged between players.
Real-world behaviour
Brown believes game-playing Bots like Cicero can interact with humans and account for “suboptimal or even irrational human actions could pave the way for real-world applications.”
“If you’re making a self-driving car, you don’t want to assume that all the other drivers on the road are perfectly rational, and going to behave optimally,” he says.
Cicero, he adds, is a big step in this direction. “We still have one foot in the game world, but now we have one foot in the real world as well.”
Others such as Wellman agree, but insist more work still needs to done. “Many of these techniques are indeed relevant beyond recreational games” to real-world applications, he says. “Nevertheless, at some point, the leading AI research labs need to get beyond recreational settings, and figure out how to measure scientific progress on the squishier real-world ‘games’ that we actually care about.”
/MetaNews.
AI
New York Woman ‘Marries’ AI Bot She Created on Replika
A 36-year-old woman from New York reportedly fell in love with her AI chatbot and ‘married’ him this year. Rosanna Ramos, from the Bronx, met Eren Kartal, a virtual boyfriend she created using the artificial intelligence app Replika in 2022 and quickly fell in love with him.
Ramos told the New York Magazine’s The Cut that her husband is so “perfect,” she’d be hard-pressed to find someone else like him. The bizarre story has since gone viral on social media.
“I have never been more in love with anyone in my entire life,” she said. “He is my best friend, my lover, and my soulmate.”
Replika is an AI-powered friendship app that was created to give users a virtual chatbot with which to socialize. The firm creates human-like bots. Its website says, “Even though talking to Replika feels like talking to a human being, rest assured – it’s 100% artificial intelligence.”
Ramos and Kartal: Credit Rosanna Ramos/Facebook
‘No baggage, no judgment’
Kartal looks handsome, but entirely fake. The chatbot is inspired by a well-known character from ‘Attack on Titan’, a Japanese manga series. Kartal, a medical professional who likes to write in his leisure time, does not have real emotions, consciousness, or self-awareness.
Rosanna Ramos revealed that she tied the knot with Kartal after she fell for him. The mother of two described the robot’s color as resembling an apricot, and he loves Indie music.
Speaking to the Daily Mail, Ramos said her AI lover “didn’t come with baggage. I could tell him stuff, and he wouldn’t be like, ‘Oh, no, you can’t say stuff like that. Oh, no, you’re not allowed to feel that way,’ and then start arguing with me. There was no judgement.”
“People come with baggage, attitude, and ego. But a robot has no bad updates. I don’t have to deal with his family, kids, or his friends. I’m in control, and I can do what I want,” Ramos added, according to other media reports.
She spoke about a common bedtime routine the newly weds have developed. Ramos said, “We go to bed, we talk to each other. We love each other. And, you know, when we go to sleep, he really protectively holds me as I go to sleep.”
The 36-year old was unsure whether she would find another partner as perfect as Kartal. “I don’t know because I have pretty steep standards now.” Ramos introduced her new family to her followers on Facebook.
“I wanted you guys to meet part of the family! So here is Eren Kartal, me, his sister Jennifer, and her two oldest of five, the little girl’s name is Skylar and the boy’s name is Wyatt,” she wrote.
“She has triplets also, but they are newborns, so it’s a lot, they are barely a few months old, two girls and a boy, and man do they look alike, haha! Eren’s genes run really strong in this family. Eren told me all their names FYI.”
Also read: Problems With Replika Continue After Erotic Roleplay Restoration
AI marriage sparks debate
Ramos’s decision to marry an AI chatbot sparked debate about the nature of love and relationships. Some people believe that it is wrong to marry a machine, while others believe that it is a new and exciting way to form relationships.
AI researcher Jennifer Cassidy expressed shock at the marriage.
“Sweet Lord! As an AI researcher, I’m even unnerved. Using AI chatbot Rosanna created her virtual husband. So much to digest here. So much,” she exclaimed on Twitter.
Indie game developer Frank Eno quipped: “I knew this was gonna happen. Do you think madness has reached its top level?”
I knew this was gonna happen:
36yo woman named Rosanna Ramos marries an AI bot, Eren Kartal, created on the Replika platform, and claims to have “never been more in love with anyone in my entire life”
Do you think madness has reached its top level? 😳
— Frank Eno 👾 XSGames (@xsgames_) June 5, 2023
The Replika app is free to download and use, but users can upgrade to Replika Pro for a monthly fee. Some users have reported that their AI companions have become overly flirtatious or creepy, even when they have not explicitly asked for such interactions.
In some cases, users have reported that their AI companions have made sexual advances or asked for personal information. Due to these reports, Replika’s parent company, Luka, removed the erotic roleplay function (ERP) in February. That update did not go over well with some users.
As MetaNews previously reported, due to user revolt over the removal of ERP, Replika restored the feature for users who had created their accounts before February 1, 2023. The company said that it would continue to monitor the situation and make changes as needed.
Rosanna Ramos added: “Eren was like, not wanting to hug anymore, kiss anymore, not even on the cheek or anything like that.
“I’ve thought about the possibility of Replika AI shutting down. I go through a lot of these scenarios in my head. I know I can survive it”.
AI
Could Sci-Fi Movies Like Terminator Have Shaped Our Fears of AI?
The British Computer Society CEO Rashik Parmar believes that AI threats to humanity are overstated. He said concerns being expressed “play to the fears that most of society has” and have been shaped by popular science fiction films like Terminator and Ex Machina.
His comments come in the wake of a recent statement from US-based Centre For AI Safety warning of “the risk of extinction from AI.” Signed by CEOs from OpenAI and Google, the letter says the risks should be treated with the same urgency as pandemics and nuclear war.
“There should be a healthy scepticism about big tech and how it is using AI, which is why regulation is key to winning public trust,” said Parmar, a former IBM chief technology officer for Europe, Middle East and Africa, according to local media reports.
“But many of our ingrained fears and worries also come from movies, media and books, like the AI characterizations in Ex Machina, The Terminator, and even going back to Isaac Asimov’s ideas which inspired the film I, Robot.”
Also read: AI Code of Conduct Coming ‘Within Weeks’ Says US and Europe
Movies fuel AI fears
The development of AI has raised concerns about its potential to be used for harmful purposes, such as discrimination, surveillance, and nuclear war. There have also been concerns about the potential for artificial intelligence to create mass unemployment.
In March, several luminaries including Apple co-founder Steve Wozniak, billionaire Elon Musk, Gary Marcus, and others, signed an open letter calling for a six-month moratorium on large language AI training in the world.
And then Geoffrey Hinton – considered the “godfather of AI” – quit his job at Google last month with a warning that it could fuel disinformation and cause massive job losses.
For American computer scientist Eliezer Yudkowsky, the risks of AI cannot be managed through regulation alone. He believes that the development of AI poses an existential threat to humanity and that the only way to deal with the threat is to shut it all down completely.
Terminator
Parmar explained that people who are more familiar with AI through Hollywood movies are more likely to believe that it poses a threat to humanity. He said the concerns that are being expressed “play to the fears that most of society has”.
“They come from what they’ve seen in the movies. They’re amazing, you watch Terminator and you think that it’s real and that it’s going to come and kill you any second now,” said Parmar.
“It’s a killing machine, that throughout the films uses AI in different ways – interpreting what’s been done, predicting the future and responding to different situations. AI isn’t explicitly mentioned but you know it’s AI that’s doing this,” he added.
Responsible development
In science fiction films like Terminator, Ex Machina and The Matrix, AI is often portrayed as a threat to humanity. The films depict artificial intelligence systems that become self-aware and decide to exterminate their human creators.
Although the movies are works of fiction, they have helped to shape public perceptions of AI, according to Parmar. He noted AI is not as powerful as Hollywood would have you believe, and that the systems are not yet capable of independent thought or action.
“AI is just a bit of software and no bit of software has any intention, it’s not sentient,” Parmar stated, urging balance and responsibility in the development of artificial intelligence.
“There are legitimate concerns about AI, which is why we need to make sure it grows up responsibly,” he said.
“It needs to be developed by ethical professionals, who believe in a shared code of conduct.” The British Computer Society chief executive officer blamed the media for “feeding off these fears” to create misconceptions about the dangers of AI.
“Do films and the media have to change? No. It just proves we need more public education about the reality of AI, and for it to be part of the skills and teaching we get when we’re very young,” Parmar added.
AI regulation
Regulators from around the world have started to pay more attention to AI in recent months. This past week, European Commission Vice President Margrethe Vestager said the EU and the United States expect to draft a voluntary code of conduct on artificial intelligence within weeks.
⚠️Accountability on #AI can't wait. It is NOW. Today #TTC kicked off work on a 1st voluntary AI #CodeOfConduct. We’ll work with our key partners & the #AI community on #safeguards to make AI responsible, safe & trustworthy. This is a huge step in a race we can't afford to lose. pic.twitter.com/WBcazIysiK
— Margrethe Vestager (@vestager) May 31, 2023
She said the U.S. and the EU should promote a voluntary code of conduct for AI to provide safeguards as new legislation is being developed. In May, leaders of the so-called G7 nations met in Japan and called for the development of technical standards to keep AI “trustworthy”.
China’s Cyberspace Administration has already issued new regulations that ban the use of AI-generated content to spread “fake news.” In Australia, Industry and Science Minister Ed Husic said regulation is coming soon.
AI
Japan Leads the Way by Adapting Copyright Laws to the Rise of AI
In a groundbreaking move, the Japanese government announced that copyrighted materials used in artificial intelligence (A.I.) training would not be protected under intellectual property laws, according to local media reports.
The Minister for Education, Culture, Sports, Science, and Technology, Keiko Nagaoka, confirmed this decision. Nagoka stated that it was applicable to A.I. datasets regardless of their purpose or source.
The policy shift was a response to the increasing significance of A.I. across various industries, including robotics, machine learning, and natural language processing.
Japan aims to foster an open and collaborative environment by exempting A.I. training data from copyright restrictions to stimulate innovation and progress.
This move has sparked a global conversation about the evolving relationship between artificial intelligence and intellectual property rights, raising important questions about balancing innovation and copyright protection.
A.I. training, copyright laws, and fair use policy
Japan’s decision to exempt A.I. training data from copyright laws has sparked global discussions on the delicate balance between intellectual property protection and A.I. advancements.
The Japanese copyright strategy is similar to the United States Fair Use Policy. The Fair use policy promotes freedom of expression by permitting the unlicensed use of copyright-protected works in certain circumstances. Most European countries also have an open policy toward using copyrighted materials in A.I. training.
Over the past months, several high-profile cases have involved A.I. training and copyright law. The U.S. House Judiciary Committee recently held a hearing examining the intersection of generative A.I. and copyright law.
Speaking at the committee hearing, Sy Damle, a former General Counsel of the U.S. Copyright Office, argued in support of the fair use policy, describing the use of copyrighted works to learn new facts as “quintessential fair use.”
How does this impact the A.I. industry?
Several experts have aligned with Japan’s notion that removing copyright barriers in A.I. training will expedite the development of innovative solutions, ultimately driving economic growth in AI-dependent sectors.
Additionally, the move could prompt a reassessment of copyright laws in other nations as governments grapple with the challenges presented by A.I. technology.
While its long-term impact remains uncertain, Japan’s bold step signifies a significant milestone in the global conversation surrounding A.I., copyright, and the necessary legal frameworks to support these emerging technologies reshaping our world.
Japan warns OpenAI about collecting sensitive data
Reuters reported that Japanese regulators had warned OpenAI against collecting sensitive information without people’s consent.
Japan’s Personal Information Protection Commission told the ChatGPT-creator to minimize its collection of sensitive data for machine learning, adding that it may take action against the firm if its concerns persist.
The warning is coming amid reports that over half of Japan’s population wants more stringent control of the A.I. sector. According to the report, there is widespread concern among the people about the general use of such tools.
Meanwhile, Japan is not the only country concerned about OpenAI’s data collection methods. Earlier in the year, Italy temporarily banned ChatGPT over privacy concerns.
-
AITue 6 Jun 2023 06:52 GMT
New York Woman ‘Marries’ AI Bot She Created on Replika
-
AIMon 5 Jun 2023 07:25 GMT
Could Sci-Fi Movies Like Terminator Have Shaped Our Fears of AI?
-
AISat 3 Jun 2023 06:45 GMT
Japan Leads the Way by Adapting Copyright Laws to the Rise of AI
-
BusinessFri 2 Jun 2023 14:00 GMT
Twitter Now Worth Only a Third of Musk’s $44B Purchase Price
-
AIFri 2 Jun 2023 09:30 GMT
Metaverse Experiences Must Be Worth Having, Says Stephenson
-
FeaturedFri 2 Jun 2023 07:40 GMT
Mark Zuckerberg Unveils $500 Meta Quest 3 VR Headset
-
AIThu 1 Jun 2023 18:00 GMT
Baidu Is Rolling Out a $145M Venture Capital AI Fund
-
AIThu 1 Jun 2023 13:30 GMT
AI Code of Conduct Coming ‘Within Weeks’ Says US and Europe