Meta Platforms’ AI research team has announced advances in adaptive skill coordination and in building an artificial visual cortex. The company claims these developments will enable AI-powered robots to operate in real-world settings without being trained on any real-world data.
The breakthroughs mark significant strides toward general-purpose “embodied AI agents” that can operate in the real world without human intervention.
Known as “adaptive (sensorimotor) skill coordination” (ASC), the new approach achieved near-perfect performance (a 98 percent success rate) on the challenging task of robotic mobile manipulation in physical environments, which involves navigating to an object, picking it up, carrying it to another location, placing it, and repeating the sequence.
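To make the task structure concrete, here is a minimal Python sketch of that loop; the robot object and the navigate, pick, and place skills are hypothetical stand-ins rather than Meta’s actual ASC interface.

```python
from dataclasses import dataclass

@dataclass
class Task:
    object_location: tuple  # (x, y) where the object currently sits
    goal_location: tuple    # (x, y) where it should be placed

def navigate(robot, target):
    """Hypothetical skill: drive the robot base to a target pose."""
    robot.move_to(target)

def pick(robot):
    """Hypothetical skill: grasp the object in front of the robot."""
    robot.grasp()

def place(robot):
    """Hypothetical skill: set the held object down."""
    robot.release()

def mobile_pick_and_place(robot, tasks):
    # ASC-style mobile manipulation: alternate navigation and
    # manipulation skills for each rearrangement task, then repeat.
    for task in tasks:
        navigate(robot, task.object_location)
        pick(robot)
        navigate(robot, task.goal_location)
        place(robot)
```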
If you’re getting I, Robot vibes, you’re not alone.
Innovative methods for general embodied AI agents
According to Meta Platforms, progress in embodied AI hinges on data: AI needs data to learn, and embodied AI in particular needs data that captures a robot’s interactions with its environment.
However, collecting this interaction data has traditionally meant either gathering large numbers of demonstrations or letting the robot learn from scratch, both of which are too resource-intensive to scale toward a general embodied AI agent.
To address this, the company has developed two new ways for robots to learn. The first involves training a general-purpose visual representation model on a large corpus of egocentric videos, including Meta’s open-source Ego4D dataset, which shows first-person views of people performing everyday tasks.
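Meta’s tweet below names the resulting model VC-1. Self-supervised pre-training of this kind is often done with masked autoencoding: hide most of each frame and train a network to reconstruct it. The PyTorch sketch that follows illustrates the idea at toy scale; the TinyMAE architecture and random stand-in frames are illustrative assumptions, not Meta’s actual model or training code.

```python
import torch
import torch.nn as nn

PATCH, DIM = 16, 128

class TinyMAE(nn.Module):
    """Toy masked autoencoder: patchify, encode, reconstruct pixels."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, DIM, kernel_size=PATCH, stride=PATCH),  # patchify
            nn.ReLU(),
        )
        self.decode = nn.ConvTranspose2d(DIM, 3, kernel_size=PATCH, stride=PATCH)

    def forward(self, frames):
        b, _, h, w = frames.shape
        # Randomly keep ~25% of patches visible; zero out the rest.
        mask = (torch.rand(b, 1, h // PATCH, w // PATCH) > 0.75).float()
        visible = frames * mask.repeat_interleave(PATCH, 2).repeat_interleave(PATCH, 3)
        return self.decode(self.encode(visible))

model = TinyMAE()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
frames = torch.rand(8, 3, 224, 224)  # stand-in for egocentric video frames
# For simplicity the loss covers the whole frame; a real MAE scores
# only the masked patches.
loss = nn.functional.mse_loss(model(frames), frames)
loss.backward()
opt.step()
```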
Meta AI announced the work in a March 31, 2023 tweet: “Today, we’re sharing two major advancements in our work toward general-purpose embodied AI agents: VC-1 & ASC. We’re excited for how this work will help build toward a future where AI agents can assist humans in both the virtual & physical world.”
The second method involves pre-training robots to perform long-horizon rearrangement tasks in simulation, then transferring the policy zero-shot to a real Spot robot so it can perform those tasks in unfamiliar real-world spaces.
“We’ve built a way for robots to learn from real-world human interactions by training a general-purpose visual representation model from a large number of egocentric videos,” the company stated. It also built a “way to pre-train our robot to perform long-horizon rearrangement tasks in simulation.”
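A minimal sketch of that sim-to-real pattern is below: a policy network is trained entirely in a simulator, then its frozen weights are run directly on the physical robot with no fine-tuning. The sim_env and robot objects and their methods (observe, step, execute, task_done) are hypothetical placeholders, not Meta’s actual pipeline.

```python
import torch
import torch.nn as nn

class Policy(nn.Module):
    """Maps an observation vector (e.g., encoded depth + goal) to action logits."""
    def __init__(self, obs_dim=512, n_actions=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)

def train_in_simulation(policy, sim_env, steps=1_000_000):
    # Placeholder training loop: in practice this would be a full RL
    # algorithm (e.g., PPO) run over many simulated episodes.
    for _ in range(steps):
        obs = sim_env.observe()
        action = policy(obs).argmax()
        sim_env.step(action)
        # ... compute rewards and update policy weights here ...

def deploy_zero_shot(policy, robot):
    # Zero-shot transfer: weights learned in simulation are frozen
    # and used directly on the physical robot.
    policy.eval()
    with torch.no_grad():
        while not robot.task_done():
            obs = robot.observe()  # same observation format as in sim
            action = policy(obs).argmax()
            robot.execute(action)
```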
AI-powered brand safety measures
In other news, Meta has unveiled a new set of inventory filters for Facebook and Instagram feeds, aimed at strengthening brand safety in ad placement.
The filters give brands a simple way to keep their ads away from unwanted or objectionable content, adding a layer of protection against offensive material.
Advertisers can now select one of three safety levels for their ad placements on Meta: expanded, moderate, and limited.
The expanded inventory is the default setting, displaying ads alongside content that meets both the Community Standards and monetization criteria. The moderate inventory, meanwhile, excludes ad placement near content deemed risky, such as non-violent crime, coarse language, and mildly suggestive topics, following the GARM Brand Suitability Framework.
A final option, the limited inventory, ensures ads won’t be displayed next to any high- or medium-risk content.
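To illustrate how the three tiers nest, here is a hypothetical sketch of the placement logic; the Risk labels, thresholds, and can_place_ad function are illustrative stand-ins, not Meta’s actual Ads API.

```python
from enum import Enum

class Risk(Enum):
    # Hypothetical labels loosely inspired by the GARM framework.
    NONE = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# Each inventory tier tolerates content up to a maximum risk level.
INVENTORY_MAX_RISK = {
    "expanded": Risk.HIGH,    # default: any content meeting baseline rules
    "moderate": Risk.MEDIUM,  # excludes the riskiest content
    "limited": Risk.LOW,      # also excludes medium-risk content
}

def can_place_ad(inventory: str, content_risk: Risk) -> bool:
    """Return True if an ad may appear next to content of this risk level."""
    return content_risk.value <= INVENTORY_MAX_RISK[inventory].value

assert can_place_ad("expanded", Risk.HIGH)
assert not can_place_ad("moderate", Risk.HIGH)
assert not can_place_ad("limited", Risk.MEDIUM)
```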