Mark Zuckerberg may have popularized the term “metaverse”, but the Meta founder and CEO no longer seems to be pitching it to users and advertisers with his former enthusiasm. Lately, Meta has turned its focus to generative AI.
Generative AI is a set of machine learning techniques that allows computers to generate text, images, and other media resembling human output. Meta is now trying to get users and advertisers hooked on its TikTok-like short-form videos and AI tools.
The California-based tech company recently announced the creation of a new product unit focused on artificial intelligence. The division, headed by chief product officer Chris Cox, combines several teams from across Meta.
Meta unveils DINOv2
The Meta team is developing AI personas that can help users in many ways, Zuckerberg says. The work includes trials of AI chat experiences in WhatsApp and Messenger, AI image filters and ad formats on Instagram, and AI-driven video and multi-modal experiences.
Last year, Meta AI introduced Make-A-Video, an AI system that generates videos from a text prompt. More recently, it launched several AI products, including DINOv2 and SAM.
Released on April 17, DINOv2 is a computer vision model trained with self-supervised learning. According to a blog post, it produces high-performance visual features, learned directly from images and videos without labeled data, that can be applied to tasks such as image classification, segmentation, and depth estimation.
DINOv2 uses self-supervised learning, a technique that lets a model learn from vast amounts of unlabeled data without human annotation, says Meta. The approach could prove useful for video content creators and in many other applications.
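The core idea of self-supervised learning is that the training signal comes from the data itself rather than from human-written labels. A minimal toy sketch of that idea (not Meta's method; DINOv2 itself uses far more sophisticated techniques on images) is next-character prediction, where each character in a raw text serves as the "label" for the character before it:

```python
# Toy sketch of self-supervised learning: the "labels" are derived
# from the raw data itself (here, the next character in a text),
# so no human annotation is required.
from collections import defaultdict

def train_bigram(text):
    # Count, for each character, which character tends to follow it.
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(text, text[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, ch):
    # Predict the most frequent follower seen during training.
    if ch not in counts:
        return None
    return max(counts[ch], key=counts[ch].get)

corpus = "the theory of the thing"   # unlabeled raw data
model = train_bigram(corpus)
print(predict_next(model, "t"))      # 'h' — learned without any labels
```

DINOv2 applies the same principle at a vastly larger scale: instead of predicting characters, it learns visual features by solving pretext tasks over huge collections of unlabeled images.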
Announced by Mark Zuckerberg this morning — today we're releasing DINOv2, the first method for training computer vision models that uses self-supervised learning to achieve results matching or exceeding industry standards.
— Meta AI (@MetaAI) April 17, 2023
Meta said it used the model in collaboration with Restore Forward to “map forests, tree-by-tree, across areas the size of continents.” DINOv2 can recognize various objects within a video, such as people and pets, and can also infer the relationships between those objects and the scene as a whole.
Digital entrepreneur Abah described the model on Twitter as “a groundbreaking step towards achieving industry-level computer vision models. The use of self-supervised learning is a game-changer and is sure to make waves in the tech industry.”
Others expressed interest in using DINOv2 in agriculture, medicine, and other industries. SAM (Segment Anything Model), on the other hand, is a new AI model that can identify individual objects within an image. It comes with a dataset of image annotations that is available for researchers to use.
Meta’s metaverse problems
A growing number of big technology companies have cooled on the metaverse in recent months as focus shifts to the new AI chatbot craze.
In February, Chinese tech giant Tencent Holdings cut staff at its extended reality (XR) unit and shelved plans for virtual reality (VR) hardware. Both Meta and Microsoft have also scaled back their metaverse plans in a big way.
Meta’s Reality Labs division, the part of the business focused on VR and the metaverse, has been losing money since its inception and reported more than $13.7 billion in losses last year. The company is on track to cut more than 21,000 jobs this year.
Microsoft shut down its VR metaverse arm AltspaceVR on March 10 and laid off the entire staff behind its extended reality projects HoloLens and the Mixed Reality Toolkit (MRTK).
The cutbacks coincide with the current hype surrounding AI chatbots, which began with OpenAI’s breakout hit ChatGPT. Microsoft is leading an AI spending spree, pouring billions of dollars into OpenAI and bringing ChatGPT-style capabilities to its Bing search engine.
As the global tech giants’ exodus raised questions about whether the metaverse is losing steam, Meta has become more aggressive in its AI development, in a move that could paradoxically bolster its metaverse ambitions.
The company’s work on models like DINOv2 can be seen as a step towards the metaverse, the virtual world where people interact with each other as they do in the real world. With DINOv2, Meta hopes to create a more immersive experience for users and push the boundaries of what is possible with AI technology.
AI race heats up
Meta’s AI focus is part of a larger trend in the tech industry, as companies race to build artificial intelligence into their products and services. Adobe, for example, recently unveiled several AI tools, including the generative AI suite Firefly and additions to its machine learning platform Adobe Sensei.
As MetaNews previously reported, Sensei uses machine learning to automate tasks and improve the user experience. Adobe’s AI features also extend to Adobe Stock, where AI-powered search helps users find the right images for their projects.
1. Adobe adds Firefly to video editing
Edit videos using simple text commands, plus:
-AI color corrections/enhancements
-Animated text and motion graphics generation
-Matching B-roll footage
-Music and sound effects
— Rowan Cheung (@rowancheung) April 18, 2023
There is also Adobe Experience Cloud, which uses AI to personalize customer experiences. However, Adobe does not offer any products directly comparable to DINOv2 or SAM.
That said, DINOv2’s need for vast amounts of data also means that data quality can be a major challenge for the model, since incorrect or inconsistent data can negatively impact its performance and accuracy.