
RSC, a giant metaverse supercomputer by Meta

Meta, the tech giant previously known as Facebook, revealed Monday that it has built one of the world’s fastest supercomputers: a behemoth called the Research SuperCluster, or RSC. Meta CEO Mark Zuckerberg says the system is the fastest for AI workloads, with 6,080 graphics processing units packed into 760 Nvidia A100 modules, or eight GPUs per module.

That computing power is matched by the Perlmutter supercomputer, which employs more than 6,000 of the same Nvidia GPUs and currently ranks as the world’s fifth-fastest supercomputer. In a second phase, Meta intends to boost performance by a factor of 2.5 this year by expanding to 16,000 GPUs.

RSC. Image: MetaNews.

Meta will use RSC for a host of research projects

Meta will use RSC for a number of research projects requiring next-generation performance, including “multimodal” AI, which draws conclusions from a combination of sound, imagery, and actions rather than a single type of input data. This could be useful in tackling the nuances of one of Facebook’s major problems: identifying harmful content.

Meta, a leading AI researcher, hopes the investment will pay off by letting RSC assist in developing the company’s latest priority: the virtual environment known as the metaverse. RSC could be powerful enough to interpret speech simultaneously for a large number of people, each speaking a different language.

In a statement, Meta CEO Mark Zuckerberg said: “The experiences we’re designing for the metaverse demand immense compute capacity. With RSC, new AI models will be able to learn from billions of samples, grasp hundreds of languages, and much more.”

According to Meta researchers Kevin Lee and Shubho Sengupta, RSC is around 20 times faster than the company’s previous Nvidia-based machine from 2017 at one of the most popular AI tasks: training a system to recognize what’s in a photo. It is nearly three times faster at decoding human speech.

RSC could also help with a particularly thorny AI problem that Meta calls self-supervised learning

Most AI models require meticulously labeled training data. Photos of stop signs, for example, are labeled to train AI for autonomous vehicles, and audio recordings are paired with transcripts to train speech recognition AI.

Self-supervised training, by contrast, works on raw, unlabeled data, a more difficult process in which humans still hold the upper hand over computers.
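To make the distinction concrete, here is a minimal sketch of self-supervised learning in PyTorch (the framework Meta develops, discussed below): a small network learns to reconstruct masked-out parts of unlabeled inputs, so the data itself supplies the training signal. The architecture and dimensions are illustrative assumptions, not Meta’s actual models.

```python
# Minimal self-supervised sketch: reconstruct masked features of unlabeled data.
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Raw" unlabeled data: 256 samples with 32 features each (illustrative sizes).
data = torch.randn(256, 32)

# A small network that fills in masked-out features.
model = nn.Sequential(
    nn.Linear(32, 64),
    nn.ReLU(),
    nn.Linear(64, 32),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):
    # Randomly hide about 25% of each sample's features.
    mask = torch.rand_like(data) < 0.25
    corrupted = data.masked_fill(mask, 0.0)

    # The target is the original data itself: the "label" comes for free.
    reconstruction = model(corrupted)
    loss = loss_fn(reconstruction[mask], data[mask])

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final reconstruction loss: {loss.item():.4f}")
```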

Meta and other proponents of artificial intelligence have demonstrated that training AI models on larger data sets produces better results. It takes far more computational power to train an AI model than to run it, which is why an iPhone can unlock by recognizing your face without a connection to a data center full of servers.
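A rough sketch of why running a model is so much cheaper, assuming a toy PyTorch model: inference is a single forward pass with gradient tracking switched off, whereas training repeats forward passes, backward passes, and weight updates many times over (as in the loop above).

```python
import torch
import torch.nn as nn

# A toy trained model standing in for something like a face recognizer.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 32))
model.eval()  # switch layers to their run-time (inference) behavior

with torch.no_grad():  # skip the gradient bookkeeping that training needs
    output = model(torch.randn(1, 32))  # one forward pass; cheap enough for a phone-class chip
```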

Supercomputer designers customize their machines by picking the right balance of memory

Designers of supercomputers tailor their machines by balancing memory, GPU performance, CPU performance, power consumption, and internal data paths. The GPU, a type of processor originally developed for accelerated graphics but now utilized for many other computer tasks, is often the star of the show in today’s AI.

The cutting-edge A100 chips from Nvidia are designed for AI and other heavy-duty data center activities. Big firms like Google, as well as a slew of startups, are developing AI processors, some of which are the world’s largest semiconductors.

According to Meta, the A100 GPU foundation, combined with the company’s own PyTorch AI engine, makes for the most productive development environment.
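As an illustration of that pairing, the sketch below shows how a PyTorch workload targets an Nvidia GPU such as the A100 when one is available; the model and sizes are placeholder assumptions, not Meta’s code.

```python
import torch
import torch.nn as nn

# Pick the GPU if one is present, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
if device.type == "cuda":
    # On an RSC node this would report an A100, e.g. "NVIDIA A100-SXM4-80GB".
    print("Running on:", torch.cuda.get_device_name(device))

model = nn.Linear(1024, 1024).to(device)      # move parameters onto the accelerator
batch = torch.randn(64, 1024, device=device)  # allocate inputs there as well
output = model(batch)                         # the matrix multiply runs on the GPU when present
```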

