
Grok AI Model Challenged by the Viral Groq AI Model

With the Groq AI model making ChatGPT appear weak and drawing parallels with Elon Musk’s model, also named Grok, social media users are beginning to notice.

Groq is the latest artificial intelligence (AI) model that is causing quite a stir on social media thanks to its response speed and innovative technologies that may eliminate the need for GPUs.

Also read: AI is Making Phishing More Convincing Than Ever

The AI model became an overnight sensation after its public benchmark tests went viral on the social media platform X, demonstrating that Groq outperformed the popular AI chatbot ChatGPT.

Groq using language processing unit (LPU)

Groq’s response speed is due to the team behind the AI model developing a unique application-specific integrated circuit (ASIC) chip for large language models (LLMs), which allows it to produce 500 tokens per second. By comparison, the free, publicly available version of ChatGPT, which runs on GPT-3.5, generates around 40 tokens per second.
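To put those throughput figures in perspective, the quoted rates translate directly into how long a user waits for a full response. The sketch below uses only the numbers cited above (500 tokens/s for Groq's LPU, ~40 tokens/s for GPT-3.5); the 500-token response length is an illustrative assumption, not a figure from either vendor.

```python
def generation_time(num_tokens: int, tokens_per_second: float) -> float:
    """Seconds needed to stream num_tokens at a given throughput."""
    return num_tokens / tokens_per_second

# Assumed response length for illustration only.
response_tokens = 500

lpu_seconds = generation_time(response_tokens, 500)  # Groq's claimed rate
gpu_seconds = generation_time(response_tokens, 40)   # GPT-3.5's cited rate

print(f"LPU: {lpu_seconds:.1f}s vs GPU-based: {gpu_seconds:.1f}s")
```

At these rates, a response that streams in about one second on the LPU would take over twelve seconds on the GPU-based model, which is the gap users noticed in the viral benchmarks.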

Rather than using the expensive and rare graphics processing units (GPUs) usually used to run AI models, the developer of this model, Groq Inc., claims to have created the first language processing unit (LPU) through which it runs its model.

According to Groq, LPUs are also more energy efficient. LPUs can do more computations per watt thanks to their ability to prevent underutilization of cores and reduce the effort required to manage multiple threads.

Additionally, Groq’s chip design allows multiple tensor streaming processors (TSPs) to be connected together, which eliminates the traditional bottlenecks associated with GPU clusters. According to Groq, this minimizes the hardware requirements for large AI models and makes the system scalable.

Groq challenges Grok

The company behind Groq, however, was founded in 2016, and it registered “Groq” as a trademark. Last November, as Elon Musk’s similarly named AI model, Grok (spelled with a “k”), was beginning to gain popularity, the creators of Groq published a blog post calling Musk out for his choice of moniker.

In the blog post, Groq said they can see why Musk might want to adopt their name. Musk likes fast things (rockets, hyperloops, one-letter company names), and their product, the Groq LPU Inference Engine, is the quickest way to run large language models (LLMs) and other generative AI applications. However, they must ask Musk to please choose another name and fast.

Since Groq went viral on social media, neither Musk nor the team behind Grok has commented on X (formerly Twitter) about the similarity between the two models’ names.

Users react

However, several users on the platform have started comparing the LPU model with other popular GPU-based models.

According to one user who works in AI development, Groq is a “game changer” for products that require low latency (which refers to the time it takes to execute a request and get a response).

Another user said that Groq’s LPUs might be a good substitute for the “high-performing hardware” of the highly sought-after Nvidia A100 and H100 chips, as well as a “massive improvement” over GPUs in the future when it comes to meeting the demands of AI applications.

This comes as leading AI developers work to build their own chips in-house to avoid depending solely on Nvidia’s hardware.

Image credits: Shutterstock, CC images, Midjourney, Unsplash.
