SuperPrompt Engineer Encourages ChatGPT Hallucinations

ChatGPT hallucinations are generally viewed as a failure and problem for AI systems. But one researcher is going against the grain by pursuing ever deeper chatbot hallucinations as a means of discovery.

According to Brian Roemmele, the hallucinating “ChatGPT SuperPrompt runs itself,” and does so “into infinity.”

The never-ending hallucination

A SuperPrompt designed to give ChatGPT hallucinations can effectively run forever, says its creator.

A chatbot hallucination is a response to a user prompt that often sounds plausible but is factually incorrect. Sometimes hallucinations can be wildly wrong. Most ChatGPT users and commentators find hallucinated answers inconvenient, and in some cases worrying, given how plausible the wrong answers can seem.

But one SuperPrompt engineer is challenging this conventional wisdom by intentionally inducing hallucinations in chatbot AI. Brian Roemmele created the ‘Ingo’ SuperPrompt to push the limits of ChatGPT’s hallucinatory powers.

“As most of the world in the fog of ‘war’ decry AI hallucinations as ‘pure fiction’, ‘embarrassing’ and ‘dangerous’ among other things, we will bravely build a path right into the center of this fog and we will explore,” said Roemmele in his typically dramatic style, on Twitter this Wednesday.

“AI Hallucinations can be a massive force in ‘creative’ thinking. Have no illusions, they do not replace human creative thinking. They amplify it.”

The wrong term

Roemmele argues the term ‘hallucination’ is a misnomer that doesn’t appropriately describe the phenomenon at play within the AI. Instead, he attributes the flights of fantasy to something more closely approximating human speech patterns.

“Like a human that is stringing together ideas contemporaneously, it will fill in some missing parts,” says Roemmele. 

As described, the process that leads to hallucinations is similar to the human experience of searching for the right word, not finding it, and replacing it with something else. The main difference is that rather than a missing word, the AI is contending with a missing truth: something it simply does not know.

While most well-adjusted humans might not replace missing facts with ones of their own creation, AI is unencumbered by human morality. When it runs out of facts, it simply creates new ones.

According to Roemmele, the state an AI enters when struggling with missing information is more akin to the hypnagogic state in humans, sometimes also referred to as waking or lucid dreaming.

While Roemmele may have found a use for this invention of facts, most users would undoubtedly prefer that the AI stick to things that are real.

Personalized AI draws nearer

MetaNews has previously reported on Roemmele’s efforts to create a chatbot that could run from a personal computer without an internet connection.

According to an update on Wednesday, the GPT-3.5 model is almost ready for release and takes up only 4GB of space on a local computer. Roemmele is conducting the work in collaboration with Andriy Mulyar, and the project is named GPT4All.
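For readers curious what such a local chatbot looks like in practice, below is a minimal sketch using the gpt4all Python bindings from the project’s later open-source release; the package name, model filename, and API calls shown are assumptions drawn from that release, not details given in Roemmele’s update.

    # Minimal sketch: running a GPT4All model entirely on a local machine.
    # Assumes the gpt4all package (pip install gpt4all); the model file,
    # roughly 4GB, is fetched once and then runs offline on the CPU.
    from gpt4all import GPT4All

    # The filename is a placeholder; any GPT4All-compatible model works.
    model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")

    with model.chat_session():
        reply = model.generate("Why do language models hallucinate?",
                               max_tokens=200)
        print(reply)

After the one-time model download, no internet connection is needed, which matches the offline goal described above.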

OpenAI’s GPT-5, meanwhile, is in the pipeline and predicted to complete training by December, at least according to developer Siqi Chen. The upgrade may usher in the first chatbot capable of AGI (artificial general intelligence): the ability of an AI system to perform any intellectual task a human can do.

Image credits: Shutterstock, CC images, Midjourney, Unsplash.
