OpenAI’s ChatGPT Faces Criticism for Incomplete and Unresponsive Behavior

OpenAI recently admitted that its large language model GPT-4 has become ‘lazy’, a development that could shape expectations for AI progress. The admission follows numerous user complaints about the model’s deteriorating performance.

ChatGPT users noticed that GPT-4 was becoming lazy, reportedly refusing to complete some tasks or returning simplified results. OpenAI, the company behind ChatGPT, acknowledged that the model has become lazy, but said it isn’t sure why.


According to OpenAI, this could have implications for the future of artificial intelligence (AI).

Some X (formerly Twitter) users have dubbed the phenomenon the ‘winter break hypothesis’. Though unproven, the idea is being taken seriously by AI researchers, a sign of how strange the world of AI language models has become.

ChatGPT-4 becomes ‘lazy’

Several users reported that the model’s performance had degraded, highlighting issues such as incomplete tasks, shortcuts, and outright refusals to carry out instructions.

OpenAI acknowledged the feedback in a series of tweets, noting that the model has not been updated since November 11th. The company said the observed laziness was not intentional and attributed it to the unpredictable nature of large language models.

Some X users openly voiced their displeasure. One, Martian, wondered whether large language models might simulate seasonal depression. Another, Mike Swoopskee, asked whether the model had learned from its training data that people usually slow down in December and put bigger projects off until the new year, and whether that is why it has been lazy lately.

Additionally, because ChatGPT’s system prompt feeds the bot the current date, some began to suspect there was something to the idea.
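The date-in-the-system-prompt idea is testable in principle: hold the user prompt fixed, vary only the date line in the system message, and compare average reply lengths across many runs. Below is a minimal sketch of how such an A/B payload might be built; the system-prompt wording and the helper functions are assumptions for illustration, not OpenAI’s actual prompt or tooling.

```python
# Sketch of a "winter break hypothesis" A/B test: two identical requests
# that differ only in the date injected into the system prompt.
# NOTE: the system-prompt wording here is a stand-in, not OpenAI's real prompt.

def build_payload(date_str, user_prompt, model="gpt-4"):
    """Return a chat-completions-style payload with the given date."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": f"You are a helpful assistant. Current date: {date_str}."},
            {"role": "user", "content": user_prompt},
        ],
    }

def mean_length(replies):
    """Average character length of a list of model replies."""
    return sum(len(r) for r in replies) / len(replies)

# Same task, two dates -- only the system prompt differs.
task = "Write a Python function that parses an ISO-8601 date string."
may_payload = build_payload("2023-05-15", task)
dec_payload = build_payload("2023-12-15", task)

# The hypothesis predicts that, after sending each payload many times and
# collecting the replies, mean_length(december_replies) would come out
# smaller than mean_length(may_replies).
```

Each payload would be sent repeatedly to the chat API and the reply lengths compared; a single run proves nothing, since output length varies a lot between samples.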

The proposition is not as far-fetched as it sounds: research has shown that large language models such as GPT-4, which powers the paid version of ChatGPT, respond to human-style encouragement, like being told to ‘take a deep breath’ before tackling a math problem.

Less formally, users have experimented with telling the model it will receive a tip for completing the work, or telling the bot that it has no fingers, tricks that reportedly lengthen its outputs when it gets lazy.

Way forward?

ChatGPT-4’s sluggish performance may suggest that truly autonomous AI, capable of thinking and solving problems on its own, will take longer to arrive than expected. That uncertainty ripples into every field that depends on AI.

However, we can treat this as a chance to learn rather than a dead end.

Significantly, by figuring out why ChatGPT is struggling, scientists can learn more about how AI works, knowledge that will guide them in building models that stay capable and consistent in the future.

Moreover, ChatGPT’s difficulties are a reminder of the tough road ahead: AI’s immediate future may not be as dazzling as we hoped, but by recognizing and addressing these challenges with better knowledge and care, we can move closer to genuinely capable AI.

Image credits: Shutterstock, CC images, Midjourney, Unsplash.