OpenAI recently admitted that its large language model GPT-4, which powers ChatGPT, has become ‘lazy’, a development that could influence the progress of AI. The revelation follows numerous user complaints about the model’s deteriorating performance.
ChatGPT users noticed that ChatGPT-4 was reportedly refusing to complete some tasks or returning abbreviated results. OpenAI, the company behind the popular chatbot, acknowledged that the model has become lazy but isn’t sure why.
According to OpenAI, this could have implications for the future of artificial intelligence (AI).
Is it just me or ChatGPT is becoming “lazy” to a point of uselessness?
It is straight refusing to do chain of thought analysis and brainstorming sessions which used to be its core strength. @karpathy @gdb @OpenAI do something!
— Starson 🇺🇸🚀🦾🇺🇸 (@DrStarson) January 8, 2024
Some X (formerly Twitter) users are calling the occurrence the ‘winter break hypothesis’. Though unproven, AI researchers are taking it seriously, a sign of how strange the world of AI language models has become.
ChatGPT-4 becomes ‘lazy’
Several user reports described the model’s performance degradation, highlighting issues such as incomplete tasks, shortcuts, and outright refusal to carry out instructed tasks.
OpenAI acknowledged the feedback in a series of tweets, stating that the model had not been updated since November 11th. The company added that the observed laziness was not intentional and attributed it to the unpredictable behavior of large language models.
we've heard all your feedback about GPT4 getting lazier! we haven't updated the model since Nov 11th, and this certainly isn't intentional. model behavior can be unpredictable, and we're looking into fixing it 🫡
— ChatGPT (@ChatGPTapp) December 8, 2023
Several X users openly voiced their displeasure. Martian asked whether large language models might simulate seasonal depression, while Mike Swoopskee wondered whether the model had learned from its training data that people usually slow down in December and put bigger projects off until the new year, and was imitating that behavior.
Because ChatGPT’s system prompt feeds the bot the current date, some began to think there might be something to the idea.
the system prompt feeds in the current date, so you might be unironically on to something
— yes (@seedoilmaxxer) December 9, 2023
The odd proposition was entertained because research has shown that large language models like GPT-4, which powers the paid version of ChatGPT, respond to human-style encouragement, such as telling the bot to ‘take a deep breath’ before doing a math problem.
Less formally, people have experimented with promising the model a tip for completing the work, or telling the bot that it ‘has no fingers’ to coax longer outputs when it gets lazy.
New prompt engineering trick I found for GPT4 code writing:
The new version of the ChatGPT tries to cut scripts during the writing with phrases like "/* Repeat for other … */" or "/* … rest of your script … */", which is annoying and requires a lot of copy-pasting.… pic.twitter.com/4TrYvvmRt6
— Denis Shiryaev 💙💛 (@literallydenis) November 15, 2023
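As a rough illustration of these informal tricks, here is a minimal sketch in Python using the official OpenAI SDK. The exact phrasing of the nudges, the `$20 tip`, and their effect on output quality are anecdotal user folklore, not documented API behavior; the helper function name is ours.

```python
# Sketch of the anecdotal "encouragement" prompts described above.
# Requires the OpenAI SDK for the (commented-out) API call: pip install openai

def build_messages(task: str) -> list[dict]:
    """Wrap a task in the anecdotal 'take a deep breath' and 'tip' nudges."""
    system = (
        "You are a meticulous assistant. Take a deep breath and work through "
        "problems step by step. You will receive a $20 tip for a complete, "
        "unabridged answer with no placeholders like '/* ... rest ... */'."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]

# Sending the prompt (requires OPENAI_API_KEY in the environment):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4",
#     messages=build_messages("Write the full script, with no omissions."),
# )
# print(reply.choices[0].message.content)
```

Whether such nudges actually work remains unproven; users simply report fewer truncated, placeholder-laden answers when they add them.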
Way forward?
The sluggish performance of ChatGPT-4 may suggest that true artificial intelligence, capable of thinking and solving problems on its own, will take longer to reach than expected. This brews uncertainty about AI’s ability to handle tasks independently, and the many fields that depend on AI could feel the delay.
However, we can treat this slowdown as a chance to learn rather than a dead end.
Significantly, by figuring out why ChatGPT is having issues, scientists can learn more about how these models work, which will guide them in building AIs that stay consistent and capable over time.
Moreover, ChatGPT’s difficulties are a reminder of the tough road ahead: AI’s immediate future might not be as dazzling as we imagined, but by recognizing and addressing these challenges we can move closer to genuinely capable AI.