Is ChatGPT already old news? It seems impossible, given that the explosion in popularity of AI is permeating every aspect of our lives – whether it's digital masterpieces created with the best AI art generators or help with our online shopping.
But despite being at the forefront of the AI arms race – and powering Microsoft's Bing AI – it looks like ChatGPT could be losing momentum. According to SimilarWeb, traffic to OpenAI's ChatGPT website fell by almost 10% compared to the previous month, while data from Sensor Tower showed that downloads of the iOS app are also declining.
As reported by Insider, paying users of the more powerful GPT-4 model (access to which is included with ChatGPT Plus) have complained on social media and on OpenAI's own forums about a drop in the chatbot's output quality.
The general consensus was that GPT-4 generated results faster but at lower quality. Peter Yang, a product lead at Roblox, took to Twitter to criticize the bot's recent output, claiming that "the quality seems to be down". One forum user said the recent GPT-4 experience felt "like driving a Ferrari for a month and then suddenly turning into a battered old pickup".
"GPT-4's output has changed recently. It generates faster, but the quality seems worse. Perhaps OpenAI is trying to save costs. Has anyone else noticed this?" – May 21, 2023
Why is GPT-4 suddenly having problems?
Some users were even harsher, calling the bot "dumber" and "lazier" than before, with a long thread on the OpenAI forums full of complaints of all kinds. One user, "bitbytebit", described it as "completely awful now" and "brain dead compared to before".
According to users, there was a point a few weeks ago when GPT-4 became massively faster – albeit at the expense of performance. The AI community has speculated that this could be due to a shift in OpenAI's design approach for its most powerful machine learning model – namely, splitting it into multiple smaller models that are each trained on a specific area and work together to produce the same end result, while being cheaper for OpenAI to run.
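OpenAI has not described any such architecture publicly, but a toy sketch can illustrate the general idea being speculated about: instead of one giant model answering everything, a router sends each prompt to a smaller, cheaper model specialized for that kind of request. Everything below – the model names, the topics, the keyword-based routing – is purely hypothetical and only meant to show the shape of the approach.

```python
# Hypothetical sketch of the "several smaller expert models" idea.
# None of these model names or routing rules come from OpenAI.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ExpertModel:
    name: str
    topics: set[str]                     # areas this smaller model specializes in
    generate: Callable[[str], str]       # stand-in for a real model call

def make_expert(name: str, topics: set[str]) -> ExpertModel:
    # In reality this would wrap a smaller, cheaper model behind an API.
    return ExpertModel(name, topics, lambda prompt: f"[{name}] answer to: {prompt}")

EXPERTS = [
    make_expert("code-expert", {"python", "bug", "function"}),
    make_expert("writing-expert", {"essay", "email", "story"}),
    make_expert("general-expert", set()),   # fallback for everything else
]

def route(prompt: str) -> ExpertModel:
    """Pick the specialized model whose topics best match the prompt."""
    words = set(prompt.lower().split())
    best = max(EXPERTS, key=lambda e: len(e.topics & words))
    return best if best.topics & words else EXPERTS[-1]

if __name__ == "__main__":
    prompt = "fix this python function for me"
    expert = route(prompt)
    print(expert.generate(prompt))   # handled by a cheaper specialist, not one giant model
```

The appeal of such a design is obvious: each specialist can be smaller and faster to run, which matches users' reports of quicker but sometimes weaker answers.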
OpenAI has yet to officially confirm this, and there has been no public mention of such a major change in how GPT-4 works. But it's a credible explanation according to industry experts like Sharon Zhou, CEO of AI development company Lamini, who described the multi-model idea as the "natural next step" in GPT-4's development.
AIs eat AIs
However, there’s another pressing issue with ChatGPT that some users suspect could be at the root of the recent performance drop — a problem the AI industry seems ill-prepared for.
If you're unfamiliar with the term "AI cannibalism," let me break it down quickly: Large Language Models (LLMs) like ChatGPT and Google Bard scrape the public internet for data to use when generating responses. The recent boom in AI-generated content online – including an unwanted flood of AI-authored novels on Kindle Unlimited – means that when LLMs search the web for information, they increasingly pick up material that was itself created by an AI.
This risks creating a feedback loop in which AI models "learn" from content that is itself AI-generated, leading to a gradual decline in output coherence and quality. With numerous LLMs now available to both professionals and the general public, the risk of AI cannibalism is growing – especially since there is still no strong evidence that AI models can accurately separate "real" human-made information from AI-generated content.
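There is no public data on how strongly this affects ChatGPT specifically, but a toy simulation shows why the feedback loop worries researchers: if each training "generation" learns from the previous generation's slightly noisier output, small errors compound. The error rates below are invented purely for illustration.

```python
import random

# Toy illustration of the "AI cannibalism" feedback loop: each model
# generation trains on the previous generation's output, inheriting its
# error rate plus a little new error of its own. Numbers are made up.

random.seed(0)

def generate_corpus(source_error_rate: float, size: int = 10_000) -> list[bool]:
    # True = factually correct item, False = an error the model introduced.
    return [random.random() > source_error_rate for _ in range(size)]

def train_on(corpus: list[bool], added_error: float = 0.02) -> float:
    # The next model inherits its training data's error rate plus new noise.
    inherited = 1 - sum(corpus) / len(corpus)
    return min(1.0, inherited + added_error)

error_rate = 0.05  # generation 0: trained mostly on human-written text
for generation in range(1, 6):
    corpus = generate_corpus(error_rate)   # the web now includes AI output
    error_rate = train_on(corpus)          # the next model learns from it
    print(f"generation {generation}: ~{error_rate:.0%} of output is wrong")
```

Run it and the error rate creeps upward every generation – a crude stand-in for the gradual loss of coherence and accuracy described above.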
Discussions around AI have mainly focused on the risks it poses to society – for example, Facebook owner Meta recently declined to open up its new speech-generating AI to the public after it was deemed "too dangerous" to be released. But content cannibalization poses a greater risk to the future of AI itself; it threatens to break the functionality of tools like ChatGPT, which rely on original, human-made material for training and content generation.
Are you using ChatGPT or GPT-4? If so, have you felt that the quality has gone down lately, or have you simply lost interest in the chatbot? I'd like to hear from you on Twitter. With so many competitors now emerging, is it possible that OpenAI's dominance is coming to an end?