As regular TechRadar readers will know, the heavily touted AI chatbot enhancements recently added to Bing didn’t have the smoothest of starts – and now Microsoft is making some changes to improve the user experience.
In a blog post (via The Verge), Microsoft says the tweaks “should help focus chat sessions”: the AI portion of Bing will be capped at 50 chat turns (one question and answer) per day, and five replies per chat session.
This hasn’t come out of nowhere: Microsoft executives have previously gone on record saying they were looking at ways to curb some of the strange behavior noticed by early testers of the AI bot service.
Put to the test
Those early testers pushed it pretty hard: they were able to get the bot, which is based on an updated version of OpenAI’s ChatGPT engine, to return incorrect answers, get angry and even question the nature of its own existence.
Putting your search engine through an existential crisis when you were just looking for a list of the best phones isn’t ideal. Microsoft says very long chat sessions confuse its AI, and that the “vast majority” of searches can be answered within five replies.
The AI add-on for Bing isn’t available to everyone just yet, but Microsoft says it’s rolling it out gradually via a waiting list. If you plan to try out the new functionality, remember to keep your interactions short and to the point.
Analysis: Don’t believe the hype just yet
Despite the early problems, there’s clearly a lot of potential in the AI-supported search tools being developed by Microsoft and Google. Whether you’re looking for party game ideas or places to visit, they can deliver quick, well-founded results, without you having to wade through pages of links to find them.
At the same time, of course, there’s still a lot of work to do. Large Language Models (LLMs) like ChatGPT and Microsoft’s version of it don’t really “think” as such. They’re more like supercharged autocomplete engines, predicting which words should follow each other to give a coherent and relevant response to what’s being asked of them.
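To make that “autocomplete” analogy concrete, here’s a minimal, purely illustrative Python sketch (not Bing’s or OpenAI’s actual code) of next-word prediction: the model repeatedly scores candidate next words and appends the most likely one.

```python
# Toy "language model": for each context word, a hand-made probability
# table over possible next words. Real LLMs learn billions of weights
# from text; this table is invented purely for illustration.
NEXT_WORD_PROBS = {
    "the":    {"best": 0.5, "new": 0.3, "phone": 0.2},
    "best":   {"phones": 0.6, "laptops": 0.4},
    "phones": {"are": 0.7, "of": 0.3},
    "are":    {"listed": 0.5, "here": 0.5},
}

def generate(prompt: str, max_new_words: int = 4) -> str:
    """Greedily extend the prompt one word at a time."""
    words = prompt.lower().split()
    for _ in range(max_new_words):
        context = words[-1]                 # only look at the last word, for simplicity
        candidates = NEXT_WORD_PROBS.get(context)
        if not candidates:                  # nothing learned for this context
            break
        # Pick the highest-probability next word (real models sample instead).
        next_word = max(candidates, key=candidates.get)
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # -> "the best phones are listed"
```

Real systems work over sub-word tokens, vast learned weights and much longer contexts, but the basic loop (score candidates, pick one, append, repeat) is the same idea.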
Then there’s the question of sourcing: if people rely on AI to tell them what the best laptops are and human writers are put out of work, these chatbots won’t have the data they need to produce their responses. Like traditional search engines, they still depend heavily on content put together by real people.
We, of course, took the opportunity to ask the original ChatGPT why long interactions confuse LLMs: apparently, they can cause AI models to “focus too much on the specific details of the conversation” and “fail to generalize to other contexts or topics,” resulting in looping behavior and responses that “can be repetitive or irrelevant”.