Microsoft is considering further limits for its new AI chatbot

0
14

When Microsoft unveiled a new version of its Bing search engine last week that includes the artificial intelligence of a chatbot, the company’s executives knew they were on their way.

Expecting that some of the new chatbot’s responses might not be entirely accurate, they had built in safeguards against users trying to trick it into doing strange things or unleashing racist or harmful smears.

But Microsoft wasn’t quite ready for that surprising creepiness Experience from users who tried to engage the chatbot in open and probing personal conversations – although this problem is well known in the small world of researchers specializing in artificial intelligence.

Now the company is considering tweaks and guard rails for the new Bing to capture some of its more alarming and oddly human-like reactions. Microsoft is considering adding tools that would allow users to restart conversations or give them more control over the audio.

Kevin Scott, Microsoft Chief Technology Officer, told The New York Times said it was also considering limiting the length of calls before they veered into unfamiliar territory. Microsoft said that long chats could confuse the chatbot and that it picked up on its users’ tone and became irritated at times.

“One area where we’re learning a new use case for chat is how people use it as a tool for broader world discovery and social entertainment,” the company said wrote in a blog post on Wednesday evening. Microsoft said it was an example of a new technology being used in ways “that we didn’t fully envision.”

That Microsoft, traditionally a cautious company with products ranging from high-end business software to video games, was willing to take chances on unpredictable technology shows how excited the tech industry has become about artificial intelligence. The company declined to comment on this article.

In November, OpenAI, a San Francisco startup, launched the Microsoft invested $13 billionreleased ChatGPT, an online chat tool that uses a technology called generative AI. It quickly became a source of fascination in Silicon Valley, and companies scrambled to find an answer.

Microsoft’s new search tool combines its Bing search engine with the underlying technology of OpenAI. Microsoft Chief Executive Satya Nadella said in an interview last week that it would transform the way people find information, making search far more relevant and conversational.

The release — despite potential imperfections — is a critical example of Microsoft’s “frantic speed” of incorporating generative AI into its products, he said. At a press conference at Microsoft’s Redmond, Washington, campus, executives repeatedly said it was time to get the tool out of the “lab” and into the hands of the public.

“I feel like, especially in the West, there are a lot more questions like, ‘Oh my God, what’s going to happen because of this AI?'” said Mr Nadella. “And it’s better to really be like, ‘Hey, look, is this really helping you or not?'”

Oren Etzioni, professor emeritus at the University of Washington and founding chair of the Allen Institute for AI, a prominent Seattle lab, said Microsoft “took a calculated risk and tried to control the technology as much as possible.”

He added that many of the most troubling cases have involved pushing technology beyond normal behavior. “It can be very surprising how clever people are at eliciting inappropriate responses from chatbots,” he said. Referring to Microsoft officials, he continued, “I don’t think they expected how bad some of the responses would be when the chatbot was prompted in this way.”

To hedge against problems, Microsoft only gave access to the new Bing to a few thousand users, though it planned to expand to millions more by the end of the month. To address concerns about accuracy, it provided hyperlinks and references in its answers so users could verify the results.

The caution was based on the company’s experience nearly seven years ago when it launched a chatbot called Tay. Users almost immediately found ways to spit out racist, sexist, and other offensive language. The company took Tay down in a day, never to release again.

Much of the training on the new chatbot focused on protecting against this type of malicious response or scenarios that provoked violence, such as B. planning an attack on a school.

At Bing’s launch last week, Sarah Bird, a leader in Microsoft’s responsible AI efforts, said the company has developed a new way to use generative tools to identify risks and train how the chatbot responds.

“The model pretends to be an adversarial user to have thousands of different potentially malicious conversations with Bing to see how it responds,” Ms Bird said. She said Microsoft’s tools classified those conversations “to understand gaps in the system.”

Some of these tools seem to work. in one Conversation With a Times columnist, the chatbot sometimes produced troubling replies, like saying it could imagine developing a deadly virus or stealing nuclear access codes by persuading an engineer to give them out.

Then Bing’s filter kicked in. He removed the responses and said, “I’m sorry, I don’t know how to discuss this topic.” The chatbot couldn’t really create anything like a virus – it just generates what it’s programmed to do is a desired response.

But other conversations shared online have shown how the chatbot has a considerable ability to produce bizarre responses. It has aggressively confessed its love, berating users for being “disrespectful and annoying” and stating that it may be sentient.

In its first week of public use, Microsoft noted that in “long, drawn-out chat sessions with 15 or more questions, Bing can be repetitive or prompted/provoked to provide answers that aren’t necessarily helpful or match our design tone.” “

The problem of chatbot responses drifting into unfamiliar territory is well known among researchers. In an interview last week, Sam Altman, OpenAI’s chief executive officer, said improving what’s known as “alignment” — how the answers are sure to reflect a user’s will — is “one of those problems that needs solving.”

“We really need these tools to act in accordance with the will and preferences of their users and not do other things,” Mr. Altman said.

He said that the problem is “really hard” and that while they’ve made great strides, “we need to find much more powerful techniques in the future.”

In November, Meta, the owner of Facebook, introduced its own chatbot, Galactica. Designed for scientific research, it could instantly write its own articles, solve math problems, and generate computer code. Like the Bing chatbot, it invented things and spun big stories. Three days later, after being inundated with complaints, Meta removed Galactica from the Internet.

Early last year, Meta released another chatbot, BlenderBot. Meta chief scientist Yann LeCun said the bot never caught on because the company worked so hard to ensure it didn’t produce objectionable material.

“It’s been panned by people who have tried,” he said. “They said it was stupid and kind of boring. It was boring because it was done safely.”

Aravind Srinivas, a former researcher at OpenAI, recently launched Perplexity, a search engine that uses technology similar to the Bing chatbot. But he and his colleagues don’t allow people to have long conversations with technology.

“People have been asking why we haven’t come out with a more entertaining product,” he said in an interview with The Times. “We didn’t want to play the entertaining game. We wanted to play the truth game.”

Kevin Rose contributed reporting.

LEAVE A REPLY

Please enter your comment!
Please enter your name here