An AI chatbot released by the New York City government designed to assist business owners in accessing information has come under scrutiny for sharing inaccurate and misleading guidance.
A report by The Markup, co-published with local nonprofit newsrooms Documented and The City, reveals multiple instances where the chatbot provided incorrect advice about legal obligations.
For example, the AI chatbot claimed that bosses could take a cut of workers' tips and that landlords are allowed to discriminate based on source of income; both claims are false under New York City law.
Chatbot fail?
Launched in October 2023 by Mayor Adams's administration as an extension of the MyCity portal, the chatbot, described as "a one-stop shop for city services and benefits," is powered by Microsoft's Azure services. Despite being intended to serve as a reliable source of information sourced directly from the city government's websites, the pilot program has been found to generate flawed responses.
One example given by The Markup sees the chatbot asserting that businesses could operate as cashless establishments, despite New York City's 2020 ban on such practices.
Responding to the report, Leslie Brown, spokesperson for the NYC Office of Technology and Innovation, acknowledged the chatbot’s imperfections, emphasizing ongoing efforts to refine the AI tool:
“In line with the city’s key principles of reliability and transparency around AI, the site informs users the clearly marked pilot beta product should only be used for business-related content, tells users there are potential risks, and encourages them via disclaimer to both double-check its responses with the provided links and not use them as a substitute for professional advice.”
After a months-long honeymoon period, the cracks are beginning to show as businesses and government agencies question the reliability, safety, and security of artificial intelligence, with some imposing bans and others introducing strict regulations.