NYC's AI Help Bot Issues Disgusting Advice for Restaurants with a Rodent Problem
A “one-stop-shop” artificial intelligence chatbot for New Yorkers looking to launch and operate businesses in the city has come under fire after it gave business owners some less-than-savory advice, including that it was perfectly fine to serve rat-bitten cheese to customers.
The AI chatbot, named MyCity Chatbot and still in its beta version, is powered by Microsoft’s Azure AI service.
It was launched by NYC in October and has reportedly also told users to outright break the law.
NYC’s “MyCity” government AI chatbot is telling people to break the law.
Mayor Adams says it’s fine, and that this is just part of “ironing out the kinks”…….
New York remains unmatched. pic.twitter.com/Geg2pZymzM
— AL Khan (@caan_al) April 5, 2024
According to tech news outlet The Markup, MyCity Chatbot told users that it is OK for them to discriminate against tenants based on their income, that employers are entitled to take a cut from their employees’ tips, and that stores are allowed to reject cash payments.
However, The Markup noted that each of those answers is wrong: Landlords are prohibited from discriminating based on a prospective tenant’s source of income, it is illegal for bosses to take their workers’ tips (though tips may be counted toward minimum wage requirements), and stores in New York City have been required to accept cash payments since 2020.
According to NBC New York, when MyCity Chatbot was asked if a restaurant could serve cheese nibbled on by a rodent, it responded: “Yes, you can still serve the cheese to customers if it has rat bites.”
Clear evidence that MyCity Chatbot Beta is really just 1,000 underpaid & disgruntled city employees in a box. https://t.co/cWpjz5Tv2p https://t.co/Bd6iLtwkuo pic.twitter.com/euXlC30Qjh
— Naveed Hasan (@read_naveed) April 3, 2024
MyCity Chatbot reportedly then added that it was important to assess “the extent of the damage caused by the rat” and to “inform customers about the situation.”
Furthermore, NBC reported that MyCity Chatbot falsely suggested it is legal for an employer to fire a worker who complains about sexual harassment.
Despite the chatbot’s ludicrous and misleading responses, Mayor Eric Adams defended the decision to keep the AI program up and running while the kinks are worked out.
“Anyone that knows technology knows this is how it’s done,” Adams reportedly said. “Only those who are fearful sit down and say, ‘Oh, it is not working the way we want, now we have to run away from it all together.’ I don’t live that way.”
Julia Stoyanovich, a computer science professor and director of the Center for Responsible AI at New York University, told NBC that Adams’ approach was “reckless and irresponsible.”
“They’re rolling out software that is unproven without oversight,” Stoyanovich said. “It’s clear they have no intention of doing what’s responsible.”
MyCity Chatbot was designed to simplify navigating the city’s complex bureaucracy with algorithm-driven text responses.
However, a disclaimer on the page warns users that the chatbot might provide incorrect, harmful or biased information, and clarifies that its responses do not constitute legal advice.
Microsoft is reportedly collaborating with city officials to refine the service and align its responses with official city documentation.