NYC Chatbot Providing Incorrect Legal Information


The Pitfalls of AI-Driven Chatbots in Providing Legal Information

Artificial Intelligence (AI) has made significant strides in improving the efficiency of many processes, including customer service. However, a recent incident involving a chatbot operated by the New York City government has highlighted the risks of relying on AI to dispense legal information.

Background of NYC’s “MyCity” Chatbot

The “MyCity” chatbot was introduced as a pilot program last October to give business owners instant access to reliable information on city regulations, incentives, and best practices. The initiative was meant to streamline decision-making and save entrepreneurs operating in New York City time and money.

A recent report by The Markup and the local nonprofit news site The City uncovered alarming inaccuracies in the MyCity chatbot's responses. Examples included misinformation about whether NYC buildings must accept Section 8 housing vouchers, contradictory information on worker pay regulations, and erroneous details on funeral home pricing.

Issues with Predictive Models in LLM Chatbots

The underlying cause of these inaccuracies lies in the token-based predictive models that power large language model (LLM) chatbots like MyCity. These models rely on statistical associations learned from vast amounts of training text to predict the most likely next token in a sequence, without any deep understanding of context or of the factual accuracy of what they produce.

As a result, the chatbot's responses can vary: sometimes it gives correct answers, but it frequently produces incorrect or misleading information. The report noted that identical queries received contradictory responses, highlighting the inherent limitations of relying solely on predictive algorithms for legal guidance.
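To make that failure mode concrete, here is a toy sketch in Python of sampling-based next-token generation. Everything in it is hypothetical: the prompt, the NEXT_TOKEN_PROBS table, and the sample_answer helper are illustrative stand-ins for what a real LLM does over a vocabulary of tens of thousands of tokens at every step.

```python
import random

# Hypothetical next-token distribution for a single prompt. A real LLM
# computes such a distribution over its whole vocabulary at every step;
# here it is hard-coded to keep the example self-contained.
NEXT_TOKEN_PROBS = {
    "Must NYC landlords accept Section 8 vouchers?": [("Yes", 0.6), ("No", 0.4)],
}

def sample_answer(prompt: str, temperature: float = 1.0) -> str:
    """Sample one token from the model's predicted distribution.

    With a nonzero temperature the most likely token is not always
    chosen, so the same question can receive different answers on
    different runs.
    """
    tokens, probs = zip(*NEXT_TOKEN_PROBS[prompt])
    # Higher temperature flattens the distribution; lower sharpens it.
    weights = [p ** (1.0 / temperature) for p in probs]
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "Must NYC landlords accept Section 8 vouchers?"
print([sample_answer(prompt) for _ in range(5)])
# Typical output: ['Yes', 'No', 'Yes', 'Yes', 'No'] -- the sampler has
# no notion of which answer is legally correct, only of likelihood.
```

Because each output is drawn from a probability distribution rather than looked up in a legal source, nothing in this pipeline distinguishes a legally correct answer from a plausible-sounding wrong one.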

Implications for Government Initiatives and Corporate Practices

Despite being labeled a “Beta” product with disclaimers about potential inaccuracies, the MyCity chatbot is marketed as a reliable source of official NYC business information. That mismatch raises concerns about the accountability and transparency of AI-driven platforms used for public services.

Similar chatbot inaccuracies have surfaced in other sectors: an airline chatbot invented a refund policy that the company was later ordered to honor, and AI-integrated tax software has given customers misleading tax guidance. These instances underscore the need for thorough testing and validation of AI technologies before deployment to avoid legal and ethical repercussions.

Companies are now exploring alternative approaches such as Retrieval-Augmented Generation (RAG), in which the model retrieves passages from a curated, authoritative dataset and grounds its answers in that text rather than in statistical prediction alone. This shift reflects a growing awareness of the liabilities associated with AI-generated content and of the need for stricter safeguards to ensure compliance with legal standards.
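As a rough illustration of the idea, here is a minimal RAG-style sketch in Python. The CORPUS entries, the word-overlap retrieve function, and the answer template are all hypothetical simplifications: a production system would use vector embeddings and an LLM to phrase the final answer, but the grounding step works the same way.

```python
# Minimal retrieval-augmented generation (RAG) sketch. The corpus text
# below is a hypothetical stand-in for vetted official guidance.
CORPUS = [
    ("section-8", "NYC landlords may not refuse tenants because they "
                  "use Section 8 housing vouchers."),
    ("worker-pay", "Employers must pay tipped workers the applicable "
                   "minimum wage after tip credits."),
]

def retrieve(query: str) -> str:
    """Return the corpus passage sharing the most words with the query."""
    q_words = set(query.lower().split())
    best = max(CORPUS, key=lambda doc: len(q_words & set(doc[1].lower().split())))
    return best[1]

def answer(query: str) -> str:
    """Quote the retrieved passage so every claim has a citable source."""
    passage = retrieve(query)
    return f"According to official guidance: {passage}"

print(answer("Can a landlord refuse Section 8 vouchers?"))
# -> According to official guidance: NYC landlords may not refuse
#    tenants because they use Section 8 housing vouchers.
```

The design point is that the answer is anchored to a passage from a vetted source, so an auditor can trace any claim back to the document it was retrieved from instead of trusting the model's statistical guesswork.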

As the landscape of AI-driven chatbots continues to evolve, stakeholders must prioritize accuracy, transparency, and ethical considerations in their implementation to prevent misinformation and uphold public trust in digital services.

