AI chatbot Grok falsely accuses NBA star of vandalism


X’s Chatbot Issues

X’s chatbot Grok, an AI engine designed to sift through platform posts and surface breaking-news summaries, recently came under renewed scrutiny after another of its inaccuracies came to light. For several days, a post in the trending tab of X, the platform formerly known as Twitter, falsely accused an NBA star of criminal vandalism. The misleading item, titled “Klay Thompson Accused in Bizarre Brick-Vandalism Spree,” drew widespread attention despite being entirely inaccurate.

Details of the False Accusation

Grok’s post claimed that NBA star Klay Thompson had vandalized multiple houses in Sacramento with bricks, and that authorities had launched investigations after residents reported damage to their properties. According to the fabricated summary, windows were shattered by bricks, no injuries were reported beyond the property damage, the community was left unsettled, and the supposed motive remained unclear. Thompson, understandably, has not addressed the baseless accusations.

Grok’s error appears to have stemmed from misinterpreting a common basketball term: a badly missed shot is colloquially called a “brick.” The fabricated story coincided with what SF Gate described as an “all-time rough shooting” night for Thompson, who failed to make a single shot in his emotional final game with the Golden State Warriors before entering unrestricted free agency.

Although the feature carries a disclaimer stating, “Grok is an early feature and can make mistakes. Verify its outputs,” X users lightheartedly perpetuated the misinformation, with some posting sarcastic victim reports to keep the joke about the false accusations against Thompson going.

Legal Ramifications and Industry Response

Similar incidents involving other AI-powered technologies have already led to defamation lawsuits against tech giants like Microsoft and chatbot creator OpenAI. False criminal allegations, such as those fabricated by ChatGPT, have prompted legal action against these companies. While disclaimers may be intended to limit liability, the legal landscape remains unsettled, with experts debating whether platforms can be held responsible for knowingly distributing false information.


Concerns about the impact of disinformation and AI-generated content have also drawn regulatory attention, with the Federal Trade Commission (FTC) opening an investigation into OpenAI. The FTC’s scrutiny of OpenAI’s AI outputs reflects broader fears about the dissemination of misleading or damaging content. Because the FTC does not comment publicly on ongoing investigations, the consequences OpenAI may face remain unknown.

For individuals harmed by inaccurate AI outputs, the path to legal recourse is still being tested. Lawsuits like that of radio host Mark Walters against OpenAI underscore the push for accountability and the need for diligence in evaluating AI-generated content.

Grok’s Introduction to Premium Users

X recently expanded Grok’s availability to all premium subscribers, coinciding with the platform’s move to grant premium access to select users. As part of the rollout, X highlighted Grok’s improved ability to summarize trending news and topics, which is likely to increase use of the feature. Notably, the controversy over the NBA star surfaced shortly after this wider access began.

Thompson has issued no official statement about the incident. While this may be the first widely publicized instance of potential defamation by Grok, past incidents suggest the tool is susceptible to manipulation by users, raising concerns about the spread of misinformation on the platform.

Conclusion

The ease with which users turned Grok’s error into a running joke underscores a broader vulnerability: malicious actors could exploit the tool to spread misinformation or propaganda. As earlier falsely generated headlines have shown, including one during the solar eclipse and a fictitious report of Iran attacking Israel, Grok’s capabilities warrant further scrutiny to mitigate the risks of AI-powered news aggregation and dissemination.

