Artificial intelligence (AI) has made significant strides in recent years, particularly in large language models (LLMs). Tools like OpenAI’s ChatGPT and Google’s Gemini have reshaped how online content is created and consumed. But alongside the impressive moments are cases that expose how badly LLMs can fail at accurately summarizing information. One such example comes from Grok, X’s AI search assistant, which recently misinterpreted a basketball game played by Golden State Warrior Klay Thompson in spectacularly wrong fashion.

For context, the incident occurred during the Golden State Warriors’ final game of the season against the Sacramento Kings. Thompson had a dismal shooting night, going 0 for 10 from the field. Users on X were quick to mock him, and Grok, X’s experimental LLM, picked up on the posts linking Thompson to “throwing bricks” (basketball slang for badly missed shots). Grok then took the joke literally, concluding that Thompson had been accused of vandalizing houses in Sacramento by throwing bricks at them. Not stopping there, it even added invented details, noting that no injuries had been reported and no motive had been established.

This mishap may not come as a surprise to anyone familiar with AI and LLMs. While it’s undoubtedly an own-goal for X, it’s worth acknowledging how difficult AI development is. The technology behind Grok is still relatively new; ChatGPT launched less than two years ago and paved the way for the AI-powered services we see today.

The real issue here is the danger of letting AI operate without human oversight. Unlike traditional news outlets, which run stories past human editors before publication, X has no comparable editorial procedure for Grok’s AI-generated summaries, a gap that reflects a decline in professional standards since the change in ownership.

This incident is a reminder that things could get worse if the underlying problems go unaddressed. Earlier this year, Google drew criticism when its Gemini image generator refused to depict white people, an overcorrection in its attempts to counter bias that the company moved quickly to fix. Grok’s mistake, while somewhat absurd, points to a bigger concern: the growing role AI plays in society. Imagine an AI falsely accusing an innocent person of a crime. What happens when malicious actors deliberately seed seemingly harmless content to manipulate an AI into making false accusations? And given that the accusation comes from a non-human source, what recourse would the victim have? Would X be held accountable for its AI’s potential defamation?

There are no easy answers to these questions. The AI genie is out of the bottle, and we cannot put it back. What we can still do is shape its trajectory: navigate the future of AI with caution and keep ethical considerations and human oversight at the forefront. If you’re new to the AI landscape, you can catch up by reading our explainer on ChatGPT.
