If you needed more proof that GenAI is prone to making things up, Google's Gemini chatbot (formerly Bard) believes that the 2024 Super Bowl has already taken place. It even has the (fictitious) statistics to back it up.
According to a Reddit post, Gemini, which runs on Google's GenAI models of the same name, is answering questions about Super Bowl LVIII as though the game ended days or even weeks ago. Like many bookmakers, it appears to favor the Chiefs over the 49ers (sorry, San Francisco fans).
In at least one instance, Gemini gets creative with the embellishment, offering a statistical breakdown in which Kansas City Chiefs quarterback Patrick Mahomes ran for 286 yards, two touchdowns and an interception, while Brock Purdy managed 253 rushing yards and one touchdown.
It's not just Gemini. Microsoft's Copilot chatbot also insists the game has concluded, and it cites erroneous sources to support the claim. But, perhaps reflecting a San Francisco bias, it declares the 49ers, not the Chiefs, the winners "with a final score of 24-21."
Copilot is powered by a GenAI model similar to, if not identical to, the one underpinning OpenAI's ChatGPT (GPT-4). In my testing, however, ChatGPT was reluctant to make the same mistake.
It's all fairly ridiculous, and since this reporter couldn't reproduce the Gemini responses from the Reddit post, it may have been fixed by now. (I'd be surprised if Microsoft weren't also working on a fix.) But it also highlights the serious limitations of today's GenAI and the risks of relying on it too heavily.
GenAI models have no real intelligence. Fed an enormous number of examples, most of them sourced from the public internet, the models learn how likely data (such as text) is to occur based on patterns, including the context of any surrounding data.
This probability-based approach works remarkably well at scale. But while the range of words and their probabilities is likely to produce text that makes sense, it's far from guaranteed. LLMs can generate something grammatically correct but nonsensical, like the claim about the Golden Gate. Or they can spout mistruths, propagating inaccuracies in their training data.
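To make that concrete, here is a minimal, purely illustrative sketch in Python of next-token sampling. The tiny probability table is invented for the example (no real model's statistics look like this); the point is that the sampler rewards what is statistically likely, not what is true.

```python
import random

# Toy illustration, not any vendor's actual model: generation reduces to
# "given the words so far, which token is likely to come next?"
# Real models learn billions of such statistics from training data;
# these are hard-coded for the example.
next_token_probs = {
    ("the", "Super"): {"Bowl": 0.95, "Mario": 0.05},
    ("Super", "Bowl"): {"LVIII": 0.5, "was": 0.5},
    ("Bowl", "LVIII"): {"ended": 0.7, "kicks": 0.3},
}

def sample_next(context):
    """Pick the next token according to the learned probabilities."""
    probs = next_token_probs.get(context)
    if probs is None:  # context never seen in "training"
        return None
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]

tokens = ["the", "Super"]
while len(tokens) < 5:
    nxt = sample_next(tuple(tokens[-2:]))
    if nxt is None:
        break
    tokens.append(nxt)

# e.g. "the Super Bowl LVIII ended" -- perfectly fluent output,
# but nothing in the process checks it against reality.
print(" ".join(tokens))
```

The sketch glosses over everything that makes real models powerful, but the core failure mode is the same: a model can confidently complete "Super Bowl LVIII ended" whether or not the game has actually been played.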
The LLMs aren't malicious. They mean no harm, and the concepts of true and false are meaningless to them. They've simply learned to associate certain words or phrases with certain ideas, even when those associations are inaccurate.
Hence the Super Bowl 2024 (and 2023, for that matter) falsehoods from Gemini and Copilot.
Like most GenAI vendors, Google and Microsoft readily acknowledge that their apps aren't perfect and are prone to mistakes. But these acknowledgements come in fine print that, I'd argue, is easy to miss.
Super Bowl disinformation certainly isn't the most harmful example of GenAI going off the rails. That distinction probably lies in endorsing torture, reinforcing ethnic and racial stereotypes, or writing convincingly about conspiracy theories. It is, however, a useful reminder to double-check claims from GenAI bots. There's a decent chance they're not true.
For more information: https://www.theverge.com/2023/3/22/23651564/google-microsoft-bard-bing-chatbots-misinformation