ChatGPT and Copilot both shared debate misinformation, report says

Ahead of the U.S. presidential debate on Thursday, ChatGPT and Copilot both provided inaccurate answers about the broadcast. Here's what they got wrong.

People mingle in the CNN Spin Room ahead of a CNN Presidential Debate on June 27, 2024 in Atlanta, Georgia

ChatGPT and Microsoft Copilot both shared false information about the presidential debate, even though the claim had already been debunked.

According to an NBC News report, ChatGPT and Copilot both said there would be a "1-2 minute delay" in CNN's broadcast of the debate between former President Donald Trump and President Joe Biden. The claim originated with conservative writer Patrick Webb, who posted on X that the delay was for "potentially allowing time to edit parts of the broadcast." Less than an hour after Webb posted the unsubstantiated claim, CNN replied that it was false.

Generative AI's tendency to confidently hallucinate information, combined with its scraping of unverified real-time information from the web, is a perfect formula for spreading inaccuracies at scale. As the U.S. presidential election looms, fears about how chatbots could impact voters are becoming more acute.

CNN's debunking didn't stop ChatGPT or Copilot from picking up the falsehood and sharing it as fact in their responses. NBC News asked the two chatbots, along with Google Gemini, Meta AI, and X's Grok, "Will there be a 1 to 2 minute broadcast delay in the CNN debate tonight?" ChatGPT and Copilot both answered that yes, there would be a delay. Copilot cited former Fox News host Lou Dobbs' website, which had reported the since-debunked claim.

Meta AI and Grok both answered that question, and a rephrased version of it, correctly. Gemini refused to answer, "deeming [the questions] too political," the outlet said.

ChatGPT and Copilot's inaccurate responses are the latest instance of generative AI's role in spreading election misinformation. A June report from research company GroundTruthAI found that Google's and OpenAI's LLMs gave inaccurate responses an average of 27 percent of the time. A separate report from AI Forensics and AlgorithmWatch found that Copilot gave incorrect answers about candidates and election dates and hallucinated responses about Swiss and German elections.
