As we spend more and more time online, we run the risk of encountering larger and larger amounts of online disinformation. This can have a significant impact on politics: at the end of 2024, the U.S. government sanctioned groups based in Iran and Russia over their efforts to mislead voters in the lead-up to that year’s election. Darrell M. West of the Brookings Institution argued that disinformation efforts “were successful in shaping the campaign narrative” in part because of the numerous avenues of online dissemination.
If you’ve spent any time on social media in recent months, you’ve probably seen heated debate over AI-generated videos espousing one political point of view or another. That isn’t the only way AI technology can shape public opinion, however, and a pair of recently published studies came to an alarming conclusion about the ways AI chatbots can influence voters’ opinions.
One study, published earlier this month in Nature, explored the way that chatbots attempted to influence voters in several elections, including both national elections (in Canada, Poland and the U.S.) and a local ballot measure. The researchers discovered something unsettling: “across all three countries, the AI models advocating for candidates on the political right made more inaccurate claims.” The other study, published in Science, explored the mechanisms by which AI chatbots could become more persuasive.
Cornell University professor David Rand, who was involved in both studies, explained the nuances of this approach to persuasion in comments made to the Cornell Chronicle. “LLMs can really move people’s attitudes towards presidential candidates and policies, and they do it by providing many factual claims that support their side. But those claims aren’t necessarily accurate — and even arguments built on accurate claims can still mislead by omission,” he said.
As Nature’s Max Kozlov pointed out in an article on these studies, one of their most unsettling findings was how effective one particular method was at persuading voters: “flooding the user with information.” Unfortunately, this also makes sense: separating fact from fiction and asking the right questions about the context of a given claim is challenging enough, and that challenge only grows as the volume of information increases. How this will affect future elections remains to be seen, but it’s bound to affect them in some way, and that’s unnerving in its own right.
