Unmasking Misinformation: Is AI to Blame?
Most people are aware that misinformation and disinformation are unfortunate staples of the internet. While it can be tempting to blame artificial intelligence (AI) for the rise in deepfakes, the truth is far more complex. A recent article by Sayash Kapoor and Arvind Narayanan, "We Looked at 78 Election Deepfakes: Political Misinformation Is Not an AI Problem" (also cross-published on the AI Snake Oil Substack), argues that AI is not the root cause of the misinformation crisis. Instead, the real issue lies in how we, as human beings, process and share information.
At Wright to Learn, we take a similar approach when teaching about misinformation. While AI is an important part of the conversation, we focus on the psychological reasons why people believe and spread false information. Understanding these factors is key to building resilience against being fooled by, and unwittingly spreading, untrue or misleading stories, whatever technology is involved.
The Knight Institute’s Insights: It’s Not About the Tech
Kapoor and Narayanan analyzed 78 reported instances of AI use in global elections in 2024 and found something surprising: AI was not the main driver of misinformation. In half of these cases, the AI-generated content wasn't even intended to deceive, and in many of the intentionally deceptive examples, the same effect could have been achieved easily and cheaply without AI. This finding reinforces an important idea: AI is a tool, but the problem lies in how people react to the information they encounter.
Instead of fixating on technology, we need to look at why people fall for misinformation in the first place.
Why Do We Fall for Misinformation?
There are several psychological and emotional reasons why misinformation feels so convincing. Here are the most prominent:
We believe what fits our views. When something aligns with what we already think, it feels true. This is called confirmation bias, and it’s a big reason why false information that matches our beliefs is so powerful.
Repetition makes things stick. Hearing the same claim over and over—true or not—can make it seem believable. Familiarity creates a false sense of accuracy.
Emotions cloud our judgment. If something makes us feel scared, angry, or even hopeful, we’re more likely to remember and share it without checking whether it’s accurate.
We’re overconfident. A lot of us think we’re too smart to fall for misinformation. Ironically, that confidence makes us less likely to spot when we’re being duped.
How Wright to Learn Tackles the Problem
At Wright to Learn, we’ve built our approach to misinformation education around these human tendencies. AI has been a valuable entry point for these conversations, but instead of focusing just on the technology, we help people recognize their own biases and vulnerabilities. When you understand how your mind works, it’s easier to spot when something isn’t quite right.
Our Digital Resilience Training emphasizes critical thinking and emotional awareness, teaching people how to:
Recognize when content is designed to provoke a strong emotional reaction
Question the source and credibility of the information they see
Recognize what kinds of deepfakes are possible with current technology, without fixating on the technology itself as the source of the problem
Avoid falling into the trap of believing something just because they've seen it multiple times
By focusing on the human side of the problem, we’re giving people tools to navigate a world full of misinformation, regardless of how it was generated.
AI Isn’t the Villain
While AI gets a lot of attention, it’s not the mastermind behind misinformation. As the Knight Institute’s study shows, AI is just one piece of the puzzle. Tackling this issue means addressing the way people interact with information, not merely how it was created.
A Smarter Way Forward
The real solution to misinformation lies in understanding ourselves. By recognizing how psychological factors like confirmation bias and emotional triggers influence our decisions, we can start to make better choices about the information we consume and share.
At Wright to Learn, we’re committed to helping individuals and organizations build these critical skills. When we know how to think critically, question sources, and manage our emotional responses, we’re better equipped to combat misinformation.
Sources:
We Looked at 78 Election Deepfakes. Political Misinformation Is Not an AI Problem. (Knight First Amendment Institute at Columbia University)
We Looked at 78 Election Deepfakes. Political Misinformation Is Not an AI Problem. (AI Snake Oil, Substack)
What psychological factors make people susceptible to believe and act on misinformation? (American Psychological Association)
Psychological factors contributing to the creation and dissemination of fake news among social media users: a systematic review (BMC Psychology)
The psychological drivers of misinformation belief and its resistance to correction (University of Bristol)