Misinformation and Fact-Checking Online
Big news from Meta: Last month, Mark Zuckerberg announced that the company was cutting its fact-checking teams. That means Facebook, Instagram, Threads, and WhatsApp will now rely on a community notes model to check facts.
While this will save Meta money, will it protect its users from seeing misinformation in their feeds?
Misinformation on the Internet
Misinformation is a growing problem. We started seeing major consequences during the COVID-19 pandemic, as fake news spread across the Internet and into the real world. Many of our leaders have legitimized it by wielding the label "fake news," which has had serious repercussions.
Misinformation erodes trust in our institutions and forces us to make decisions on our own rather than from a shared understanding of the facts. It also makes us more vulnerable to espionage tactics and can completely change our beliefs.
The Impact of Misinformation
Social media users are 70% more likely to share fake news than facts. We've seen families that still aren't speaking to each other because of COVID-19 conversations and beliefs.
Our youth are particularly vulnerable because they may not have strong digital and media literacy skills. We've been seeing digital hate grow online, often supported completely by fake facts.
All of this negativity and uncertainty is directly affecting our well-being.
What’s the Community Notes Model?
Meta hasn’t shared the specifics with users yet, but the community notes model has been utilized by other platforms like X in the past.
Here’s how it works on X:
Approved contributors can flag content they find misleading, false, or suspect to be misinformation, and can call for more context or provide it themselves.
The proposed note appears beneath the original post, but only other approved contributors can see it. They vote on whether the note is helpful.
Once contributors have voted, an algorithm looks at the voters and assesses the diversity of the group.
If the group of voters is deemed diverse, the note is published for all users to view.
This system seems like a great opportunity to encourage collaboration and hear from diverse perspectives. But it doesn’t always work that smoothly. Read this CBS News article to learn more about the potential pitfalls of this model.
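To make the "diversity of voters" idea concrete, here is a minimal sketch of how such a publishing rule could work. This is purely illustrative: the function name, viewpoint clusters, and threshold are assumptions, not Meta's or X's actual algorithm (X's real system uses a more sophisticated bridging-based ranking model).

```python
# Hypothetical sketch of a community-notes publishing rule.
# Assumption: each voter belongs to a viewpoint cluster, and a note is
# published only if voters from EVERY cluster found it helpful on average,
# i.e., its helpfulness "bridges" differing perspectives.

from collections import defaultdict

def should_publish(votes, threshold=0.6):
    """votes: list of (voter_cluster, rated_helpful) pairs.

    Returns True only if at least two clusters voted (a "diverse group")
    and each cluster's average helpfulness rating meets the threshold.
    """
    by_cluster = defaultdict(list)
    for cluster, helpful in votes:
        by_cluster[cluster].append(helpful)
    if len(by_cluster) < 2:  # not a diverse group of voters
        return False
    return all(sum(v) / len(v) >= threshold for v in by_cluster.values())

# A note rated helpful by both clusters gets published...
votes = [("left", True), ("left", True),
         ("right", True), ("right", False), ("right", True)]
print(should_publish(votes))  # True: both clusters mostly rated it helpful

# ...but one rated helpful by only a single cluster does not.
print(should_publish([("left", True), ("left", True)]))  # False
```

The key design choice this sketch captures is that raw vote counts don't decide anything on their own: a note that is popular with only one side of a divide never gets published, no matter how many votes it collects.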
How to Identify Misinformation
It's getting harder and harder to spot fake news now that AI and deepfakes are in play. However, you're not powerless as a user digesting information. Here are clues that you may be seeing misinformation:
It triggers an emotional response
It makes a bold statement on a controversial issue
It seems too good to be true
It leans on clickbait tactics like “You won’t believe this video”
If you find that it hits a nerve, do a quick fact check on Snopes or see whether it has been covered by several sources to confirm it's correct.
As we watch this new community fact-checking model play out on Meta platforms, it will be interesting to see if we can unify and reveal the truth. I, for one, am hoping for the best!
*Feel free to check out my segment on CTV Morning Live!