The Paradox of Free Speech and Disinformation: Navigating a Modern Dilemma
Is there a solution to the problem of managing disinformation while preserving free speech?
Disinformation is Making People Sick
West Texas is in the throes of a measles outbreak. As of the end of March, there have been over 400 cases, mostly in children. The vast majority of the victims have been unvaccinated, and two have died. The outbreak in Texas is part of a larger problem, with the CDC reporting measles cases in 19 states. In a bizarre twist, some children are showing up in West Texas hospitals with vitamin A toxicity as well as measles, likely as a direct result of misinformation spread by HHS Secretary Robert F. Kennedy Jr.
Preventable Problems Proliferate
According to the Mayo Clinic, measles is an easily transmissible disease that can cause fever, skin rash, mouth sores, coughing, and sore throat. In some cases, it can be severe or even fatal. Even in modern times, measles kills an estimated 200,000 people worldwide every year, most of them children. Measles can also be prevented by vaccination, which makes the recent measles-related deaths all the more tragic.
There should not be a measles epidemic in the United States in the 21st century. Measles has been a preventable disease since 1963, when the first measles vaccine was introduced; immunization against it is now part of the routine MMR vaccine given to children. Unfortunately, vaccine disinformation spread online has made parents hesitant to give their children the MMR vaccine. On top of this, fake treatments and prevention methods have proliferated and are even spread by the very people who are supposed to protect us from disease. To give an idea of the magnitude of the problem, I searched Open Measures for "measles AND vitamin A" on Truth Social, and the sheer volume of matching posts is telling.
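For anyone who wants to reproduce that kind of search programmatically, here is a minimal Python sketch. The endpoint, parameter names, and response shape are assumptions for illustration only, not Open Measures' documented API; consult their documentation for actual usage.

```python
# Minimal sketch of querying a social-media search service like Open Measures.
# NOTE: the endpoint, parameters, and response format below are illustrative
# assumptions, NOT the documented Open Measures API.
import requests

def search_posts(term: str, site: str, limit: int = 100) -> list[dict]:
    """Fetch posts matching `term` from a hypothetical search endpoint."""
    resp = requests.get(
        "https://api.openmeasures.example/content",  # hypothetical endpoint
        params={"term": term, "site": site, "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("hits", [])

if __name__ == "__main__":
    hits = search_posts('measles AND "vitamin A"', site="truth_social")
    print(f"{len(hits)} matching posts returned")
```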
The Bad Ideas Propagate Online
The measles outbreak is an example of how disinformation and misinformation can lead directly to widespread harm and even death. This isn't an isolated problem, either: the COVID-19 pandemic was rife with questionable or outright dangerous narratives. Some, like using ivermectin as a disease preventative without medical supervision, are still causing problems. If you combine the misinformation and disinformation associated with disease outbreaks and vaccines with all of the other false or unverified content proliferating online, it is readily apparent that we are facing a major problem. With that problem comes an equally major dilemma: if our sources of information are deceiving, harming, or even killing us, how do we deal with that while still preserving free speech?
Disinformation vs. Misinformation
Before I discuss the dilemma presented above, some terms need to be defined. So far, I have been using misinformation and disinformation as if they were interchangeable, but they aren't actually the same. Both imply that the information is in error, but disinformation is purposeful while misinformation is not. Consider two scenarios. In the first, a major news outlet publishes a story about a terrible war crime. Unfortunately, they fail to vet their source and have to issue a retraction. That is misinformation. Alternatively, a Russian bot spreads a lie about a war crime supposedly perpetrated by their adversary, which then goes viral. That is disinformation: it is meant purely as propaganda.
Disinformation Laws and Accusations of Overreach
There is a major debate raging in Europe and the United States centered on the issue of combating disinformation while not infringing on the right to free speech. At the Munich Security Conference, Vice President JD Vance lectured European leaders, claiming that their attempts to moderate online content with the intent of suppressing disinformation amounted to censorship. Within the past few weeks, the U.S. House of Representatives held hearings on "The Censorship Industrial Complex," claiming that the Biden administration had colluded with industry and think tanks to suppress free speech. The administration had, with varying degrees of success, actively pressured companies like Meta and YouTube to moderate content around COVID-19 that questioned vaccine safety or promoted alternative treatments.
In Europe, attempts have been made to codify the regulation of disinformation into law. In 2022, the European Union adopted the Digital Services Act (DSA) to address disinformation. The act placed controls on large social media platforms. According to Wikipedia:
The DSA applies to online platforms and intermediaries such as social networks, marketplaces and app stores. Key requirements include disclosing to regulators how their algorithms work, providing users with explanations for content moderation decisions, and implementing stricter controls on targeted advertising. It also imposes specific rules on "very large" online platforms and search engines (those having more than 45 million monthly active users in the EU).
Integrated into the DSA is the Code of Conduct on Disinformation, which sets standards for self-regulation agreed upon by the major platforms. The UK and Australia have their own versions of these laws, and Brazil is considering implementing one as well. Opponents of the regulations argue that they constitute government overreach and lead to the suppression of free speech. As mentioned above, speakers at the recent congressional hearings implicated the Biden administration in a conspiracy to directly censor posts on major social media platforms. While proponents of content moderation point to the harm done by disinformation, critics say that the U.S. Constitution precludes these methods in virtually every situation.
Social Media Users Become the Problem They Refuse to Fix
On the other hand, users of social media, especially on the right, have not gone out of their way to make the problem better. In the article Disinformation: Are We the Problem?, I questioned whether we as a people are mature enough to deserve free speech. (While some readers suggested I didn't understand free speech, they missed the point: free speech is a right, but it can be misused by irresponsible people.) Given a particular bit of misinformation, disinformation, or outright conspiracy theory, there's a pretty good chance you will find it spreading on Truth Social, Parler, Gab, or, especially, X. Since Elon Musk acquired Twitter (now X), it has become a hotbed for bad ideas, and TikTok doesn't seem to be far behind. Examples of disinformation and misinformation on these sites include:
Vaccine conspiracies
Toxic fog
QAnon
Election conspiracies
Ivermectin and other home remedies
Birtherism
The other usual conspiracies (chemtrails, the Illuminati, UFOs, drones, etc.)
So, while JD Vance is questioning Europe's dedication to democracy, his constituents are propagating the very ideas that helped create the problem in the first place! Unwilling or unable to critically evaluate information, they just keep reposting and reposting.
Note: conspiracy theories and bad ideas are not exclusive to the right. Many on the left adhere to them as well.
To make the issue even more complicated, some narratives that were initially labeled as disinformation or misinformation have had to be reconsidered. The best example is the COVID-19 Wuhan lab-leak theory. Initially, there was almost no agreement on whether the lab-leak theory was a wild conspiracy or a reasonable possibility. Recently, additional CIA analysis forced a rethinking of the issue.
Artificial Intelligence Clouds the Issue
While Artificial Intelligence would seem like a good choice for monitoring and controlling problematic content, it's definitely a double-edged sword. AI allows automated evaluation of massive amounts of online data at high speed, something humans could likely not accomplish. It's also scalable, in the sense that it can grow as the need for content moderation increases over time. While these capabilities offer companies an answer for managing objectionable content on a massive scale, there is a price to be paid. The World Economic Forum points out that bad actors can easily use generative AI to spread disinformation, and can also make it spread further and faster. According to their report on the risks of misinformation and disinformation:
This technology has empowered state-sponsored actors, criminals, activists, and individuals to automate disinformation campaigns, reaching wider audiences and achieving greater impact. As more people rely on social media and the internet for information, distinguishing between credible and fabricated content becomes increasingly difficult. The report also warns about the potential for algorithms with hidden biases to exacerbate the negative impacts of misinformation, particularly in sensitive areas like hiring and predictive policing.
It seems that Artificial Intelligence is no different from the people who create and use it: it can be a tool for managing the problem or, alternatively, for making the problem worse. It certainly doesn't appear to be our salvation.
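To make the moderation side of that double-edged sword concrete, here is a minimal sketch of automated screening using an off-the-shelf zero-shot classifier from the open-source Hugging Face transformers library. The model choice, labels, and threshold are illustrative assumptions, not any platform's actual pipeline.

```python
# Minimal sketch: triaging posts for human review with a zero-shot classifier.
# The model, candidate labels, and threshold are illustrative assumptions,
# not any real platform's moderation pipeline.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

LABELS = ["medical misinformation", "general discussion"]
REVIEW_THRESHOLD = 0.85  # assumed cutoff for escalating to a human moderator

def needs_review(post: str) -> bool:
    """Return True if the post should be escalated to a human moderator."""
    result = classifier(post, candidate_labels=LABELS)
    top_label, top_score = result["labels"][0], result["scores"][0]
    return top_label == "medical misinformation" and top_score >= REVIEW_THRESHOLD

for post in [
    "High-dose vitamin A cures measles, so skip the MMR shot.",
    "Our clinic is offering MMR vaccinations this Saturday.",
]:
    print(f"escalate={needs_review(post)} | {post}")
```

A classifier like this can only triage, not judge: the same scalability that makes automated screening attractive also scales its mistakes, which is why serious proposals keep humans in the loop.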
Is There a Solution?
So, is there a way out of this conundrum? Is there a way to both moderate online content and preserve free speech? A good start would be for people to moderate themselves. If we applied critical thinking skills, evaluated sources, and fact-checked before reposting information, the question of how to moderate disinformation would be easier to deal with (or could at least be limited to managing the actual bad actors, trolls, and bots). Unfortunately, we're not off to a great start. The current administration talks a good game on censorship and free speech, but it's really a case of the pot calling the kettle black when they exclude legitimate news sources from the White House but allow conspiracy-theory hawkers (such as Laura Loomer) in the door. On top of this, Donald Trump creates a disinformation machine by giving a platform to people like RFK Jr., Kash Patel, Dan Bongino, and any number of other problematic appointees.
Governments Shouldn't Be Responsible
Government suppression of disinformation isn't a good solution. Attempts by governments to coerce social media companies into blocking or deleting content, or attempts to criminalize disinformation and misinformation, could begin to look a lot like "The Great Firewall of China," which is used to block any content the communist government of China sees as dangerous to its regime. In the United States in particular, there doesn't seem to be a way to block online speech that isn't directly harmful (such as terrorist threats or outright sedition) without violating the constitutional right to free speech. And even if a completely altruistic administration could find a way to do it successfully, the next one might use it as a tool of oppression. If there is a way around this paradox, I don't see it.
Companies and Users Could Be Responsible
I do think there are strategies that could at least help mitigate the problem of misinformation and disinformation. Social media giants could make their algorithms comprehensible to the people who use their platforms. If the ways in which content is managed were made clear, and if the rules were known in advance, users would be less concerned that their rights were being violated (or could at least move to a different platform that better suited their needs and desires).
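As one sketch of what that transparency could look like, here is a hypothetical "statement of reasons" record, loosely inspired by the explanations the DSA requires platforms to give users for moderation decisions. Every field name here is an assumption for illustration, not any platform's or regulator's actual schema.

```python
# Hypothetical moderation-decision record, loosely inspired by the DSA's
# "statement of reasons" requirement. All field names are illustrative
# assumptions, not a real platform's or regulator's schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class ModerationNotice:
    post_id: str
    action: str            # e.g. "demoted", "labeled", "removed"
    rule_violated: str     # the published rule the decision cites
    detection_method: str  # "automated", "human", or "automated+human"
    explanation: str       # plain-language reason shown to the user
    appeal_url: str        # where the user can contest the decision

notice = ModerationNotice(
    post_id="123456",
    action="labeled",
    rule_violated="Medical misinformation policy, section 3",
    detection_method="automated+human",
    explanation="This post makes a health claim contradicted by current CDC guidance.",
    appeal_url="https://example.com/appeals/123456",
)
print(json.dumps(asdict(notice), indent=2))
```

If every moderation action came with a record like this, users could see which rule was applied and how it was detected, and could contest decisions instead of guessing at an opaque algorithm.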
Social media users could play their part by being more responsible. I like this list of recommendations from factcheck.org:
Think before sharing
Consider the source
Evaluate the evidence
Consult the experts
Not all sources are equal. There is a difference between evidence and opinion. Political parties and politicians on both sides of the aisle have agendas and can be influenced by dark-money machines. If you share something online, at least keep these things in mind, and you'll be less likely to be part of the problem.
Online communities could also help regulate content by calling out content posted by their members that is obviously misinformation or disinformation. If it can't be done in public, it can always be done by direct messaging. This earlier post lists fact-checking sites that can be used to help critically evaluate sources. There are even online courses about recognizing disinformation.
Generative AI poses a major problem in that it can be hard to distinguish AI-generated content from real sources. It is possible, though, to train yourself to recognize whether a post is AI-generated, at least within reason. According to disa.org:
In images, look for anatomical inconsistencies, such as extra limbs or distorted facial features. Examine the interaction between objects and individuals for anomalies and scrutinize shadows and reflections for irregularities. AI-generated text may contain nonsensical words or phrases. For audio, pay attention to unnatural pauses, intonation, and word choices. In video, scrutinize the quality, looking for blurred contours, unrealistic features, and poor audio-video synchronization. Utilize reverse image search engines to trace the origins of suspect images.
In the end, a little critical thinking and maturity, a few learned skills, and a lot of holding governments and social media companies responsible for being transparent and for protecting free speech could save us from the need to be moderated. We also need to recognize that it might not be possible to have our cake and eat it too. If we can't control ourselves, we invite control by other sources (such as governments). And we shouldn't demand help from government for problems we have the power to fix among ourselves.
If you found this post helpful, please give me a like or a share!
Sources
https://www.trustlab.com/post/free-speech-vs-misinformation-harmful-content
https://www.sciencedirect.com/science/article/pii/S2468696424000168
https://www.webpurify.com/blog/misinformation-vs-disinformation-guide/
https://www.bu.edu/articles/2025/americans-expect-social-media-content-moderation/
https://www.mayoclinic.org/diseases-conditions/measles/symptoms-causes/syc-20374857
https://abcnews.go.com/Health/500-cases-measles-reported-nationwide-19-states-cdc/story?id=120251851
https://digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation
https://en.wikipedia.org/wiki/Digital_Services_Act
https://apnews.com/article/covid-cia-trump-china-pandemic-lab-leak-9ab7e84c626fed68ca13c8d2e453dde1
https://www.brookings.edu/articles/how-to-combat-fake-news-and-disinformation/
https://www.factcheck.org/2025/04/how-to-combat-misinformation/