"Disinformation is no longer a political issue. It has become a strategic risk for businesses."
- amonniermihi
- Mar 17

For a long time, disinformation seemed like someone else's problem. A concern for politicians, journalists, and platforms. Something important, certainly, but distant from the day-to-day reality of companies and communication teams.
Two studies published in 2025-2026 offer compelling reasons to rethink that assumption. The first, produced by the Tech & Global Affairs Innovation Hub at Sciences Po, proposes a new conceptual framework for understanding the information crisis. The second, published by Sopra Steria, quantifies its economic consequences on a global scale. Together, they advance a simple and unsettling idea: disinformation is no longer merely a threat to democracy. It has become a strategic risk for every organization.
From "fake news" to ecosystem: a shift in perspective
For years, the public debate revolved around "fake news" — identifying lies, correcting them, removing them. This approach, however useful, has shown its limits. The term itself ended up being weaponized, with each side accusing the other of being the source of falsehood.
The Sciences Po document proposes a shift in perspective. Rather than focusing solely on misleading content, it introduces the concept of information integrity. The idea is straightforward: the problem is not only the lie itself, but the environment that makes it possible, profitable, and sometimes indistinguishable from the truth.
Think of it like drinking water quality. You can always test individual glasses. But at some point, you need to look at the entire distribution network. Information integrity applies exactly that logic to the information space: not just hunting down falsehoods, but building the conditions for reliability.
Platforms and AI have changed the rules
To understand why this shift in framing is necessary, we need to look at what has happened over the past decade.
Major digital platforms — Meta, TikTok, YouTube, X — have become the central infrastructure through which information flows. They do not simply distribute content: their algorithms determine what is visible, what gets amplified, and what disappears. And their business models are built on capturing attention, not on information quality.
What drives engagement gets promoted. And what drives engagement is often content that triggers a strong emotional reaction. Truth has no structural advantage over falsehood in this system.
Generative artificial intelligence has added another layer. The Sopra Steria report documents this precisely: AI does not merely create new risks, it industrializes existing practices. Producing a fake testimonial, cloning a voice, generating thousands of variants of misleading content in multiple languages — all of this was once costly, slow, and technically demanding. Today, it is fast, cheap, and widely accessible. Disinformation has moved from craft production to mass manufacturing.
417 billion dollars: what that figure actually covers
The Sopra Steria report estimates the global economic cost of disinformation at 417 billion dollars in 2024. That number commands attention. But what makes it even more significant is what it encompasses.
This is not only about direct fraud or online scams, even though these are massive in scale: so-called pig butchering schemes (AI-powered romance and investment manipulation operations) caused 5.5 billion dollars in losses in the cryptocurrency sector alone in 2024. The figure also captures the erosion of trust, which acts as a slow, diffuse economic drag.
According to the OECD, only 39% of citizens today say they trust their government. Research cited by Allianz Research estimates that political polarization caused between 157 and 318 billion dollars in consumption losses over four years across the United States and Europe. When trust collapses, people spend less, invest less, and decide less. Distrust is a slow-acting economic poison.
For businesses, this translates into concrete expenditure: the report estimates global spending on informational protection and defense at between 12 and 18 billion dollars in 2024. That budget category barely existed a decade ago.
Romania, or the true cost of an information shock
To make the threat tangible, both studies draw on the same recent case: the annulment of Romania's presidential election in December 2024.
Within a matter of weeks, a coordinated manipulation campaign, amplified by algorithms and AI tools, altered the political trajectory of an entire country. The cost of reorganizing the election is estimated at 280 million dollars — and that figure does not account for lasting political instability or the erosion of institutional trust that followed.
This example is useful precisely because it translates abstraction into concrete reality. Disinformation does not only produce noise and confusion. It disrupts, delays, and costs money. It turns a reputational problem into a governance problem.

What about the risk of censorship?
This is the legitimate objection that immediately comes to mind: in trying to fight disinformation, do we risk justifying censorship? Do we end up handing some authority the power to decide what is true?
The Sciences Po document takes this concern seriously. And this is precisely where the framework of information integrity proves its value: it does not rest on any notion of "state truth." It rests on conditions — algorithmic transparency, researcher access to platform data, independent media certification, guaranteed pluralism — that allow everyone to navigate a more reliable information environment, without any single entity holding a monopoly on truth.
Protecting against disinformation and protecting against censorship are not opposing goals. When properly constructed, they are two sides of the same democratic project.
What this means for communication professionals
This is where the subject becomes directly relevant to our field.
In an environment where plausible content can be generated at low cost and industrial scale, communication can no longer simply aim to be visible. It must be credible over time.
This has significant practical implications. It restores value to elements that had been pushed to the margins: message consistency over time, the quality of evidence, the credibility of spokespeople, the ability to explain complex issues clearly, and the cultivation of trusted third parties — journalists, experts, institutions — capable of contextualizing and validating what an organization says.
Media relations, in particular, recover a role that goes beyond mere visibility. In a space saturated with self-published content, journalistic mediation becomes a mechanism for external validation. A statement carried by a media outlet operating under editorial standards carries a weight that a direct post does not: it is harder to attack, harder to discredit, harder to drown in noise.
More broadly, these two studies suggest that communication professionals will need to think at a different scale. The goal will no longer be simply to generate attention. It will be to organize the conditions for trust.
That is not a change of tools. It is a change of profession.
Sources: "Beyond Fake News — How Information Integrity Creates a Building Ground for Disinformation-Resilient Societies", Sciences Po Tech & Global Affairs Innovation Hub, March 2026. "The Global Economic Impact of Disinformation", Sopra Steria, 2025.
