In March 2023, a single doctored audio clip circulated on WhatsApp claiming that a major Nigerian bank was about to collapse. Within hours, customers queued outside branches in Lagos and Abuja trying to withdraw their savings. The bank was solvent. The audio was fake. Yet the financial and reputational damage was real.
We usually think of cybersecurity as firewalls, passwords and phishing emails, and we treat disinformation as a media problem for journalists and fact-checkers to address. That separation no longer works.
Disinformation today behaves like malware. It is engineered, deployed at scale, and designed to exploit vulnerabilities to achieve a specific objective. The only difference is that the target system is not a server. It is public trust, market stability, or your organisation’s reputation.
This is why security teams, business leaders, and even ordinary readers need to reclassify disinformation. It is not just noise. It is an attack vector. Understanding it through a cybersecurity lens gives us better tools to detect, respond to, and defend against it.
How Disinformation Mirrors a Cyber Attack
The cybersecurity industry uses the “kill chain” model, popularised by Lockheed Martin, to describe how attackers breach a network. Disinformation campaigns follow the same pattern. The tools are different, but the logic is identical.
1. Reconnaissance and weaponisation
Attackers start by studying their target. In cyber terms, that means scanning for open ports. In disinformation, it means mapping a population’s fears, political divides, or trending topics. A bot network might monitor what Nigerians are saying about fuel prices, elections, or banking apps. Once a pressure point is found, the false content is created. That could be a deepfake video, a forged screenshot, or a fabricated news article. This is the weaponisation phase.
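To make the reconnaissance phase concrete, here is a minimal Python sketch of the trend-monitoring logic it involves. Defenders can run exactly the same analysis over mentions of their own brand. The posts, risk terms and thresholds below are illustrative assumptions, not a real feed or API.

```python
from collections import Counter
from datetime import datetime, timedelta

# Illustrative only: real posts would come from a social media firehose
# or a brand-monitoring feed, not a hard-coded list.
posts = [
    {"text": "Fuel prices are out of control", "ts": datetime(2023, 3, 1, 9, 0)},
    {"text": "Is my bank app down again?", "ts": datetime(2023, 3, 1, 9, 5)},
    {"text": "Heard the bank is about to collapse", "ts": datetime(2023, 3, 1, 9, 7)},
]

# Hypothetical pressure points an attacker (or a defender) might track.
RISK_TERMS = ["fuel", "bank", "election", "collapse"]

def term_frequencies(posts, window_start, window):
    """Count risk-term mentions inside one time window."""
    counts = Counter()
    for post in posts:
        if window_start <= post["ts"] < window_start + window:
            for term in RISK_TERMS:
                if term in post["text"].lower():
                    counts[term] += 1
    return counts

def spiking_terms(posts, now, window=timedelta(hours=1), factor=3):
    """Flag terms mentioned far more in this window than in the previous one."""
    current = term_frequencies(posts, now - window, window)
    baseline = term_frequencies(posts, now - 2 * window, window)
    return [t for t, n in current.items() if n >= factor * max(baseline[t], 1)]
```

A sudden spike in “bank” and “collapse” mentions together is precisely the pressure point the Lagos incident exploited.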
2. Delivery and exploitation
A phishing email delivers malware to an inbox. Disinformation is delivered through social feeds, WhatsApp broadcasts, and fake accounts. The exploitation phase relies on human psychology instead of software bugs. Our cognitive biases are the unpatched vulnerabilities. Confirmation bias makes us share stories that fit what we already believe. Authority bias makes us trust content that looks like it came from BBC News or Channels Television. The “liar’s dividend” is a particularly nasty exploit: as deepfakes get better, real footage can be dismissed as fake. The very existence of forgery technology creates doubt about authentic evidence.
3. Installation, command and control, and actions on objectives
In a network breach, malware installs itself and waits for commands. With disinformation, the “installation” happens when a narrative takes hold in a community. The command and control network is often a set of inauthentic accounts or coordinated groups that amplify the story on demand. The final actions can be devastating. In 2022, a fake press release claiming Lufthansa was buying a smaller airline caused a temporary stock swing before it was debunked. In financial terms, that is market manipulation, delivered through an information channel rather than a trading system. In public health, disinformation about vaccines directly endangers lives. For businesses, coordinated rumours can trigger a bank run, sink a product launch, or force a costly PR response. These are not just PR crises. They are security incidents with measurable financial impact.
The key shift is this: we must stop asking “Is this true?” and start asking “Who benefits from this, and what system does it attack?”
Defence: Treating Information Integrity as Cyber Hygiene
If disinformation is a cybersecurity problem, then we need to defend against it like one. That means layered controls, incident response, and a focus on resilience rather than perfect prevention.
1. Technical controls are maturing
Just as antivirus software scans for malicious code, new tools scan for synthetic media. AI models can now detect subtle artefacts in deepfake videos or in the linguistic patterns of bot accounts. Google, Adobe and others are backing the C2PA standard, which acts like a nutrition label for digital content. It cryptographically attaches information about how an image or video was created and edited. If widely adopted, it will make it much harder to pass off a fake as real. Platforms are also investing in network analysis to spot coordinated inauthentic behaviour, the telltale sign of a disinformation campaign rather than a single false post.
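To illustrate the network-analysis idea, here is a minimal Python sketch of one widely used signal: pairs of accounts that repeatedly share the same link within seconds of each other. The records and thresholds are illustrative assumptions; real pipelines operate at far larger scale and combine many such signals.

```python
from collections import defaultdict
from itertools import combinations

# Illustrative records of (account, url, unix_timestamp). Real pipelines
# would pull these from platform APIs or research datasets.
shares = [
    ("acct_a", "http://example.com/rumour", 1000),
    ("acct_b", "http://example.com/rumour", 1004),
    ("acct_c", "http://example.com/rumour", 1009),
    ("acct_a", "http://example.com/other", 2000),
    ("acct_b", "http://example.com/other", 2003),
]

def coordination_pairs(shares, max_gap=10, min_co_shares=2):
    """Flag pairs of accounts that share the same URL within max_gap
    seconds of each other at least min_co_shares times: a classic
    signal of coordinated inauthentic behaviour."""
    by_url = defaultdict(list)
    for account, url, ts in shares:
        by_url[url].append((account, ts))

    pair_counts = defaultdict(int)
    for events in by_url.values():
        for (a1, t1), (a2, t2) in combinations(events, 2):
            if a1 != a2 and abs(t1 - t2) <= max_gap:
                pair_counts[tuple(sorted((a1, a2)))] += 1

    return {pair: n for pair, n in pair_counts.items() if n >= min_co_shares}

print(coordination_pairs(shares))  # {('acct_a', 'acct_b'): 2}
```

Accounts acting independently rarely co-share like this; machine-timed amplification does.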
2. Human controls remain the most critical layer
No firewall can patch human trust. This is why digital literacy is now a security control. “Prebunking” is the equivalent of a vaccine: exposing people to weakened examples of manipulation techniques so they can recognise them later. Cambridge University research shows that prebunking games reduce susceptibility to false narratives across political lines. For organisations, this means training staff not just to spot phishing but to pause before sharing sensitive claims. It also means having a response plan for communication incidents. If your company is targeted by a fake story, who posts the rebuttal, on which channels, and how fast? In cybersecurity, speed matters. The same is true for information attacks. The first hour defines the narrative.
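As a starting point, a response plan can be as simple as a table of owners, channels and deadlines that the team rehearses in advance. The Python sketch below encodes one; the roles and timings are hypothetical and should be adapted to your own organisation.

```python
from dataclasses import dataclass

@dataclass
class ResponseStep:
    owner: str          # who acts
    channel: str        # where the rebuttal goes
    deadline_min: int   # minutes from detection

# Hypothetical runbook for a fake-story incident targeting the company.
RUNBOOK = [
    ResponseStep("comms lead", "verified social media account", 30),
    ResponseStep("CISO", "internal all-staff alert", 45),
    ResponseStep("customer service", "customer-facing notice", 60),
]

def overdue(runbook, minutes_since_detection):
    """Return the steps whose deadline has passed. In an information
    incident, anything still open after the first hour has likely
    ceded the narrative to the attacker."""
    return [step for step in runbook if minutes_since_detection > step.deadline_min]
```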
3. Policy and collaboration set the perimeter
No single company can solve this problem. In Nigeria, NITDA and the Central Bank of Nigeria have both issued warnings about financial disinformation. Globally, the EU’s Digital Services Act now requires very large platforms to assess and mitigate systemic risks, including disinformation. The most effective defence involves collaboration between security teams, communications teams, governments, and platforms. Threat intelligence should include narrative threats, not just IP addresses. Your next tabletop exercise should include a scenario where a deepfake of your CEO announcing resignations goes viral ten minutes before the market opens. How do you respond?
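As a thought experiment, a narrative-threat entry in an intelligence feed might look something like the sketch below. Every field name here is an illustrative assumption, loosely inspired by indicator formats such as STIX rather than taken from any official schema.

```python
# Hypothetical narrative-threat record, sitting alongside conventional
# indicators of compromise in a threat-intelligence feed.
narrative_indicator = {
    "type": "narrative-threat",                  # invented type, not STIX
    "claim": "CEO announcing mass resignations",
    "first_seen": "2025-01-15T08:50:00Z",
    "vectors": ["messaging-app broadcast", "social posts"],
    "artefacts": ["deepfake video"],
    "suspected_amplifiers": ["acct_a", "acct_b"],
    "targeted_system": "share price and customer trust",
    "response_playbook": "fake-story runbook",   # e.g. the RUNBOOK above
}
```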
Conclusion: Secure the human layer
We’ve spent decades securing our networks, devices and data. The next decade will be about securing the information environment those systems operate in. Disinformation is not going away. Generative AI has made it cheaper, faster, and more personal. That sounds bleak, but it is also clarifying.
The goal is not to create a world where every lie is deleted. The goal is resilience. The goal is to build organisations and societies that can recognise a disinformation attack, limit its blast radius, and recover quickly. That starts with putting information integrity on the CISO’s agenda, not just the PR manager’s, and with teaching employees that clicking “forward” on an unverified claim can be as dangerous as clicking a dodgy link.
Cyber hygiene today means updating your software, using multi-factor authentication, and verifying before you amplify. The attack surface has expanded to include what we believe. Our defences must expand with it.
