In the wake of the coronavirus pandemic and numerous election campaigns, governments around the world have called on major tech companies to combat the spread of fake news online. Understanding trends in disinformation (false content spread deliberately) and misinformation (false content spread regardless of intent) is crucial for an informed response, one that minimises potential damage to democracies, public health, and safety.
But is this cooperation enough to stay ahead of the threat? And are mainstream online networks the only piece of the puzzle?
While many platforms, including Google, Facebook, and Twitter, have pledged to double down on combating fake news, misleading content still reaches thousands of users before it is detected. And even though misinformation draws more viewers on mainstream networks, less-regulated networks should also be considered in any counter-misinformation strategy.
A small part of the bigger picture
It’s been over a year since the UK Government’s Rapid Response Unit cracked down on coronavirus misinformation – so what does the state of misinformation look like now?
Misinformation is recognised by the Government as a potential national security threat, but there are currently no laws regulating it. This means the Rapid Response Unit depends largely on the cooperation of technology companies like Facebook to curb the spread of misleading content.
But the reality is that even with big tech on board, it's very hard to manage online misinformation through human intervention and detection algorithms. According to new data from CrowdTangle, a Facebook-owned content monitoring tool, it's still possible for false COVID-19 vaccine information to clear 12,000 interactions before takedown. And that figure doesn't account for privately shared content or users who saw the content without interacting.
The misinformation problem also goes beyond mainstream social media and news networks. More covert online sources, like alt-tech platforms, deep web forums, imageboards, and dark websites, also play a significant role. These online spaces are far less regulated than mainstream sites. They often host fringe groups who instigate misleading content surrounding conspiracies and radical worldviews—whether it’s in response to a pandemic, political climate, or other events.
Throughout the pandemic, discussions and marketplaces on these sources have circulated fake news, solicited fake COVID-19 vaccines, and offered misinformation-based scamming tools like phishing pages—which target citizens and governments alike.
So beyond the already difficult task of monitoring mainstream networks, these covert sources are also valuable to counter-misinformation teams.
Political and social impacts still unfolding
According to 2020 research by King’s College London, the impacts of mis- and disinformation are still not entirely clear or measurable. But we do know that misinformation can influence public safety risks (look no further than the 2021 Capitol Hill insurrection or the public health impacts of false coronavirus information). Misinformation can also affect financial markets, prompt cyber compromise, sow social unrest, and potentially influence democratic processes.
How can governments stay ahead of online misinformation as tech giants struggle to keep up with its spread?
The Rapid Response Unit’s goal is to find misleading content, assess risk and response, and if necessary, create and target a more balanced counter-narrative. In this process, early detection is critical to minimising or avoiding the damage caused by misinformation.
Early detection is only possible when unit personnel have real-time, comprehensive access to online data sources where misinformation exists. This includes both mainstream networks where misinformation often gains the most traction and exposure—and covert sources where it may originate or be used to target the data of citizens, enterprises, and governments.
Searching for misinformation on the surface web is already an overwhelming task without adding hidden sources to the mix. The good news is that this process can be streamlined with the right tools.
Commercial open-source intelligence (OSINT) software helps analysts, data scientists, and other professionals gather information to support misinformation use cases. These solutions help users aggregate, search, and prioritise public online content more efficiently than navigating sources manually. And because misinformation evolves rapidly online, emerging OSINT tools are equipped with features designed to enable early detection and minimise analyst fatigue.
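As a rough illustration of the aggregate-and-prioritise step described above, the sketch below scores posts against a watchlist of monitored terms and weights hits by reach, so high-interaction content surfaces first for analyst review. The field names, watchlist terms, and scoring formula are all illustrative assumptions, not taken from any specific commercial tool.

```python
# Hypothetical watchlist of monitored terms (illustrative only).
WATCHLIST = {"vaccine", "cure", "hoax", "5g"}

def score_post(post):
    """Score a post by watchlist keyword hits, weighted by its reach."""
    words = {w.strip(".,!?").lower() for w in post["text"].split()}
    hits = len(words & WATCHLIST)
    # Weight by interaction count so high-reach content surfaces first.
    return hits * (1 + post["interactions"])

def prioritise(posts, limit=3):
    """Return the highest-scoring posts for analyst review."""
    ranked = sorted(posts, key=score_post, reverse=True)
    return [p for p in ranked if score_post(p) > 0][:limit]

posts = [
    {"text": "Miracle cure for the virus, doctors hate it", "interactions": 900},
    {"text": "Local council meeting rescheduled", "interactions": 40},
    {"text": "The vaccine is a hoax spread by 5G towers", "interactions": 120},
]

for p in prioritise(posts):
    print(p["interactions"], p["text"])
```

A real platform would of course add deduplication, source credibility signals, and machine-learned classifiers on top of simple keyword triage; the point here is only that automated prioritisation reduces the volume an analyst must read manually.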
Keeping up with online misinformation is challenging enough for agile tech companies, let alone government agencies. Misleading narratives seem to emerge on a near-daily basis, often faster than detection algorithms or content regulation teams can respond.
Misinformation legislation, public education, content warnings, and counter-misinformation technologies are all part of the solution. When it comes to the latter, teams like the Rapid Response Unit can benefit from software that enables early detection and data coverage for a range of surface, deep, and dark web networks.
The full impacts of mis- and disinformation will take time to unpack. But with the right OSINT solutions, governments are better equipped to understand and minimise harm to populations, companies, data, and other national security interests as the threat evolves online.