Misinformation About Misinformation?: Report Advises “Don’t Hit Delete”

It’s hard to remember a time before we spoke animatedly about fake news and misinformation. For years now there has been primetime public discussion about the divisiveness of online content, and the way social media platforms can effortlessly propagate harmful conspiracy theories, as well as other baseless assertions masquerading as facts.

In 2018, Dictionary.com announced that misinformation was its “word of the year,” and before that scholars like Caroline Jack made valiant efforts to define the many types of online deceit, as in her 2017 study, Lexicon of Lies.

With a certain amount of discomfort, we have come to accept the downstream effects of users being trapped in “echo chambers” and the “filter bubbles” that reinforce and amplify false and harmful dialogue (with potentially devastating real-world consequences).

Many organizations — from NGOs to Big Tech — have pledged to fight misinformation and the circumstances that catalyze its spread, and there have been loud calls to identify and remove misleading content. When COVID-19 came along, ensuring scientific information wasn’t drowned out by falsehoods became a matter of life and death, and many platforms did axe posts to protect users (see YouTube and Facebook).

It is curious, then, that a new report by The Royal Society, titled The Online Information Environment, calls into question some popular assumptions about misinformation.

Published last week, the report’s top-line finding stresses that deleting misleading content, and particularly scientific misinformation, is ineffective. It argues that “censoring or removing inaccurate, misleading and false content, whether it’s shared unwittingly or deliberately, is not a silver bullet and may undermine the scientific process and public trust.”

The Royal Society defines scientific misinformation as information which is presented as factually true but directly counters, or is refuted by, established scientific consensus. This includes concepts such as “disinformation,” which relates to the deliberate sharing of misinformation content.

Lead author on the report, Professor Frank Kelly, adds: “Clamping down on claims outside the consensus may seem desirable, but it can hamper the scientific process and force genuinely malicious content underground.”

So, perhaps counterintuitively, the act of eliminating bad science can beget bad science.

But that’s not all that’s unexpected in this study. Indeed, its pages cast doubt on the very influence that we have, until now, attributed to misinformation and disinformation online. As the executive summary states:

Although misinformation content is prevalent online, the extent of its impact is questionable. For example, the Society’s survey of members of the British public found that the vast majority of respondents believe the COVID-19 vaccines are safe, that human activity is responsible for climate change, and that 5G technology is not harmful. The majority believe the internet has improved the public’s understanding of science, report that they are likely to fact-check suspicious scientific claims they read online and state that they feel confident to challenge their friends and family on scientific misinformation.

It continues…

“The existence of echo chambers (where people encounter information that reinforces their own beliefs, online and offline) is less widespread than may be commonly assumed and there is little evidence to support the filter bubble hypothesis (where algorithms cause people to only encounter information that reinforces their own beliefs).”

With the claim that the pernicious effects of misinformation are typically overstated in public commentary, The Royal Society implores us to tread carefully when dealing with the problem. In bowing to overblown perception, we risk “throwing the baby out with the bathwater.”

Moreover, vetting and removing scientific information is resource-intensive — quite apart from the fact that it can be impossible to agree on a scientific consensus or a trusted authority to consult. That makes it an unlikely option for the many new and emerging social platforms that have become attractive to users seeking a new home for their less-than-orthodox views.

The problem would persist, and potentially intensify, out of immediate sight.

This isn’t to say that The Royal Society denies there is a problem with misinformation, just that the problem is rather more nuanced than the way we tend to distill it, and it needs to be understood as such. The study describes how terms like “anti-vaxxer” are unhelpful because they imply a collective and mask the reality: the unvaccinated give a broad range of distinct reasons for hesitancy, which cannot be countered unless they are unpicked individually.

You can read the full report here, but I’ve picked out some of the ways it recommends tech companies, governments and citizens can tackle the complex issue of misinformation:

  • Mitigation, rather than removal, of potentially harmful content
    This could include demonetizing content, reducing amplification of those messages by preventing viral spread, regulating the use of algorithmic recommender systems, and/or annotating content with fact-check labels.
  • Bolstering support for fact-checking
    There are now approximately 290 dedicated fact-checking organizations across the world that could benefit from sustainable funding.
  • New interventions to prevent misinformation spreading via private messages
    Protecting the privacy of direct messages while offering the option to forward a message to a fact-checker, the creation of official accounts for scientific authorities, and/or tech tools that allow a user to validate the provenance of content.
  • Best practices and shared datasets for new social platforms
    Recognizing that new platforms don’t have the data or experience to develop effective misinformation tools quickly.
  • Investing in widespread information literacy
    Education for all ages that helps future populations safely navigate the online information environment.

The message is pretty clear — we need to build resilience, rather than react in ways that stymie the free flow of scientific knowledge between industry, academia and the broader population. The report says: “Society benefits from honest and open discussion on the veracity of scientific claims. These discussions are an important part of the scientific process and should be protected.”

Therefore, a critical balance must be struck between protecting those discussions and protecting the individuals and wider communities that stand to be contaminated with bad science and its effects.

Part of this — at least for The Royal Society — is about trying not to simplify or overstate the way bad information shapes opinion, while at the same time re-engineering the mechanisms at play to correct and suppress bad scientific information, so as to ensure those vulnerable to persuasion don’t end up as collateral damage.
