As we hit the halfway point of 2021, one trend that we have been particularly interested in at SLD is the continued rise of fake news and misinformation. While our recent DeepReal research elevated the conversation around real versus fake, the publication of Facebook’s State of Influence Operations 2017-2020 Threat Report has helped validate our own findings and further showcases why this topic is something that brands should be taking seriously.
Facebook has been at the forefront of the fake news debate for years now, as it has become clear that its platform is one of the main channels through which misinformation is spread. As such, the company has been working with industry, government, and civil society to find meaningful solutions to influence operations (IO), which it defines as “coordinated efforts to manipulate or corrupt public debate for a strategic goal.”
In their report, Facebook highlights how threat actors have evolved their techniques over the past few years. These threat trends show that as governments and social media platforms clamp down on deceptive internet behaviour, threat actors are finding new ways to spread misinformation:
- A shift from “wholesale” to “retail” IO: Threat actors pivot from widespread, noisy deceptive campaigns to smaller, more targeted operations.
- Blurring of the lines between authentic public debate and manipulation: Both foreign and domestic campaigns attempt to mimic authentic voices and co-opt real people into amplifying their operations.
- Perception hacking: Threat actors seek to capitalize on the public’s fear of IO to create the false perception of widespread manipulation of electoral systems, even if there is no evidence.
- IO as a service: Commercial actors offer their services to run influence operations both domestically and internationally, providing deniability to their customers and making IO available to a wider range of threat actors.
- Increased operational security: Sophisticated IO actors have significantly improved their ability to hide their identities, using technical obfuscation as well as witting and unwitting proxies.
- Platform diversification: To evade detection and diversify risks, operations target multiple platforms (including smaller services) and the media, and rely on their own websites to carry on the campaign even when other parts of that campaign are shut down by one company.
While the primary focus of Facebook’s report is on political interference, these threat trends should also raise some red flags for brands. Targeted operations, mimicking real people, creating false perceptions of misinformation, selling influence operations as a service, hiding identities, and spreading messages across multiple channels are all issues that impact consumers’ willingness to accept what they are being shown online, regardless of who it is coming from.
One major concern for brands is highlighted in the chart below, which shows that in the USA, PR/consulting firms and media websites were among the top coordinated inauthentic behaviour networks taken down between 2017 and 2020. The idea that a brand can outsource its influence operations has wide-ranging implications and should be of concern to every industry.
Source: Facebook Threat Report
Along with presenting these key threat trends, the Facebook report also offers suggestions on how to mitigate these threats:
- A combination of automation and expert investigations to remove IO
- Product innovation and adversarial design
- Partnerships with industry, government and civil society
- Building deterrence
Though it won’t happen overnight, a collective effort is needed to limit the influence of misinformation. Brands are not exempt. To find out more about Facebook’s mitigation strategies, you can read the full report here. To find out how to drive greater consumer trust and brand loyalty, we encourage you to explore our extensive DeepReal content.