The digital battlefield has grown more sophisticated and more widespread across the globe. False information about major events, from the Covid-19 outbreak to the 2020 US election, is jeopardizing public health and safety.
Here we dig into how modern information warfare is waged on the internet, and the steps being taken to stop it.
Since the revelations of Russia spreading false information via social media to interfere in the 2016 US election, the digital battlefield has expanded to include new players and more sophisticated techniques.
Disinformation threatens the world’s ability to effectively combat a deadly virus and to hold free and fair elections.
Meanwhile, emerging technologies like artificial intelligence are accelerating the threat — but they also offer hope for scalable solutions that protect the public.
Technologies and tactics behind the spread of disinformation
One technology complicating efforts to combat disinformation is WhatsApp. The messaging app encrypts conversations end to end so that others cannot read or monitor them. The downside of this security is that it creates an environment where disinformation can thrive:
- In 2018, false rumors of roaming groups of kidnappers spread on the app, resulting in the killings of more than two dozen people in India.
- In Brazil, sophisticated disinformation campaigns linking vaccination to death spread on WhatsApp and thwarted the government’s 2018 effort to vaccinate citizens against yellow fever.
- In 2020, Covid-19-related misinformation, from fake cures to false causes (such as 5G technology), spread across the platform.
Seventy countries used online platforms to spread disinformation in 2019 — an increase of 150% from 2017. Most of the efforts focused domestically on suppressing dissenting opinions and disparaging competing political parties. However, several countries — including China, Venezuela, Russia, Iran, and Saudi Arabia — attempted to influence the citizens of foreign countries.
The growing number of countries spreading disinformation, and the shift toward domestically created content, has coincided with a deluge of false information online. In the case of Covid-19, an April 2020 poll found that nearly two-thirds of Americans had seen news about the coronavirus that seemed completely made up.
Most Americans believe that fake news causes confusion about the basic facts of the Covid-19 outbreak. The World Health Organization has emphasized how crucial access to accurate information is: “The antidote lies in making sure that science-backed facts and health guidance circulate even faster, and reach people wherever they access information.”
Key elements of the future of digital information warfare
“We live in an age of disinformation. Private correspondence gets stolen and leaked to the press for malicious effect; political passions are inflamed online in order to drive wedges into existing cracks in liberal democracies; perpetrators sow doubt and deny malicious activity in public, while covertly ramping up behind the scenes.” — Thomas Rid, “Active Measures: The Secret History of Disinformation and Political Warfare”
Key tactics include:
- Diplomacy & reputational manipulation: using advanced digital deception technologies to incite unfounded diplomatic or military reactions in an adversary, or to falsely impersonate and delegitimize an adversary’s leaders and influencers.
- Automated laser phishing: the hyper-targeted use of malicious AI to mimic trustworthy entities, compelling targets to act in ways they otherwise would not, including by releasing secrets.
- Computational propaganda: exploiting social media, human psychology, rumor, gossip, and algorithms to manipulate public opinion.
1. Diplomacy & reputational manipulation: faking video and audio
Much of the code for creating convincing deepfakes is open-source and bundled into software packages like DeepFaceLab, which anyone can download. This lowers the barrier to adoption, making deepfakes a viable tool for more hackers, whether or not they are tech-savvy.
Unsurprisingly, the number of deepfakes online has exploded over the last few years. According to Sensity, a startup tracking deepfake activity, the number doubles roughly every 6 months.
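As a back-of-the-envelope illustration of that growth rate, the minimal Python sketch below projects counts forward under Sensity’s reported ~6-month doubling time. The 15,000-video baseline is a hypothetical starting point chosen for illustration, not a reported figure.

```python
# Exponential growth under a fixed doubling time: count(t) = initial * 2**(t / d).
# The ~6-month doubling time follows Sensity's report; the 15,000-video
# baseline is a hypothetical starting point for illustration.
def projected_count(initial: int, months: float, doubling_months: float = 6.0) -> int:
    return round(initial * 2 ** (months / doubling_months))

for months in (6, 12, 24):
    print(months, projected_count(15_000, months))
# 6 30000
# 12 60000
# 24 240000
```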
The vast majority of deepfake videos focus on pornography. However, a small percentage of them have political aims.
For example, a 2018 video of Gabonese president Ali Bongo contributed to an attempted coup by the country’s military. The president’s unusual appearance and the video’s suspicious timing, coming after Bongo had spent several months out of the country receiving medical care, led many to claim it was a deepfake. This perceived act of deception cast further doubt on the president’s health and served as justification for his critics to act against the government.
Another instance occurred in June 2019, when a video appearing to depict sexual acts by Malaysian minister of economic affairs Azmin Ali created political controversy. In his defense, Azmin Ali and his supporters delegitimized the video by calling it a deepfake.
In both cases, analyses of the videos to determine their authenticity were inconclusive — a fairly typical outcome for videos of lesser-known individuals or where only the manipulated version exists.
Whether a deepfake or a shallowfake (a video doctored with conventional editing tools rather than AI), the accessibility and potential virality of manipulated videos threaten public figures’ reputations and governance.
2. Automated laser phishing: Malicious AI impersonating and manipulating people
The number of data points available online for any particular person at any given time ranges from 1,900 to 10,000. This information includes personal health, demographic characteristics, political views, and much more.
The users and applications of these personal data points vary. Advertising companies frequently use them to target individuals with personalized ads, while other companies use them to design new products and political campaigns use them to target voters.
Malicious actors also have uses for this information, and owing to frequent data breaches, which exposed nearly 500M personal records in 2018 alone, it’s often easy to obtain.
Personal data plays a significant role in the early stages of a disinformation campaign. First, malicious actors can use the information to target individuals and groups sympathetic to their message.
Second, hackers may use personal data to craft sophisticated phishing attacks to collect sensitive information or hijack personal accounts.
Population targeting
Targeting audiences, whether through ads or online reconnaissance, is a crucial piece of the disinformation chain.
To exploit tensions within society and sow division, purveyors of disinformation target specific groups with content that supports their existing biases. This increases the likelihood that the content will be shared and that the foreign entity’s goal will be achieved.
Russia’s information warfare embodies this tactic. A congressional report found that the nation targeted race and related issues in its 2016 disinformation campaigns.
While online reconnaissance can identify the social media groups, pages, and forums most hospitable to a divisive or targeted message, buying online ads provides another useful tool for targeting individuals meeting a particular profile.
In the lead-up to the 2020 US presidential election, an unknown entity behind the website “Protect My Vote” purchased hundreds of ads that yielded hundreds of thousands of views on Facebook. Promoting fears of mail-in voter fraud, these ads targeted older voters in specific swing states who were more likely to be sympathetic to the message. The ads made unsubstantiated claims and, in one instance, misconstrued a quote by basketball star LeBron James.
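To make the targeting mechanics concrete, here is a toy sketch of profile-based audience selection. The user records, field names, and filter values are all hypothetical; real ad platforms expose far richer targeting options, but the selection logic is conceptually similar.

```python
# Toy audience selection by demographic profile. All data and field
# names are hypothetical; this mirrors the concept, not any real ad API.
SWING_STATES = {"PA", "WI", "MI", "FL", "AZ"}

users = [
    {"id": 1, "age": 67, "state": "PA"},
    {"id": 2, "age": 34, "state": "PA"},
    {"id": 3, "age": 71, "state": "WI"},
    {"id": 4, "age": 62, "state": "CA"},
]

def target_audience(users, min_age=60, states=SWING_STATES):
    """Keep only users matching the campaign's demographic profile."""
    return [u for u in users if u["age"] >= min_age and u["state"] in states]

print(target_audience(users))  # -> users 1 and 3: older voters in swing states
```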
Personalized phishing
The availability of personal data online also supercharges phishing attacks by enabling greater personalization.
While many phishing attempts are unsophisticated and target thousands of individuals with the hope that just a few take the bait, a portion of hyper-targeted attacks seek large payouts in the form of high-profile accounts and confidential data.
If a hacker’s phishing attack successfully steals credentials or installs malware, the victim may face reputational damage. For example, ahead of the 2016 US presidential election, Russian hackers used a spear-phishing campaign to infiltrate the email account of John Podesta, Hillary Clinton’s campaign chairman, and released the collected information to the public.
Selectively sharing personal or sensitive information provides disinformation campaigns with a sense of authenticity, and leaking the information ahead of significant events increases its impact.
Using phishing attacks to access high-profile individuals’ email or social media accounts also poses a reputational and diplomatic threat. For example, in 2020, hackers gained access to the Twitter accounts of Joe Biden, Elon Musk, and Barack Obama, among others.
This particular incident was motivated by money: the hijacked accounts were used to push a cryptocurrency scam on their followers. However, it highlights the larger, more dangerous possibility that hackers could impersonate leaders for political ends.
3. Computational propaganda: digitizing the manipulation of public opinion
Nearly half of the world’s population is active on social media, spending an average of almost 2.5 hours on these platforms per day. Recent polls indicate Americans are more likely to receive political and election news from social media than cable television.
User engagement helps the corporate owners of these platforms — most notably Facebook and Google — generate sizable revenues from advertising.
To drive this engagement, companies employ continually changing algorithms whose mechanics are largely unknown to outsiders. High-level details provided by TikTok and Facebook indicate that their algorithms push the content most likely to appeal to a particular user to the top of that user’s feed.
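Here is a minimal sketch of what engagement-based ranking can look like, assuming a model that already predicts per-user engagement probabilities. The post fields and weights are illustrative assumptions, not any platform’s actual formula. Note that scoring content by predicted engagement is also the mechanism behind the filter bubbles discussed below: content resembling what a user already engages with scores highest.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    p_like: float      # predicted probability this user likes the post
    p_share: float     # predicted probability this user shares the post
    age_hours: float   # hours since the post was published

def score(post: Post) -> float:
    """Weighted predicted engagement, discounted by post age."""
    engagement = 1.0 * post.p_like + 3.0 * post.p_share  # shares weighted higher
    recency = 1.0 / (1.0 + post.age_hours / 24.0)
    return engagement * recency

def rank_feed(posts: list[Post]) -> list[Post]:
    # The highest-scoring posts surface at the top of the user's feed.
    return sorted(posts, key=score, reverse=True)

feed = rank_feed([Post("a", 0.2, 0.05, 2), Post("b", 0.6, 0.20, 12)])
print([p.text for p in feed])  # ['b', 'a']
```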
Discussion of algorithm-induced bias in the media has increased over the past 5 years, largely tracking artificial intelligence’s application in products ranging from facial recognition to social media.
The use of algorithms has caused concern about social media perpetuating bias and creating “filter bubbles” — meaning that users develop tunnel vision from engaging predominantly with content and opinions that reinforce their existing beliefs.
Filter bubbles build an environment conducive for disinformation by reducing or blocking out alternative perspectives and conflicting evidence. Under these circumstances, false narratives can exert far more power and influence over their target population.
Because engagement through likes, comments, and shares plays a role in determining content’s visibility, malicious actors use fake accounts, or bots, to increase the reach of disinformation.
With bots often costing less than $1 apiece, it’s not surprising that the number of bots on Facebook, Twitter, and Instagram totaled approximately 190M in 2017. In August 2020 alone, Facebook used automated tools and human reviewers to remove 507 accounts for coordinated inauthentic behavior.
The vast number of bots on social media platforms is deeply concerning, as bots and algorithms help disinformation spread much faster than the truth. On average, false stories reach 1,500 people 6 times faster than factual ones.
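The amplification effect is easy to see in a toy branching model of story spread. The parameters below (follower counts, reshare probability, number of bot seed accounts) are assumptions chosen for illustration, not measured values.

```python
import random

def simulated_reach(seed_shares: int, reshare_prob: float,
                    avg_followers: int, rounds: int, rng_seed: int = 0) -> int:
    """Each round, every sharer exposes avg_followers people; each exposed
    person reshares with probability reshare_prob."""
    rng = random.Random(rng_seed)
    sharers, reached = seed_shares, 0
    for _ in range(rounds):
        exposed = sharers * avg_followers
        reached += exposed
        sharers = sum(rng.random() < reshare_prob for _ in range(exposed))
    return reached

organic = simulated_reach(seed_shares=5,   reshare_prob=0.01, avg_followers=100, rounds=4)
boosted = simulated_reach(seed_shares=500, reshare_prob=0.01, avg_followers=100, rounds=4)
print(organic, boosted)  # bot seeding multiplies exposure roughly 100-fold
```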
The future of computational propaganda
The technology and tactics used to wage information warfare are evolving at an alarming rate.
Over the past 2 years, the battlefield has swelled to include new governments and domestic organizations hoping to mold public opinion and influence domestic and international affairs.
For example, China expanded its disinformation operations to influence elections in Taiwan, discredit protests in Hong Kong, and deflect responsibility for its role in the outbreak of Covid-19. Meanwhile, Azerbaijan used disinformation tactics to suppress political dissent, with Facebook suspending account activity for a youth wing of the country’s ruling party in October 2020.
Participants in information warfare can easily find the tools to identify cleavages in society, target individuals and groups, develop compelling content, and promote it via bots and strategic placement — all that’s required is access to the internet.
Because of their low cost, even a small chance of success makes these activities worthwhile. Without a significant increase in the cost of participating, information warfare will likely continue on its upward trajectory.
While people are showing more skepticism about content online, they’re also facing increasingly sophisticated fakes. Together, these trends call into question the modern meaning of truth.
In 2016, Oxford Dictionaries named “post-truth” its Word of the Year, defining it as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.”
At this point, it’s generally accepted that we have not entered a post-truth world: malicious actors exert a minor, but still significant, impact on the public’s trust in information. To avoid plunging headlong into a post-truth world, governments and companies must anticipate, prepare for, and prevent further abuse of emerging technologies.
Emerging solutions in the fight against digital deception
The issue of disinformation is too large for any one entity to solve, considering the sheer volume of content that surfaces on the internet:
- 70M blog posts are published on WordPress each month
- 500M tweets are sent each day
- 300 hours of video are uploaded to YouTube every minute
Combating disinformation requires a coalition of governments, private companies, and academic and non-profit organizations. Fortunately, from identifying fake images and videos to detecting bots, efforts have come a long way.
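On the bot-detection front, many published approaches begin with simple behavioral heuristics before applying machine learning. The sketch below scores an account on a few such signals; the fields, thresholds, and weights are illustrative assumptions, not any production system’s rules.

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int
    posts_per_day: float
    followers: int
    following: int
    default_profile_image: bool

def bot_score(a: Account) -> float:
    """Heuristic 0-1 score; higher means more bot-like. Weights are illustrative."""
    score = 0.0
    if a.age_days < 30:
        score += 0.25   # very new account
    if a.posts_per_day > 50:
        score += 0.30   # inhuman posting rate
    if a.following > 10 * max(a.followers, 1):
        score += 0.25   # follows far more accounts than follow back
    if a.default_profile_image:
        score += 0.20   # no profile personalization
    return min(score, 1.0)

suspect = Account(age_days=7, posts_per_day=120, followers=3,
                  following=800, default_profile_image=True)
print(bot_score(suspect))  # 1.0 -> flag for human review
```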