The Rising Threat of Deepfakes: How Governments and Law Enforcement Are Responding

In recent years, the rapid advancement of artificial intelligence (AI) has given rise to a troubling new phenomenon known as deepfakes. Deepfakes are highly realistic videos, images, or audio recordings that use AI to depict people saying or doing things they never actually said or did.

While some deepfakes are created for benign purposes like satire or entertainment, the technology is increasingly being weaponized by bad actors to spread disinformation, manipulate elections, and facilitate a wide range of criminal activities.

A 2022 Europol report recounted how the U.S. government had revealed intelligence suggesting Russia was plotting to create a deepfake video depicting violence by Ukrainian troops against Russian civilians or troops. The fake video would have served as a pretext for Russia to invade Ukraine. While it remains unclear whether Russia ever produced the video in question, the incident underscores the potential for deepfakes to destabilize international relations and even instigate military conflict.

As deepfakes grow more sophisticated and accessible, addressing this challenge is becoming a top priority for governments and law enforcement agencies worldwide. This article will delve into the deepfake threat, its criminal applications, and how policymakers and police are fighting back against this new form of high-tech deception.

Understanding Deepfakes

At their core, deepfakes are synthetic media generated or manipulated using AI. While the technology has some positive applications in areas like gaming and entertainment, deepfakes are increasingly being used for malicious purposes such as spreading disinformation.

According to a report by Europol, law enforcement experts are deeply concerned about the consequences of disinformation, fake news, and social media manipulation on political and social discourse. These trends are expected to become more pronounced as deepfake technology grows more sophisticated. Participants in Europol’s strategic foresight activities were especially alarmed by the potential weaponization of social media and the impact of misinformation on public discourse and social cohesion.

The challenge posed by deepfakes is compounded by a general lack of public awareness. A 2019 study by iProov in the UK found that 72% of people were unaware of deepfakes and their potential impact. This is particularly worrying because if people don’t know that such technology exists, they are less likely to question the authenticity of the media they consume. Even more concerning, recent experiments suggest that simply increasing awareness of deepfakes may not improve people’s ability to spot them.

Criminals are expected to exploit this knowledge gap. Researchers anticipate that bad actors will ramp up their use of deepfakes in the coming years for a variety of nefarious purposes, from harassment and extortion to fraud and disinformation. As the technology advances, it will become increasingly difficult for the average person to distinguish real media from AI-generated fakes.

This underscores the urgent need for greater public education about the deepfake threat and robust strategies from law enforcement and policymakers to counter the malicious use of this technology. Staying ahead of the curve will require a proactive, collaborative approach that combines expertise from the tech sector, government, academia, and law enforcement.

The Technology Behind Deepfakes

Deepfake technology leverages the power of deep learning to manipulate or generate audio and audio-visual content. These AI models can produce highly realistic footage of people saying or doing things they never actually said or did, or even create entirely fictional personas. The rise of AI-generated deepfakes already has profound implications for how people perceive recorded media, and this impact is only expected to grow as the technology advances.
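
To make the underlying mechanics concrete, the sketch below shows the shared-encoder, per-identity-decoder autoencoder design popularized by early face-swap tools. It is a minimal illustration under stated assumptions, not any particular tool’s implementation: it assumes PyTorch, the layer sizes are arbitrary, and random tensors stand in for aligned face crops of two people, A and B.

```python
# Minimal, illustrative sketch of the shared-encoder / per-identity-decoder
# autoencoder behind classic face-swap deepfakes. Layer sizes are arbitrary;
# random tensors stand in for aligned 64x64 face crops.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()                          # one encoder shared by both identities
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity
params = (list(encoder.parameters())
          + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):  # toy loop; real training runs far longer on real data
    faces_a = torch.rand(8, 3, 64, 64)  # stand-in for person A's face crops
    faces_b = torch.rand(8, 3, 64, 64)  # stand-in for person B's face crops
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The "swap": encode a frame of person A, decode with B's decoder, so the
# output renders B's identity with A's pose and expression.
with torch.no_grad():
    swapped = decoder_b(encoder(torch.rand(1, 3, 64, 64)))
```

Because both identities pass through the same encoder, it learns a shared representation of pose and expression; swapping decoders at inference time is what transplants one face onto another.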

Deepfake Technology’s Impact on Crime

The criminal applications of deepfakes are vast and constantly evolving. Europol’s report provides a helpful framework for understanding the main categories of deepfake-related crime, which include:

  • Harassing or humiliating individuals online
  • Perpetrating extortion and fraud
  • Facilitating document fraud
  • Falsifying online identities and fooling “know your customer” mechanisms
  • Producing non-consensual pornography
  • Enabling online child sexual exploitation
  • Falsifying or manipulating electronic evidence for criminal justice investigations
  • Disrupting financial markets
  • Distributing disinformation and manipulating public opinion
  • Supporting the narratives of extremist or terrorist groups
  • Stoking social unrest and political polarization

However, this list is by no means exhaustive. As deepfake technology advances and becomes more accessible, bad actors will find new and creative ways to exploit it for criminal gain.

One of the most high-profile criminal uses of deepfakes is non-consensual pornography. A 2019 report by Deeptrace Labs found nearly 15,000 deepfake porn videos online, with over 90% of them targeting female celebrities without their consent. The report also noted the emergence of deepfake porn forums where users share and request explicit videos featuring both public figures and private individuals. This trend has troubling implications for online privacy and image-based sexual abuse.

Deepfakes are also increasingly being used for financial fraud. In 2019, criminals used AI voice cloning to impersonate an energy firm’s CEO and trick a UK subsidiary into transferring €220,000 to a fraudulent account. Similar cases have been reported in the UAE, where deepfake audio was used to scam a bank manager into transferring $35 million. As these examples illustrate, deepfakes can be highly effective at deceiving even cautious and sophisticated targets.

The use of deepfakes for political disinformation is another major concern. A 2021 report by the Brookings Institution warns that deepfakes could be used to undermine trust in institutions, exacerbate social divisions, and even provoke violence or war. The report cites several hypothetical scenarios, such as a deepfake video showing a political leader declaring war on another country or a fake news anchor spreading false information about election results. While no incidents on this scale have been reported, the potential for harm is clear.

Finally, the proliferation of “deepfakes as a service” on underground marketplaces is putting this technology into the hands of a growing number of criminals.

As these examples illustrate, the criminal threat posed by deepfakes is multifaceted and rapidly evolving. Law enforcement agencies must stay vigilant and proactive to keep pace with criminals’ use of this technology.

Deepfake Detection

As the threat of deepfakes grows, so does the need for reliable methods to detect them. While no perfect solution exists, researchers and technology companies are developing various techniques to help identify synthetic media. These methods fall into two main categories: manual detection and automated detection.

Manual Detection

Manual detection involves human analysts carefully examining a piece of media for signs of manipulation. While this approach is time-consuming and requires specialized training, it can be effective at spotting certain telltale signs of deepfakes, such as:

  • Blurring or misalignment around the edges of the face
  • Unnatural or inconsistent blinking patterns
  • Odd lighting or reflections in the eyes
  • Inconsistencies in hair, skin texture, or facial features
  • Discrepancies in the background or camera angle

However, manual detection has several limitations. It does not scale to the vast volume of media shared online, and it relies on human judgment, which can be fallible. As deepfake technology improves, the signs of manipulation are becoming more subtle and harder to spot with the naked eye.

Automated Detection

Automated detection uses AI algorithms to analyze media and flag potential deepfakes. These systems are trained on large datasets of real and synthetic media to learn the distinguishing features of each. Some common approaches include:

  • Convolutional neural networks (CNNs) that analyze individual frames of a video for signs of manipulation (a minimal sketch of this approach follows the list)
  • Recurrent neural networks (RNNs) that look for inconsistencies or anomalies across a sequence of video frames
  • Biological signal analysis that detects unnatural patterns in facial movements, such as pulse or breathing
  • Audio-visual analysis that identifies discrepancies between the audio track and the lip movements in a video
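
As a concrete illustration of the first approach, the sketch below trains a small frame-level CNN classifier. It is a minimal sketch under stated assumptions: it assumes PyTorch, random tensors stand in for a labeled dataset of real and fake face crops, and a production detector would use a far deeper network trained on a corpus such as FaceForensics++.

```python
# Minimal sketch of a frame-level deepfake classifier: a small CNN that
# outputs one logit per frame, where sigmoid(logit) is the estimated
# probability that the frame is fake. Random tensors stand in for data.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),  # single logit: "how fake does this frame look?"
)
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(100):  # toy loop; stand-in data only
    frames = torch.rand(16, 3, 64, 64)             # stand-in "face crops"
    labels = torch.randint(0, 2, (16, 1)).float()  # 1 = fake, 0 = real
    loss = loss_fn(classifier(frames), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Score a whole video by averaging per-frame fake probabilities.
with torch.no_grad():
    video = torch.rand(30, 3, 64, 64)  # 30 frames
    fake_score = torch.sigmoid(classifier(video)).mean().item()
print(f"Average fake probability: {fake_score:.2f}")
```

Averaging per-frame scores is the simplest aggregation strategy; the RNN-based approaches above instead model the frame sequence directly, catching temporal artifacts that no single frame reveals.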

Automated detection has the advantage of processing large volumes of media quickly and consistently. However, these systems are not foolproof. They can be vulnerable to adversarial attacks, where deepfakes are specifically designed to evade detection. They may also struggle with compressed or low-quality media, or with partial and subtle manipulations.

Challenges and Limitations

Both manual and automated deepfake detection face several challenges and limitations. One major issue is the arms race between deepfake creators and detectors. As detection methods improve, deepfake techniques evolve to become more sophisticated and harder to spot. This creates a constant need for detectors to adapt and stay ahead of the latest threats.

Another challenge is the lack of large, diverse, and up-to-date datasets for training and testing detection algorithms. Creating these datasets is time-consuming and expensive, and they quickly become outdated as deepfake technology advances. There are also ethical concerns around using real people’s images for training without their consent.

Finally, there are limitations to what current detection methods can achieve. They may be able to flag a piece of media as potentially manipulated, but they cannot always pinpoint what exactly has been changed or provide definitive proof of authenticity. In high-stakes situations like criminal investigations or legal proceedings, this ambiguity can be problematic.

Preventive Measures

Given the limitations of deepfake detection, experts also recommend a range of preventive measures to help mitigate the risk of deepfakes. These include:

  • Securing the content creation and distribution pipeline to prevent tampering
  • Using digital watermarking or blockchain-based authentication to prove the origin and integrity of media (a minimal hashing sketch follows this list)
  • Promoting media literacy and critical thinking skills to help people spot and resist disinformation
  • Developing legal and ethical frameworks to govern the use of synthetic media and punish malicious actors
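
The cryptographic primitive underneath most provenance and authentication schemes is a content hash. The sketch below is a minimal illustration using only the Python standard library: it fingerprints a file with SHA-256 and shows that any edit changes the digest completely. Real provenance systems go further, binding signed metadata into the media file itself; the file names here are stand-ins created by the script.

```python
# Minimal sketch of content-integrity checking via SHA-256 hashing.
# Any edit to a file, however small, produces a completely different digest.
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for a video published by its creator, who records its digest.
original = Path("original.mp4")
original.write_bytes(b"\x00\x01 stand-in video bytes for illustration")
published_digest = fingerprint(original)

# A circulating copy with a one-byte change, standing in for tampering.
copy = Path("copy.mp4")
copy.write_bytes(b"\x00\x02 stand-in video bytes for illustration")

print("Published digest:", published_digest)
print("Copy digest:     ", fingerprint(copy))
print("Bit-identical?   ", fingerprint(copy) == published_digest)
```

A bare hash proves only that a file is unchanged since its digest was recorded; pairing it with a digital signature, watermark, or blockchain entry is what lets third parties verify who published the media and when.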

Ultimately, addressing the challenge of deepfakes will require a multi-faceted approach that combines technological solutions, institutional safeguards, and societal resilience. By working on all these fronts, we can help create a future where the benefits of synthetic media can be realized while minimizing its potential for harm.

How Are Other Actors Responding to Deepfakes?

The threat of deepfakes has prompted a range of responses from different actors, including technology companies, governments, and civil society organizations. These responses aim to mitigate the harmful impacts of deepfakes through a combination of technological solutions, policy interventions, and public awareness campaigns.

Technology Companies

Many major technology companies are investing in deepfake detection and prevention tools to help combat the spread of synthetic media on their platforms. Some notable examples include:

  • Facebook: In 2020, Facebook announced a new policy banning deepfakes that are likely to mislead viewers. The company also partnered with academia and industry to create the Deepfake Detection Challenge, which aimed to spur the development of new detection algorithms.
  • Microsoft: Microsoft has developed a tool called Video Authenticator, which can analyze a still photo or video to provide a percentage chance that the media is artificially manipulated. The company has also launched an educational campaign to help people spot and resist disinformation.
  • Google: Google has contributed to the FaceForensics++ dataset, which contains over 1,000 real and manipulated videos for training and testing deepfake detection models. The company has also funded research into new detection methods and partnered with fact-checking organizations to combat disinformation.

These efforts show that technology companies are taking the deepfake threat seriously and working to develop solutions. However, critics argue that these measures are insufficient and that companies need to do more to proactively identify and remove malicious deepfakes from their platforms.

Governments and Policymakers

Governments around the world are also grappling with how to address the challenges posed by deepfakes. In the United States, several states have passed laws criminalizing the use of deepfakes for political disinformation, non-consensual pornography, or fraud. At the federal level, there have been calls for new regulations on deepfakes, such as requiring clear labeling of synthetic media or holding platforms liable for failing to remove malicious content.

In the European Union, the proposed AI Act would regulate the use of AI systems, including those used to create deepfakes. Under the Act, deepfakes would be subject to transparency requirements, such as clearly indicating that the content is artificially generated. The Act also proposes stricter rules for “high-risk” AI systems, which could include deepfake detection tools used by law enforcement.

Other countries, such as China and Australia, have also introduced or proposed new laws and regulations related to deepfakes. However, there are concerns that some of these measures could be used to stifle free speech or legitimate uses of synthetic media.

Civil Society and Academia

Civil society organizations and academic institutions are also playing an important role in responding to the deepfake threat. Some key initiatives include:

  • The Partnership on AI’s Media Integrity Steering Committee, which brings together experts from industry, academia, and civil society to develop best practices and tools for detecting and mitigating the impact of manipulated media.
  • The Witness Media Lab, which conducts research and advocacy on the ethical implications of synthetic media and provides resources for journalists and human rights defenders to authenticate digital content.
  • The MIT Media Lab’s Center for Advanced Virtuality, which explores the social and cultural implications of virtual and augmented reality technologies, including deepfakes.

These efforts aim to foster a more nuanced understanding of the risks and benefits of synthetic media, and to develop ethical frameworks for their use.

Public Awareness and Media Literacy

Finally, there is a growing recognition of the need for public awareness and media literacy initiatives to help people navigate the challenges posed by deepfakes. These initiatives aim to teach critical thinking skills, such as how to spot the signs of manipulation, verify sources, and resist disinformation.

Examples of media literacy resources include the Washington Post’s guide to spotting deepfakes, the University of Washington’s Calling Bullshit course, and the News Literacy Project’s Checkology platform. By empowering individuals to be more discerning consumers of media, these efforts can help build societal resilience against the malicious use of deepfakes.

Conclusion

The rise of deepfakes presents a complex and evolving challenge for our society. As the technology becomes more sophisticated and accessible, the potential for misuse grows, posing risks to individuals, organizations, and democratic institutions. From non-consensual pornography to political disinformation, the malicious applications of deepfakes are vast and constantly evolving.

For law enforcement agencies, deepfakes represent a particularly acute threat. They can be used to disrupt investigations, undermine the credibility of evidence, and erode public trust in the justice system. As criminals become more adept at using deepfakes, law enforcement will need to adapt by developing new tools, skills, and partnerships to stay ahead of the curve.

Detecting deepfakes is a critical part of this response, but it is not a panacea. While manual and automated detection methods can help flag suspicious content, they are not foolproof and may struggle to keep pace with the rapid advancement of deepfake technology. Moreover, detection alone does not address the underlying social and political factors that make deepfakes such a potent tool for disinformation and manipulation.

To effectively combat the threat of deepfakes, a multi-faceted approach is needed. This should include:

  • Technological solutions: Continued investment in research and development of deepfake detection and authentication tools, as well as secure content creation and distribution pipelines.
  • Policy interventions: Clear and enforceable legal frameworks governing the use of synthetic media, including penalties for malicious actors and protections for victims. International cooperation will be essential to address the cross-border nature of the threat.
  • Institutional safeguards: Robust procedures and standards for handling digital evidence in law enforcement and legal proceedings, as well as fact-checking and content moderation practices by media and technology companies.
  • Public awareness and education: Sustained efforts to promote media literacy, critical thinking skills, and responsible use of synthetic media. Empowering individuals to spot and resist disinformation is a key part of building societal resilience.
  • Collaborative partnerships: Close coordination between law enforcement, technology companies, academia, and civil society to share knowledge, develop best practices, and respond quickly to emerging threats.

By pursuing these strategies in tandem, we can help to mitigate the risks of deepfakes while harnessing the potential benefits of synthetic media for creativity, education, and innovation.

However, the challenges posed by deepfakes are not static. As the technology continues to evolve, so too will the threats and the responses needed to counter them. Staying ahead of the curve will require ongoing vigilance, adaptation, and collaboration from all stakeholders.

The Europol Innovation Lab’s report on deepfakes and law enforcement is a valuable contribution to this effort. By providing a comprehensive overview of the current state of the technology, its criminal applications, and the impact on law enforcement, the report helps to raise awareness and stimulate discussion about this critical issue.

But the report is also a call to action. It underscores the urgent need for law enforcement agencies to invest in their capabilities, partnerships, and preparedness to deal with the deepfake threat. It highlights the importance of proactive, forward-looking approaches that anticipate future risks and opportunities.

Ultimately, the challenge of deepfakes is not just a technological or legal one, but a societal one. It requires us to think critically about the nature of truth, trust, and accountability in the digital age. It demands that we develop new norms and mechanisms for distinguishing reality from fiction, and for holding those who seek to deceive us to account.

By working together to address these challenges, we can help to create a future in which the transformative potential of synthetic media is realized, while its risks are effectively managed. The Europol Innovation Lab’s report is an important step in this direction, and a reminder of the vital role that law enforcement must play in shaping this future.
