Editorial

AI face swapping poses a challenge to identity verification systems

704 percent increase in face swap attacks reported as use of deepfakes skyrockets.

Posted 8 February 2024 by Christine Horton


Face swaps are created using generative AI tools and present a huge challenge to identity verification systems.

That’s according to the iProov Threat Intelligence Report 2024: The Impact of Generative AI on Remote Identity Verification.

A face swap can easily be generated with off-the-shelf video face-swapping software and is deployed by feeding the manipulated or synthetic output to a virtual camera. While biometric systems can be made resilient to this type of attack, malicious actors are exploiting a loophole in some systems by using cyber tools, such as emulators, to conceal the existence of virtual cameras, making the injected feed harder for biometric solution providers to detect.
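To illustrate the loophole described above, here is a minimal, hypothetical sketch (not iProov's method) of one naive defensive signal: checking whether the reported capture-device name matches a known virtual-camera product. The device names are illustrative placeholders, and, as the article notes, emulators can conceal the virtual camera entirely, so a name check alone is easily defeated.

```python
# Naive virtual-camera name check. A real liveness system would combine
# many signals; this sketch shows only the simplest one, and it fails
# exactly when an emulator hides or renames the virtual camera.
KNOWN_VIRTUAL_CAMERAS = {
    "obs virtual camera",  # illustrative product names, not an exhaustive list
    "manycam",
    "xsplit vcam",
}

def is_suspect_device(device_name: str) -> bool:
    """Flag a capture device whose reported name matches a known virtual camera."""
    name = device_name.lower()
    return any(vc in name for vc in KNOWN_VIRTUAL_CAMERAS)

print(is_suspect_device("OBS Virtual Camera"))  # True: known virtual camera
print(is_suspect_device("FaceTime HD Camera"))  # False: looks like real hardware
```

The weakness of this check is precisely the point the report makes: once the attacker controls the metadata the device reports, any defence built on that metadata collapses.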

“Generative AI has provided a huge boost to threat actors’ productivity levels: these tools are relatively low cost, easily accessed, and can be used to create highly convincing synthesized media such as face swaps or other forms of deepfakes that can easily fool the human eye as well as less advanced biometric solutions. This only serves to heighten the need for highly secure remote identity verification,” said Andrew Newell, chief scientific officer at iProov.

“While the data in our report highlights that face swaps are currently the deepfake of choice for threat actors, we don’t know what’s next. The only way to stay one step ahead is to constantly monitor and identify their attacks, the attack frequency, who they’re targeting, the methods they’re using, and form a set of hypotheses as to what motivates them.”

The evolution of digital injection attacks

iProov first observed threat actors using emulators and metadata spoofing to launch digital injection attacks across different platforms in 2022. These techniques continued to dominate in 2023, growing by 353 percent from H1 to H2 2023. An emulator is a software tool that mimics a user’s device, such as a mobile phone. These attacks are rapidly evolving and pose significant new threats to mobile platforms: injection attacks against mobile web surged by 255 percent from H1 to H2 2023.
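The metadata spoofing described above can be made concrete with a small, hypothetical sketch (not iProov's detection logic): an emulator that fakes device metadata may report field combinations that never occur together on real hardware, so cross-checking claimed fields against known device profiles can expose it. The device profiles below are illustrative placeholders.

```python
# Hypothetical cross-check of device metadata fields. A spoofed session may
# claim a model whose platform or screen geometry doesn't match the claim.
DEVICE_PROFILES = {
    "iPhone14,2": {"platform": "iOS", "screen": (1170, 2532)},   # illustrative
    "Pixel 7":    {"platform": "Android", "screen": (1080, 2400)},
}

def metadata_consistent(model: str, platform: str, screen: tuple) -> bool:
    """Return True only if the claimed model, platform, and screen agree."""
    profile = DEVICE_PROFILES.get(model)
    if profile is None:
        return False  # unknown model: treated as inconsistent in this sketch
    return profile["platform"] == platform and profile["screen"] == screen

# An emulator claiming an iPhone model but reporting an Android platform
# fails the cross-check.
print(metadata_consistent("iPhone14,2", "iOS", (1170, 2532)))      # True
print(metadata_consistent("iPhone14,2", "Android", (1170, 2532)))  # False
```

In practice a sophisticated emulator can spoof all of these fields consistently, which is why such checks are only one layer among many.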

Advances in collaboration and sophistication

Across 2022 and 2023, indiscriminate attacks occurred at rates ranging from 50,000 to 100,000 per month. There was also a considerable increase in the number of threat actors and in the sophistication of the tools they used.

iProov’s analysts also observed significant growth in the number of groups exchanging information about attacks against biometric and remote human identification (“video identification”) systems, evidence of an increasingly collaborative approach among threat actors. Almost half (47 percent) of the groups identified were created in 2023.

New trends for 2023

iProov said there are two primary attack types: presentation attacks and digital injection attacks. Among the new trends discovered for 2023 are: 

  • A significant increase in the deployment of packaged AI imagery tools, which make it far easier and quicker to launch an attack; this trend is only expected to accelerate.
  • A 672 percent increase from H1 2023 to H2 2023 in the use of deepfake media, such as face swaps, deployed alongside metadata spoofing tools. Presentation and digital injection attacks have different levels of impact, but they can pose a significant threat when combined with traditional cyberattack tools like metadata manipulation.

If you are interested in this article, why not register to attend our Think Digital Government conference, where digital leaders tackle the most pressing issues facing government today.
