Deepfake fraud
Deepfake technology adds a new dimension to synthetic identity, making this costly fraud even more accessible to cybercriminals. Even before AI-enhanced fake IDs, conventional synthetic identity fraud had already cost banking around $6 billion in 2023. But it isn’t just banking that needs to take on identity fraud. Governments are at the coalface of document identity, issuing IDs and relying on them to verify citizens, which puts their services squarely in the sights of identity fraudsters. A recent article by Think Digital Partners shares findings from GB Group estimating that around 8.6 million people in the UK have used fake or fraudulent identities, or someone else’s identity, to gain access to goods, services, or credit.

Gen AI-enabled identity fraud is only making this worse. Research shows that over half a million video and voice deepfakes were shared on social media in 2023, and AI is heralding a new era in automated synthetic identity. This proliferation has been driven by ‘cheapfakes’: convincing AI-faked voices or videos that cost just a few dollars to generate. These deepfake IDs can be used for various scams, including accessing government services with fake identity credentials, a threat that will bring chaos to online (and offline) government service access unless systems are implemented to mitigate the risk of deepfake ID.
Deepfakes and static identity verification
Identity verification is typically a static process, and fraudsters take advantage of this fact. For example, if you wish to use government services, you will likely be asked to provide a passport or driver’s license when setting up an account. The COVID-19 pandemic normalised remote verification. Remote checks require the presentation of identity documents and are often augmented by credit reference agency (CRA) checks. Sometimes, liveness checks are also requested, but fraudsters are finding ways to use deepfakes to compromise this process too. A “liveness bypass” uses face spoofing to trick facial recognition by hijacking the camera feed and injecting a deepfake video. Alternatively, fraudsters can compromise a server and modify or swap biometric data: fraudsters are masters of manipulation, of both human and digital targets. If there is a way around a barrier, they will find it.
Next-gen automated ID fraud is leveraging the increase in remote, ID document-based checks. Gen AI service sites such as OnlyFake offer Fraud-as-a-Service to “democratise fraud”, making it cheap and easy for anyone to get in on the act. An investigation by reporter Joseph Cox of 404 Media shows how cheap, quick, and easy it is to use sites like OnlyFake to create the spoof ID documents needed to set up online accounts.
The automation of synthetic identity fraud is escalating the war of attrition between fraudsters and governments. However, mitigating AI-enabled fraud is not easy. The use of AI-enabled anti-fraud solutions is just part of the answer. Step-up, orchestrated risk-based verification is a way to meet AI-enabled fraud head-on while delivering an inclusive, usable identity system.
How can orchestrated identity verification help reduce the threat of deepfake identity?
Orchestrated identity verification uses risk-based verification (RBV); this system can be thought of as analogous to risk-based authentication (RBA). In the case of RBA, rules drive the level of authentication required to access an account. For example, if you log in to an account from an unrecognised location, a risk-based approach to authentication may ask for an additional credential to allow login to proceed. Other methods of risk-based sign-in may involve behavioural biometrics. RBV is analogous, but orchestration makes it ideal for verification, where citizens’ needs are as important as security.
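To make the analogy concrete, here is a minimal sketch of an RBA-style rule in Python. The field names and decision labels are illustrative assumptions, not any particular product’s API.

```python
# Minimal sketch of a risk-based authentication (RBA) rule, for illustration only.
# The field names and decision labels are hypothetical assumptions.

def rba_decision(login: dict, known_locations: set) -> str:
    """Return the authentication requirement for a login attempt."""
    if login["location"] not in known_locations:
        # Unrecognised location: ask for an additional credential (e.g. a one-time code).
        return "step_up"
    return "allow"

print(rba_decision({"location": "Lagos"}, {"London", "Manchester"}))   # step_up
print(rba_decision({"location": "London"}, {"London", "Manchester"}))  # allow
```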
Orchestrated risk-based verification (oRBV) is rules-driven; verification decisions occur at the point of registration or during a re-verification event after an account has been created. Rules determine the level of verification required to ensure that the individual is who they say they are and that they meet the requirements of the resource being accessed. If the system recognises a suspicious verification event, say a deepfake is spotted, a rule will initiate further verification checks. The rule may even require that the person uses face-to-face (F2F) checks or identity vouching to complete their identity verification. The key to using orchestrated risk-based verification is to ensure it is flexible and dynamic in its execution.
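As a rough illustration, a rules-driven oRBV engine could be sketched like this; the rule conditions, scores, and check names are assumptions chosen for the example rather than a reference design.

```python
# Hypothetical sketch of rules-driven oRBV: each rule maps a verification signal
# to the extra checks it triggers. Conditions and check names are assumptions.

RULES = [
    (lambda e: e.get("deepfake_suspected", False), ["document_rescan", "f2f_or_vouching"]),
    (lambda e: e.get("cra_match_score", 1.0) < 0.6, ["additional_evidence"]),
]

def required_checks(event: dict) -> list:
    """Return the step-up checks a verification event must complete."""
    checks = []
    for condition, extra in RULES:
        if condition(event):
            checks.extend(extra)
    return checks

# A suspected deepfake routes the person towards F2F or vouching checks.
print(required_checks({"deepfake_suspected": True}))
print(required_checks({"cra_match_score": 0.4}))
```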
Orchestrated risk-based verification (oRBV) is an ideal way to help mitigate the impact of deepfakes on identity services. An oRBV approach can begin a citizen’s verification journey by connecting the government service to one or more deepfake detectors. If a deepfake is suspected, a rule can stop the journey or request further verification evidence. In this way, false positives are reduced, and mistakes that could anger citizens are avoided. At the same time, the risk of a deepfake identity being used to create an account is minimised.
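That pattern might look something like the sketch below, which aggregates scores from one or more hypothetical deepfake detectors and decides whether to continue, step up, or stop the journey; the thresholds are assumptions tuned in the direction of avoiding false positives.

```python
# Illustrative sketch only: aggregate deepfake detector scores (0 = genuine,
# 1 = certainly fake) into a journey decision. Thresholds are assumptions.

def deepfake_outcome(scores, block_at=0.9, review_at=0.5):
    """Decide the next step of a verification journey from detector scores."""
    worst = max(scores)
    if worst >= block_at:
        return "stop_journey"            # high confidence the evidence is a deepfake
    if worst >= review_at:
        return "request_further_checks"  # suspicious: step up rather than block outright
    return "continue"                    # low risk: keep false positives down

print(deepfake_outcome([0.2, 0.95]))  # stop_journey
print(deepfake_outcome([0.55, 0.3]))  # request_further_checks
print(deepfake_outcome([0.1, 0.2]))   # continue
```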
oRBV services modify user journeys dynamically using an identity orchestration and decisioning engine (ODE). The extensibility offered by an orchestration service allows governments to leverage a variety of identity checks, including deepfake detection. An ODE provides the much-needed scope to handle the array of verification needed for the diversity of individuals creating online accounts. Governments must ensure that citizens are not adversely affected by the need to mitigate deepfake IDs. Instead, the service must offer individuals verification choices that maintain a great customer experience while preventing the use of fake identity documents. In practice, this means designing the service as a system: systems thinking is the only way to build reliability and cyber-resilience into identity-related user journeys, verifying each registration or resource-access event using a risk-based approach. Identity orchestration ensures that the human experience of a government service is optimised, adjusting to the needs of the service and balancing security and usability.
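One way to picture an ODE adjusting a journey at run time is the sketch below; the step names, risk signals, and decision logic are assumptions made for illustration, not a description of any specific orchestration product.

```python
# Hypothetical sketch of an orchestration and decisioning engine (ODE) extending
# a verification journey based on risk signals. Step names are assumptions.

BASE_JOURNEY = ["collect_document", "liveness_check"]

def build_journey(signals: dict) -> list:
    """Return the verification steps for this citizen, adjusted to risk signals."""
    journey = list(BASE_JOURNEY)
    if signals.get("deepfake_score", 0.0) > 0.5:
        # Offer a choice of stronger routes instead of simply rejecting the person.
        journey.append("choose_f2f_appointment_or_vouching")
    if signals.get("thin_credit_file", False):
        # An inclusive alternative when CRA data is sparse.
        journey.append("alternative_evidence")
    return journey

print(build_journey({"deepfake_score": 0.7}))
print(build_journey({"thin_credit_file": True}))
```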
How to use rules to spot a deepfake and stop it in its tracks
Deepfakes are not yet perfect, and spotting one can be done in real time. Poor colour matching between images and irregular shadowing can be tell-tale signs of deep fakery. An increasing number of solutions on the market will help identify deepfake identity documents, but is spotting deepfakes enough? A system must mitigate the impact of deepfake ID while remaining easy to deploy and maintain and still delivering excellent customer service.
Hardened, robust, and usable identity-enabled services come down to offering choices:
- Choices in deepfake detection solutions
- Choices in anti-fraud checks
- Choices in KYC and other identity checks
- Choices in the use of vouching and other out-of-band (OOB) channels
Whether conventional or AI-enabled, synthetic identity account creation can be prevented using identity orchestration and decisioning. When an account registration presents suspicious credentials, the system modifies the user journey: stepping up verification by requesting further checks, even F2F ones, or stepping down the checks needed if the use case allows. This will stop even the most ardent fraudster from creating an account, as illustrated in the sketch below.
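A simple, hypothetical view of that step-up / step-down decisioning might look like the following; the tiers and labels are assumptions for illustration only.

```python
# Illustrative sketch: the verification tier depends on both the suspicion raised
# by the presented credentials and the risk of the use case. Tiers are assumptions.

def verification_tier(suspicious: bool, use_case_risk: str) -> str:
    """Choose a verification tier: 'f2f' is the strongest, 'basic' the lightest."""
    if suspicious:
        return "f2f"       # step up: face-to-face or equivalent checks
    if use_case_risk == "low":
        return "basic"     # step down: a lighter document check is enough
    return "standard"      # document + liveness + CRA checks

print(verification_tier(True, "low"))    # f2f
print(verification_tier(False, "low"))   # basic
print(verification_tier(False, "high"))  # standard
```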
Think Digital Partners is pleased to announce a new event for 2024. Think Digital Identity and Cybersecurity for Government takes place in London on May 8. Find out more and get your ticket here.