
What happens to KYC when generative AI can spoof biometrics?

Remember that scene from the 2002 sci-fi movie “Minority Report,” when Tom Cruise’s character has his eyes replaced to avoid being identified? He then accidentally drops his original eyeballs down a ramp, and his panic is palpable as he realises that his very identity is rolling away.

Well, we’re not there yet, but Steven Spielberg’s movie, set in 2054, barely 30 years away, hits a little too close to home.

With generative AI coming of age, and biometrics used heavily in KYC via electronic identity verification (EIV) checks, it’s time to consider the profound impact of spoofs and deepfakes on the biometric identifiers used in anti-money laundering (AML).

In this paper we investigate the rise of biometrics as a verification technology, then run a thought experiment: what if AI becomes powerful enough to make biometrics near useless? We wrap up by considering what can be done to address these risks in the compliance space.

A brief history of biometrics and AML.

Public acceptance

From criminal fingerprinting in the 1880s to Face ID launching on iPhones in 2017 (and being re-worked for face masks in 2020), the broad acceptance of biometrics as a means of authentication is nearly complete. Voice, retina, face, even full-body scans are all used daily.

The public accepts this intimate invasion of privacy because it’s convenient. Companies push for this technology because it’s cheap, reliable (mostly) and vastly more secure than a badly chosen password. 

Business insistence

In 2013 Trulioo introduced their first production-grade EIV solution. In 2016 Onfido brought their fully fledged document and biometric verification solution to market. In 2018 Jumio came roaring back after bankruptcy and acquisition with their own EIV solution. The EIV market is on a growth trajectory and everyone is getting in on it.

Industry impetus

Then, in 2020, the FATF all but openly recommended biometrics as a form of EIV when it released guidance on digital ID stating, “Reliable digital ID can make it easier, cheaper and more secure to identify individuals… It can also help with transaction monitoring requirements and minimise weaknesses in human control measures.”

So what are the risks?

Two eyes and 10 fingers

As Bruce Schneier, the internationally renowned security technologist, recently discussed with The Economist, “One of the biggest risks we don't talk about a lot is that you can't recover from a [biometric] failure. If my password gets stolen I can create a new [one]. If I'm using my thumbprint and it gets stolen I can't get another thumb. I mean, yes, you have two eyes and 10 fingers, but that really misses the point. Biometrics are not something you can create on the fly. They are singular and they're all you've got.”

Smart devices = smart targets

And as biometrics are used more widely on more inane things touted as ‘smart devices’ (doors, toys, lights, fridges…), the risk of hacking and the loss of biometric identity vastly increases.

Schneier explains, “Those [smart devices] have much less security and much less well designed software, so they are more vulnerable. Apple has hundreds of engineers working full time on their phones and dozens of security engineers who are doing security. You have something like a door lock or a light and it’s often designed offshore, ad hoc by a third-party team that comes together to write the code and then disperses. They're not paying attention to security as much because the market doesn't reward it. They're not sticking around. So there are no patches that can be written, it’s ad hoc, and less secure.”

A thought experiment.

Biometric databases everywhere

And so we arrive at our thought experiment. Generative AI cannot currently fake fingerprints or retina scans; they’re too unique and there aren’t underlying data sets available to learn from (yet). But AI can create synthetic voices (audio), faces (photo) and proof of liveness (video).

So what happens if, in a world where biometric authentication is used for both the inane and the important, that data is hacked and generative AI is used to create synthetic biometrics?

Given that the pass or fail of a biometric check comes down to statistics (how closely the presented biometric matches the original), there is ample room for error, and it’s already happening. In fact, just this year a tech journalist used a synthetic voice to hack into a bank account.
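
To make that statistical pass/fail concrete, here is a minimal, purely illustrative Python sketch of a threshold-based biometric match. The embedding comparison and the 0.80 threshold are assumptions for illustration, not any particular EIV vendor’s implementation.

    import numpy as np

    MATCH_THRESHOLD = 0.80  # illustrative; real systems tune this to their risk appetite

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        # Compare two biometric embeddings, e.g. face vectors from a neural network.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def biometric_match(enrolled: np.ndarray, presented: np.ndarray) -> bool:
        # Pass or fail is a statistical decision: similarity above a threshold passes.
        # A synthetic biometric does not need to be identical, only close enough.
        return cosine_similarity(enrolled, presented) >= MATCH_THRESHOLD

The threshold is the point: any statistical decision boundary leaves a margin that a good enough fake can aim for.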

The scenario

If synthetic voices are already being used to hack bank accounts, imagine five years from now. The internet is overrun with deepfakes. No one knows what’s real, so nobody trusts anything. Legislators and industry bodies are trying to create ‘authentication badges’, but they’re slow and not widely adopted. Instead, it’s an arms race between human minds and generative AI fakes.

The fight for an ID

In one corner, we have our bad guy

A bad actor wants to launder money through a London property; however, he is on a sanctions list and can’t buy directly. Much like our opening scenario with Tom Cruise, this bad actor simply needs to change his biometrics in order to present as an upstanding citizen and fool the system. In this world of indistinguishable sights and sounds, it’s a lot easier: he needs only a set of synthetic biometrics, not physical ones, to gain a new identity.

The bad actor pays some other nefarious group to create his synthetic identity. A passport data leak occurred a few years back, and the information (identity documents) is freely available on the dark web. The group chooses an identity, then uses generative AI to make a face that matches the chosen stolen identity. Remember, EIV checks are based on statistics: the generated face needs only to be well over 50% similar to pass, so duping the system is possible.

In the other corner, we have our good guys

In this thought experiment we’re in an AI arms race, so EIV providers are working to make their systems better than the 2023 versions, which used document texture analysis, pixel compression patterns, spoofing heatmaps and more.

But we’re also up against complacency and the enduring tug of war between compliance and commercial drivers. The London real estate agency wants a quick deal, so it runs the EIV without liveness detection. There goes the spoofing heatmap safeguard.

In this case the London real estate agent also needs evidence of the source of funds. No problem there either. In this future world, generative AI can fake that too, much as education qualifications were faked back in the 2000s. What’s obviously a fake now was indistinguishable back then. The future could be the same.

Believable? Sure is.

So our bad actor passes the tests and the deal is done. A human barely engaged. In almost every country, legislation does not require liveness EIV checks. Now imagine a future where humans can’t keep up with the AI arms race.

The shift to zero trust.

So how does a compliance professional address this all-fake future, or even a lesser version of it? It’s a two-sided approach, involving both the public and business.

A 2-factor mindshift for the public

Firstly, the public will need to move their 20th-century mindset into the 21st. No longer should people blindly accept that what they see or hear is real. Instead, they should apply a 2-factor authentication approach to any request for an EIV check.

EIV service providers, for their part, will need to build in more than just an SMS or email request to complete an EIV check.

Generative AI is already exceptionally good at creating phishing emails. In a zero-trust world, a client should no longer accept an anonymous request for their identity.
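
One purely hypothetical way to build that in: deliver a one-time confirmation code over two independent channels, and have the client refuse the request unless the codes match. Everything below is an illustrative sketch, not a real provider’s API.

    import secrets

    def issue_eiv_request() -> tuple[str, str]:
        # Provider side: generate a one-time confirmation code for this request
        # and deliver it over two channels (e.g. the email AND the provider's app).
        request_id = secrets.token_hex(8)
        code = secrets.token_hex(3)  # short, human-comparable code
        return request_id, code

    def client_confirms(code_in_email: str, code_in_app: str) -> bool:
        # Client side: treat the emailed request as phishing unless the code it
        # carries matches the one shown in the independently authenticated app.
        return secrets.compare_digest(code_in_email, code_in_app)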

A 3-factor approach to identity

What could this look like on the business side? Perhaps a message is sent, to which the client responds via a video call, confirming the requestor’s validity by asking them to show the company letterhead or perhaps the day’s newspaper.

Or perhaps government legislation mandates a 3-factor authentication EIV process that includes liveness, voice and face recognition. Spoofing one is easy, two is hard, but three? Nigh-on impossible.
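
A minimal sketch of such a 3-factor gate, assuming the individual detectors are supplied by a vendor (the check functions and factor names here are placeholders, not a real API):

    from typing import Callable

    Check = Callable[[bytes], bool]
    REQUIRED_FACTORS = {"liveness", "voice", "face"}

    def three_factor_eiv(evidence: dict[str, bytes], checks: dict[str, Check]) -> bool:
        # Refuse to verify if any factor is missing, rather than silently
        # degrading to a weaker check (the real estate shortcut above).
        if not (REQUIRED_FACTORS <= evidence.keys() and REQUIRED_FACTORS <= checks.keys()):
            return False
        # All three independent factors must pass; forcing an attacker to spoof
        # all of them at once is the point of the design.
        return all(checks[f](evidence[f]) for f in REQUIRED_FACTORS)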

OCDD now the norm

In a world in the midst of an AI arms race there’s no room for complacency. Robust ongoing customer due diligence (OCDD) processes will be vital to ensure that verifications are accepted at a point in time only. Any time a transaction occurs, compliance professionals will have to revert to a position of zero trust and identify the customer again.
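
As a hypothetical sketch of that rule: treat a verification as valid only at its point in time, so any later transaction forces re-identification. The field names are illustrative.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class Verification:
        customer_id: str
        verified_at: datetime

    def must_reverify(v: Verification | None, now: datetime,
                      ttl: timedelta = timedelta(0)) -> bool:
        # Zero trust sets the time-to-live to zero: a past check never carries
        # forward, so every new transaction triggers a fresh identity check.
        return v is None or (now - v.verified_at) > ttl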

Humans for the win

As is the case for general cyber security, the biggest defence against AI-spoofed biometrics is people. 

When humans are given a solid process, supportive technology and the space to focus on truly value-defining work, they can achieve exceptional things.

Technology can remove the mundane (document collection, report collation, customer chasing and workflow optimisation), but people, given the chance, can apply human intelligence across a number of compliance risk vectors and represent our best bet for catching money launderers using weaponised AI.


About First AML

First AML simplifies the entire anti-money laundering onboarding and compliance process. Its SaaS platform, Source, stands out as a leading solution for organisations with complex or international onboarding needs. It provides streamlined collaboration and ensures uniformity in all AML practices.

First AML transforms an otherwise complex and manual process into one that is simple, cost-effective, and compliant for businesses. By delivering efficiency and time savings, it protects reputations and enables companies to stay on the right side of history in the face of global threats.

Keen to find out more? Book a demo today! No time for a long demo? No problem. See what Source by First AML can do for your business in 2 minutes – watch the short demo here.
