Document and identification forgery are sought-after services on underground cybercriminal forums. Impersonating someone else, or even creating an entirely fictitious person, opens the door to a variety of fraudulent schemes, from taking over a mobile phone account to taking out loans in another person’s name. Verifying identity is important for compliance, trust and risk evaluation, and in many places it is also a binding obligation. Financial institutions are bound by “know your customer,” or KYC, regulations, which require customers to present identification. KYC rules are intended to prevent money mules, money launderers, terrorists and other criminals from opening accounts. Physical document forgery has always posed a risk. But increasingly, organizations rely on online identity verification, in which the verifier never lays hands on a driver’s license or utility bill or sees the applicant in person.
This has become a necessity for some businesses, such as cryptocurrency exchanges, which have only a virtual presence and no bricks-and-mortar retail outlets. Many banks, too, are closing retail branches due to declining demand and more capable online services. For verification, a person may be asked to photograph the front and back of their driver’s license or government-issued ID card. They may be required to take a selfie while holding up a piece of paper with a specific phrase, or to take video of themselves. The latter is sometimes referred to as “liveness” detection, and it is one of the primary ways identity verification services assess whether the person presented is indeed the person on the displayed identity document. Just as physical document forgeries can be difficult to spot, digital forgeries can be difficult to detect. The leaps made in the last few years by generative artificial intelligence models are now being applied to creating bogus or fraudulent identities. While we assessed last year that some cybercriminal offerings, particularly in the video sphere, were not mature and showed obvious flaws, we predicted this would not last long as the quality of generative AI improved.
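One building block of these checks is matching the selfie to the portrait on the document. The following is a minimal sketch of that step, assuming the open-source face_recognition library; the file names and distance threshold are illustrative assumptions, not any vendor’s actual pipeline, and a static match like this is precisely the gap liveness detection exists to close.

```python
# A minimal sketch of the selfie-to-document face-match step using the
# open-source face_recognition library. File names and the 0.6 cutoff
# (the library's documented default) are illustrative assumptions.
import face_recognition

# Load the ID document photo and the live selfie.
id_image = face_recognition.load_image_file("id_card.jpg")
selfie_image = face_recognition.load_image_file("selfie.jpg")

# Compute a 128-dimensional face embedding for each image.
id_encodings = face_recognition.face_encodings(id_image)
selfie_encodings = face_recognition.face_encodings(selfie_image)

if not id_encodings or not selfie_encodings:
    print("No face found in one of the images; verification cannot proceed.")
else:
    # Lower distance means a closer match.
    distance = face_recognition.face_distance(
        [id_encodings[0]], selfie_encodings[0]
    )[0]
    print(f"Face distance: {distance:.3f}")
    print("Likely same person" if distance < 0.6 else "Likely different people")
```

A match between two static images says nothing about whether either image depicts a live person in front of the camera, which is why services layer liveness checks on top of this step.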
On Feb. 5, 2024, the 404 Media technology publication reported on an underground service known as OnlyFake that claimed to use “neural networks” to generate realistic-looking photos of identification cards. The journalist leveraged the service to create a convincing image of a driver’s license from the U.S. state of California that appeared to have been photographed on a carpeted background. The low-cost fake ID service is offered with a variety of drop-down menu selections. Customers can select their own name, biographical information, address, expiration date and signature, as well as alter metadata to fabricate the device, date, time and GPS coordinates. 404 Media later used the artificially generated identification card to bypass KYC requirements on OKX, a cryptocurrency exchange tied to cybercriminal activity. The owner of the OnlyFake service allegedly claimed the generated identity cards could be used on other exchanges, such as Binance and Coinbase. Researchers questioned whether the identification cards were produced with artificial intelligence (AI) image generation tools or simply built from templates.
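The metadata the service fabricates lives in the image’s EXIF fields, which defenders can inspect for inconsistencies. Below is a minimal sketch using the Pillow library; the file name and field commentary are our illustrative assumptions, and because these fields are trivially editable, they can only corroborate authenticity, never prove it.

```python
# A minimal sketch of inspecting a submitted image's EXIF metadata, the
# same fields (device, date, time, GPS) that services like OnlyFake
# claim to fabricate. The file name is an illustrative assumption.
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("submitted_id.jpg")
exif = img.getexif()

if not exif:
    # Absent metadata is itself a signal: most genuine phone photos carry EXIF.
    print("No EXIF metadata present")
else:
    for tag_id, value in exif.items():
        tag_name = TAGS.get(tag_id, tag_id)
        # Fields such as Make, Model, DateTime and GPSInfo are trivially
        # editable, so they corroborate rather than prove authenticity.
        print(f"{tag_name}: {value}")
```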
We have been watching this threat actor for several years as the service has developed. What follows is a summary of our research, along with an assessment of the risks to organizations that rely on online identity verification.
Forgery Factory
In August 2023 and November 2023, we reported on GENERATOR 3.0, a component of the actor ProstoOtrisovka’s Passport Cloud (aka Cloud Passport) automated document forgery service, which was available via Telegram channels and at the onlyfake[dot]net and onlyfake[dot]org domains. Based on the similarity of those domain names to the OnlyFake service 404 Media covered, and the visual similarities of the panels, we assess these are likely the same service.
We first reported on ProstoOtrisovka in February 2021. At that time, the actor ran a document forgery service dubbed “Document forgery by Гаврюша” (Eng. Gavryusha). The service allegedly employed six people with years of experience in document forgery, and ProstoOtrisovka claimed to have more than 300 document templates on file. The team would edit a document template, print the forged document, laminate it and take a photo of it. The service used scanned copies of identity documents with high dots-per-inch (DPI) resolutions as templates; the actor specifically sought scanned images of IDs greater than 1,200 DPI on several underground forums as early as January 2021. In the course of our early research into ProstoOtrisovka, we also discovered multiple Photoshop document (PSD) templates the Passport Cloud service possibly used to generate fake documents.
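To put those resolutions in context, a quick back-of-the-envelope calculation shows why high-DPI scans make better templates. Assuming the standard ID-1 card format (85.60 x 53.98 mm, the size of most driver’s licenses):

```python
# Back-of-the-envelope: pixel dimensions of an ID-1 card (ISO/IEC 7810,
# 85.60 x 53.98 mm) at the scan resolutions the actor sought.
# Illustrative arithmetic only.
MM_PER_INCH = 25.4
WIDTH_MM, HEIGHT_MM = 85.60, 53.98

for dpi in (300, 1200, 2400):
    width_px = round(WIDTH_MM / MM_PER_INCH * dpi)
    height_px = round(HEIGHT_MM / MM_PER_INCH * dpi)
    print(f"{dpi} DPI: {width_px} x {height_px} pixels")

# 300 DPI:  1011 x 638  (typical office scan)
# 1200 DPI: 4044 x 2550 (fine print and microtext become legible)
# 2400 DPI: 8088 x 5100 (near print-master fidelity for a template)
```

At 1,200 DPI and above, security features such as microtext and fine background patterns are captured sharply enough to be reproduced in a convincing template, which is consistent with the actor’s specific sourcing requirements.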
In 2023, we reported on the third version of the tool, a fully automated analog of the second version that no longer required Photoshop; instead, it generated documents based on the data provided. The actor claimed the third version was developed over a year and a half and included a feature for generating documents with randomized data.
According to 404 Media, OnlyFake’s owner, “John Wick” (likely ProstoOtrisovka’s Telegram username, or a persona run by OnlyFake’s customer support), claimed to have started creating document templates three years ago. In the article, John Wick stated the generator service itself had been in development for over a year and a half and was created by feeding it a large collection of images of identity documents. The owner also allegedly preferred to work from high-resolution scans of ID cards and hologram photos taken from several angles, offering to purchase ID cards scanned at a resolution of 1,200 to 2,400 DPI. We observed the actor ProstoOtrisovka post in forums offering to pay for scans of U.S. driver’s licenses from “rare” states. We assess ProstoOtrisovka is likely the John Wick of the 404 Media report.
404 Media briefly quoted Hany Farid, a professor at the University of California, Berkeley, who stated he suspected the OnlyFake service was not using generative AI tools but rather was placing user-submitted images into a pre-existing template of an ID. This aligns with the activity we have observed from ProstoOtrisovka since 2021. More traditional methods of KYC verification bypass, such as fake documentation and mules, are far from new. While the OnlyFake service is likely not leveraging true deepfake technology, we are nonetheless seeing a growing number of KYC verification bypass offerings that claim to leverage generative AI.
Assessment
The capabilities and quality of AI image and video generators will continue to climb, but so will the capabilities to detect fakes. Threat actors almost certainly will continue trying to implement the technology in their fraud schemes, whether through illicit services or by repurposing legitimate tools and software for illegal activity. In January 2024, at least two notable instances emerged on social media of users employing Midjourney and Stable Diffusion, legitimate AI image generation software, to create verification images of artificial humans holding up an ID to a camera (see here and here).
ID verification that requires a photo of a person holding a specified item has posed a hurdle to threat actors, as have photos that contain writing. Several underground services exist where users can purchase photos of people holding items up to the camera, onto which fraudulent documents can be superimposed. So far, threat actors have struggled to circumvent detections of live video and certain verification images. But in the last quarter of 2023, we observed a threat actor with a credible reputation offering deepfake ID verification videos and manipulated ID photos. These services are purportedly used to create and unlock cryptocurrency accounts, as well as accounts on other websites such as food-delivery services, using false identity documents. Although we cannot reveal more information about this threat actor, the offers appear to be credible, with few complaints from buyers. At first glance, the fraudulent videos and images would be challenging to detect, although we could not determine whether the samples actually showed people matched to their own genuine IDs, which would make them intentionally misleading examples designed to draw in buyers.
Because of the risks AI poses for misinformation and disinformation as well as fraud, there are numerous academic projects and commercial products to detect videos created or modified with AI. These include a deepfake project at MIT, Microsoft’s Video Authenticator and Intel’s FakeCatcher. ID verification has become a competitive market because online verification offers organizations vast savings when onboarding new customers. Vendors in the space include Onfido, Sensity AI, Jumio, Sumsub, Regula and Veriff. Some of these vendors claim to use their own AI solutions to detect fraud, in combination with human verification and external data sources such as government records, to score whether an ID verification attempt may be fraudulent. Research analyst firm Gartner says one of the biggest risks to the digital verification market is that advancing AI attacks will undermine its integrity. Gartner also predicts this product category will decline in the coming years as “portable” digital identity solutions are created. Portable identities are systems in which identity credentials that have already been vetted can be offered to other entities and accepted as legitimate, without those entities having to vet the credentials themselves. One example would be government-administered digital ID programs.
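The scoring pattern these vendors describe can be illustrated with a purely hypothetical sketch. The signal names, weights and threshold below are our assumptions, not any vendor’s actual model; the point is only that several independent checks are combined into one risk score, with the accepting organization choosing the cutoff.

```python
# A purely hypothetical sketch of the scoring pattern vendors describe:
# independent signals combined into one risk score, with the accepting
# organization choosing the threshold. Signal names, weights and the
# cutoff are illustrative assumptions, not any vendor's model.

SIGNAL_WEIGHTS = {
    "document_template_match": 0.30,  # does the ID match known genuine layouts?
    "face_match": 0.25,               # selfie vs. document portrait
    "liveness": 0.25,                 # blink/motion checks on live video
    "metadata_consistency": 0.10,     # EXIF/device/GPS plausibility
    "external_records": 0.10,         # e.g., government-record lookups
}

def fraud_risk(signals: dict[str, float]) -> float:
    """Each signal is a 0.0-1.0 confidence that the check passed; the
    result is a 0.0-1.0 risk score (higher = more likely fraudulent)."""
    confidence = sum(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0)
                     for name in SIGNAL_WEIGHTS)
    return 1.0 - confidence

# The service provider, not the verification vendor, picks the cutoff:
# the lowest acceptable confidence discussed below.
ACCEPT_THRESHOLD = 0.20

applicant = {
    "document_template_match": 0.95,
    "face_match": 0.90,
    "liveness": 0.40,  # a weak liveness result drags the score down
    "metadata_consistency": 0.80,
    "external_records": 1.00,
}

risk = fraud_risk(applicant)  # 0.21 here, so the weak liveness check
print(f"Risk score: {risk:.2f} ->",  # pushes the case to manual review
      "manual review" if risk > ACCEPT_THRESHOLD else "accept")
```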
We expect threat actors to continually probe ID verification systems for weaknesses, and the success of those efforts may be vendor- or procedure-specific rather than a systemic failing of all types of digital ID verification systems. We would expect some systems to be circumvented some of the time, in an ever-shifting battle. When fraud later occurs on an account, security analysts will need to conduct a post-mortem on why the account was allowed to be created in the first place. Ultimately, it will be up to the service provider using an identity verification service to determine the lowest level of confidence it considers acceptable when approving an online customer application.