
Cybercriminals and AI: Not Just Better Phishing

Jun 12, 2024

The frenetic pace at which large language models (LLMs) are being released and their potential capabilities are driving strong interest among cybercriminals. Although questions remain about the accuracy and utility of artificial intelligence (AI) systems, threat actors are watching closely. Discussions about AI among cybercriminals have become far more common in 2024 than a year prior, and development and experimentation continue. Some threat actors claim to employ AI to bypass facial recognition technology, create deepfake videos and summarize large batches of data stolen in breaches. Others offer products that purportedly incorporate AI, such as chatbots that can be used to write malicious scripts. Still others are exploring how AI could be used to write malicious software based on vulnerability reports, a capability that could allow vulnerability exploitation at a much faster pace. What follows is a recap of what Intel 471 analysts have observed through the first half of this year and a look at what we could expect to see ahead.

Cybercrime Observations

Perhaps the most visible impact AI has had on cybercrime is an increase in scams, particularly those leveraging deepfake technology. One prominent example is the “Yahoo Boys,” a group of cybercriminals primarily based in Nigeria who use deepfakes to conduct romance and sextortion scams. The scammers win their victims over with fake personas and then exploit that ill-gotten trust for financial gain. One of the primary ways they do so is by coercing victims into sharing compromising photos and then threatening to release them publicly unless the victim pays. Most disturbingly, many of the targeted victims are minors, and in extreme cases these scams have driven underage victims to suicide.

We have reported on numerous deepfake services offered in the underground and observed a significant increase in the number of offers since January 2023. In November 2023, an actor joined multiple forums and advertised synthetic audio and video deepfake services. The actor claimed to have been providing the services regularly for a year and a half and allegedly was able to generate voices in any language using an AI tool with retrieval-based voice conversion (RVC) technology. The actor offered to create or edit graphic advertisements, animated profile photos, banners and promotional videos starting at US $5. Other services included producing synthetic audio and video deepfakes at prices ranging from US $60 to US $400 per minute depending on project complexity, which is notably cheaper than similar services we identified in 2023.

At least five other actors we have reported on so far this year offer similar services, with prices ranging from US $10 to US $200. One subscription-based service runs from US $19 for a “pay as you go” plan to US $999 for an annual plan that allows 300 face swaps per day for images and videos and includes a “fast mode” feature for live streaming.

One of these threat actors sold a tool for US $10,000 that could purportedly bypass know-your-customer (KYC) verification mechanisms by altering a phone camera feed. However, the actor’s reputation is questionable: the person has been accused of prior scams and refused to conduct deals via a guarantor.

Figure 1: These graphics depict the number of deepfake service offers we observed by year and the number of AI-related Information Reports (IRs) Intel 471 published by quarter in 2023 and 2024.

Business Email Compromise, Document Fraud

We also observed threat actors leverage AI technologies to support document fraud and business email compromise (BEC) scams. One example is the actor DocuCoder (name has been changed), who allegedly developed an AI-based tool to manipulate invoices for use in BEC attacks. BEC attacks take a variety of forms but often revolve around intercepting communications between two parties involved in a transaction and modifying invoices or bills to reflect the scammers’ bank account numbers. If the legitimate parties do not notice these changes, payments go to the wrong accounts and can be difficult to recover. By developing this type of tool, the threat actor offers other scammers a potential productivity gain: less time spent manually altering invoices.

The invoice manipulation tool allegedly has a range of functionality, including the ability to detect and edit portable document format (PDF) files and swap international bank account numbers (IBANs) and bank identifier codes (BICs). The tool is offered on a subscription basis for US $5,000 per month or US $15,000 for lifetime access. If it works as promised, it fulfills an often-cited use case of AI for productivity gains, albeit here in a criminal context. The actor also offered to provide custom software on request, as well as other AI-based software for underground call centers and call services.
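
To illustrate the mechanic this type of tool automates, the minimal sketch below shows the defensive mirror of an IBAN swap: extract candidate IBANs from an invoice’s text and flag any that validate but do not match the account on file for the vendor. The regex, the ISO 13616 mod-97 checksum and the example IBANs are standard; nothing here is drawn from the actor’s tool.

```python
# Sketch: flag invoices whose IBAN differs from the vendor's known account
# details -- the defensive mirror of the invoice-swapping capability above.
import re

IBAN_RE = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b")

def iban_checksum_ok(iban: str) -> bool:
    """Validate an IBAN with the standard ISO 13616 mod-97 check."""
    rearranged = iban[4:] + iban[:4]
    digits = "".join(str(int(c, 36)) for c in rearranged)  # 'A' -> 10 ... 'Z' -> 35
    return int(digits) % 97 == 1

def flag_swapped_ibans(invoice_text: str, expected_iban: str) -> list[str]:
    """Return syntactically valid IBANs in the invoice text that do not match
    the IBAN on file for this vendor. (Production code would also normalize
    IBANs written in space-separated groups of four.)"""
    candidates = set(IBAN_RE.findall(invoice_text))
    return [c for c in candidates if iban_checksum_ok(c) and c != expected_iban]

# Example: the attacker has replaced the vendor's German IBAN with their own.
text = "Please remit payment to IBAN GB82WEST12345698765432 within 30 days."
print(flag_swapped_ibans(text, expected_iban="DE89370400440532013000"))
# -> ['GB82WEST12345698765432']
```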

In another type of LLM-related productivity gain, we observed a threat actor claim to use Meta’s Llama AI to search through breach data. This could be appealing to extortionists who have exfiltrated a large amount of data from a victim organization. Often, ransomware groups will try to extract the most sensitive records and publish those on a data leak site in order to put more pressure on the victim to pay. Sometimes groups exfiltrate terabytes of data, most of which may be mundane, so isolating sensitive data could be a viable AI use case.
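
The productivity mechanic here mirrors legitimate data-classification tooling. The sketch below assumes a locally hosted Llama-family model behind an OpenAI-compatible chat endpoint; the URL, model name and labels are illustrative placeholders rather than details from Intel 471 reporting. It shows how a single prompt can assign a coarse sensitivity label to each document in a large corpus.

```python
# Minimal sketch of LLM-assisted document triage, the same pattern that
# underpins defensive data-classification tools.
import requests

ENDPOINT = "http://localhost:8080/v1/chat/completions"  # assumed local endpoint
LABELS = ["contains personal data", "contains financial data", "routine"]

def triage(document_text: str) -> str:
    """Ask the model to assign one coarse sensitivity label to a document."""
    prompt = (
        "Classify the following document with exactly one of these labels: "
        + ", ".join(LABELS)
        + ".\n\n"
        + document_text[:4000]  # truncate very long documents
    )
    resp = requests.post(
        ENDPOINT,
        json={
            "model": "llama-3-8b-instruct",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0,
        },
        timeout=60,
    )
    return resp.json()["choices"][0]["message"]["content"].strip()

if __name__ == "__main__":
    print(triage("Invoice 4471: wire 12,500 EUR to the vendor account by Friday."))
```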

New Frontiers: Malware, Exploitation

One concern in the AI space is whether LLMs can successfully write malware and software exploits. While the models do not appear capable of writing complex malware from scratch, some can assist in writing code and scripts. Researchers continue to probe these capabilities, and one recent academic study has drawn particular attention.

Four University of Illinois Urbana-Champaign (UIUC) computer scientists claim to have used OpenAI's GPT-4 LLM to autonomously exploit vulnerabilities in real-world systems by feeding the model common vulnerabilities and exposures (CVE) advisories describing the flaws. In the preprint of their paper, “LLM Agents can Autonomously Exploit One-day Vulnerabilities,” the researchers described how they created an LLM agent consisting of 91 lines of code that was capable of exploiting 87% of a set of 15 one-day vulnerabilities, some categorized as critical, based on the CVE descriptions of those flaws. They also tested GPT-3.5, open source LLMs and the Metasploit and ZAP vulnerability scanners, which collectively had a success rate of 0%. This highlights the improvement of OpenAI’s current flagship model over previous versions and suggests future models will likely be much more powerful. The research, however, drew skepticism. One critic found that the paper fell short of demonstrating that GPT-4 could be used for autonomous exploitation and instead showed “its value as a key component of software automation by seamlessly joining existing content and code snippets.” Because many key elements of the study, such as the agent code, prompts and model output, were not published, other researchers cannot accurately reproduce it, which invites further skepticism.

We see underground actors similarly interested in how LLMs can be leveraged for vulnerability research, reconnaissance and exploit writing. In April 2024, we observed one threat actor offering a tool purportedly powered by AI that can analyze, scrape and summarize CVE data. In another example, we observed a threat actor offering to sell what is best described as a multipurpose hacking tool. The code and video demonstrations the actor provided indicate the tool’s alleged capabilities include an information-stealer malware builder; network scanning functions; a vulnerability scanner for various content management systems (CMSs), e-commerce and web development platforms; account-checking tools; and a black-hat AI chatbot that can be used to code malicious scripts automatically. This particular tool integrated a well-known AI model.
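
As a benign illustration of the first of these offerings, the sketch below pulls a single CVE record from NIST’s public National Vulnerability Database (NVD) 2.0 API and reduces it to a one-line summary; an actor’s tool would presumably hand the description to an LLM rather than use the plain field extraction shown here. The field names follow the NVD 2.0 JSON schema as we understand it, and the script is our own sketch of the concept, not the tool we observed.

```python
# Sketch of the "analyze, scrape and summarize CVE data" concept using the
# public NVD 2.0 API; summarization is reduced to simple field extraction.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def summarize_cve(cve_id: str) -> str:
    """Fetch a CVE from NVD and return its ID, CVSS score and truncated description."""
    data = requests.get(NVD_URL, params={"cveId": cve_id}, timeout=30).json()
    cve = data["vulnerabilities"][0]["cve"]
    desc = next(d["value"] for d in cve["descriptions"] if d["lang"] == "en")
    metrics = cve.get("metrics", {}).get("cvssMetricV31", [])
    score = metrics[0]["cvssData"]["baseScore"] if metrics else "n/a"
    return f"{cve_id} (CVSS {score}): {desc[:200]}"

print(summarize_cve("CVE-2023-4969"))  # LeftoverLocals, discussed later in this article
```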

We also observed open source reporting on a new underground malicious AI service in March 2024. Further research revealed the AI utility is leased as a subscription-based service and was advertised as one of the most capable AIs for malicious purposes. Prices ranged from US $90 per month to US $900 for lifetime access, and alleged features include:

  • The ability to communicate in multiple languages.

  • The ability to trace hidden information from any photo or image.

  • The ability to provide information and responses from the latest surface and dark web data.

  • The ability to generate illegal information and malicious code without any Google AI restrictions.

  • The ability to plan social-engineering attacks and criminal activity.

  • The ability to draft and generate phishing emails.

  • The ability to draft and generate fake news articles.

  • The ability to impersonate any public figure and generate content for social media posts.

Rapid Adoption, New Risks

The rapid growth of and focus on AI has led to new advancements as well as unintended consequences. For example, Google’s new AI-powered Search Generative Experience (SGE) algorithms recommended scam sites that redirected visitors to unwanted Chrome extensions, fake iPhone giveaways, browser spam subscriptions and technology support scams. Cybersecurity researchers also discovered vulnerabilities in AI applications. One of those affected the Safetensors conversion service on Hugging Face, a popular collaboration platform that helps users host pretrained machine learning models and datasets. The vulnerability reportedly allowed attackers to conduct supply chain attacks by sending malicious pull requests with attacker-controlled data from the service to any repository on the platform and hijacking models submitted by users. The discovery occurred about a month after the cybersecurity company Trail of Bits disclosed CVE-2023-4969, aka LeftoverLocals, a vulnerability that allows recovery of data from Apple, Qualcomm, Advanced Micro Devices (AMD) and Imagination general-purpose graphics processing units (GPGPUs). The flaw made it possible for a local attacker to read memory left over from other processes, including another user’s interactive session with an LLM.

In addition to security flaws, AI providers also faced legal troubles related to the content ingested to train LLMs. One example occurred in January 2024, when the U.S. business news channel CNBC reported that two nonfiction book authors had filed a would-be class action lawsuit against Microsoft and OpenAI. The authors essentially claimed their copyrighted material was stolen because it was used to help build a billion-dollar AI system without compensating them. This report came about a week after the U.S. daily newspaper The New York Times sued the same companies in a similar copyright infringement case, alleging the organizations used the newspaper’s content to train LLMs.

Nation-State Interest, Regulation

Governments are approaching AI cautiously, and their concerns come from many directions. There are worries over the privacy and data security impacts of large-scale commercial use. AI also has the potential to be misused by adversaries in propaganda and disinformation campaigns. Nation-state use of AI in offensive and defensive systems could reshape balances of power. Notable events since Jan. 1, 2024, include:

– In January 2024, the Federal Communications Commission (FCC) chairwoman proposed that the FCC recognize calls made with AI-generated voices as “artificial” voices under the Telephone Consumer Protection Act (TCPA), which would make it illegal to use voice cloning technology in the robocall scams that commonly target consumers.

– In March 2024, Time Magazine cited a report commissioned by the U.S. government that stated “the U.S. government must move quickly and decisively to avert substantial national security risks stemming from AI which could, in the worst case, cause an extinction-level threat to the human species.”

– In April 2024, the U.K. and U.S. signed an agreement to “work together on developing robust methods for evaluating the safety of AI tools and the systems that underpin them.”

– In April 2024, the U.S. Department of Homeland Security (DHS) announced a new Artificial Intelligence Safety and Security Board designed to guide the use of AI within U.S. critical infrastructure. The board will reportedly advise the DHS on how to govern the way AI is deployed across 16 critical infrastructure sectors including defense, energy, transportation, information technology (IT), financial services and agriculture. Members of the board include CEOs from Adobe, Alphabet, AMD, Amazon Web Services (AWS), Delta Air Lines, Humane Intelligence, IBM, Northrop Grumman, NVIDIA and OpenAI, as well as select government officials, civil rights leaders and academics.

Nation-states and advanced persistent threat (APT) groups are exploring how AI could aid their operations. In two public blog posts in 2024, Microsoft and OpenAI described how the companies detected and terminated accounts belonging to known state-affiliated actors. Microsoft said it had not detected significant attacks perpetrated by these groups but that it was important to identify “early-stage, incremental moves.” OpenAI’s assessment of the risk was similar: the company wrote that its latest LLM, GPT-4, “offers only limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI powered tools.”

Microsoft and OpenAI identified five groups: Forest Blizzard, a Russian actor linked to GRU Unit 26165, a cyber unit of the country’s military intelligence agency; Emerald Sleet, a North Korean actor; Crimson Sandstorm, connected to Iran’s Islamic Revolutionary Guard Corps (IRGC); and Charcoal Typhoon and Salmon Typhoon, both Chinese state-affiliated actors.

How these nation-state-linked groups used the LLMs varied. The North Korea-linked group used the LLMs to better understand publicly reported vulnerabilities, for basic scripting tasks, for assistance with content for likely spear-phishing emails and for reconnaissance, such as researching think tanks and government organizations. The Iran-linked group was seen generating phishing emails and using LLMs for tasks such as web scraping; it was also observed using LLMs to develop code designed to evade detection and to research how to disable antivirus software. Charcoal Typhoon, one of the Chinese state-affiliated actors, also used LLMs for advanced commands representative of post-compromise behavior, Microsoft wrote. Other uses included translation and finding coding errors.

Assessment

We previously reported that despite increasing AI adoption within the cybercriminal underground, AI capabilities had so far played only a small supporting role. Improvements since then and the details provided in this report suggest the role of AI in cybercrime is shifting. Deepfake technology is becoming more convincing and less expensive, which almost certainly will lead to an increase in social-engineering activity and misinformation campaigns. We also expect phishing and BEC activity to increase due to AI’s ability to easily draft phishing pages, social media content and email copy. There is also the disinformation angle: LLMs excel at creating written content, so they could be a force multiplier for countries such as Russia that have long run disinformation operations and human-powered content farms (such as the infamous Internet Research Agency).

The academic study examining GPT-4’s ability to write exploit code based on CVE descriptions deserves more scrutiny, but this type of adversarial use case for LLMs would undoubtedly be sought after by everyone from cybercriminals to nation-states. The security landscape will change dramatically when an LLM can find a vulnerability, write and test the exploit code and then autonomously exploit vulnerabilities in the wild. Vulnerability hunting and exploit writing would no longer be the domain of the few with those skills but would be available to anyone with access to an advanced AI model. Vulnerabilities could also be exploited at a much faster pace, as threat actors could independently generate their own exploits. This would also reshape the cyber underground: vulnerability-finding and exploit-writing skills would become commoditized, and actors would no longer need to purchase them on forums. For nation-states developing their own AI, this type of capability would be highly sought after. An AI tool that can perform these tasks at a speed security operations teams struggle to defend against would be a game changer and a national security nightmare. The advancements in AI over the last 18 months have astonished even experts in the field, so further unpredictable leaps in capability could well be in the cards.