Fighting AI with AI.

Artificial Intelligence (AI) has revolutionised countless industries, offering groundbreaking solutions to complex challenges. However, this powerful technology comes with a dark side: its capacity to be exploited for criminal and fraudulent purposes. From deepfake scams to sophisticated phishing schemes, AI has become a weapon of choice for many cybercriminals. Yet the same technology that fuels these threats holds the key to combatting them. This article explores how AI is both a threat and a solution in the fight against fraud, shaping the landscape of modern cybersecurity.

The Rising Threat of AI in Fraud: Deepfakes and Beyond

As AI continues to evolve, it is increasingly being used as a tool for fraud, with alarming consequences. One of the most insidious applications of AI in crime is the use of deepfakes—hyper-realistic, AI-generated images, videos, or audio recordings that are nearly indistinguishable from reality.

In Hong Kong, a finance employee was tricked into transferring $25.6 million after fraudsters used deepfake technology to impersonate the company’s chief financial officer. Initially suspicious of the request, the employee was convinced during a video call in which AI-generated video and voice made the fraudster appear to be the CFO. This case highlights how AI can be weaponised to undermine trust and manipulate individuals, making it harder to distinguish between real and fake content. As deepfakes become more sophisticated, the risk of financial and reputational damage grows.

Another rising threat is dubbed “pig butchering”, where fraudsters build emotional connections with victims, using AI tools to create fake identities and automate interactions. These scams, which prey on victims through long-term deception, promise high returns, often in cryptocurrency, before the fraudsters vanish with the victim’s money. The use of AI makes these scams harder to detect and more emotionally damaging, as victims are manipulated over prolonged periods.

AI is also transforming phishing attacks, making them harder to recognise. Scammers now use AI to personalise messages, scrape social media for information, and even create fake websites that mirror legitimate ones. Traditional security methods are no longer enough, so using AI-powered detection tools and staying informed about AI-driven scams is essential for protection.

AI Literacy: The Key to Combating AI-Driven Cybercrime

The use of AI in scams like deepfakes and pig butchering is only the beginning of a broader trend that will likely see even more sophisticated forms of cybercrime emerge. However, by fostering AI literacy, we can empower individuals to recognise and defend against these threats, while also encouraging the development of AI-powered defences. Only by understanding the technology that criminals are using and countering it with our own AI-driven solutions can we hope to turn the tide in this escalating battle.

Deepfake Detection Technologies

Deepfake detection technologies use AI algorithms to analyse videos, images, and audio for signs of manipulation, helping to identify fake content before it can cause harm. Similarly, document fraud detection leverages AI to spot forged or altered documents, using pattern recognition and anomaly detection to flag suspicious activity.
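
To make the idea concrete, here is a minimal sketch of frame-level video screening in Python. It assumes a hypothetical pre-trained binary classifier exported as deepfake_detector.pt; the model file and video name are illustrative placeholders, not a real detector. Each sampled frame is scored and the “fake” probabilities are averaged.

```python
# Minimal sketch of frame-level deepfake screening.
# Assumption: "deepfake_detector.pt" is a hypothetical TorchScript binary
# classifier that outputs a single logit (higher = more likely fake).
import cv2                              # pip install opencv-python
import torch
import torchvision.transforms as T

# Standard ImageNet-style preprocessing for each frame.
transform = T.Compose([
    T.ToTensor(),
    T.Resize((224, 224)),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def mean_fake_probability(path, model, every_n=30):
    """Score every n-th frame and return the mean 'fake' probability."""
    cap = cv2.VideoCapture(path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV reads BGR
            with torch.no_grad():
                logit = model(transform(rgb).unsqueeze(0))
                scores.append(torch.sigmoid(logit).item())
        idx += 1
    cap.release()
    return sum(scores) / len(scores) if scores else None

model = torch.jit.load("deepfake_detector.pt").eval()  # hypothetical artefact
print(mean_fake_probability("suspect_call.mp4", model))
```

Production systems typically combine many such signals, for example facial-landmark consistency, compression artefacts, and audio-visual synchronisation, rather than relying on a single frame classifier.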

Natural Language Processing in Phishing Prevention

Email fraud, cybercrime and phishing attempts are an ever-present threat to individuals and organisations alike. Natural Language Processing (NLP) tools play a significant role in mitigating these risks by analysing the language and structure of communications. NLP can detect phishing emails by identifying unusual language patterns or terms that convey urgency. For example, an email asking an employee to make an immediate wire transfer “to avoid severe consequences” would be flagged as high risk, prompting further verification measures. From a legal perspective, these tools not only protect an organisation’s data but also help meet compliance obligations under data protection regulations. They minimise exposure to risks that could lead to data breaches, with significant financial and reputational consequences.
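
As a simple illustration, the sketch below shows how such a triage step might work using scikit-learn. The tiny training corpus and the specific messages are made up for the example; a real filter would be trained on a large labelled dataset.

```python
# Minimal sketch of NLP-based phishing triage.
# Assumption: the example emails and labels below are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Quarterly report attached for your review.",
    "URGENT: wire transfer needed today to avoid severe consequences.",
    "Are you free for lunch on Friday to discuss the project?",
    "Your account is suspended. Verify your password immediately.",
]
labels = [0, 1, 0, 1]  # 0 = legitimate, 1 = phishing

# TF-IDF turns each message's language into features; the classifier then
# learns which patterns (urgency terms, credential requests) signal risk.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(emails, labels)

incoming = "Immediate wire transfer required to avoid severe consequences."
risk = clf.predict_proba([incoming])[0][1]
print(f"Phishing risk score: {risk:.2f}")  # high scores trigger verification
```

In practice, such a score would feed an email gateway that quarantines high-risk messages or routes them into the verification workflow described above.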

Behavioural Biometrics and Predictive Modelling

Behavioural biometrics is another powerful AI tool that tracks and analyses unique patterns in user behaviour, such as typing speed, mouse movements, and browsing habits, to detect fraudulent activities in real time.
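
A minimal sketch of the idea, assuming each session has already been summarised into two illustrative features (mean keystroke interval and mean pointer speed), might look like this:

```python
# Minimal sketch of behavioural-biometric anomaly detection.
# Assumption: each row is an illustrative session summary of
# [mean keystroke interval (ms), mean pointer speed (px/s)].
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical sessions for one user: their behavioural baseline.
baseline = np.array([
    [180, 320], [175, 310], [190, 335], [185, 300],
    [178, 325], [182, 315], [188, 330], [176, 305],
])

detector = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

# A new session with much faster typing and erratic pointer movement.
session = np.array([[60, 900]])
if detector.predict(session)[0] == -1:  # -1 marks an outlier
    print("Anomalous behaviour detected: step up authentication")
```

A real deployment would weigh many more behavioural signals, but the principle is the same: deviation from the user’s own baseline triggers additional checks.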

Predictive modelling is also a crucial aspect of AI’s functionality. It involves analysing historical data to assign risk scores to transactions, users, or accounts. This analysis looks at factors such as previous transaction behaviour, device information, and IP address reputation. For instance, if a new account suddenly engages in high-value transfers inconsistent with its history, AI can generate alerts. The ability to develop individual risk profiles is particularly valuable for financial institutions, which must adhere to Anti-Money Laundering (AML) and Know Your Customer (KYC) regulations.
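
The sketch below illustrates the risk-scoring idea on made-up transaction features; the feature choice (amount, account age, whether the device has been seen before) and the training rows are hypothetical.

```python
# Minimal sketch of transaction risk scoring.
# Assumption: features are [amount, account age in days, known device (0/1)];
# all rows are illustrative, not real data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

X = np.array([
    [50,   900, 1], [120,  700, 1], [80, 1200, 1],   # routine activity
    [9500,   3, 0], [7200,  10, 0], [8800,   5, 0],  # confirmed fraud
])
y = np.array([0, 0, 0, 1, 1, 1])  # 1 = fraudulent

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# A week-old account on an unknown device attempts a high-value transfer.
txn = np.array([[9000, 7, 0]])
score = model.predict_proba(txn)[0][1]
print(f"Risk score: {score:.2f}")  # above a threshold, raise an AML alert
```

Scores above a chosen threshold would then feed the alerting and AML/KYC workflows that regulators expect financial institutions to maintain.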

Final Thoughts

As AI continues to advance, its potential to be used for malicious purposes, like fraud and deception, cannot be ignored. Whilst these threats are alarming, the same technology that fuels these crimes also holds the key to defending against them. AI-driven solutions like deepfake detection, behavioural biometrics, and fraud prevention tools are already helping to combat these growing risks. With the right investment in AI literacy, awareness, and innovation, we can turn the tide, using AI as a powerful ally in the fight against cybercrime. The future may be uncertain, but with the proper safeguards in place, AI can empower us to build a safer, more secure digital world.

Nicole Jennings, Nov 24.