Interviews

Fraud predictions: AI-powered will proliferate

by Mark Rowe

Simon Horswell, Senior Fraud Specialist at the AI-powered digital identity product company Onfido, offers some fraud predictions for 2024.

There was one technology that dominated discussions last year – Generative AI. An evolution of artificial intelligence that can produce content based on learned information – such as a response to a question – it released AI from its hype cycle and into the mainstream. Now, more than half of US employees report using Gen AI to support day-to-day activities, while no fewer than four million workers in the UK have embraced platforms like ChatGPT for work.

While this technology has created opportunities for some, it can also be used for malicious purposes, and we’ve seen AI become part and parcel of fraud – enabling fraudsters to create scams at scale, all from the click of a button. Lawmakers and regulators around the world have been racing to catch up, and that’s why we’ve seen the creation of AI Safety Summits alongside new restrictions and rules to protect both individuals and businesses.

But as we look ahead to 2024, how will the fraud landscape evolve around AI?

1. Deepfakes will continue to proliferate
In 2023, we witnessed a surge in AI-manipulated and synthesised media, with deepfake fraud attempts jumping 3000 per cent year-on-year. Driven by the availability of simple-to-use AI tools, fraudsters scaled sophisticated attacks without needing technical skills or heavy resources.

This is already impacting high-profile public figures, like politicians, with Sir Keir Starmer and Sadiq Khan falling victim to deepfake attacks, while it has also impacted celebrities and business leaders. With a record-breaking 40-plus countries – representing more than 40pc of the world’s population – due to hold elections this year, deepfakes have the potential to inflict significant reputational damage, destabilise trust in public leaders and spread misinformation. What’s more, as AI continues to attract public interest, an election provides an opportune moment for a fraudster to double down on deepfakes to cause disruption.

In 2024, businesses and individuals alike will need to remain vigilant. Onfido’s Identity Fraud Report shows that 80pc of attacks on biometric systems, like those used in e-voting, were videos of videos displayed on a screen. This suggests fraudsters will always opt for the easiest, most cost-effective route. Should deepfake tools become just as cheap and simple to use, deepfakes will be a major threat to contend with.

2. Scaling convincing smishing and phishing attacks
As AI becomes ever more accessible, malicious actors are taking advantage to orchestrate large-scale and highly convincing smishing (SMS phishing) and phishing attacks. The widespread availability of Gen AI tools has lowered the barriers to entry, enabling cybercriminals to craft deceptive messages that appear more authentic than ever. This means the typical signs of a scam we all look for, such as spelling and grammar mistakes, will be harder to spot.

Attackers will continue to use Gen AI and large language models in the likes of phishing and smishing to make the content and images appear more legitimate. But hope is not lost – as scams become more advanced, so does the AI used to keep businesses and individuals safe. We are seeing an AI vs AI battleground emerge, and it’s crucial that businesses deploy systems that have been trained on the very latest attack vectors so they can stay on top of the evolving threat landscape in 2024.

3. Social engineering scams will ramp up
In contrast to using AI to create convincing spoofs, in 2024 we’ll likely see the technology used for a much simpler attack – eliciting people’s private information through social engineering scams.

Cybercriminals have shown over the past year that sophisticated technology is not always necessary to carry out successful cyberattacks at scale. For example, Caesars Entertainment recently confirmed that a social engineering attack had stolen data from members of its customer rewards programme.

In identity verification, we’ve already seen a number of scams employed by fraudsters to get unsuspecting genuine people to help create fake accounts on their behalf. Fraudsters have used several different scenarios to get people to complete the application process using their authentic documents and matching biometric images. These range from fake job ads to delivery drivers asking for proof of identity on people’s doorsteps. The difficulties around detecting these seemingly genuine applications mean they pose a real threat to many of the standard practices in remote identity verification, so businesses need to be ready with the right defences to combat these threats.

4. 2024 is the year of tighter regulation
From the EU AI Act to the Online Fraud Charter, 2023 saw policymakers take significant steps to protect the public from fraud. Notably, the UK introduced a new Anti-Fraud Champion who is responsible for driving collaboration between the government, law enforcement, and the private sector to help block fraud, which accounts for 40pc of all crime.

In 2024, we are likely to see tighter regulations come into force as governments try to help businesses keep one step ahead of bad actors. Failure to comply with these laws and regulations can have serious consequences, including fines, legal action, and damage to reputation. For instance, from 2024, UK banks will be required to refund customers who have been tricked by scammers into authorising payments, a phenomenon known as authorised push payment (APP) fraud. With losses to APP fraud reaching almost £500m in 2022, this will no doubt raise significant concerns for banks this year.

New legislation will play a pivotal role in reducing fraud cases, but there is a delicate balance to be maintained. While it is imperative to address the risks posed by AI, we need to be careful not to demonise the technology completely, as this would detract from its role both in driving innovation and in providing robust safeguards against new and emerging threats.

In 2024, the intersections of technology, regulation, and criminal innovation will shape the trajectory of fraud. Businesses and individuals alike must remain vigilant, adopt proactive cybersecurity measures, and collaborate with regulators to navigate the delicate balance between harnessing the potential of AI and providing protection from bad actors. There’s no doubt that this year will be a critical juncture in the ongoing battle against AI-driven fraud, so businesses need to make sure they are on the right side of the fight.

See also https://onfido.com/blog/.


© 2024 Professional Security Magazine. All rights reserved.
