Artificial intelligence (AI), large language models and cryptocurrencies, combined with phishing- and ransomware-as-a-service business models, have enabled fraud campaigns that are more sophisticated and professional yet require no advanced technical skills and relatively little cost, according to the international police body Interpol.
Interpol Secretary General Jürgen Stock launched the Global Financial Fraud Assessment at a fraud summit arranged by the UK government in London. The assessment points to human trafficking for the purpose of forced criminality in call centres, particularly to carry out ‘pig-butchering’ scams – hybrid schemes combining romance and investment fraud and using cryptocurrencies – which are based mainly in Southeast Asia but affect Europe.
Mr Stock said: “We are facing an epidemic in the growth of financial fraud, leading to individuals, often vulnerable people, and companies being defrauded on a massive and global scale.
“Changes in technology and the rapid increase in the scale and volume of organized crime have driven the creation of a range of new ways to defraud innocent people, businesses and even governments. With the development of AI and cryptocurrencies, the situation is only going to get worse without urgent action.
“It is important that there are no safe havens in which financial fraudsters can operate. We must close existing gaps and ensure information sharing between sectors and across borders is the norm, not the exception.
“We also need to encourage greater reporting of financial crime as well as invest in capacity building and training for law enforcement to develop a more effective and truly global response.”
Comment
Oliver Spence, CEO of Cybaverse, says criminals are relying on generative AI to create sophisticated phishing scams, which are frequently tied to ransomware. He says: “Clearly financially motivated cybercrime is on the rise, and generative AI tools are heightening the problem while lowering the technical barrier to entry into cybercrime. None of the traditional red flags are evident in these emails. The spelling is perfect, there are no mentions of Nigerian princes, and the spoofed domains linked to within the phishing emails look legitimate as well.
“Criminals follow the money, and AI is rewarding them well today. This will continue, and things will get worse as generative AI tools develop further. To counter the threat, employees need to be trained about AI-generated phishing scams and taught to question emails, even when they seem realistic. Organisations must bolster this with email security solutions that can detect malicious code embedded in emails, so that messages are stopped before reaching user inboxes.
“Furthermore, organisations need to manage and monitor their security efforts more efficiently, so that weaknesses can be spotted more easily. This can be achieved using tools that make cyber security simpler, more streamlined and easier to manage.”