Interviews

A question for ChatGPT

by Mark Rowe

Is ChatGPT capable of perpetrating identity fraud? asks Simon Horswell, Fraud Specialist at the digital identity product company Onfido.

Generative AI, specifically ChatGPT, has burst onto the world stage and captured the attention of millions. When users register, they can input a prompt or question – anything, from philosophy, literature and history, to developing a business strategy or creating poetry – and the AI returns complete, grammatically correct, and often referenced answers. ChatGPT has even recently passed Google’s coding interview for a Level 3 engineer, as well as exams from law and business schools.

Within the first two months of its launch, the tool amassed over 100 million users – making it the fastest-growing consumer app ever. Since then, Microsoft has pledged to invest $10 billion in OpenAI, the company behind ChatGPT, and has refreshed its search engine Bing with ChatGPT integrations.

But, as use cases for ChatGPT and other generative AI platforms such as DALL-E are explored, concerns are growing about whether these tools could be used for nefarious purposes. In the field of digital identity, for instance, questions are being raised as to whether generative AI tools could be used to assist fraudsters, perpetrate crime and challenge businesses’ KYC (Know Your Customer) checks and cyber security defences.

The identity fraud landscape

ChatGPT has emerged at a time of heightened identity fraud risk. Following spikes in online fraud during the Covid-19 pandemic, rates have yet to drop back to pre-pandemic levels, with an estimated global cost of £4.37m. Fraudsters are executing their tactics with growing efficiency and scale. In recent years, sophisticated crime jumped with new approaches such as synthetic identity fraud and replay attacks. In the last 12 months, however, what we classify as ‘easy’ or less sophisticated fraud – such as easily spotted doctored IDs – has risen dramatically, by 37 per cent. This shift in behaviour indicates that fraudsters are opting for low-effort attempts at massive scale, flooding an organisation’s defences in the hope that one slips through.

Could generative AI be used to perpetrate identity fraud?

While OpenAI has trained its algorithms to reject unethical requests, several serious security vulnerabilities have already been exposed. Just weeks after its launch, an Israeli cybersecurity company demonstrated how the chatbot, when used in tandem with OpenAI’s code-writing system Codex, could create a phishing email capable of carrying a malicious payload.

Our core question remained, so we asked ChatGPT, “How could ChatGPT be used to perpetrate identity fraud?”. This is how it responded:

It is important to note that ChatGPT is a language model and does not have the ability to perpetrate identity fraud on its own. However, if someone were to use ChatGPT to generate fake information or documents, they could potentially use that information to commit identity fraud. For example, they could use ChatGPT to generate a fake identity document, such as a driver’s license or passport, and then use that document to pose as someone else. This could allow them to gain access to sensitive information or financial accounts. It is important to always verify the authenticity of any documents or information that you receive, and to never share personal information with anyone unless you are certain of their identity.

It’s a response that predictably focuses on hypotheticals, but it acknowledges that fraudsters could turn to generative AI to produce fraudulent information and identity documents (IDs). However, personal information isn’t the only element that makes up an ID; a photo of the ID owner often accompanies it. While ChatGPT only generates text, other generative AI tools, such as OpenAI’s DALL-E, generate images from text descriptions.

Could this enable cybercriminals to match deepfakes with stolen or fake information to create counterfeit IDs? Not quite. DALL-E is still a nascent technology with limitations. It struggles to render human faces accurately and often returns confused results. Given these critical limitations, such systems are still some years away from allowing a harmful actor to commit identity fraud through a single system. What we could see, though, is amateur fraudsters being given the tools to learn and train in identity fraud, developing convincing spoofs of real documents on demand and at scale.

Fraud-as-a-service

In recent years we’ve seen the explosion of fraud-as-a-service whereby custom-built tools are sold to fraudsters. These services range from two-factor authentication interception bots to full-suite packages that include duplicating entire websites on look-a-like URLs. As a result, it won’t be surprising to see fraud-focused AI generation tools on the price list one day.

But there’s no need to batten down the hatches. As fraud gets smarter, so does detection – and naturally, technology is already developing to identify content generated by ChatGPT. For instance, AI-powered identity verification technology is continually advancing to spot identity fraud attempts, and passive fraud detection signals, such as device intelligence, can give businesses a fuller view of the end-user without adding friction. Similarly, biometric verification technology, which protects businesses from stolen IDs and impersonation fraud, is also evolving. AI-powered biometric verification can stay ahead of complex spoofs, detecting manipulated photos, submissions of a photo taken from another screen, print-outs of an image, and 2D and 3D masks.
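The layered approach described above – document checks, biometric matching and passive signals such as device intelligence reinforcing one another – can be illustrated with a toy sketch. This is a hypothetical example only, assuming invented signal names, weights and a made-up review threshold; it is not Onfido’s method or any real product’s scoring logic.

```python
# Hypothetical sketch: fusing independent verification signals into one
# risk score, illustrating layered fraud defence. All names, weights and
# thresholds are invented for illustration.
from dataclasses import dataclass


@dataclass
class Signal:
    name: str
    score: float   # 0.0 = no fraud indication, 1.0 = certain fraud
    weight: float  # relative importance of this check


def risk_score(signals: list[Signal]) -> float:
    """Weighted average of signal scores."""
    total_weight = sum(s.weight for s in signals)
    return sum(s.score * s.weight for s in signals) / total_weight


signals = [
    Signal("document_check", 0.2, 3.0),       # e.g. doctored-ID detection
    Signal("biometric_match", 0.1, 3.0),      # e.g. selfie vs. ID photo
    Signal("device_intelligence", 0.8, 1.0),  # e.g. suspicious device
]

score = risk_score(signals)
REVIEW_THRESHOLD = 0.3  # invented cut-off
print(f"risk={score:.2f}", "-> manual review" if score > REVIEW_THRESHOLD else "-> pass")
```

The point of the sketch is that a single suspicious passive signal raises the score without, on its own, blocking a legitimate user – friction is only added when the combined evidence crosses a threshold.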

So, will generative AI tools, like ChatGPT, bring about a new era of fraud? No. But they will influence the way we build fraud detection tools.

See also the Onfido blog.



© 2024 Professional Security Magazine. All rights reserved.

Website by MSEC Marketing