Broken English, long a telltale warning sign of a scam, may no longer be a reliable indicator, cybersecurity experts say. Scammers are now using generative artificial intelligence (AI) chatbots such as ChatGPT to craft messages in nearly flawless language.
Cybersecurity experts have observed a rise in the quality of language used in phishing scams that coincides with the growing use of ChatGPT. Individuals must therefore be extra cautious and vigilant in identifying other telltale signs of scams.
A special-issue report released in March by Norton, a security software firm, classified ChatGPT as an emerging threat. It warns that scammers will exploit large language models like ChatGPT to create deepfake content, launch phishing campaigns and develop malware.
Darktrace, a British cybersecurity firm, also reported in March that email attacks have become more sophisticated since the release of ChatGPT. Scammers are now using the tool to create messages with greater linguistic complexity, suggesting that their focus has shifted to crafting more advanced social engineering scams.
ChatGPT can correct imperfect English and rewrite blocks of text in styles suited to specific audiences, such as corporate readers or children. It now powers the revamped Microsoft Bing, which has over 100 million active users and is poised to rival Google in the search-engine market.