Author: Perry Carpenter
ISBN No: 978-1-394-29988-1
Review date: 17/12/2025
No. of pages: 288
Publisher: Wiley
Date of publication: 01/10/2024
Brief:
A new book sets out ‘chilling’ truths about the power of generative AI – how humans can make malicious use of it, and some ways humans can defend themselves against it.
If someone can think it, they can write a prompt for it using an AI-powered product, 'and if they can prompt for it, they can cast a new deceptive "reality" into the world' – whether to scam people, destabilise confidence in banks, create pornography with which to blackmail people, or 101 other things (a recording of your child's voice, used to stage a fake kidnapping while a caller demands a ransom?); whatever people want to do to deceive others. An appendix offers a simple and even amusing way for you and close family to guard against being scammed: a relative calls suddenly, asking for money urgently; if you have agreed a family code word and gesture, ask for it.
Numerous books (including previous ones by the author, Perry Carpenter) have already warned of deepfakes and their implications – how (to quote from the foreword by 'friendly hacker' Rachel Tobac) we're past the point where most of us can tell the real from the machine-made, and how 'crafty attackers' can misuse the same products that the law-abiding use for good or innocent fun. "In the end," as Carpenter notes, "it comes down to humans, with human motivations and ingenuity, exploiting the biases and blind spots of other humans."
Carpenter, as that line shows, has an easy style (he's not afraid to reference Scooby Doo) and closes each chapter with 'take-aways'. 'Bad actors' can exploit AI systems to generate realistic content, whether text, images, audio or video – 'synthetic media' that is already enough to cause some to doubt what they read and view online. What's changed, Carpenter points out, 'is the ease and scale at which we can fool each other and the global impact a single deception can have', to alter an election result or the reputation of a business. AI may have done to knowledge work what the steam shovel did to mining – 'making everyone's digging skills almost irrelevant'. AI is giving scammers 'a whole new bag of tricks for social engineering' (the author works for KnowBe4, the phishing awareness platform). A scammer, even a fairly unskilled one, can clone your mum's voice, your line manager's, the anonymous IT help desk worker's, anyone's – to exploit the trust you have in those people and get you to hand over information, or money. As Carpenter says, the fundamentals of scams and human nature remain true, except that 'the stakes are higher than ever'.
Yet Carpenter, in a chapter on 'media literacy', describes himself as, despite everything, optimistic about tech in general and AI in particular. We can protect ourselves (and our loved ones) 'from this storm of digital deceit'. Sadly, a list of tips won't work, at least not necessarily for long, and might only give you a dangerously false sense of security as the tech evolves. Rather, the author advises that we know our enemy; that we understand the motive of the scammer and faker, who is trying to manipulate you with an offer 'too good to be true', to get you to click on and share something that confirms your biases. To that end, Carpenter advises adopting the SIFT method (stop – count to ten; investigate – 'unleash your inner Sherlock Holmes'; find trusted coverage – anything big will be on a reputable website; and trace to the original, which may well have been distorted since). Scams and disinformation spread because people we know 'like' something, whether an outrageous video or an advert for a cryptocurrency. Here I would say that Carpenter makes all the right noises – that we should 'verify before you amplify' and report misinformation to the platform we've read it on – and he lists ten fact-checking websites; except that online consumers, I'd argue, will have to do more than share 'solid, fact-based information' and 'be a signal booster for truth in the noisy world of online chatter'. They might have to pay for online content (including the fact-checking, unless fact-checkers live on air?), or be far more discriminating in what they read. I note that one of the ten fact-checking websites is the BBC's, which presumably I, as a television licence holder, am paying for and Carpenter, an American, is not.
Maybe it's the necessary other side of the coin to the wonders of tech – that besides making our lives easier and quicker, tech also requires us to stay informed and vigilant. Endearingly, Carpenter opens the book with a conversation between himself and an AI, and ends by feeding the book to an AI and giving it a couple of pages to comment ('I lack the uniquely human ability to truly understand context, to feel empathy, or to make nuanced ethical judgments. These are your strengths, they are what make you resilient in the face of digital deception').
Carpenter offers (also to us human readers!) an excellent – and above all trustworthy – 'map and a compass' for the playing field that we're already facing and will only face more pressingly. In two phrases, we should keep up our digital hygiene and practise critical thinking, although Carpenter also reminds us of the more technical good practices, such as using multi-factor authentication on online accounts. For 'the threats we're facing are getting nastier and more complex by the day'.
Visit https://www.thisbookisfaik.com/.