AI-powered technology could enhance the way the police serve the public, according to a report by the think-tank the Police Foundation. Policing is at its heart a complex information business, but it has struggled to make full use of the data stored on its many, often outdated, systems, write Dr Rick Muir, Director of the Police Foundation, a charity, and researcher Felicity O’Connell.
Examples of how artificial intelligence (AI) could aid the work of the police include a software check that flags that the partner of a woman making a 999 call has a firearms licence, which would affect how police respond; and a translation app. Or, as the report later states, police could search their entire dataset by features such as a suspect’s name, an address or a number plate, in seconds, rather than making a manual search across databases.
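To illustrate the idea of a single search across otherwise separate databases, here is a minimal toy sketch; it does not reflect any force’s actual systems, and all record structures, field names and data are invented for this example:

```python
# Illustrative sketch only: a toy "single search" across several separate
# record stores, standing in for the report's idea of querying an entire
# dataset by a feature such as a name, address or number plate.
# All store names, fields and data are invented.

def search_all(stores, term):
    """Return every record, from any store, containing the search term."""
    term = term.lower()
    hits = []
    for store_name, records in stores.items():
        for record in records:
            if any(term in str(value).lower() for value in record.values()):
                hits.append((store_name, record))
    return hits

# Three toy "databases" that would otherwise be searched one by one.
stores = {
    "custody":   [{"name": "J. Smith", "address": "1 High St"}],
    "vehicles":  [{"plate": "AB12 CDE", "keeper": "J. Smith"}],
    "incidents": [{"ref": "INC-042", "location": "1 High St"}],
}

print(search_all(stores, "j. smith"))  # matches in custody and vehicles
```

The point of the sketch is only that one query touches every store at once; a real system would of course need shared identifiers, access controls and audit logging.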
From speaking to those in the field, the authors heard of possibilities including: in call handling, ‘sentiment analysis’ to detect emotion in a caller’s voice; ‘automated case file production’, to save police investigators time; sensors and other devices built into an officer’s uniform and kit, to offer live language translation; and a large language model generating a read-out for officers on the risks of a situation they are being deployed to.
The authors point out technical, organisational and cultural questions. AI could be ‘transformative in policing because it can turn this wealth of data into actionable intelligence at the touch of a button. However, the AI revolution poses a whole set of legal and ethical questions for the police and society. How far should the police go in using AI to keep communities safe? Could these technologies make the police too effective, in that they may be able to know much more about us and pry into our private lives?’
As the 21-page report sets out, machine learning, deep learning and neural networks are subsets of AI. Live Facial Recognition (LFR) is now deployed routinely by the Metropolitan Police, the report says. (Separately, Suffolk Police announced this month that they are to trial use of Live Facial Recognition.) While LFR is conducted in public and is contentious, ‘transformative areas in which AI could play a role are much more mundane’, the authors argue; such as chatbots ‘being used to provide automated and semi-automated responses to many simple or transactional enquiries’, such as about animal welfare or lost property; one in five queries to Bedfordshire Police ‘are now answered by chatbots’.
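The ‘semi-automated responses’ idea can be sketched as a minimal keyword router; this is a toy illustration only, the report does not describe how any force’s chatbot actually works, and the topics and canned replies below are invented:

```python
# Toy sketch of automated responses to simple, transactional enquiries,
# in the spirit of the chatbots the report describes. Topics and replies
# are invented for illustration.

RESPONSES = {
    "lost property": "Lost property can be reported via the online form.",
    "animal": "For animal welfare concerns, please use the dedicated advice page.",
}

def auto_reply(enquiry):
    """Return a canned reply if the enquiry matches a known topic, else None
    (meaning the query is handed to a human call handler)."""
    text = enquiry.lower()
    for topic, reply in RESPONSES.items():
        if topic in text:
            return reply
    return None  # semi-automated: fall back to a person

print(auto_reply("Where do I hand in lost property?"))
print(auto_reply("I want to report a burglary"))  # None: routed to a human
```

A deployed system would use natural-language matching rather than literal keywords, but the division of labour is the same: simple enquiries answered automatically, everything else escalated.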
The authors admit a lack of information in the public domain about how much, and how, police in the UK are using AI, or considering using it: whether to automate and take the human work out of digital redaction of names; for risk management of warrants, back-office functions, or uncovering hidden links or better mapping intelligence; for automating ‘triage’ by call handlers, or lie detection; or for ‘predictive policing’, whereby ‘algorithms and data analytics … predict the geographical areas where future crimes may occur’.
Challenges
Among the challenges to such take-up of tech, the authors identify that the police service is forever ‘fighting fires’, and ‘the culture and mindset of policing is not geared towards embracing technological change’. While machine learning tools are trained on data, police data across the territorial forces is not standardised, and it has ‘a lot of erroneous entries’. Or as one interviewee put it to the Foundation, a lot of police data is ‘really bad quality’. Further, forces have data locked into legacy systems that do not talk to each other, and each force is using its own AI tools. Yet, if forces must wait for big, national tech programmes, ‘innovation may slow’.
Another risk, as with AI generally, is bias, such as ‘the potential for algorithmic amplification of latent data biases’, and ‘the risks of an exponential expansion of errors’. Or as the researchers were told, AI ‘goes wrong faster’. As for the law, while the UK has laws about intellectual property and data protection, among others, it lacks ‘specific legislation or law regulating AI’. And as for the workforce, it lacks ‘data literacy’; while technology costs money, and police will need to hire (and keep, by paying ‘competitively’) specialist staff such as data engineers, analysts and scientists.
Among the report’s suggestions are secondments for police leaders in industry; a ‘stronger national framework’ for delivery of police technology; and likewise a ‘single national framework for the ethical use of AI’.
The report concludes that AI has the potential to enable police to do a lot more, and more quickly, as policing is a ‘complex information business and makes countless routine decisions based on the intelligence it has received and the incidents it has recorded’. A ‘foundation for AI powered policing’ needs laying, the authors say.
For the report visit https://www.police-foundation.org.uk/publications/.
Photo by Mark Rowe: CCTV on-street outside New Scotland Yard, central London.