
AI for intelligence analysis

by Mark Rowe

AiTASHA stands for AI Intelligence Triage & Acquisition Support for Human-centred Analysis. It’s a British post-doctoral research project, funded by the Engineering and Physical Sciences Research Council (EPSRC). The aim: to improve the speed and confidence of intelligence analysts’ assessments by building new AI tools that can work alongside human analysts.

Intelligence analysts routinely have to make high-consequence, defensible assessments from vast, complex and uncertain datasets, to identify indicators and warnings of hostile or malicious activity. They may have to make these assessments rapidly and under pressure, facing difficult choices about which data to analyse first and whether to gather more intelligence, potentially at the cost of delay or increased risk. Meanwhile, the scale and complexity of intelligence datasets, as well as the threat posed to UK safety, are growing.

A consortium of the British universities of Warwick, Southampton, Cardiff and Dundee, together with the Alan Turing Institute, will work with UK government defence and national security partners. The researchers will build explainable, defensible AI to complement, rather than replace, the work of intelligence analysts, by recommending which data should be prioritised for human review and which potential new data should be prioritised for acquisition.
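As a purely illustrative sketch, and not a description of the AiTASHA method itself, this kind of triage recommendation can be thought of as scoring each intelligence item so that the most relevant and most uncertain items are surfaced to a human analyst first, with potential new collection discounted by its acquisition cost. All field names, weights and data below are hypothetical.

```python
# Hypothetical illustration only; this is not the AiTASHA project's model.
from dataclasses import dataclass

@dataclass
class IntelItem:
    item_id: str
    relevance: float          # estimated relevance to a threat indicator, 0-1
    uncertainty: float        # how unsure the model is about the item, 0-1
    cost_to_acquire: float = 0.0  # 0 for data already held

def triage_score(item: IntelItem, w_relevance: float = 0.7, w_uncertainty: float = 0.3) -> float:
    """Rank data already held: high relevance and high uncertainty both
    argue for putting the item in front of a human analyst sooner."""
    return w_relevance * item.relevance + w_uncertainty * item.uncertainty

def acquisition_score(item: IntelItem, cost_weight: float = 0.5) -> float:
    """Rank potential new collection: expected value of the data,
    discounted by the cost (time, risk) of acquiring it."""
    return triage_score(item) - cost_weight * item.cost_to_acquire

held = [IntelItem("report-17", relevance=0.9, uncertainty=0.2),
        IntelItem("intercept-4", relevance=0.6, uncertainty=0.8)]
for item in sorted(held, key=triage_score, reverse=True):
    print(item.item_id, round(triage_score(item), 2))
```

In a real system the scores would come from statistical models and structured expert judgement rather than fixed weights, and the output would be an explainable recommendation for the analyst, not an automated decision.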

Prof Jon Gillard, Cardiff University’s lead on the grant from the School of Mathematics, said: “Intelligence analysts are working at the limits of what is humanly possible. The volume, velocity and diversity of data they must interpret has grown dramatically, yet their judgements remain central to safeguarding the UK. This project brings together decades of UK leadership in structured expert judgement, statistical modelling and explainable AI to build tools that genuinely enhance, not replace, human expertise.

“Our goal is to create defensible, ethical AI systems that can help analysts focus on what matters most: identifying early signs of emerging threats and making robust, timely assessments in high-pressure environments.”


Background

This research is part of the Defence and National Security Grand Challenge at the Turing. More about the Defence AI Research Centre (DARe) is on the Turing website.

Separately, the Institute has outlined how an AI model could aid maritime surveillance by enabling satellites to detect and classify vessels faster than is currently possible, passing information to Earth within minutes rather than hours, and helping to monitor illicit activity including illegal fishing, smuggling, trafficking, and piracy. More on the Institute website.
