AI round-up

by Mark Rowe

The specialist insurer Hiscox says it’s gone live with the London insurance market’s first lead underwriting model, enhanced by generative AI. Initially developed as a proof of concept in December 2023 in collaboration with Google Cloud, the new underwriting model covers the insurance firm’s sabotage and terrorism line of business.

Risks that are in scope are assessed using Google Cloud’s Gemini large language model, and a broker can have an insurance quote back in minutes, the firm says. While the new underwriting model is available to all brokers, the first risk was written with the insurance broker WTW. Hiscox adds that it is looking at applying AI to other lines of business.
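
Hiscox has not published implementation details, but for readers curious what an LLM-assisted triage step might look like, here is a minimal, purely illustrative Python sketch using Google’s generative AI SDK. The model name, the assess_submission helper, the prompt wording and the placeholder credential are all assumptions for illustration, not Hiscox’s actual system.

```python
# Purely illustrative sketch; not Hiscox's published implementation.
# Requires Google's SDK: pip install google-generativeai
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")          # placeholder credential
model = genai.GenerativeModel("gemini-1.5-pro")  # assumed model choice

def assess_submission(submission_text: str) -> str:
    """Ask the LLM to triage a broker's risk submission (hypothetical helper)."""
    prompt = (
        "You are assisting an underwriter on a sabotage and terrorism line. "
        "Summarise the key exposures in this broker submission and say "
        "whether it appears in scope for an automated quote:\n\n"
        + submission_text
    )
    response = model.generate_content(prompt)  # single text-in, text-out call
    return response.text                       # assessment for underwriter review

print(assess_submission("Warehouse, central London, GBP 10m limit, 24h guarding."))
```

In practice such output would feed a human underwriter’s review rather than bind cover automatically; the point of the sketch is only that a generative model can turn an unstructured broker submission into a structured first assessment within minutes.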

Kate Markham, Hiscox London Market CEO, said: “We were really excited by the potential shown by the proof of concept, so to see it now making a tangible impact on our business – starting with sabotage and terrorism – is fantastic. The efficiency delivered is testament to the outstanding collaboration between Hiscox and Google Cloud. It proves that by bringing teams together and harnessing technology, we can deliver tangible benefits for customers, while freeing up our underwriters from manual tasks and allowing them to focus on more complex risks where human expertise is critical.”

Infosec behaviours

The phishing training platform KnowBe4 released results from a survey of 201 cybersecurity professionals at Infosecurity Europe 2024, to better understand security behaviours in the workplace. The survey found that three-quarters (75pc) of security professionals have witnessed employees displaying risky security behaviours at work, and almost two-thirds (62pc) admit to risky behaviours themselves. The top risky behaviours security professionals admitted to included using entertainment or streaming services (33pc); using GenAI within the organisation (31pc); and sharing personal information (14pc). Javvad Malik, lead security awareness advocate at KnowBe4, saw this as evidence of a lack of security culture. “Cultivating a strong security culture means going beyond just educating staff on threats. Teach them how to respond and identify them, as this will help with prevention,” he said.

Comments

Usman Choudhary, Chief Product and Technology Officer, VIPRE Security Group, says: “As AI technology advances, the potential for BEC attacks grows exponentially. Malefactors are now leveraging sophisticated AI algorithms to craft compelling phishing emails, mimicking the tone and style of legitimate communications. The next wave of BEC attacks could see attackers using AI to dynamically analyse and exploit real-time information, creating tailored and contextually accurate scams nearly indistinguishable from genuine correspondence.”

Regulation

In the UK, a voluntary Code of Practice was developed by the Department for Science, Innovation & Technology (DSIT), based on the National Cyber Security Centre (NCSC) Guidelines for secure AI system development, published in November 2023.

Similar to GDPR (for data protection), any UK business that sells into the EU market will need to concern itself with the new EU AI Act, said Curtis Wilson, staff data engineer at the Synopsys Software Integrity Group. “However, even those that don’t can’t ignore it. Certain parts of the AI Act, particularly those in relation to AI as a safety component in consumer goods, might also apply in Northern Ireland automatically as a consequence of the Windsor Framework. The UK government is moving to regulate AI as well, and a white paper released by the government highlighted the importance of inter-operability with EU (and US) AI regulation. UK companies aligning themselves to the EU AI Act will not only maintain access to the EU market, but hopefully get ahead of the curve for the upcoming UK regulation.

“From software licensing to data privacy regulations, UK businesses are already used to having to deal with EU regulatory frameworks. Many of the obligations laid out in the act are simply data science best practices and things companies should already be doing. There are some additional obligations around registration and certification, which will probably lead to some friction. Small companies and start-ups will experience issues more strongly; the regulation acknowledges this and has included provisions for sandboxes to foster AI innovation for these smaller businesses. However, these sandboxes are to be set up on the national level by individual member states, and so UK businesses may not have access.”

More on AI in the September edition of Professional Security Magazine.