Interviews

Data Privacy Week comments

by Mark Rowe

It’s Data Privacy Week. Privacy really boils down to choice and trust around how personal data is being used, according to Patrick Harding, Chief Product Architect at the vendor Ping Identity.
He says: “Data privacy is no longer a passing concern for consumers – it has become a defining factor in how they judge brands, with three-quarters now more worried about the safety of their personal data than they were five years ago, and a mere 14 per cent trusting major organisations to handle identity data responsibly.

“Whether it’s social engineering, state-sponsored impersonation or account takeover risks, AI will continue to test what we know to be true. As threats advance and AI agents increasingly act on behalf of humans, only the continuously verified should be trusted as authentic. For businesses, the path forward is clear: trust must be earned through transparency, verification, and restraint in how personal data is collected and used. The businesses that adopt a ‘verify everything’ approach that puts privacy at the centre and builds confidence across every identity, every interaction and every decision, will have the competitive edge.”

Zero Trust must be at the core of a security strategy, according to Kevin Curran, IEEE senior member and professor of cybersecurity at Ulster University. He says: “The traditional security model, which assumes that everything inside the network can be trusted, is no longer fit for purpose. The growing use of IoT devices and cloud services means organisations now have far more endpoints exposed to potential attack. A basic rule of cybersecurity is that the more connected devices you have, the greater the risk. Many of these devices effectively act as back doors into corporate systems, often without organisations fully realising the level of exposure they’ve created.

“Zero Trust works on the principle that no user or device should be trusted by default, regardless of where they sit in the network. Every request must be verified, reflecting the reality that data is now spread across multiple platforms and services. But that requires strong governance, clear policies and senior-level oversight to be effective. Many organisations struggle with this because it means enforcing new behaviours, tighter controls and, in some cases, reduced access. That can be uncomfortable, but it’s essential.”
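In code terms, the principle Curran describes might look like the following minimal sketch (the request fields, policy table and names here are illustrative assumptions, not any vendor’s API): access is denied by default, and every request must prove identity, device health and entitlement, with network location never consulted.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_trusted: bool   # hypothetical device-posture check result
    mfa_passed: bool       # user completed multi-factor authentication
    resource: str

# Hypothetical policy table: which users may reach which resources.
ACCESS_POLICY = {
    "alice": {"hr-db", "payroll"},
    "bob": {"wiki"},
}

def authorize(req: Request) -> bool:
    """Zero Trust check: deny by default. Identity, device health and
    entitlement are all verified on every request; being 'inside the
    network' confers nothing."""
    if not req.mfa_passed or not req.device_trusted:
        return False
    return req.resource in ACCESS_POLICY.get(req.user, set())

# A request from a trusted device is still denied without verified identity.
print(authorize(Request("alice", device_trusted=True, mfa_passed=False,
                        resource="payroll")))  # False
print(authorize(Request("alice", device_trusted=True, mfa_passed=True,
                        resource="payroll")))  # True
```

The default-deny shape of the final return, which refuses anything not explicitly granted, is the part that distinguishes this from the perimeter model Curran says is no longer fit for purpose.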

Monica Landen, CISO at Diligent, comments that the gap between AI adoption and AI governance has never been wider. She says: “Business leaders are doubling down on AI investments, yet many organizations are racing to implement AI tools without putting the right data governance frameworks in place. In some instances, companies have deployed generative AI solutions only to discover too late that they have inadvertently exposed sensitive customer data or violated compliance requirements. The aftermath isn’t pretty, leading to reputational damage, regulatory penalties, and considerable loss of revenue.”

“Physical security data can be highly sensitive, and protecting it requires more than basic safeguards or vague assurances,” said Mathieu Chevalier, Principal Security Architect at Genetec Inc, the access and security management product firm. “Some approaches in the market treat data as an asset to be exploited or shared beyond its original purpose. That creates real privacy risks. Organizations should expect clear limits on how their data is used, strong controls throughout its lifecycle, and technology that is designed to respect privacy by default, not as an afterthought.”

And Jimmy Astle, Director of Machine Learning at Red Canary, says that agentic AI is moving out of the lab and into real-world corporate systems – used for scanning documents, augmenting workflows, and taking actions once reserved for humans. He says: “That shift has significant ramifications for data privacy, especially if AI tools are deployed without clear governance, strong access controls, and careful oversight.

“The risk stems from the increasing volumes of information that organisations need to grant their agents access to for them to act autonomously. That data is often sensitive or personal, relating to employees and customers – who expect the business to keep it secure. This is why guardrails around data access must come first in any AI initiative.

“Data privacy in the agentic era starts with treating AI like any other user that accesses corporate systems – it must be secured at the identity layer. Organisations should keep their access privileges tight, maintain clear visibility into which data AI agents can retrieve and act on, and control which users are able to prompt them. From there, employees need clear usage policies and security teams should regularly review how their AI systems behave in practice. Privacy checks should also be built directly into user workflows from day one to ensure consistent and widespread compliance. With robust data privacy controls, AI will remain a force for efficiency and insight, rather than a source of unintentional exposure.”

More in the March edition of Professional Security Magazine.
