This week, January 27 to 31, is Data Privacy Week, as designated in the United States. The cybersecurity landscape is constantly evolving, but the threat remains as large as ever, says Prof Kevin Curran, IEEE senior member and cybersecurity academic at Ulster University.
He said: “Cybercriminals are starting to employ more advanced tactics, such as the use of artificial intelligence (AI), to enhance attack efficiency. New ransomware groups continue to emerge, reflecting the ongoing profitability of these operations. Attackers are employing multi-pronged extortion techniques, combining data encryption with threats to leak sensitive information and launch distributed denial-of-service (DDoS) attacks. Critical sectors like healthcare and manufacturing will likely remain primary targets, as attacks on them cause the most disruption.
“The integration of AI into malware development is also an emerging trend. AI enables cybercriminals to create more sophisticated and adaptive malicious software, helping threat actors better evade detection and exploit vulnerabilities. For instance, AI can automate the generation of polymorphic malware, which alters its code to bypass traditional security measures. AI-driven tools can also craft highly convincing phishing emails and social engineering attacks, increasing the likelihood of successful breaches.
“This year, as AI technology becomes more accessible, the volume of AI-generated malware will increase, posing significant challenges for organisations and security teams. Moving forwards, CISOs should take a holistic understanding of, and approach to, cybersecurity. Adopting a ‘secure by design’ framework or zero trust policy will be key. This includes identifying which risks to avoid, accept or mitigate, with specific plans for each case, as well as establishing robust protocols for employee access, data storage, data backups, network security, compliance and recovery procedures.”
Akhil Mittal, senior security consulting manager at Black Duck, said: “High-profile breaches and stricter regulations like GDPR, CCPA, and emerging AI-related privacy laws are pushing companies to make data privacy a fundamental part of their operations. This shift means privacy is no longer just a regulatory requirement but also a way for companies to differentiate themselves in the market.”
David Shepherd, SVP of EMEA at the network security product firm Ivanti, said: “Managing data in the workplace isn’t just about compliance; it’s about creating a culture where privacy is understood, prioritised and actively protected. Making this happen requires employees to understand how their actions affect data security, from recognising phishing attempts to handling sensitive information responsibly. All of this requires tailored staff training, including phishing simulation, data handling and classification training, as well as overall security awareness training.
“Furthermore, to put privacy first, it’s crucial to address the workarounds employees sometimes use to bypass security measures. Such workarounds, whether driven by password fatigue or involving the disabling of additional security measures, significantly jeopardise data security.”
Ravi Bindra, CISO at SoftwareOne, said: “Tech advancement, namely the AI boom, continues to change the data privacy game. As AI evolves, the threat landscape grows increasingly complex, equipping malicious actors with advanced tools to compromise confidential data. As threats grow in scale and severity, compliance with new regulations like the EU’s DORA and NIS2 is business critical, but this must be paired with continued investment into AI and more importantly how to use it responsibly.
“The core challenge is that the speed of technology evolution is outpacing the development and implementation of the data governance frameworks and security protocols businesses need to roll out. As such, a priority focus for Data Privacy Day must be on ways to balance AI investment with secure integration. Ensuring that security protocols are baked into all processes to provide employees with clear direction on accepted AI use is key. This should be met with increased AI training for staff, so employees understand their key role in keeping organisational data secure.
“Going one step further, hybrid cloud models can be set up to keep secondary and tertiary backups in other locations, keeping data isolated from threats within internal networks. With so much at stake, from reputational damage to customer and financial loss, protecting sensitive data through AI and cloud investment should be top of the business agenda in 2025.”
Jimmy Astle, Senior Director of Detection Enablement at the cyber threat detection company Red Canary, said that the rise of generative AI has brought data privacy to the forefront of global conversations. He said: “These AI models, trained on vast amounts of internet-scraped data, have ignited concerns about consent and transparency. Questions are being asked about whether individuals and organizations should be informed if their data is being used in this way.
“It’s clear our current privacy laws are struggling to keep pace with the evolution of technology. However, while generative AI adds complexity, it doesn’t eclipse existing data privacy concerns that we’re already grappling with. In fact, the most pressing challenges still stem from widespread data breaches and apps that exploit personal data for profit.
“What GenAI has done though is introduce new dimensions to these existing challenges. For example, we’re seeing a rise in AI-driven SaaS tools that collect and process user data. Technology vendors are increasingly offering opt-out options for their AI features to safeguard user privacy, but this underscores a larger need for more clarity around how data is being used.
“The path forward demands a balance of adaptability, transparency, and regulation. Organizations must take proactive steps to safeguard privacy, including clear communication around data practices and investment in privacy-preserving technologies. Regulators must also work closely with the technology industry to craft policies that protect individuals without hindering progress.”
This year’s theme is ‘take control of your data’. Miia Hytonen, privacy risk and compliance manager at Laserfiche, commented: “So much of our personal information is constantly being collected, shared and analysed across websites, apps, devices and services, but we can all take control of how much of this data we share across these various mediums. Each of us can limit what personal data, and how much of it, we allow to be collected and processed by updating the privacy and security settings on mobile apps, IoT devices and web browsers to our preferences. It is extremely important that we all mobilize the significant privacy tools available to us in our online toolkits – through web browsers, applications, software, etc!”