Are you prepared, asks Charles Southwood, Regional Vice President, Northern Europe and Africa, at the data management software firm Denodo, for generative artificial intelligence (GenAI) cyber threats?
Experience is the best teacher when it comes to understanding the risks associated with new technologies. History shows that as technology becomes more popular and widely adopted, it also becomes a target for cybercriminals. Active Directory attacks became prevalent when Windows Server gained dominance, business-email scams surged with the rise of Microsoft 365, and crypto-mining exploits emerged with the growing popularity of Infrastructure-as-a-Service (IaaS).
GenAI is now at the forefront of technological transformation. McKinsey projects that AI will automate half of all work between 2040 and 2060, and expects that GenAI will accelerate this timeline by a decade. However, a concerning 96% of executives believe that adopting GenAI will increase the likelihood of a security breach within the next three years.
This is a valid concern: 63 per cent of IT professionals acknowledge the technology's potential to create new cyber risks. One such risk is a rise in phishing attacks driven by the sophistication of generative AI; new models can produce highly personalised, convincing emails that are far harder for recipients to identify as fraudulent. Additionally, GenAI's ability to evolve and learn opens the door to self-evolving malware that grows ever better at avoiding detection. And as cyberattacks increasingly target data itself, hackers can alter information to mislead or influence business decisions, introducing new risks for CEOs across multiple departments. This makes data security a fundamental pillar for ensuring the reliability and trustworthiness of generative AI systems.
What Are the Emerging Threats?
Even before GenAI entered the mainstream market, researchers had demonstrated AI-powered malware such as DeepLocker, which used advanced obfuscation techniques to evade detection. ChatGPT has also been linked to new threats: a study from the University of Illinois Urbana-Champaign found that a GPT-4-based agent successfully exploited 87 per cent of “one-day” vulnerabilities – publicly disclosed vulnerabilities that have not yet been patched. This indicates that GenAI is fostering a new wave of cyber threats, providing hackers with even more opportunities to exploit vulnerabilities and execute attacks. Hackers now have the capability to impersonate voices, faces, and personalities, making their attacks more convincing. For example, a video from your manager requesting specific actions, or a call from a bank asking for urgent payments, are scenarios that GenAI can make alarmingly plausible.
Protecting Against GenAI-Driven Threats
To safeguard against these threats, organisations should adopt a robust framework for securing AI systems, starting with updates to governance, risk, and compliance (GRC) strategies. As AI regulations, like the EU AI Act, become more significant, embedding GRC principles at the outset of every project can accelerate innovation while ensuring a strong security foundation. Data security is crucial for trustworthy GenAI: since training data is foundational to GenAI models, it is a prime target for attackers seeking to tamper with it and so manipulate the decisions built on it.
A Logical Data Fabric
Organisations often start GenAI projects with a single data source, such as a vector database. However, to harness the full potential of GenAI, it is essential for the GenAI application to be able to access data across multiple distributed systems, and in a variety of formats. For example, for a customer-service chatbot to provide accurate responses, it would need to pull information from ERP systems, support ticketing systems, CRMs, and internal APIs.
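One way to picture this is a unified data-access layer that the chatbot calls instead of querying source systems directly. The sketch below illustrates the idea under stated assumptions: the class and method names (`DataSource`, `lookup`, `build_context`) and the sample records are illustrative, not a real API.

```python
# Sketch of a unified data-access layer for a GenAI chatbot. The model
# queries this layer, which fans out to the underlying systems.
# All names and sample values are illustrative.
from abc import ABC, abstractmethod


class DataSource(ABC):
    @abstractmethod
    def lookup(self, customer_id: str) -> dict:
        """Return the fields this system holds for a customer."""


class CRMSource(DataSource):
    def lookup(self, customer_id: str) -> dict:
        return {"name": "Acme Ltd"}          # stand-in for a real CRM call


class TicketSource(DataSource):
    def lookup(self, customer_id: str) -> dict:
        return {"open_tickets": 2}           # stand-in for a ticketing API


def build_context(customer_id: str, sources: list[DataSource]) -> dict:
    """Merge results from every registered source into one view."""
    context: dict = {}
    for source in sources:
        context.update(source.lookup(customer_id))
    return context
```

Each new system (ERP, internal API) then only needs a `DataSource` adapter, and the chatbot's retrieval code is unchanged.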
Robust access controls are necessary to protect sensitive data. The EU AI Act, which many countries may use as a model, requires stringent risk management and user transparency for high-risk applications. For these reasons, it is crucial to implement role-based access control (RBAC) or attribute-based access control (ABAC), and to leverage data tags that indicate sensitivity. Traditional security measures such as authentication, encryption, and masking will remain important.
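As a minimal sketch of how sensitivity tags can drive an ABAC-style check, consider the following; the clearance levels, role names, and records are all hypothetical examples, not a prescribed scheme.

```python
# Minimal sketch of attribute-based access control (ABAC) driven by
# data sensitivity tags. Levels, roles, and records are illustrative.
from dataclasses import dataclass

# Clearance order: a higher number may read lower-sensitivity data.
SENSITIVITY_LEVELS = {"public": 0, "internal": 1, "confidential": 2}


@dataclass
class User:
    name: str
    clearance: str  # one of SENSITIVITY_LEVELS


def can_read(user: User, record_tag: str) -> bool:
    """Allow access only if the user's clearance covers the record's tag."""
    return SENSITIVITY_LEVELS[user.clearance] >= SENSITIVITY_LEVELS[record_tag]


records = [
    {"id": 1, "tag": "public", "value": "product brochure"},
    {"id": 2, "tag": "confidential", "value": "customer PII"},
]

analyst = User("analyst", clearance="internal")
visible = [r for r in records if can_read(analyst, r["tag"])]
```

Here the analyst sees only the public record; the confidential one never reaches the GenAI application at all.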
A logical data management approach surmounts these challenges. The right platform will consolidate disparate data through metadata, offering a unified view while maintaining security and governance. Ideally, it should support user and role-based authentication and authorisation, with row-based and column-based security options, including data masking. Such a platform also enables stakeholders to track the lineage of data and queries, aiding regulatory compliance.
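To make row- and column-based security with masking concrete, here is a simple sketch of filtering and masking data before it reaches a GenAI application; the roles, filter rules, field names, and masking policy are hypothetical.

```python
# Sketch of role-based row filtering and column masking applied before
# data is handed to a GenAI application. Roles, fields, and the masking
# policy are illustrative.

ROW_FILTERS = {
    "support_agent": lambda row: row["region"] == "EMEA",  # row-based security
    "auditor": lambda row: True,                           # auditors see all rows
}
MASKED_COLUMNS = {
    "support_agent": {"email"},  # column-based security via masking
    "auditor": set(),
}


def mask(value: str) -> str:
    """Keep the first character, mask the rest (simple demo policy)."""
    return value[0] + "*" * (len(value) - 1)


def secure_view(rows, role):
    """Return only the rows and field values this role is allowed to see."""
    out = []
    for row in rows:
        if not ROW_FILTERS[role](row):
            continue
        out.append({k: mask(v) if k in MASKED_COLUMNS[role] else v
                    for k, v in row.items()})
    return out


rows = [
    {"region": "EMEA", "email": "anna@example.com"},
    {"region": "APAC", "email": "bo@example.com"},
]
agent_view = secure_view(rows, "support_agent")
```

The support agent sees one masked EMEA row, while an auditor would see both rows unmasked; the same idea scales to per-role policies managed centrally by the platform.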
It would serve businesses well to remember that secure GenAI begins with trusted data. Incorporating a logical data management approach early in your GenAI projects can help mitigate security threats and ensure robust data governance.
About the author
Charles Southwood has been the Regional VP for Denodo Technologies for the last four years and is responsible for the company’s business revenues in northern Europe and Africa. Born and raised in Ascot, Berkshire, Charles holds a degree in engineering from Imperial College London and has over 25 years of experience in technology sales and sales leadership, spanning start-up ventures and the expansion of established businesses. He has a background in data integration, big data, IT infrastructure/IT operations and business analytics gained at Splunk, Oracle, BEA Systems and BMC Software. His hobbies include flying (he holds a private pilot’s licence), yacht sailing and road cycling. He has climbed and trekked the mountains of the Andes, the Alps, the Himalayas and Kilimanjaro. Visit www.denodo.com.