IT Security

Growing security risk of AI vendor insolvency

by Mark Rowe

AI regulation is becoming an increasingly urgent topic – and for good reason, says Sam Kirkman, Director of Services for EMEA at the cyber firm NetSPI.


Third-party AI tools have surged across industries, embedding themselves into everything from security operations to customer systems. Yet many CIOs and senior decision makers are overlooking a critical, fast-emerging risk that comes with this rapid, widespread adoption: what happens when your AI provider goes bankrupt? Despite the vast capital being injected into AI startups – about $32.9 billion in the first half of 2025 – many are operating at a loss. Combine this with growing speculation that an ‘AI bubble’ may burst, and you end up with a web of providers – and the data entrusted to them – under threat.


Valuable data thrown into the yard sale

In bankruptcy proceedings, everything has a price tag, including your data. Any information shared with a vendor, from logs to fine-tuned datasets, may be treated as an asset that can be sold to pay creditors. The implications are stark: customer data, proprietary telemetry, and even model training materials could end up in the hands of an unknown buyer.

We’ve seen this before. When Cambridge Analytica folded in 2018, the data it had amassed on millions of users was listed among its key assets. In healthcare, CloudMine’s bankruptcy forced hospitals to scramble to retrieve or delete sensitive health records. These examples show that once data enters a distressed company’s system, control over it can disappear overnight.

CISOs should treat all AI data sharing as a calculated risk. If you wouldn’t give a dataset to a competitor, don’t hand it to an unproven startup. Every contract should define data ownership, deletion procedures, and post-termination handling, but leaders must also accept that contracts offer limited protection once insolvency proceedings begin.


APIs: from secure interface to an open door

A faltering AI vendor doesn’t just raise legal questions; it raises immediate security ones. As a company’s finances collapse, so does its ability to maintain defences. Security staff are laid off, monitoring stops, and systems go unpatched. Meanwhile, your organisation may still have active API keys, service tokens, or integrations linked to that environment, potentially leaving you connected to a breached or abandoned network.

In the chaos of a shutdown, those connections become prime targets. If an attacker gains control of the vendor’s domain or cloud assets, they could hijack API traffic, intercept data, or deliver false responses. Because many AI systems are deeply embedded in workflows, those calls might continue long after the vendor disappears.

You need to treat an insolvent provider as you would a compromised one. Revoke access, rotate credentials, and isolate integrations the moment you see signs of trouble. Your incident-response playbook should include procedures for vendor failure, not just breaches.
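What “isolate integrations” looks like in practice can be simple. Below is a minimal sketch of a vendor kill switch, assuming all traffic to the third-party AI API already flows through one internal choke point; the names (AI_VENDOR_DISABLED, AI_VENDOR_API_KEY, VendorClient) are illustrative, not any real SDK.

import os


class VendorIsolatedError(RuntimeError):
    """Raised once the vendor integration has been deliberately cut off."""


class VendorClient:
    """Stand-in for the real vendor SDK; the real client would make network calls."""

    def __init__(self, api_key: str):
        self.api_key = api_key

    def complete(self, prompt: str) -> str:
        return f"[vendor response to: {prompt}]"


def get_client() -> VendorClient:
    """Single choke point for all vendor access.

    Because every caller obtains the client here, isolating a failing
    vendor becomes one environment-variable change plus a credential
    rotation, not an emergency code hunt across the estate.
    """
    if os.environ.get("AI_VENDOR_DISABLED", "").lower() in ("1", "true"):
        # Fail closed: no credentials issued, no data leaves the building.
        raise VendorIsolatedError("vendor integration isolated by policy")
    return VendorClient(api_key=os.environ["AI_VENDOR_API_KEY"])

Setting the flag and rotating the key in the secrets store then cuts the vendor off in minutes, which is the tempo insolvency demands.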


Orphaned AI models

When a vendor collapses, its models may not die, but they do become orphaned. Proprietary AI systems require regular updates and security patches. If the development team vanishes, vulnerabilities in the model and its platform will go unaddressed. Each passing month increases the chance that attackers will exploit an unmaintained platform.

This problem isn’t unique to AI. Unpatched plugins, abandoned applications, and outdated software have long been common attack surfaces. But AI raises the stakes because models often encapsulate fragments of sensitive or proprietary data. A fine-tuned LLM that contains traces of internal documents or customer interactions is effectively a data repository.

The danger grows when those models are sold off in liquidation. A buyer, potentially even a competitor, could acquire the intellectual property, reverse-engineer it, and uncover insights about your data or processes. In some cases, years of legal wrangling may follow over ownership rights, leaving customers without updates or support while attackers exploit unpatched systems.

CISOs must treat AI dependencies as living assets. Maintain visibility over where your data sits, ensure your teams can patch or replace vendor models if needed, and monitor for new vulnerabilities affecting the AI stack.
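One way to keep that visibility is a living register of AI dependencies that can be queried the moment a vendor wobbles. The sketch below is illustrative only; the field names are assumptions, not a standard schema.

from dataclasses import dataclass


@dataclass
class AIDependency:
    vendor: str
    service: str
    data_shared: str        # classification of the data sent to the vendor
    model_location: str     # "vendor-hosted" or "self-hosted"
    exit_plan_tested: bool  # has the replacement path actually been exercised?


register = [
    AIDependency("ExampleAI Ltd", "ticket triage", "customer PII",
                 "vendor-hosted", exit_plan_tested=False),
]

# Triage when a vendor looks unstable: what is exposed with no tested exit?
for dep in register:
    if dep.model_location == "vendor-hosted" and not dep.exit_plan_tested:
        print(f"HIGH RISK: {dep.vendor} / {dep.service} ({dep.data_shared})")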


Overreliance on contracts

Most supplier agreements include reassuring clauses about data return, deletion, and continuity in case of bankruptcy. Unfortunately, these provisions often collapse under legal and operational realities.

Bankruptcy courts prioritise creditors, not cybersecurity. They may allow the sale of assets “free and clear” of previous obligations, meaning your contract’s promise of data deletion could be meaningless. Even if the law remains on your side, an insolvent vendor may lack the resources to follow through. Staff will have left, systems may already be offline, and no one will be around to certify that your information has been erased.

By the time a legal dispute is resolved, the security damage is usually done. CISOs should therefore act in real time, not legal time. The moment a provider looks unstable, plan for self-reliance: revoke access, recover what data you can, and transition critical services elsewhere. Legal teams can argue ownership later, but security teams must act immediately.


Ensuring continuity of your data

Few organisations appreciate how dependent they’ve become on AI vendors until those vendors disappear. Many modern workflows, from chatbots to analytics engines, rely on third-party models hosted in the provider’s environment. If that platform vanishes, so does your capability.

Past technology failures offer cautionary lessons. When the cloud storage firm Nirvanix shut down in 2013, customers had just two weeks to move petabytes of data. More recently, the collapse of Builder.ai highlighted how even seemingly successful AI startups can fail abruptly. In each case, customers faced the same question: how fast can we migrate?

For AI services, the challenge is even greater. Models are often proprietary and non-portable. Replacing them means retraining or re-engineering core functions, which can degrade performance and disrupt business operations. Regulators are beginning to take note. Financial and healthcare authorities now expect “exit plans” for critical third-party technology providers, a sensible standard that all sectors should adopt.

CISOs should identify single points of failure within their AI ecosystem and prepare fallback options. That might mean retaining periodic data exports, maintaining internal alternatives, or ensuring integration with open-standard models. Testing those plans before a crisis can turn a potential disaster into a manageable transition.
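As a rough illustration of such a fallback, the sketch below tries the vendor’s hosted model first and drops to an internally hosted, open-standard model when the vendor path fails; both query functions are hypothetical stand-ins rather than real APIs.

import logging

log = logging.getLogger("ai_failover")


def query_vendor(prompt: str) -> str:
    # Placeholder for the hosted third-party model call.
    raise ConnectionError("vendor endpoint unreachable")


def query_internal(prompt: str) -> str:
    # Placeholder for a self-hosted, open-weights fallback model.
    return f"[internal model response to: {prompt}]"


def query(prompt: str) -> str:
    # Try the vendor first; fall back to the internal model on failure.
    try:
        return query_vendor(prompt)
    except (ConnectionError, TimeoutError) as exc:
        log.warning("vendor call failed (%s); using internal fallback", exc)
        return query_internal(prompt)


print(query("summarise today's alerts"))

The fallback model will rarely match the vendor’s quality, but a degraded answer beats a workflow that stops dead.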


Building resilience for what may come

The next wave of AI vendor failures is inevitable. Some will fade quietly, others will implode spectacularly. Either way, CISOs can mitigate the fallout through preparation rather than panic.

Start by expanding your definition of third-party risk to include financial stability. Ask tough questions about funding, continuity, and data deletion – and demand proof that your data is actually erased when the contract ends.

Build continuity and exit strategies well before you need them. Regularly back up critical data, test transitions to alternative tools, and run simulations where a key AI API goes offline. Regulatory frameworks such as Europe’s Digital Operational Resilience Act (DORA) already encourage this discipline.
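Continuing the hypothetical failover wrapper above (imagined here as a module named ai_gateway), an offline simulation can be as small as a pytest case that forces the vendor path to fail and checks the business function still completes.

import ai_gateway  # hypothetical module containing query() and query_vendor()


def test_service_survives_vendor_outage(monkeypatch):
    # Simulate the vendor API vanishing mid-operation.
    def vendor_down(prompt: str) -> str:
        raise ConnectionError("simulated vendor insolvency outage")

    monkeypatch.setattr(ai_gateway, "query_vendor", vendor_down)

    # The business-facing entry point must still return a usable answer.
    assert ai_gateway.query("summarise today's alerts")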

AI provider insolvency may sound like a commercial or legal issue, but it’s fundamentally a security one. The CISOs who fare best will treat vendor failure as another form of breach, demanding transparency, maintaining independence, and ensuring their systems can stand on their own.


A security-first approach

The bankruptcy of an AI vendor, then, is never a purely commercial or financial matter; it is a security one. Embracing AI can unleash huge potential for your organisation, but it is a security-first approach that builds resilience against the risks and fallout of an external provider’s collapse.

Ensuring your organisation is prepared doesn’t just protect data and customers; it protects profits, workers and reputation. CIOs who proactively plan for instability can secure long-lasting resilience.