AI and enterprise risks

by Mark Rowe

Artificial Intelligence (AI) could change the world as we know it, writes Josh Breaker-Rolfe. From healthcare to communications to manufacturing, thoughtful use of AI has the potential to drive revenues, cut costs, and even save lives. But every innovation brings new risks, and AI is no different.

As with any new technology, governance is playing catch-up. With little regulation and few standardized best practices to lean on, organizations must think critically about how and why they should incorporate AI into their business models.

AI has enormous security and privacy implications that organizations hungry to reap the equally enormous benefits could easily overlook. Months after OpenAI launched ChatGPT, many of the world’s largest corporations, including Walmart, Microsoft, and Samsung, issued warnings to their employees, imploring them to stop sharing company data with the chatbot. It doesn’t take much digging to figure out why.

ChatGPT is a machine-learning tool. Unless the user opts out, it uses all conversations to train the model and improve its services. OpenAI makes no bones about this, stating explicitly in its privacy policy that it “may aggregate or de-identify Personal Information and use the aggregated information to analyze the effectiveness of [its] Services.” In theory, any company information an employee inputs into ChatGPT could resurface in another user’s conversation.

But that’s not all. ChatGPT already boasts a staggering 1.8 billion monthly visitors. Users have already fed an unfathomable amount of information into the chatbot, and one can only assume that cybercriminals are positively drooling at the prospect of getting their hands on that data. In fact, in March, researchers discovered a bug that exposed conversation titles, the first message of newly created conversations, and payment details belonging to some ChatGPT Plus subscribers. Cybercriminals are renowned for their persistence, meaning it’s likely only a matter of time before ChatGPT suffers a more significant breach. If organizations want to prevent their sensitive data from being exposed, they must think carefully about what data they input into the chatbot.

However, the risks associated with AI aren’t limited to privacy and security. It’s easy to view AI as an omniscient entity, a magic machine that spits out answers to complex problems in seconds. But this isn’t necessarily the case. Tools such as ChatGPT don’t look facts up; they generate answers from statistical patterns in the enormous volumes of internet text they were trained on. Not every answer ChatGPT comes up with is strictly correct, and assuming otherwise could have dire consequences, potentially bringing digital transformation projects to a grinding halt, massively disrupting business operations, or even landing organizations in legal trouble.

Managing AI enterprise risk

Developing a solid enterprise risk management strategy is crucial if organizations want to use AI safely. But this is more complex than it sounds; AI risk management is uncharted territory, and there’s no established roadmap for organizations to follow when deciding how to govern tools such as ChatGPT.

The first step to implementing a solid risk management strategy is to assume that none of the information submitted to AI tools is private. While companies might be tempted to charge ahead with their digital transformation projects, using AI with reckless abandon, they must restrain themselves. Organizations should only use AI tools with specific intent, and employees should input as little data as possible. Ideally, organizations would figure out how to achieve their AI goals without inputting any information, but that might be a way off yet.
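What might “input as little data as possible” look like in practice? The sketch below is one illustrative option rather than a prescribed control: a small Python redaction step, with invented patterns and function names, that an organization could place between employees and any external chatbot so that obviously sensitive strings never leave the company boundary.

```python
import re

# Hypothetical, illustrative patterns only; a real deployment would need a far
# more thorough inventory of what counts as sensitive for the business.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a sensitive pattern with a placeholder
    before the text ever leaves the organization's boundary."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = ("Chase up jane.doe@example.com about invoice 4411; "
           "her card on file is 4242 4242 4242 4242.")
    print(redact(raw))
    # Chase up [EMAIL REDACTED] about invoice 4411;
    # her card on file is [CARD_NUMBER REDACTED].
```

Pattern matching of this kind is crude and will never catch everything, which is exactly why the broader rule still applies: assume nothing submitted to an AI tool is private.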

We’ve established that AI tools like ChatGPT are limited: their answers will only ever be as good as the questions they’re asked and the data they were trained on. Garbage in, garbage out. But what does that look like in practice? And how can organizations avoid it?

It’s important to remember that machine-learning tools regurgitate the publicly available information they were trained on; as a result, AI reflects our biases and prejudices.

In March, the Connecticut Advisory Committee to the US Commission on Civil Rights released a report titled “The Civil Rights Implications of Algorithms,” which explained how skewed training data can lead to biased results. The committee gave the example that “in New York City, police officers stopped and frisked over five million people over the past decade. During that time, Black and Latino people were nine times more likely to be stopped than their White counterparts. As a result, predictive policing algorithms trained on data from that jurisdiction will over-predict criminality in neighborhoods with predominantly Black and Latino residents.” Organizations must use AI thoughtfully, avoiding situations where the tool could produce biased results.
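To make that mechanism concrete, here is a small, purely illustrative Python simulation; the numbers are invented rather than taken from the report. Two neighborhoods have exactly the same underlying offense rate, but one is stopped nine times as often, so the historical records a predictive model would train on make it look roughly nine times as “criminal”.

```python
import random

random.seed(0)

# Purely illustrative numbers, not taken from the report: two neighborhoods
# with the SAME underlying offense rate but very different stop rates.
TRUE_OFFENSE_RATE = 0.05              # identical in both neighborhoods
STOP_RATE = {"A": 0.02, "B": 0.18}    # neighborhood B is stopped ~9x as often
POPULATION = 100_000

def recorded_incidents(neighborhood: str) -> int:
    """An offense only enters the historical 'training data' if the person
    involved was also stopped -- the data records enforcement, not crime."""
    records = 0
    for _ in range(POPULATION):
        offended = random.random() < TRUE_OFFENSE_RATE
        stopped = random.random() < STOP_RATE[neighborhood]
        if offended and stopped:
            records += 1
    return records

# A naive predictive score built from historical records per 1,000 residents.
for hood in ("A", "B"):
    score = 1000 * recorded_incidents(hood) / POPULATION
    print(f"Neighborhood {hood}: recorded incidents per 1,000 residents = {score:.1f}")

# Despite identical true offense rates, neighborhood B's score comes out
# roughly nine times higher, so a model trained on these records will
# 'predict' far more criminality there.
```

The skew comes entirely from how the training data was collected, not from any difference in behaviour, which is why model output can never be treated as neutral.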

The key takeaway here is that AI, while undoubtedly exciting, must be used with caution. Organizations must oversee and govern how their employees use machine-learning tools and what information they input, or risk potentially devastating security and privacy consequences. While the dangers of AI are often overblown in cheesy science fiction, we do need to tread carefully to avoid disaster, even if not an apocalyptic one.
