Shadow IT has long been an issue for organisations everywhere, writes Jay Henderson, Senior Vice President, Product Management, at the analytics platform Alteryx.
The use by employees or departments of often familiar and favoured tools and solutions without corporate approval or oversight can have serious implications for the privacy and security of a company’s data and systems.
Now, though, a new issue has arisen – Shadow AI. The recent explosion in AI across the enterprise ecosystem, encouraged by hype around the potential productivity benefits of generative AI, has led different business units to buy a mass of disparate point-solution AI tools to solve specific departmental challenges. Often invisible to IT teams, this growing decentralised adoption carries all the risks of shadow IT: it adds friction to the existing ecosystem and fails to align with wider business strategies.
These silos and pockets of different AI solutions amplify data and regulatory risks while making it difficult for IT departments to keep track or offer critical advice. Collaboration between business units and IT is vital to prevent and mitigate threats to the privacy, compliance and security of an organisation and its data.
Enthusiasm for generative AI
Research shows that 80 per cent of UK businesses report that AI is already affecting their organisation’s ability to achieve its goals, with 47 per cent saying they will invest in advanced technologies such as AI to respond to the changing market environment. However, recent research also suggests that half of workers use generative AI without the formal approval of their employers. This enthusiasm for generative AI is hardly surprising, but it can be highly problematic.
Organisations have unprecedented access to a wealth of data and untapped business intelligence stored in a range of formats – both structured and unstructured – across various internal and external systems. With data privacy and cyber security concerns continuing to rise, this form of shadow AI introduces several risks around data usage, privacy, and compliance, among other concerns.
For instance, generative AI is incredibly data-hungry. OpenAI’s GPT-3 uses around 175 billion parameters and draws on vast amounts of text from websites, books and other sources to generate human-like responses. Each iteration is set to become still more intensive: its successor, GPT-4, reportedly uses around 1.8 trillion parameters and was trained on a dataset of more than a petabyte. Visual and audio applications, such as DALL-E, Midjourney and Microsoft’s Copilot, can be even more data-intensive.
Silos of generative AI across the business can be insecure, too. Large sets of training and testing data are copied, imported, shared, and stored in various formats and places. This can make that data difficult to keep track of and manage, and personal or sensitive information can be at risk of exposure. To address such concerns, the Information Commissioner’s Office recently published a review of how data protection laws should apply to generative AI applications.
Mitigating risks through caution and governance
Ultimately, the right use of generative AI can give business experts – equipped with domain-specific knowledge but no data science skills – the newfound flexibility to conduct data-driven analysis and deliver real-time insights via a natural language prompt. However, while many businesses remain optimistic about the potential of generative AI to improve their efficiency and productivity, many still lack a clear strategy for its safe, responsible use.
Unsurprisingly, the use of AI is becoming increasingly highly regulated. In addition to the EU’s AI Act, the UK’s National Cyber Security Centre, in association with the US’s Cybersecurity and Infrastructure Security Agency, recently published the first global guidelines for the secure use of AI. Global regulations like these serve as essential reminders of the risks associated with the use of generative AI and are the reason that many organisations are deploying it with appropriate caution and governance.
As businesses plan their AI strategies, they must also consider whether their ability to take full advantage of available data and analytics expertise aligns with business priorities, in order to determine how and where to apply the capabilities of generative AI for maximum impact. Rather than launching various shadow solutions and projects across different departments, the successful introduction of AI into any organisation requires alignment on a shared vision of its business value and impact. Success means taking incremental steps to improve organisational readiness. This requires collaboration between senior management, business units, IT departments and the teams leading selected projects, together with a structured approach to data strategy that aligns AI initiatives with the broader organisational culture, values and overall business objectives.
AI strategies that empower employees
The beauty of AI is that, with the right data, its capabilities can be tailored to most vertical use cases. However, ethical, compliance and legal limitations must not be overlooked when managing data access. Mitigating these risks means striking a balance between empowering employees with AI and providing a centralised approach to data management.
As with any other Shadow IT issue, however, it’s important that, in curtailing the spread of Shadow AI, IT teams don’t dampen enthusiasm for generative AI or hinder its use across the organisation. Generative AI has huge potential for delivering valuable business intelligence. With the right approach to data democratisation, data culture and literacy, an AI-enabled workforce will be more aware of significant regulatory and compliance considerations such as data quality, privacy, security and GDPR requirements.
Used well, with the right guardrails in place – including practical checks on data quality, privacy and governance – AI can provide the competitive edge that many companies need. Analytics platforms that combine a centralised approach to AI with flexibility, scalability, speed and self-service capabilities will support multiple personas, unlimited use cases and future feature expansion. This centralised approach to AI will not only significantly compress the time required to glean insights from data but also head off data and IT problems before they grow from an issue into a company-wide crisis.