Interviews

Copilot properly

by Mark Rowe

Everyone wants it, but how do you implement it without causing harm? Mike Bellido, Cloud Solution Architect at the cloud services, cyber and data protection company CSI Ltd, gives tips for implementing Microsoft Copilot successfully without exposing your organisation to more risk.

You’ve read the articles about how incredible Copilot is at boosting productivity and efficiency, and now everyone in your organisation is clamouring for it. Some of them have tried it, others are keen to, and it makes sense that everyone wants a go. In Microsoft’s recent Future of Work Report, a Teams meeting study found that participants with access to Copilot rated the task 58 per cent less draining than participants without access, and among enterprise Copilot users, 72 per cent agreed that Copilot helped them spend less mental effort on mundane or repetitive tasks.

Meanwhile, 80 per cent of executives think automation like AI can be applied to any business decision, yet only 18 per cent of organisations believe they are digitally thriving. So, what about the practicalities of implementing Copilot? Most of the people I talk to lack confidence about what the next steps are and how to roll out Copilot successfully – and securely – across their organisation.

There are many AI offerings, from Dynamics, Salesforce, and Azure to ServiceNow, to name a few. These new AI tools help you mine data, summarise information, and work smarter. You can also build your own AI tools with OpenAI to use specific company data in the way that best suits your company.

We expect certain things to come out when we use Copilot or similar AI tools. For example, we expect deeper and more relevant search results, and personalised, relevant, and actionable responses. We also expect tenant, group, and user protections, and AI-assisted creativity, productivity, and automation. The key to success is helping users understand how to use data properly – which all starts with policy, governance, and classification to know what you have, then training and follow-through for user adoption.

So, let’s dive deeper into how you can be better prepared to implement AI tools and make them work well for your organisation.

Data quality

Making sure the data that AI trawls for information is high quality, relevant, and not out of date is critical to success. You need to understand first how the AI tools use data and the potential consequences of using old or wrong data. AI is only as good as the information you put into it and the questions you ask it.

First, you will need to gain visibility of all your data sources and start to assess their quality: what can be discarded, what needs updating, and what can be kept. For example, there is no point in AI trawling through PowerPoints to create a new one if the figures in the previous ones are out of date, or through old customer ordering information, as it will produce inaccurate results for you.

On average, globally, every human creates at least 1.7 MB of data every second, so it’s no wonder 47 per cent of digital workers struggle to find the information they need to perform their jobs effectively. We waste so much time looking for what we need, so we must all ensure that we only keep what is up to date, of use, and relevant. Your company data policies should address these issues.

IT teams will need to assess the quality of your data across functions and with various departments in the business. Potentially, many businesses will need to spend considerable time assessing and ‘cleaning up’ their data before they can start using AI effectively.

Organisations should also be aware of ‘dark data’ – data that is not necessarily immediately visible to an employee or department, but which is still accessible by AI. It is often data that has come across with a migration and is out of date. Some organisations will have a policy to delete all data that is five or ten years old. Putting in place the right permissions and reviewing your data constantly will be critical to the successful adoption of AI.

By working out what data is relevant and strengthening your data quality, you will improve your AI results. A data assessment will ultimately allow you to make good decisions on what content to keep, remove, or archive.
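As a minimal sketch of what such a data assessment could look like, the script below walks a folder tree and buckets files by last-modified age into keep, review, and archive candidates. The thresholds and bucket names are assumptions for illustration, not policy; a real assessment would also consider content quality, ownership, and sensitivity, not just age.

```python
import os
import time

# Illustrative thresholds (assumptions, not policy) -- tune to your own
# retention rules.
REVIEW_AFTER_YEARS = 2   # flag for review
ARCHIVE_AFTER_YEARS = 5  # candidate for archive or deletion

def assess_data(root: str) -> dict:
    """Bucket files under `root` by last-modified age in years."""
    now = time.time()
    year = 365 * 24 * 3600
    buckets = {"keep": [], "review": [], "archive": []}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            age_years = (now - os.path.getmtime(path)) / year
            if age_years >= ARCHIVE_AFTER_YEARS:
                buckets["archive"].append(path)
            elif age_years >= REVIEW_AFTER_YEARS:
                buckets["review"].append(path)
            else:
                buckets["keep"].append(path)
    return buckets
```

The output gives IT teams a first-pass inventory to review with each department before anything is deleted.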

Storage

While reviewing your data, it’s important to match your storage capabilities and platforms with your data needs, to ensure you are not wasting money on storage but can also scale up if needed to allow for the extra compute that running AI requires.

It’s also important to know where your data is stored in M365 and other databases, how those stores interconnect, what data you need from them, and whether they have the right security in place. For example, archived data can be stored outside Microsoft 365 where Copilot can’t access it, but it can be brought back online should people request it.

Within this, companies need to ask whether they are using and storing their data in the right way to meet regulatory obligations. All of this needs to be set out in an AI strategy document. With a clear AI strategy in place, adoption will become much easier. Without one, the wrong permissions could be granted to employees, potentially causing a data leak and resulting in catastrophic damage and fines from regulatory bodies.

Data privacy, compliance, and security

Organisations need the right data privacy controls – internally and externally if sharing content. Understanding your regulatory obligations and meeting GDPR requirements require extra vigilance with Copilot. It will ‘farm’ information from a variety of different sources and databases, so it’s key that anyone accessing data has the right permissions in place to meet all your regulatory obligations.

Think about the sensitivity of your data and consider which data sets need to be locked down and what compliance needs you have as an organisation. For example, identify sensitive data, external users and links, and how items are shared internally. All information can be given a ‘category’ of risk and identified by audience; IT admins can then run assessments and work out how to prevent oversharing of sensitive data.
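To make the idea of a risk ‘category’ concrete, here is a deliberately simple sketch that assigns a coarse risk level based on keyword hits. The keyword rules are hypothetical examples only; a production approach would use your organisation’s sensitivity labels and classification tooling rather than string matching.

```python
# Hypothetical keyword rules for illustration only -- a real system would
# rely on sensitivity labels, not substring matching.
RISK_RULES = [
    ("high",   ["salary", "passport", "national insurance"]),
    ("medium", ["contract", "invoice", "customer"]),
]

def risk_category(text: str) -> str:
    """Assign a coarse risk category based on keyword hits.

    Rules are checked in order, so the highest-risk match wins.
    """
    lowered = text.lower()
    for category, keywords in RISK_RULES:
        if any(keyword in lowered for keyword in keywords):
            return category
    return "low"
```

Even a rough first pass like this lets admins see where high-risk content is concentrated and where sharing settings deserve the closest scrutiny.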

Alongside this, it’s important to clean up permissions and enforce policies – for example, removing shadow users who still have access but no longer use the data because they have moved department. The IT team can review who has access to what, and how people set up project teams with access to certain data sets, and grant permissions accordingly. For example, ‘leases’ can be put on workspaces to allow access to data sets for a limited period only.
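The lease idea can be sketched as follows: each grant of access carries an expiry date, access checks fail once the lease lapses, and expired leases surface as a clean-up list. The in-memory registry and function names here are assumptions for illustration; in practice this logic would live in your identity and governance platform.

```python
from datetime import date, timedelta

# Hypothetical in-memory lease registry, keyed by (user, workspace).
leases: dict[tuple[str, str], date] = {}

def grant_lease(user: str, workspace: str, days: int) -> None:
    """Grant `user` access to `workspace` for a fixed number of days."""
    leases[(user, workspace)] = date.today() + timedelta(days=days)

def has_access(user: str, workspace: str) -> bool:
    """Access is valid only while an unexpired lease exists."""
    expiry = leases.get((user, workspace))
    return expiry is not None and date.today() <= expiry

def expired_leases() -> list[tuple[str, str]]:
    """Leases past expiry -- candidates for automated clean-up."""
    today = date.today()
    return [key for key, expiry in leases.items() if expiry < today]
```

The benefit of the design is that access removal becomes the default: doing nothing revokes access, so shadow users expire instead of accumulating.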

Once your data and environment are clean and secure you can then use AI and automation to manage and govern your data.

Adoption

Training people to use Copilot properly, and helping them understand which prompts will return useful information, is the final part of a successful rollout. With the latest management tools, the IT team can have a clear overview of who is licensed to use Copilot and how they are using it. If they are not making the most of it, do they need more training?

Employees must also understand the risks around AI-created information and how to check its veracity. For example, if employees are using ChatGPT across internal data, are they asking the right questions to retrieve the best answers, and are they checking that the data is recent and relevant?

In principle, your data should be up to date if you have started your journey with a data assessment, so that your AI tools only have access to high-quality data. However, we all know data goes out of date very quickly, so all employees need to learn, through appropriate training, not only how to use the tools, but also how to review and assess whether the information they produce is useable. We still need to use our own intelligence to judge whether AI has given us useful, useable information or not.

The right strategy, policies, and security combined with good training will enable your workforce to make the most of AI and its amazing capabilities.



© 2024 Professional Security Magazine. All rights reserved.
