IT Security

Rolling out Microsoft Copilot

by Mark Rowe

Microsoft Copilot is an AI productivity tool embedded within Microsoft 365 applications. It differs from the likes of ChatGPT in that it can dive deep into a company’s Microsoft 365 content, writes Farrah Gamboa, Senior Director of Product Management at the cyber product firm Netwrix, pictured.

It is designed to enhance daily workflows by drafting documents and presentations, recording action items during Teams meetings, analysing data in Excel, and streamlining many other day-to-day tasks. Copilot is essentially the equivalent of a full-time assistant who can reliably remember every aspect of your work and handle many of your requests in a timely and efficient manner.

Alongside these benefits, however, Copilot brings additional risks, because it relies on the access controls already configured within Microsoft 365. This article highlights the main security concerns introduced by Copilot and provides effective strategies for mitigating them.

The risks

To help ensure a safe Copilot launch, an organisation should understand the key risk factors. These include:

• Excessive permissions — Copilot operates according to the permissions granted in Microsoft 365. If users or groups have access to the wrong content, Copilot can surface sensitive information to them, and that exposure can quickly spiral out of control and become hard to contain.

• Incorrect data classification — Copilot is also governed by sensitivity labels. If labels are not accurate or do not exist at all, Copilot can put sensitive data at risk. Inaccurate labels can occur as a result of manual labelling processes, which are highly prone to human error and not scalable for the vast volumes of data kept by organisations. They can also arise due to limitations in labelling technologies, such as Microsoft file type limitations.

• Content generated by Copilot — New documents drafted by Copilot do not inherit sensitivity labels from the source documents. If the new documents include sensitive information, that data could be exposed to users who are restricted from viewing it in the source documents. Ensuring appropriate sensitivity labelling of Copilot-generated documents can itself be a challenge, owing to the volume of content that the tool can produce.
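The label-inheritance gap described above suggests a simple rule: a document generated from several sources should carry at least the most restrictive label among them, and unlabelled inputs should be escalated rather than silently downgraded. A minimal sketch of that rule — the label names, their ranking, and the `NeedsReview` marker are all illustrative assumptions, not Microsoft's actual labelling scheme:

```python
# Hypothetical sensitivity ranking, lowest to highest. Real Microsoft 365
# label names and ordering are tenant-specific.
LABEL_RANK = {"Public": 0, "Internal": 1, "Confidential": 2, "Highly Confidential": 3}

def label_for_generated_doc(source_labels):
    """Return the most restrictive label among the source documents.

    A generated document drawing on several sources inherits the
    strictest of their labels; an unlabelled source is flagged for
    human review instead of being treated as public.
    """
    if not source_labels or any(label is None for label in source_labels):
        return "NeedsReview"  # unlabelled input: flag rather than guess
    return max(source_labels, key=lambda label: LABEL_RANK[label])
```

In practice the same logic would sit behind an auto-labelling policy rather than ad-hoc code, but it illustrates why generated content must be labelled at creation time, not after the fact.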

How to limit the risk of breaches

Addressing these threats is critical because they put the organisation at risk of both data breaches and compliance penalties. Establishing a strong data access governance programme is therefore vital to maintaining security before, during and after implementation of Copilot.
Specific best practices and capabilities that empower organisations to address the risks that come with using Copilot include:

• Strict least-privilege model — A strict least-privilege model limits the risk of data breaches by ensuring that each user has only the permissions they need for their daily tasks. Achieving this requires streamlined entitlement reviews by data owners, access request and approval workflows, and deep visibility into effective access rights.

• Automated data discovery and classification — By implementing automated data discovery and classification, organisations can ensure consistent and accurate labelling of both existing data and new content crafted by Copilot. This makes it easier to apply the necessary security controls to all content. Moreover, accurate data labelling is vital to an effective data loss prevention (DLP) strategy.

• Automated risk remediation — To prevent valuable and regulated information from being leaked, it’s also vital to detect conditions that put sensitive data at risk and immediately remediate them. Examples include removing excessive permissions and disabling user accounts that are behaving suspiciously.

• Intelligent alerts on threats — Alerts empower the security team to promptly investigate truly suspicious activity and respond effectively to reduce or even prevent damage. Common alerts include attempts to change the permissions of a group or user, as well as attempts to read a sensitive document.
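The least-privilege and remediation practices above amount to a simple reconciliation: compare each user's effective permissions against what their role actually requires, and flag the excess for removal. A hedged sketch of that comparison — the role names and permission strings are hypothetical, and a real implementation would pull effective access from Microsoft 365 via its admin or Graph APIs rather than a hard-coded table:

```python
# Illustrative least-privilege audit: flag permissions a user holds
# beyond their role's baseline. Role and permission names are invented
# for the example; they do not correspond to Microsoft 365 identifiers.
ROLE_BASELINE = {
    "analyst": {"read:reports"},
    "hr": {"read:hr_files", "write:hr_files"},
}

def find_excess_permissions(user_role, effective_permissions):
    """Return the permissions granted beyond the role's baseline.

    An unknown role has an empty baseline, so every permission it
    holds is treated as excess and surfaced for review.
    """
    baseline = ROLE_BASELINE.get(user_role, set())
    return set(effective_permissions) - baseline

# Example: an analyst who has somehow gained access to HR files —
# the excess permission is exactly what a remediation workflow
# would revoke or alert on.
excess = find_excess_permissions("analyst", {"read:reports", "read:hr_files"})
```

The same output that drives automated remediation (revoking the excess grant) can also feed the alerting pipeline, so the security team sees why a permission was removed.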

Adopting these capabilities will help your organisation securely and confidently utilise Microsoft Copilot while limiting the risk of data breaches.