How to discover, protect, and govern AI usage

One of the most common questions I'm asked is how to govern AI usage. The risks are real and include everything that falls under the term "shadow AI": users enabling AI features within browsers and other tools, building applications with AI capabilities, or sharing data through AI-based services. Without controls, sensitive data can easily be leaked, and vulnerabilities can be introduced when AI tools are not properly vetted. You might assume the AI technology itself protects against this, but in many cases that assumption is unfortunately wrong.

To oversimplify an example of this risk, look at what many SaaS providers are doing with AI. They are layering a large language model (LLM) on top of their technology so customers can talk to the tool and have the AI perform an action. In this model, the LLM is a shared resource, so with specific prompt engineering a threat actor or competitor could coax the AI into divulging communications from a targeted organization. Many companies also skip responsible AI concepts in their offerings, meaning customer requests are not checked for copyright violations, hate, bias, and similar problems, which opens many areas of risk. Another example: a customer requests an image, and the AI returns a copyright-protected diagram. These and other situations are driving companies to pump the brakes on permitting AI.
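To make the shared-LLM risk concrete, here is a minimal sketch of the kind of tenant-isolation guard a SaaS provider can apply before any retrieved data reaches the model. The function and field names are hypothetical, not any vendor's actual API; the point is that isolation must be enforced in code, outside the prompt, since the prompt itself can be manipulated.

```python
# Hypothetical guard for a multi-tenant SaaS with a shared LLM: filter
# retrieved context so the model never sees another tenant's documents,
# no matter what the prompt asks for.

def filter_context(documents, requesting_tenant):
    """Return only the documents owned by the tenant making the request."""
    return [d for d in documents if d["tenant_id"] == requesting_tenant]

docs = [
    {"tenant_id": "org-a", "text": "Q3 revenue draft"},
    {"tenant_id": "org-b", "text": "Internal memo"},
]

# Even if a crafted prompt asks for org-b's data, the guard strips it
# before the LLM ever sees the context.
safe = filter_context(docs, "org-a")
print([d["text"] for d in safe])
```

Because the filter runs before the prompt is assembled, prompt engineering alone cannot reach across tenant boundaries; the same pattern applies to any retrieval step feeding a shared model.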

One key fact to understand is that AI is far too popular to prevent. Your employees will find ways around your "no AI" policies, so a better approach is to allow AI in a controlled and responsible manner. One common workaround is employees using personal devices and accounts to sidestep no-AI policies.

Microsoft just launched its annual Secure conference yesterday, which hit this topic directly. Specifically, the AI hub in Microsoft Purview provides a dashboard showing how your organization is using AI from many popular providers. The Secure event also covered how to limit AI usage to trusted service providers, require secure access (for example, MFA-protected corporate accounts rather than personal accounts), and control who within your organization can use AI.
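The controls described above boil down to an allow-by-exception policy. As a rough illustration (not Purview's actual API; the domain names and session fields are made up), an enforcement point such as a secure web gateway could combine a trusted-provider allowlist with an MFA check like this:

```python
# Illustrative policy check: permit traffic to an AI service only when the
# destination is a vetted provider AND the user's session used MFA.
# Domains and the session structure are hypothetical examples.

from urllib.parse import urlparse

TRUSTED_AI_DOMAINS = {"copilot.example.com", "approved-llm.example.com"}

def is_allowed(url, session):
    """Allow AI traffic only to trusted providers from MFA-verified sessions."""
    host = urlparse(url).hostname
    return host in TRUSTED_AI_DOMAINS and session.get("mfa_verified", False)

print(is_allowed("https://copilot.example.com/chat", {"mfa_verified": True}))   # True
print(is_allowed("https://random-ai.example.net/chat", {"mfa_verified": True})) # False: untrusted provider
print(is_allowed("https://copilot.example.com/chat", {"mfa_verified": False}))  # False: no MFA
```

The same default-deny shape extends naturally to the third control mentioned above: adding a per-user or per-group entitlement check alongside the provider and MFA checks.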

Allowing AI in a responsible manner is very powerful. Learn more about these announcements HERE; the linked post includes screenshots and specific details on how to control different aspects of shadow AI.
