U.S. Government Releases New AI Security Guidelines for Critical Infrastructure – And My Thoughts on AI Risk

Artificial intelligence (AI) has rapidly hit the market, allowing for huge innovation, but it has also introduced huge risks. Industry guidelines are racing to catch up to help organizations reduce the risk associated with AI. NIST recently published its AI Risk Management Framework (see more HERE). The U.S. government also just unveiled new security guidelines aimed at bolstering critical infrastructure against AI-related threats. The Hacker News covers those recently published guidelines in a post found HERE.

Guidelines are great, but they shouldn’t be the absolute goal of your security strategy. I’m often asked about Shadow AI, which is the risk of your users running AI applications without your knowledge. There is no silver bullet for this problem. As with similar threats such as shadow IT (users running other applications you don’t know about), you will need a mature security program. Technologies should include a Cloud Access Security Broker (CASB) to see how your users are leveraging SaaS offerings. You should also have a Data Life Cycle Management program that includes an eDiscovery tool to find your data and a data classification tool to tag its sensitivity. Tools that protect data in motion and at rest (for example, data loss prevention) should be used as well. All of this is what modern SOCs use to protect your data from shadow IT, including shadow AI. Technology providers understand the shadow IT threat, and there is a trend toward “dashboard” unification around this topic. For example, Microsoft announced the AI Hub, essentially a dashboard built on the plumbing I just covered that lets you view and control how AI is used within your environment.
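To make the CASB point concrete, here is a minimal sketch of the kind of visibility check such tooling performs under the hood: scanning outbound web proxy logs for traffic to known AI SaaS domains. The domain watchlist, log schema, and column names below are assumptions for illustration only, not any vendor’s actual feed or API.

```python
# Hypothetical sketch: surface shadow AI usage by scanning web proxy logs
# for outbound requests to known AI SaaS domains. The watchlist, CSV log
# format, and field names ("user", "dest_host") are illustrative
# assumptions, not a real vendor schema.
import csv
from collections import Counter

# Assumed watchlist of AI service domains; in practice this would come
# from a maintained catalog or threat-intel feed.
AI_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
    "huggingface.co",
}

def find_shadow_ai(proxy_log_path: str) -> Counter:
    """Count requests per (user, domain) for domains on the AI watchlist."""
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("dest_host", "").lower()
            # Match the watchlisted domain itself or any of its subdomains.
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[(row.get("user", "unknown"), host)] += 1
    return hits

if __name__ == "__main__":
    for (user, domain), count in find_shadow_ai("proxy.csv").most_common():
        print(f"{user} -> {domain}: {count} requests")
```

A real CASB does far more than this (inline policy enforcement, risk scoring, sanctioned-app catalogs), but the core visibility question is the same: who is talking to which AI service, and how often.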

Regarding the specific risks of AI, common concerns include AI training on your data; copyright, bias, hate, or other unwanted content violations; and unwanted parties viewing how you are using the AI. This all falls under the need for vendors to enforce responsible AI practices, a topic I covered HERE.

To summarize, AI guidelines are a good starting point for reducing the risk associated with AI. However, you also need to implement security tools that help with AI risk and evaluate any AI vendor’s responsible AI policies to truly address the risk of using AI within your organization.
