Navigating cyberthreatsand strengthening defensesin the era of AI

Everybody is racing to develop an AI strategy, and many do not understand the risks associated with AI. Microsoft recently posted an article speaking to this concern found HERE. I found this section of the post very interesting …

Recommendations for AI security strengthening:


Apply vendor AI controls and continually assess their fit:

For any AI introduced into your enterprise, look for respective vendors’ built-in features to scope AI access to employees and teams using the technology to foster secure and compliant AI adoption. Bring cyber risk stakeholders across an organization together to align on defined AI employee use cases and access controls. Risk leaders and CISOs should regularly determine whether use cases and policies are adequate, or if they must change as objectives and learnings evolve.

Protect against prompt injections:

Implement strict input validation and sanitization for user-provided prompts. Use context-aware filtering and output encoding to prevent prompt manipulation. Regularly update and fine-tune LLMs to improve its understanding of malicious inputs and edge cases. Monitor and log LLM interactions to detect and analyze potential prompt injection attempts.

Mandate transparency across the AI supply chain:

Through clear and open practices, assess all areas where AI can come in contact with your organization’s data, including through third-party partners and suppliers. Use partner relationships and cross-functional cyber risk teams to explore learnings and close any resulting gaps. Maintaining current Zero Trust and data governance programs is more important than ever in the AI era.

Stay focused on communications:

Cyber risk leaders must recognize that employees are witnessing AI’s impact and benefits in their personal lives and will naturally want to explore applying similar technologies across hybrid work environments. CISOs and other leaders managing cyberrisk can proactively share and amplify their organizations’ policies on the use and risks of AI, including which designated AI tools are approved for enterprise and points of contact for access and information. Proactive communications help keep employees informed and empowered, while reducing their risk of bringing unmanaged AI into contact with enterprise IT assets.

See the full article HERE.

Leave a Reply

Your email address will not be published. Required fields are marked *

Time limit is exhausted. Please reload CAPTCHA.