Microsoft just announced a major connection between the Artificial Intelligence hype and what Microsoft plans to do with it. A post from Jared Spataro found HERE introduces Copilot. In summary, imagine an AI copilot built into your Microsoft 365 tools. If you join a meeting late, you can ask the AI to catch you up on what you missed. If you need to craft an email, you can ask the AI to build a template so you only need to tune it, saving a lot of time. If you need to understand a topic covered during a meeting, you can ask the AI to pull all of the related bullet points and summarize them for you. The following video pretty much summarizes the idea.
The detailed blog post found HERE is worth the read. How does this relate to security? I posted my thoughts on AI HERE a few weeks ago with a focus on ChatGPT. I've been hearing people say this type of technology will change the cyber security landscape; however, I argue that it's already a continuously changing landscape. For example, some could say Bitcoin changed the threat landscape by giving ransomware a method to collect payment. Many situations change over time, and both attack and defense continue to adapt. AI is just another change.
Copilot will be interesting from both an attack and defense viewpoint. On the attacker side, this could be an easier way to collect details about a target, craft more impactful phishing and other messaging, and possibly serve as a reconnaissance tool, allowing AI to do the "digging" on a target. I'm sure Microsoft and others leveraging AI are taking into account the risk of their tools being abused; however, there is no way to 100% ensure a threat actor can't leverage an AI tool for malicious purposes.
On the defender side, security vendors could learn when AI is being pulled into attacks, allowing attacker behavior to inform defensive detection. Improving the work experience can give people more time to focus on items that require manual effort, hopefully enabling smarter work, including more secure and thoughtful work. For example, if corporate-approved tools offer this type of AI tech, it should draw employees to use them over third-party, non-approved tools, which could otherwise lead to shadow IT and other risks. Lastly, it will be interesting to see how AI collaboration starts to blend with security context tools. UEBA (User and Entity Behavior Analytics) is the concept of monitoring user trends and identifying anomalies. If an organization is leveraging Copilot, it would be interesting to develop baseline trends for how conversations, data, and other content are shared. It would be even more interesting to see if anomalies could be detected and converted into potential security incidents, essentially creating a tool for eDiscovery, data classification, and data lifecycle management. If data is limited to the Copilot environment, that could itself be seen as a security control.
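To make the UEBA idea concrete, here is a toy sketch of baselining and anomaly detection. This is not part of Copilot or any Microsoft product; the data and the three-standard-deviation threshold are made-up assumptions, just illustrating how a per-user baseline of sharing activity could flag unusual behavior.

```python
from statistics import mean, stdev

def baseline(counts):
    """Build a per-user baseline (mean, standard deviation) from
    historical daily sharing counts."""
    return mean(counts), stdev(counts)

def is_anomalous(todays_count, history, threshold=3.0):
    """Flag today's activity if it deviates more than `threshold`
    standard deviations from the user's historical baseline."""
    mu, sigma = baseline(history)
    if sigma == 0:
        return todays_count != mu
    return abs(todays_count - mu) / sigma > threshold

# Hypothetical history: documents a user shared per day over two weeks
history = [3, 4, 2, 5, 3, 4, 3, 2, 4, 5, 3, 4, 2, 3]

is_anomalous(4, history)   # a typical day -> not flagged
is_anomalous(60, history)  # a sudden spike -> flagged as a potential incident
```

A real UEBA system would model far more signals (recipients, data sensitivity, time of day) and use more robust statistics, but the core loop is the same: learn a baseline, score deviations, and escalate the outliers into the incident pipeline.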
Most of these security concepts are not part of the Microsoft announcement, though some, such as data security, are mentioned. My guess is there will be more overlap between AI collaboration and security tools as this technology is adopted and matures. Now go read the announcement.