I saw this post from Miggo about hiding attacks within Calendar Invites. What is interesting is the use case may seem a nitch situation, but its not. Its more about the challenges with detection with AI-native threats since this is a AI to calendar vulnerability. There are probably many other examples of AI + SOMETHING vulnerabilities that traditional detection tools will miss.
Here is a link to that post as well as the introduction to the post.
As application security professionals, we’re trained to spot malicious patterns. But what happens when an attack doesn’t look like an attack at all?
Our team recently discovered a vulnerability in Google’s ecosystem that allowed us to bypass Google Calendar’s privacy controls using a dormant payload hidden inside a standard calendar invite. This bypass enabled unauthorized access to private meeting data and the creation of deceptive calendar events without any direct user interaction.
This is a powerful example of Indirect Prompt Injection leading to a critical Authorization Bypass. We responsibly disclosed the issue to Google’s security team, who confirmed the findings and mitigated the vulnerability.
What makes this discovery notable isn’t simply the exploit itself. The vulnerability shows a structural limitation in how AI-integrated products reason about intent. Google has already deployed a separate language model to detect malicious prompts, and yet the path still existed, driven solely through natural language.
The takeaway is clear. AI native features introduce a new class of exploitability. AI applications can be manipulated through the very language they’re designed to understand. Vulnerabilities are no longer confined to code. They now live in language, context, and AI behavior at runtime.
In this article, we walk through the exploit flow and highlight the broader implications for anyone building application security controls in the age of language-first interfaces.
Check out the full post HERE.