Google's Gemini AI contains a newly disclosed vulnerability. Attackers can manipulate the model using routine calendar invitations, a technique known as prompt injection. Application security firm Miggo identified the flaw, which underscores emerging risks as enterprises integrate generative AI into daily workflows.

The risk is significant in corporate environments where AI copilots connect to sensitive data across emails, documents, and calendars. Security experts note this moves prompt injection threats from theoretical to operational. A single compromised account could embed malicious instructions, potentially exposing sensitive information during routine employee queries.