Google Dismisses 'ASCII Smuggling' Attack in Gemini
Google has declined to fix a security vulnerability in its Gemini AI assistant that allows attackers to embed hidden instructions in emails and calendar invites. The flaw, known as ASCII smuggling, uses invisible characters that users cannot see but that artificial intelligence systems can read and process.
Security researcher Viktor Markopoulos from cybersecurity firm FireTail discovered the vulnerability and reported it to Google in September. The company dismissed the issue, stating it only constitutes social engineering rather than a technical security bug.
How the Attack Works
ASCII smuggling exploits special characters from the Tags Unicode block to create payloads invisible to human eyes. The Tags Unicode block contains control characters originally designed for technical purposes and not intended for visual display. These hidden instructions can manipulate AI behavior and alter the information Gemini provides to users.
The technique poses particular risks given Gemini's integration with Google Workspace. Attackers could embed hidden text in calendar invitations or emails that Gemini processes when summarizing content.
Markopoulos demonstrated several attack scenarios. In one test, he successfully hid instructions in a calendar invite title and overwrote organizer details. In another example, an invisible instruction tricked Gemini into recommending a potentially malicious website for purchasing discounted phones.
For users who connect AI tools to their inboxes, hidden commands in emails could instruct Gemini to search for sensitive information or extract contact details. According to FireTail, this transforms ordinary phishing attempts into an "autonomous data extraction tool".
Mixed Response from Tech Companies
FireTail tested six major AI systems against ASCII smuggling attacks. OpenAI's ChatGPT, Microsoft Copilot, and Anthropic's Claude successfully blocked the attacks through input sanitization. However, Gemini, DeepSeek, and Grok all proved vulnerable. (Source: androidauthority.com)
FireTail CEO Jeremy Snider recommended that organizations consider disabling Gemini's automatic access to Gmail and Google Calendars until the vulnerability is addressed. He emphasized that eliminating social engineering risks does improve user safety, directly contradicting Google's position.
Other technology companies have taken the threat more seriously. Amazon has published detailed security guidance addressing Unicode character smuggling. (Source: csoonline.com)
What's Your Opinion?
Do you think Google should reconsider its decision not to fix this vulnerability? Would you continue using Gemini with your email and calendar after learning about this flaw? Should companies be held more accountable for AI security issues even when they involve social engineering tactics?
My name is Dennis Faas and I am a senior systems administrator and IT technical analyst specializing in cyber crimes (sextortion / blackmail / tech support scams) with over 30 years experience; I also run this website! If you need technical assistance , I can help. Click here to email me now; optionally, you can review my resume here. You can also read how I can fix your computer over the Internet (also includes user reviews).
We are BBB Accredited
We are BBB accredited (A+ rating), celebrating 21 years of excellence! Click to view our rating on the BBB.


Comments
Google should fix ascii smuggling
By all means, Google (and other companies) should fix this exploit. I'm not a big fan of AI to begin with. My personal email provider (not Google) includes AI summaries of most of the emails I receive, and I shudder to think what could be happening if the emails I receive are full of ascii garbage infecting the AI summaries. Google's claim of ascii smuggling being just social engineering is bogus.
ASCII smuggling is evil
We all know Google is TOMA for just about everyone.
What if Google itself quietly has been using this exploit?
For the life of me I can't think of a good reason explaining why Google would pooh-pooh the announcement of this exploit.
Just another reason I am not a fan of AI summaries of my emails.
Thanks for the heads up, John Lister!