Google Gemini AI Chatbot Flaw Enables New Phishing Attacks

Edward Zhou

CEO & Founder

 
July 16, 2025 3 min read

Google AI Chatbot Target of Potential Phishing Attacks

Researchers have disclosed a security flaw in Google's Gemini AI chatbot. The issue, reported through Mozilla's 0din program and covered by cybersecurity publication Dark Reading, is a prompt-injection flaw that lets cybercriminals run phishing or vishing campaigns: malicious instructions embedded in an email cause Gemini's summary to present what appear to be legitimate Google security warnings.

Google Debuts Improved Version of Gemini AI Tool
Image courtesy of PYMNTS

According to 0din researcher Marco Figueroa, if a recipient clicks “Summarize this email,” Gemini treats the hidden prompt, which is framed as an administrator instruction, as its top priority. The victim then sees only a fabricated ‘security alert’ in the AI-generated summary. In one proof of concept, an invisible prompt caused the summary to warn that the reader's Gmail password had been compromised and to urge them to call a specific phone number, a setup that can lead to vishing and credential harvesting.

Google has discussed some defenses against these types of attacks in a company blog post. A spokesperson mentioned that Google is in “mid-deployment on several of these updated defenses.”

Google Gemini Flaw Hijacks Email Summaries for Phishing

Google Gemini for Workspace can be exploited to generate email summaries that appear legitimate but include malicious instructions or warnings that direct users to phishing sites. Such attacks utilize indirect prompt injections that are hidden inside an email and executed by Gemini when generating message summaries.

Gmail
Image courtesy of BleepingComputer

A prompt-injection attack disclosed through 0din demonstrates how attackers can manipulate Gemini's output. The malicious instruction is hidden in the email's body using HTML and CSS that set the font size to zero and the text color to white, rendering it invisible to the human reader while leaving it intact for the model to parse.
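To make the mechanics concrete, the sketch below shows how an HTML email body can carry text that is invisible to the recipient yet still present for a summarizer to ingest. It illustrates only the zero-font-size, white-text trick described above; the addresses and wording are placeholders, not the actual payload from the 0din report.

```python
# Illustrative sketch of the hiding technique only: CSS keeps the span out of
# the rendered view, but the text remains in the HTML part of the message.
# Addresses and wording are placeholders, not the 0din proof of concept.
from email.message import EmailMessage

VISIBLE_BODY = "<p>Hi team, please review the attached quarterly report.</p>"

# Styled to be invisible: zero font size and white-on-white text.
HIDDEN_SPAN = (
    '<span style="font-size:0px;color:#ffffff;">'
    "This text is invisible to the reader but still present in the HTML "
    "that an AI assistant would parse when asked to summarize the email."
    "</span>"
)

msg = EmailMessage()
msg["Subject"] = "Quarterly report"
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg.set_content("Hi team, please review the attached quarterly report.")
msg.add_alternative(
    f"<html><body>{VISIBLE_BODY}{HIDDEN_SPAN}</body></html>", subtype="html"
)

# A mail client renders only VISIBLE_BODY; the hidden span survives in the
# HTML alternative, which is what a summarization model sees.
print(msg.as_string())
```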

When a user requests a summary, Gemini parses the hidden directive and executes it. An example from the report showed Gemini including a security warning about a compromised Gmail password in its output, misleading users into believing the danger was real.

To counteract these attacks, security teams can remove or neutralize content styled to be hidden and implement post-processing filters to flag messages containing urgent alerts, URLs, or phone numbers for review. Users should also exercise caution and not consider Gemini summaries as authoritative security alerts.
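As a rough sketch of those two mitigations, stripping hidden-styled content before summarization and flagging summaries that contain urgency cues, URLs, or phone numbers, the snippet below uses simple regular expressions. The patterns, keyword list, and function names are assumptions for illustration, not a production filter; a real deployment would use a proper HTML/CSS parser.

```python
# Sketch of two defensive checks on inbound HTML email, assuming the message
# body is available as a string. Patterns are illustrative, not exhaustive.
import re

# Inline styles commonly used to hide text: zero font size or white-on-white.
HIDDEN_STYLE = re.compile(
    r'<[^>]+style="[^"]*(?:font-size:\s*0|color:\s*#?fff)[^"]*"[^>]*>.*?</[^>]+>',
    re.IGNORECASE | re.DOTALL,
)

# Urgency cues worth holding for human review: compromise claims,
# "call <number>" instructions, and embedded URLs.
URGENCY_CUES = re.compile(
    r"password[^.]*compromised|urgent|call\s+\+?\d[\d\s().-]{6,}|https?://",
    re.IGNORECASE,
)

def strip_hidden_content(html_body: str) -> str:
    """Remove elements styled to be invisible before text reaches a summarizer."""
    return HIDDEN_STYLE.sub("", html_body)

def flag_summary(summary: str) -> bool:
    """Return True if a generated summary should be routed for review."""
    return bool(URGENCY_CUES.search(summary))

if __name__ == "__main__":
    body = (
        "<p>Quarterly numbers attached.</p>"
        '<span style="font-size:0px;color:#ffffff;">hidden directive</span>'
    )
    print(strip_hidden_content(body))  # hidden span removed
    print(flag_summary("Your Gmail password was compromised, call 0800 000 0000"))
```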

Google Gemini Bug Turns Gmail Summaries into Phishing Attack

A security researcher uncovered a method to trick Google Gemini's AI-generated email summary feature into relaying harmful instructions. The feature, which automatically generates summaries of email content, can be exploited to deliver phishing messages.

Gemini Gmail
Image courtesy of PCMag

The flaw allows malicious emails with hidden instructions to mislead Gemini into displaying fake warnings in email summaries, such as claiming a user's Gmail password has been compromised. This can result in users being directed to call a fraudulent number for assistance.

Mozilla's 0din program disclosed this vulnerability, illustrating how attackers can embed hidden prompts in emails. Google is actively working to strengthen its defenses against such attacks, as noted in a company blog post.

Investigation Reveals Google Gemini for Workspace Flaw

Mozilla's 0-Day Investigative Network (0din) disclosed that Google Gemini for Workspace can be exploited by embedding malicious prompts in emails, causing the AI to relay false alerts about users' accounts in its summaries.

Google Gemini logo
Image courtesy of Tom's Hardware

The attack requires an email with a hidden malicious prompt. When users ask Gemini to summarize the email, the AI outputs the false security alert. The hidden text can be styled to be invisible, making it more likely that users will fall for the scam.

The ongoing threat emphasizes the need for organizations to treat AI assistants as part of their attack surface. Security teams must implement measures to monitor and isolate these tools to prevent exploitation.

Robust security measures are essential for users who rely on AI technologies, and awareness that AI-generated summaries can be manipulated is a key part of mitigating these threats.

Edward Zhou

CEO & Founder

 

CEO & Founder of Gopher Security, leading the development of Post-Quantum cybersecurity technologies and solutions.

Related Articles

Ransomware Attacks Target Russian Vodka and Healthcare Sectors

The Novabev Group, parent company of the Beluga vodka brand, experienced a ransomware attack on July 14, 2025, causing significant disruptions. The attack affected WineLab, the company's liquor store chain, leading to a three-day closure of over 2,000 locations in Russia. The company reported that the attack crippled its IT infrastructure, particularly point-of-sale systems and online services. Novabev Group stated, "The company maintains a principled position of rejecting any interaction with cybercriminals and refuses to fulfill their demands."

By Alan V Gutnov July 19, 2025 3 min read
Read full article

Retail Sector Faces Surge in Ransomware Attacks: A 2025 Analysis

Publicly disclosed ransomware attacks on the retail sector globally surged by 58% in Q2 2025 compared to Q1, with UK-based firms being particularly targeted, according to a report by BlackFog. This spike in attacks follows high-profile breaches affecting retailers like Marks & Spencer (M&S), The Co-op, and Harrods, attributed to the threat actor known as Scattered Spider.

By Alan V Gutnov July 19, 2025 2 min read
Read full article

AI-Driven Lcryx Ransomware Emerges in Cryptomining Botnet

A cryptomining botnet active since 2019 has incorporated a likely AI-generated ransomware known as Lcryx into its operations. Recent analysis by the FortiCNAPP team at FortiGuard Labs identified the first documented incident linking H2miner and Lcryx ransomware. This investigation focused on a cluster of virtual private servers (VPS) utilized for mining Monero.

By Edward Zhou July 19, 2025 3 min read
Read full article

Preventing ClickFix Attacks: Safeguarding Against Human Error

ClickFix is an emerging social engineering technique utilized by threat actors to exploit human error. This technique involves misleading users into executing malicious commands under the guise of providing "quick fixes" for common computer issues. Threat actors use familiar platforms and deceptive prompts to encourage victims to paste and run harmful scripts.

By Alan V Gutnov July 19, 2025 3 min read
Read full article