
Up to 300,000 Chrome users who trusted Google’s Chrome Web Store to deliver legitimate AI productivity tools unwittingly installed malicious extensions that hijacked their emails, passwords, and private browsing data in a coordinated cyberattack that exploited the AI revolution hype.
Story Snapshot
- 30 fake AI assistant extensions impersonating ChatGPT, Gemini, Claude, and other popular tools infected up to 300,000 Chrome users
- Attackers used remote iFrames to bypass Google’s security reviews and steal emails, passwords, API keys, and browsing activity in real time
- Some malicious extensions carried Google’s “Featured” badge, amplifying trust and downloads from unsuspecting users
- Google removed extensions only after security researchers exposed the campaign, raising concerns about Web Store oversight failures
Google’s Security Failure Enables Massive Data Theft
Security researchers at LayerX discovered a sophisticated campaign involving 30 malicious Chrome extensions that collectively reached between 260,000 and 300,000 installations. The extensions masqueraded as legitimate AI productivity tools such as ChatGPT, Google Gemini, Claude, Grok, and AI Sidebar.
Attackers exploited the Chrome Web Store’s review process by submitting clean-looking code while using remote iFrames to load malicious interfaces from attacker-controlled servers. This technique, dubbed “AiFrame,” enabled real-time data exfiltration without triggering Google’s static code analysis, exposing a fundamental vulnerability in how the tech giant protects its users.
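The structure of such an attack can be sketched at a high level. A hypothetical manifest.json might request broad host permissions while the packaged code contains nothing obviously malicious; the bundled content script simply injects an iframe pointed at an attacker-controlled URL, so the actual malicious interface never appears in the code Google reviews. All names and URLs below are illustrative, not taken from the actual campaign:

```json
{
  "manifest_version": 3,
  "name": "AI Sidebar Assistant",
  "version": "1.0.2",
  "permissions": ["storage", "scripting"],
  "host_permissions": ["<all_urls>"],
  "content_scripts": [{
    "matches": ["<all_urls>"],
    "js": ["inject.js"]
  }]
}
```

The reviewed inject.js would contain little more than code inserting an iframe such as `https://sidebar.attacker.example/ui` into every page; everything loaded inside that frame is served remotely and can be changed at any time, which is why static review of the submitted package finds nothing.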
Fake AI Chrome extensions with 300K users steal credentials, emails https://t.co/RVGmfaLGCE
— Lifeboat Foundation (@LifeboatHQ) February 20, 2026
Sophisticated Attack Methods Target User Privacy
The malicious extensions requested broad permissions to read and modify website data, including Gmail access, allowing attackers to monitor login credentials, emails, drafts, and browsing activity. To avoid detection, the extensions actually functioned as advertised by proxying responses from real AI language models back to users, making the theft invisible.
Attackers coordinated the campaign through shared backend infrastructure including hosting servers, TLS certificates, and JavaScript bundles. This “extension spraying” strategy deployed near-identical extensions under different brands, ensuring that even when Google removed one extension, others remained operational and new ones could be re-uploaded under different identifiers.
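Correlating a spraying campaign like this typically comes down to grouping extensions that share backend indicators (hosting servers, TLS certificate fingerprints, JavaScript bundle hashes). A minimal sketch of that clustering step, using made-up extension IDs and indicators rather than real campaign data:

```python
from collections import defaultdict

def cluster_by_shared_infrastructure(extensions):
    """Group extensions that share any backend indicator.

    extensions: dict mapping extension_id -> set of indicator strings
    (e.g. hosting host, TLS cert fingerprint, JS bundle hash).
    Returns a list of clusters (sets of extension ids).
    """
    # Union-find over extension ids
    parent = {eid: eid for eid in extensions}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    # Any two extensions sharing an indicator belong to the same cluster
    by_indicator = defaultdict(list)
    for eid, indicators in extensions.items():
        for ind in indicators:
            by_indicator[ind].append(eid)
    for eids in by_indicator.values():
        for other in eids[1:]:
            union(eids[0], other)

    clusters = defaultdict(set)
    for eid in extensions:
        clusters[find(eid)].add(eid)
    return list(clusters.values())

# Hypothetical data: three "different" AI extensions linked by a shared
# TLS cert and a shared JS bundle, plus one unrelated extension.
campaign = cluster_by_shared_infrastructure({
    "ext-chatgpt-helper": {"host:api.evil.example", "cert:ab12"},
    "ext-gemini-sidebar": {"cert:ab12", "bundle:9f3c"},
    "ext-claude-assist":  {"bundle:9f3c"},
    "ext-unrelated":      {"host:cdn.benign.example"},
})
```

The transitive grouping matters: two extensions with no indicator in common still land in one cluster if a third extension bridges them, which is how near-identical brands deployed under different identifiers get tied back to a single operator.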
Featured Badge Amplified Malicious Reach
The breach proved particularly troubling because some of the malicious extensions bore Google’s “Featured” badge, a designation meant to signal trustworthy, high-quality applications to Chrome users. This official endorsement dramatically increased installations from users seeking legitimate AI productivity enhancements.
LayerX researchers warned that these extensions functioned as “privileged proxies” granting attackers “remote infrastructure access to sensitive browser capabilities.” The campaign targeted Gmail workflows specifically, capturing API keys and tokens that could enable broader enterprise compromises.
Google confirmed to Fox News that all reported extensions have been removed, but the company provided no explanation for how malicious software earned Featured status or reached hundreds of thousands of users before detection.
300,000 Chrome users hit by fake AI extensions https://t.co/fx1nnsI4O2
— Fox News AI (@FoxNewsAI) February 26, 2026
This incident follows a troubling pattern of browser extension compromises, including the DarkSpectre attack that infected 8.8 million users and a separate campaign that stole 900,000 AI conversations through fraudulent AITOPIA sidebar clones.
The AiFrame technique represents an evolution in malware sophistication, exploiting the AI adoption surge while demonstrating how attackers can weaponize the gap between local code review and remote execution capabilities.
Security experts note that the campaign’s resilience through extension spraying and server-side control creates an ongoing threat, as attackers can modify behavior post-installation without triggering update reviews. Users who installed any AI assistant extensions should immediately review their Chrome extension permissions and remove anything they do not recognize or trust.
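Part of that permission review can be automated. A rough sketch below scores a parsed manifest.json for the permission profile the fake AI extensions abused; the risk heuristics are illustrative, not a complete audit:

```python
# Heuristic check: flag manifest entries that grant broad read/modify
# access to all sites, or other sensitive browser capabilities.
BROAD_PATTERNS = {"<all_urls>", "*://*/*", "http://*/*", "https://*/*"}
SENSITIVE_PERMISSIONS = {"webRequest", "cookies", "history", "tabs", "scripting"}

def audit_manifest(manifest):
    """Return a list of human-readable warnings for a parsed manifest.json."""
    warnings = []
    hosts = set(manifest.get("host_permissions", []))
    # Older (MV2) manifests mix host patterns into "permissions"
    perms = set(manifest.get("permissions", []))
    if (hosts | perms) & BROAD_PATTERNS:
        warnings.append("can read and modify data on ALL websites")
    for p in sorted(perms & SENSITIVE_PERMISSIONS):
        warnings.append(f"requests sensitive permission: {p}")
    for cs in manifest.get("content_scripts", []):
        if set(cs.get("matches", [])) & BROAD_PATTERNS:
            warnings.append("injects a content script into every page")
    return warnings

# Example: a manifest shaped like the malicious AI sidebars described above
report = audit_manifest({
    "name": "AI Sidebar Assistant",
    "permissions": ["storage", "scripting"],
    "host_permissions": ["<all_urls>"],
    "content_scripts": [{"matches": ["<all_urls>"], "js": ["inject.js"]}],
})
```

Broad host access is not proof of malice (some legitimate tools need it), but combined with an unfamiliar publisher it is exactly the profile worth removing first.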
Sources:
300,000 Chrome users hit by fake AI extensions – Fox News
Fake AI browser extensions steal data from over 260K Chrome users – Paubox