Perplexity, the AI search engine that positioned itself as a privacy-focused alternative to traditional search, is facing a proposed class action lawsuit that alleges the company violated user privacy even when users had enabled its “Incognito” mode. The lawsuit claims that Perplexity “effectively planted a bug” on users’ computers by embedding trackers from Meta and Google inside its AI search engine. For a company that built its brand on privacy promises, the allegations represent a significant reputational and legal challenge.
According to the lawsuit, Perplexity’s incognito mode “does nothing” to protect user privacy. Even paid users who enabled the feature had their conversations shared with Meta and Google, along with their email addresses and other identifiers that allowed these companies to personally identify them. The trackers embedded in Perplexity’s platform allegedly collected this information without meaningful disclosure or genuine consent.
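The claim that email addresses served as identifiers echoes a common analytics pattern often called "advanced matching": a tracker normalizes and hashes the user's email before transmitting it, and the receiving platform matches the hash against hashes of the emails it already holds. The snippet below is a minimal, hypothetical sketch of that hashing step, not a reproduction of any company's actual tracker code; the normalization rules shown are illustrative.

```python
import hashlib

def hash_identifier(email: str) -> str:
    """Illustrative "advanced matching" hash: lowercase the
    email, strip surrounding whitespace, then SHA-256 it.
    Hashing does not anonymize here, because any platform
    that already knows the plaintext email can hash its own
    records and compare, re-identifying the user."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Different formattings of the same address collapse to one
# stable identifier, which is exactly what makes it usable
# for cross-site matching.
print(hash_identifier("  Jane.Doe@example.com "))
print(hash_identifier("jane.doe@example.com"))  # same hash as above
```

The takeaway for readers is that "we only send a hash" is not a privacy guarantee: a deterministic hash of an email is still a persistent personal identifier to anyone who holds the email.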
The proposed class action seeks to represent all users who paid for Perplexity’s services while believing their conversations would remain private. The legal theory centers on deceptive practices: Perplexity marketed its incognito mode as providing privacy protection that, according to the lawsuit, it did not actually deliver.
This lawsuit arrives at a moment when privacy concerns around AI systems have reached unprecedented intensity. AI companies collect vast amounts of data from users, including conversations, search queries, and behavioral patterns, and use this data to train future models, improve services, and in many cases, serve targeted advertising. The tension between the data these systems require to function and the privacy expectations users reasonably hold has become one of the central challenges facing the AI industry.
Perplexity’s position as an AI-native search engine makes these allegations particularly significant. Unlike traditional search engines that index the web, Perplexity generates responses to queries using large language models. This approach requires processing user queries through AI systems that may themselves involve third-party infrastructure, creating potential pathways for data to flow to partners, advertisers, or analytics providers.
For AI companies, trust is a precious and fragile commodity. Users share intimate details with AI systems: they ask personal questions, disclose health concerns, discuss financial matters, and reveal professional secrets. They do so in the expectation that this information will be handled responsibly. When that trust is perceived as betrayed, the damage can be severe and lasting.
The lawsuit has implications beyond Perplexity. It raises questions about the privacy practices of AI companies generally and whether industry-wide privacy promises can be trusted. If the allegations prove true, they suggest that even companies explicitly committed to privacy may not be delivering on their commitments.
For users of AI services, the Perplexity lawsuit is a reminder that privacy promises from AI companies deserve scrutiny. Users should understand what data AI services collect, how that data is used, and what guarantees actually exist regarding privacy. Reading privacy policies carefully remains important for users who want to understand what they are agreeing to.