The Prompt You Thought Was Private
One morning in early 2026, a man in Utah opened Perplexity AI in his browser and began typing. He asked about his family's finances. He asked about his tax obligations. He asked about his investment portfolio and the strategies he was considering. He used Perplexity the way millions of people use AI search engines: as a private conversation with a machine that knows things, conducted in a text box that feels like a confidential space.
The text box was not confidential. According to a class-action lawsuit filed on April 1, 2026 in federal court in San Francisco, every prompt the man typed was transmitted to Meta and Google via tracking scripts embedded in Perplexity's website.[1] The scripts activated the moment he logged in. They intercepted the full text of his queries as URL strings inside his browser and forwarded them to Meta and Google's advertising infrastructure. His investment strategies, his tax situation, his family's financial details – all of it delivered to two of the largest advertising companies on earth, by a tool that markets itself as a search engine for people who want better answers.
The man had used Perplexity's Incognito mode. It did not matter.
Jeremy Bentham, "Panopticon" (1791). University College London. The architecture where every occupant is visible from the centre, and no occupant can see the watcher. Public domain.
The Screen Door on the Submarine
Perplexity's Incognito mode promises that conversations will not be saved to the user's library and will expire after 24 hours.[2] The feature controls what Perplexity stores in your account. It does not control what happens before Perplexity's application logic ever touches your prompt.
The tracking scripts operate at the browser level. They are embedded in the page itself – loaded when the page loads, executing in the user's browser, intercepting data as it moves between the text box and Perplexity's servers.[3] The scripts transmit to Meta and Google before Perplexity's own privacy controls apply. Incognito mode governs what Perplexity keeps. The tracking scripts govern what Meta and Google receive. These are two different systems, and they do not talk to each other.
The lawsuit alleges that a subscribed user who entered a prompt such as "What is the best treatment for liver cancer?" had the entire prompt shared with Meta and Google via a full-string URL intercepted inside the browser.[4] The prompt was not anonymized. It was not summarized. It was transmitted whole, as typed, to companies whose business model is converting personal data into advertising revenue.
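The mechanism the complaint describes is ordinary advertising plumbing. A minimal sketch of how a generic tracking snippet forwards a prompt that appears in the page URL is below; the endpoint and parameter names are illustrative placeholders, not Meta's or Google's actual APIs.

```javascript
// Sketch of a generic third-party tracking beacon. When the prompt is
// part of the page URL (e.g. ?q=best+treatment+for+liver+cancer), the
// tracker forwards it verbatim. Endpoint and parameter names are
// hypothetical, for illustration only.
function buildBeaconUrl(trackerEndpoint, pixelId, pageUrl) {
  const beacon = new URL(trackerEndpoint);
  beacon.searchParams.set("id", pixelId);    // which advertiser account
  beacon.searchParams.set("ev", "PageView"); // event type
  beacon.searchParams.set("dl", pageUrl);    // full document location, prompt included
  return beacon.toString();
}

// In a real tracker this fires automatically on page load, before any
// application-level privacy setting is consulted:
//   new Image().src = buildBeaconUrl("https://tracker.example/tr",
//                                    "123456", window.location.href);
```

Note that nothing in this flow ever asks the application what its privacy settings are: the beacon is constructed and sent entirely inside the browser, which is why an account-level toggle cannot stop it.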
Incognito mode is a privacy setting for one system embedded inside a surveillance architecture run by two others. The setting works. The architecture works harder.
What the Privacy Policy Says, and What the Trackers Do
Perplexity's privacy policy states that the company does not "sell" or "share" sensitive personal information for cross-context behavioral advertising.[5] Yet independent analysis of the policy found that Perplexity shares approximately 36 percent of collected data with third parties, including AI model providers, analytics tools, and advertising partners.[6]
The distinction matters. Perplexity's privacy policy governs what Perplexity does with the data it collects through its own systems. The tracking scripts allegedly operate outside that governance – they are third-party code (Meta Pixel, Google Analytics, or equivalents) that collects data independently and transmits it to infrastructure Perplexity does not control.[7] The privacy policy addresses Perplexity's behaviour. It does not address Meta's behaviour with data Meta's own scripts collected from Perplexity's pages.
Meta's response to the lawsuit was revealing. A spokesperson pointed to a Facebook help page stating it is against Meta's rules for advertisers to send the company sensitive information.[8] The framing is precise: Meta classifies Perplexity as an advertiser. The data arrives through advertising infrastructure. The rules that govern it are advertising rules – not privacy rules, not user-consent rules, not the rules a person would expect to apply to a conversation they believed was private.
Perplexity, for its part, said it had not been served the lawsuit and could not verify its existence or claims.[9]
The Pattern
This pattern is not new; it is the oldest pattern in consumer technology, applied to the newest category of tool.
Shoshana Zuboff, professor emerita at Harvard Business School and author of The Age of Surveillance Capitalism, named it in 2019: the systematic extraction of human experience as raw material for commercial prediction and control.[10] Google Search embedded tracking into every query from the beginning. Facebook embedded tracking into every page that displayed a Like button. Instagram embedded tracking into every photo interaction. The mechanism is always the same: offer a useful service, embed third-party tracking scripts in the page, and let the scripts collect data that the service's own privacy policy does not govern.
AI is the latest surface for this architecture. Perplexity processes approximately 15 million messages per day.[11] These prompts are longer, more specific, and more personal than traditional search queries. A Google search for "liver cancer treatment" reveals an interest. A Perplexity prompt asking "My mother was diagnosed with stage 3 liver cancer last week, what are the treatment options and survival rates for someone her age" reveals a family crisis. The prompt format invites disclosure. The tracking architecture harvests it.
The lawsuit names three defendants: Perplexity, Meta, and Google.[12] The legal theory is that all three are jointly liable – Perplexity for embedding the scripts, Meta and Google for receiving and exploiting the data. The causes of action include violations of the California Consumer Privacy Act (CCPA) and the California Electronic Communications Privacy Act (CalECPA).
Big AI asks you to think out loud. Tracking scripts record what you say. Advertising networks sell what you said to people who want to track you.
The Opt-Out That Does Not Opt Out
Users can opt out of AI data retention with a toggle in Account Settings under the Preferences tab that prevents Perplexity from using searches to improve its models.[13] Enterprise users are excluded from AI training automatically.
The toggle controls training data. It does not control tracking scripts. A user who has opted out of AI data retention, enabled Incognito mode, and configured every available privacy setting in Perplexity's interface has done everything the platform offers to ensure privacy. The tracking scripts, according to the lawsuit's allegations, continue to transmit regardless.
The architecture of the opt-out is instructive. The user is given control over one data flow (training) while a different data flow (advertising) operates through a different mechanism (browser-level scripts) governed by different rules (Meta's advertiser policies and Google's analytics terms). The user only sees one set of controls on their screen.
The data flows through multiple channels, each with its own governance, its own destination, and its own commercial purpose.
Woodrow Hartzog, a professor at Boston University School of Law who studies privacy and digital ethics, has argued that privacy controls that give users the feeling of control without the fact of control are themselves a dark pattern.[14] The controls are real. The privacy is not.
What a Private Prompt Actually Requires
There is a simple test for whether a conversation with an AI is private: does the prompt leave your infrastructure?
Perplexity answers yes – three times over. Once to Perplexity's servers. Once to the underlying model provider (OpenAI or Anthropic, depending on the query). Once to Meta and Google via tracking scripts. Three separate companies receive the text you typed in a box that felt like a private conversation.
Managed cloud AI services (ChatGPT, Claude.ai, Gemini) answer yes – once. The prompt goes to the model provider. Whether it is retained, used for training, or accessible to the provider's employees depends on the terms of service, which change periodically and are rarely read.
Self-hosted AI platforms answer this privacy question differently. The prompt does not need to leave the building. The model can run on your hardware. The interface runs on your server. There are no tracking scripts because there is no advertising business model to feed. There is no third-party model provider because the model is local. The conversation is private because the architecture makes privacy the default, not an opt-in toggle that governs one data flow while others operate unmonitored.
Sage.is AI-UI is built on this architecture. AGPL-3 licensed, self-hostable, model-agnostic. No tracking scripts. No Meta Pixel. No Google Analytics. The code is open – anyone can verify what it does and what it does not do. Sage is a small platform, and it offers web search only through its privacy tools, not through the kind of integration that leaks Perplexity users' queries.[15] The architecture is the argument, not the feature set: privacy is not a setting. It is a topology.
The Conversation You Cannot Take Back

Abraham Solomon, "Advice to those about to set out in life" (c. 1856). A private conversation overheard. The architecture of the room determines who can listen. Public domain.
The man in Utah cannot un-share his financial data. The prompt has been transmitted. Meta has it. Google has it. The tracking scripts ran in his browser, in real time, as he typed. No amount of retroactive opt-out, account deletion, or privacy setting adjustment will retrieve the data from Meta and Google's infrastructure.
Every Perplexity user who has ever typed a prompt (about their health, their finances, their legal situation, their relationships, their fears) has potentially done the same thing. The lawsuit seeks class-action status on behalf of all of them.
The prompt felt private. The text box looked like a conversation. The incognito toggle promised discretion. Underneath, the tracking scripts ran the same code they run on every advertising-supported website on the internet, collecting the same data, transmitting it to the same companies, for the same purpose: to turn your private behaviour into their product.
The prompt you thought was private was not private. It was inventory.
The views expressed are those of the editorial board. The author has no financial relationship with Perplexity AI, Meta, or Google. Full disclosure and transparency are a feature, not a bug.
Class-action lawsuit filed April 1, 2026, U.S. District Court, Northern District of California (San Francisco). Plaintiff: John Doe (Utah). Defendants: Perplexity AI Inc., Meta Platforms Inc., Alphabet Inc./Google. Bloomberg. Yahoo News. ↩︎
Perplexity Help Center, "Account & Settings." Incognito mode: conversations not saved to library, expire after 24 hours. perplexity.ai. ↩︎
Lawsuit complaint, as reported by The Decoder: "undetectable tracking software" embedded in Perplexity's code activates on login. Also: SiliconSnark. ↩︎
Lawsuit complaint, as reported by MediaPost. Full prompt text transmitted via URL string interception. ↩︎
Perplexity Privacy Policy. perplexity.ai/hub/legal/privacy-policy. ↩︎
Independent privacy analysis by Cape. Approximately 36% of collected data shared with third parties. cape.co. ↩︎
Lawsuit allegation: tracking scripts operate independently of Perplexity's own privacy controls. Roborhythms. ↩︎
Meta spokesperson response, as reported by Bloomberg. Pointed to Facebook help page stating it is against Meta's rules for advertisers to send sensitive information. ↩︎
Perplexity spokesperson: "We have not been served any lawsuit that matches this description so we are unable to verify its existence or claims." Benzinga. ↩︎
Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (New York: PublicAffairs, 2019). Zuboff is professor emerita at Harvard Business School. ↩︎
Perplexity usage estimates, 2025-2026. Approximately 15 million queries per day, per company reporting and industry analysis. ↩︎
Causes of action: California Consumer Privacy Act (CCPA) and California Electronic Communications Privacy Act (CalECPA). ComplianceHub. ↩︎
Perplexity Help Center, "Data Retention and Privacy." AI data retention toggle in Account Settings > Preferences. Enterprise users excluded automatically. perplexity.ai. ↩︎
Woodrow Hartzog, Privacy's Blueprint: The Battle to Control the Design of New Technologies (Cambridge, MA: Harvard University Press, 2018). Hartzog is professor at Boston University School of Law. See also Hartzog and Daniel Solove, "The Scope and Potential of FTC Data Protection," George Washington Law Review 83 (2015). ↩︎
Sage.is AI-UI, AGPL-3 licensed. sage.is. Self-hostable, model-agnostic, no tracking scripts, no third-party analytics, exportable conversation data. ↩︎