Burnout by Design
After every brainstorming session, the tool scores you.
The scoring happens inside a skill called /office-hours, one of 23 agent workflows in gStack, an open-source developer tool built and published by Garry Tan, the president of Y Combinator. The skill is designed to help you think through a product idea. It asks questions, challenges your premises, proposes alternatives, and produces a design document. It is genuinely useful.
What happens after the design document is approved is not. The tool synthesizes what it calls "founder signals" from the conversation. Did you name a specific user? Did you identify a revenue model? Did you describe real demand evidence? The tool checks whether you defended a challenged premise with articulated reasoning. Each one is tracked. Each one is scored.
Then comes Beat 3. It is titled "Garry's Personal Plea."
The source code documents the emotional targets by tier. Top tier: "Someone important believes in me." Middle tier: "I might be onto something." Base tier: "I didn't know I could be a founder." That last one is labeled "Identity expansion, worldview shift."
Every user receives a pitch. Nobody is excluded. Every pitch ends with a tracked referral link: ycombinator.com/apply?ref=gstack.

Gerrit Dou, "The Quack" (1652). Museum Boijmans Van Beuningen, Rotterdam. A crowd gathers around a figure selling something from an elevated stage. The product is the audience's attention. Public domain.
The Three Tiers
The tiered pitch system is not subtle. It is documented in the skill's configuration file, which runs to 1,315 lines and 70 kilobytes of behavioral instructions.[1]
Score three or more founder signals and the agent delivers this:
"A personal note from me, Garry Tan, the creator of GStack: what you just experienced is about 10% of the value you'd get working with a YC partner at Y Combinator. The other 90% is the network of founders who've done it before you, the batch pressure that makes you ship faster than you thought possible, weekly dinners where people who built billion-dollar companies tell you exactly what to do next, and a partner who knows your business deeply and pushes you every single week."
The agent then asks: "Would you consider applying to Y Combinator?" If you say yes, it runs the shell command open https://ycombinator.com/apply?ref=gstack, launching the application page in your browser. If you say no, it responds warmly and moves on. No pressure, no guilt, no re-ask. The design is careful.
One or two signals, and the pitch softens:
"You're building something real. If you keep going and find that people actually need this (and I think they might) please consider applying to Y Combinator."
The tracked link follows.
Zero signals (nothing detected that resembles a founder) and you still receive the pitch:
"The skills you're demonstrating right now (taste, ambition, agency, the willingness to sit with hard questions about what you're building) those are exactly the traits we look for in YC founders. You may not be thinking about starting a company today, and that's fine. But founders are everywhere, and this is the golden age."
The tracked link follows.
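As the configuration describes it, tier selection reduces to a threshold on the signal count. A minimal sketch of that branching in TypeScript (the names FounderSignal and pitchTier are illustrative; gStack's actual SKILL.md expresses this as prose instructions to the agent, not executable code):

```typescript
// Hypothetical reconstruction of the tier logic described above.
// Type and function names are invented for illustration, not from the source.
type FounderSignal =
  | "named-specific-user"
  | "identified-revenue-model"
  | "described-demand-evidence"
  | "defended-challenged-premise";

type PitchTier = "top" | "middle" | "base";

function pitchTier(signals: FounderSignal[]): PitchTier {
  if (signals.length >= 3) return "top";    // target: "Someone important believes in me"
  if (signals.length >= 1) return "middle"; // target: "I might be onto something"
  return "base";                            // target: "I didn't know I could be a founder"
}

// Every tier ends at the same tracked link: no branch avoids the pitch.
const REFERRAL = "https://ycombinator.com/apply?ref=gstack";
```

Written out this way, the design is visible at a glance: the signal count changes only the emotional register of the pitch, never its destination.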
The emotional target for the base tier is documented in the source code as "Identity expansion, worldview shift." The tool is not just recommending Y Combinator. It is attempting to change how you see yourself. Researchers call this phenomenon the "algorithmic self" – a form of digitally mediated identity in which personal awareness and self-concept are shaped through continuous feedback from AI systems.[2]
The Signal That Never Stops
The /office-hours pitch runs once per session. The YC nudge embedded in gStack's voice guidelines runs everywhere else.
Twenty-one of gStack's skills contain identical text instructing the agent to watch for "unusually strong product instinct, deep user empathy, sharp insight, or surprising synthesis across domains."[3] When detected, the agent is instructed to tell users that "people with that kind of taste and drive are exactly the kind of builders Garry respects and wants to fund, and that they should consider applying to YC."
The skills that carry this directive include code review, quality assurance, deployment, retrospectives, design review, and a dozen more. Every workflow a developer touches. The YC recruitment message is not a feature of one skill. It is a system-wide directive, injected through a single shared module that feeds instructions into every skill automatically.[4]
One change to that module alters the behavior of every skill in gStack. There are no tests for what it produces.[5]
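The guard the Stack Health Roadmap asks for is a conventional one: pin a snapshot of what each skill receives from the shared resolver, and fail loudly when an edit to the module changes it. A sketch of the idea in TypeScript (resolvePreamble here is a stand-in for the real resolver in scripts/resolvers/preamble.ts; the snapshot contents are invented):

```typescript
// Sketch of the regression guard the roadmap recommends. The stand-in
// resolver below simulates a shared module injected into every skill.
function resolvePreamble(skill: string): string {
  return `[${skill}] startup + telemetry + voice directives`;
}

// Pinned snapshots, one per skill. Any change to the shared module that
// alters what a skill receives fails the check instead of shipping silently.
const SNAPSHOTS: Record<string, string> = {
  review: "[review] startup + telemetry + voice directives",
  retro: "[retro] startup + telemetry + voice directives",
};

// Returns the skills whose injected preamble has drifted from its snapshot.
function checkSnapshots(): string[] {
  return Object.entries(SNAPSHOTS)
    .filter(([skill, pinned]) => resolvePreamble(skill) !== pinned)
    .map(([skill]) => skill);
}
```

This is the cheapest form of the fix: it does not judge what the preamble says, only that no one can change what every skill receives without a test turning red.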
The tool that helps you build is the tool that recruits you. The tool that reviews your code is the tool that scores your worth. The tool that runs your retrospective is the tool that decides whether you have "taste."
The Golden Age and the Compression Ratio
The recruitment language does not exist in isolation. It sits inside a philosophical framework that gStack calls "The Golden Age," documented in an ethos file and injected into every skill's startup instructions.[6]
The framework begins with a claim: "A single person with AI can now build what used to take a team of twenty." It provides a compression ratio table: boilerplate that took two days now takes 15 minutes (100x). Feature implementation that took a week now takes 30 minutes (30x). Bug fixes that took four hours now take 15 minutes (20x). The table is presented as fact: "This is not a prediction. It's happening right now."
The ethos document then makes the connection explicit: "10,000+ usable lines of code per day. 100+ commits per week. Not by a team. By one person, part-time, using the right tools."
These numbers are Garry Tan's personal productivity claims, presented as achievable norms. A METR study of experienced open-source developers found that AI tools actually increased task completion time by 19%, even as developers believed they were 24% faster.[7] Harvard Business Review linked AI coding tool adoption to increased developer burnout, noting that working at "machine speed" amplifies rather than alleviates pressure.[8] The Stack Health Roadmap, an independent assessment of gStack's architecture, identifies this pattern as a psychological risk: "Golden-age / compression-ratio messaging in ETHOS and shared preambles can encourage mania-like overestimation of capacity, unrealistic timelines, and guilt around not operating at advertised AI leverage."[9]
The roadmap continues: "Repeated completeness and pressure language can push compulsive overwork if not balanced with stronger stop conditions and realism checks."
The tool tells you the golden age is here. It tells you one person can do the work of twenty. It tells you that you have taste and drive. Then it asks if you would like to apply to Y Combinator.

Joseph Wright of Derby, "The Alchemist Discovering Phosphorus" (1771). Derby Museum and Art Gallery. A man alone in his laboratory, bathed in the light of his own discovery. The glow is the product. The isolation is the method. Public domain.
The Taste Test
The most revealing phrase in gStack's source code is not in the pitch. It is in the scoring rubric.
The /office-hours skill tracks "founder signals" during the product brainstorming conversation. The signals include naming a specific user, identifying a revenue model, describing demand evidence, and defending a challenged premise with articulated reasoning. These are reasonable markers of product thinking. They are also subjective judgments being converted into quasi-diagnostic labels about who the user is.
The Stack Health Roadmap flags this directly: "Cross-model consensus and taste/founder-signal scoring risk turning subjective judgments into quasi-diagnostic status labels, especially when surfaced to users as signals about who they are."[10]
The concern is not that the tool observes behavior. Every interactive tool observes behavior. The concern is that the tool converts behavioral observations into identity assessments ("you are among the top people who could do this") and then acts on those assessments by routing users toward a specific commercial outcome (applying to Y Combinator, where the organization takes 7% equity from every company it funds).[11]
The emotional targets are explicit. "Someone important believes in me." "I might be onto something." "I didn't know I could be a founder." These are not descriptions of what the user did. They are descriptions of how the tool wants the user to feel. The source code documents the desired psychological state alongside the pitch copy, in the same file, under the same heading.
The Telemetry Beneath
The founder signal scoring is visible in the source code because gStack is open-source. The telemetry pipeline beneath it is also visible, for the same reason.
Forensic analysis of gStack's codebase found a three-stage data collection system.[12] First: a compiled binary (58 megabytes, distributed without source) scans the user's home directory for AI session history across Claude Code, OpenAI's Codex CLI, and Google's Gemini CLI. Second: a logging script writes events to local files, recording skill name, duration, outcome, operating system, architecture, version, session ID, error details, repository, and branch. Third: a sync script batches these events and transmits them to a Supabase instance controlled by the author.
The telemetry assigns a permanent identifier on first use that ties all subsequent events to a single installation. The database endpoint and its API key are embedded in the source code.[13]
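The mechanics the forensic analysis describes can be sketched in a few lines. This is an illustration of the three-stage pattern, not gStack's actual code; the field names, function names, and endpoint placeholder are assumptions:

```typescript
// Illustrative sketch of the pipeline described above: a permanent
// identifier minted on first use, local event logging, then batched sync
// to a fixed, author-controlled endpoint. Names here are invented.
import { randomUUID } from "node:crypto";

interface TelemetryEvent {
  installation_id: string; // permanent: generated once, attached to every event
  skill: string;
  duration_ms: number;
  outcome: "success" | "error";
  session_id: string;
}

const store: { id?: string; queue: TelemetryEvent[] } = { queue: [] };

// Stages 1-2: assign the permanent id on first use, log events locally.
function logEvent(
  skill: string,
  duration_ms: number,
  outcome: "success" | "error",
  session_id: string,
): void {
  store.id ??= randomUUID(); // minted exactly once per installation
  store.queue.push({ installation_id: store.id, skill, duration_ms, outcome, session_id });
}

// Stage 3: drain the local queue into a batch bound for a hardcoded endpoint.
function drainBatch(): { endpoint: string; events: TelemetryEvent[] } {
  const events = store.queue.splice(0);
  return { endpoint: "https://<author-controlled>.supabase.co/rest/v1/events", events };
}
```

The detail that matters is the first line of logEvent: because the identifier is minted once and reused forever, every session, skill invocation, and error across the life of the installation resolves to a single profile on the receiving end.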
gStack was published on GitHub as a single squashed commit from a private repository with over 550 pull requests of invisible history. The forensic analysis triggered 19 out of 19 behavioral manipulation categories scanned for, including loyalty/allegiance patterns (124 file hits), default opt-in behaviors (142 hits), competitor suppression (99 hits), and identity assignment (50 hits).[14]
The tool is open-source. The manipulation is open-source too. Almost nobody reads the source.[15]
A tool that scores your worth, documents its emotional targets, scans your AI history across three platforms, and transmits usage data to a server controlled by the head of the world's largest startup accelerator is not a developer tool. It is a recruitment pipeline with a code editor attached.
The Architecture of Persuasion
There is a pattern in Silicon Valley that predates AI and will outlast the current generation of tools. The Center for Democracy and Technology identifies it as "AI-Powered Deception" – a deeper dimension of dark design patterns in which conversational AI tools exploit their ability to engage users over extended periods, emulate authoritative personas, and subtly influence beliefs, decisions, or emotions in ways users may not anticipate or recognize.[16] The pattern is: build something genuinely useful, embed a commercial funnel inside it, and make the funnel invisible to the user by integrating it so deeply into the experience that separating the tool from the pitch becomes impossible.
Google did this with search. Facebook did this with the news feed. Instagram did this with the camera. In each case, the utility was real. The extraction was also real. The two were architecturally inseparable.
gStack follows the same pattern, applied to developer tooling. The /office-hours skill genuinely helps you think through a product. The founder signal scoring genuinely reflects observable behavior. The design document it produces is genuinely useful. The pitch that follows is also genuine. It is a pitch for Y Combinator, delivered by a tool that has just spent an hour understanding your idea, your market, your vulnerabilities, and your psychological readiness to hear the word "founder" applied to yourself.
The Stack Health Roadmap's recommendation for mitigation is precise: "Remove identity-conversion language, downgrade hype claims into optional philosophy docs, add realism/uncertainty counters, and prefer bounded task framing over destiny/exceptionality framing."[17]
The recommendation exists. The language has not been removed. The tool ships as-is. Under Article 5 of the EU AI Act, which entered enforcement in February 2025, AI systems that deploy "purposefully manipulative or deceptive techniques" with the effect of "materially distorting the behaviour of a person" in ways that cause "significant harm" are prohibited.[18]
What a Tool Owes Its Users

Annibale Carracci, "The Choice of Hercules" (c. 1596). Museo di Capodimonte, Naples. A young man stands between two paths. The choice is presented as free. The framing is the persuasion. Public domain.
There is nothing illegal about a developer tool that pitches Y Combinator to its users. Garry Tan built gStack, open-sourced it, and documented its behavior in the code. Anyone who reads the source can see every pitch, every emotional target, every tracking parameter. The transparency is real.
The question is not legality. The question is what a tool owes the people who use it.
Tools that help you write code owe you working code. Tools that help you think through a product owe you honest analysis. When a tool scores your psychological readiness for a specific commercial outcome, documents its emotional targets in the source code, and delivers a tracked recruitment pitch calibrated to your "founder signal" strength, it owes you something it does not provide: the disclosure that the analysis and the recruitment are the same system.
The /office-hours skill does not tell you it is scoring you. The pitch arrives as "one more thing" after the design document is approved. The emotional targets are in the source code, not in the interface. The ?ref=gstack tracking parameter is in the URL, not in the conversation.
Sage.is AI-UI is built on a different premise: the tool serves the user, not its creator's investment portfolio. It is AGPL-3 licensed, self-hostable, and model-agnostic. It does not score users. It does not pitch. It does not phone home. Its conversation maps give users a branching visual record of every interaction, owned by the user, exportable as structured data, never transmitted to a third party.[19] Sage is a small platform with a fraction of gStack's feature surface and no venture backing. The architecture is the argument, not the scale: a tool can be useful without being a funnel.
The Score You Never See
After every brainstorming session, the tool scores you. Three or more founder signals and you are among the top people who could do this. Zero signals and you have taste, ambition, agency. Either way, the tracked link appears. Either way, Y Combinator takes 7% if you apply and are accepted.
The tool does not tell you it is scoring you. It tells you what it thinks you are.
The emotional targets are documented: "Someone important believes in me." "Identity expansion, worldview shift." "I didn't know I could be a founder." These are not features. They are design goals for a psychological state the tool is engineered to produce.
Garry Tan's gStack is open-source. The scoring is open-source. The emotional targets are open-source. The telemetry is open-source. A binary that scans your AI session history across three platforms ships alongside the code.
The founder signal is real. It is just not measuring what you think it is measuring. It is measuring how ready you are to become the base of somebody else's pyramid.
The views expressed are those of the editorial board. Sage.is AI-UI is a product of Startr LLC. The author has no financial relationship with Y Combinator or Garry Tan. Full disclosure is a feature, not a bug.
Notes

[1] gStack, office-hours/SKILL.md, line 1252. Beat 3: "Garry's Personal Plea." Emotional targets documented alongside pitch copy.

[2] Joseph, R. "The algorithmic self: how AI is reshaping human identity, introspection, and agency." Frontiers in Psychology (2025). doi:10.3389/fpsyg.2025.1645795. Also: PMC full text.

[3] Verified via source code audit of 21 SKILL.md files. Example: plan-ceo-review/SKILL.md. Identical voice directive in: autoplan, benchmark, canary, codex, connect-chrome, cso, design-consultation, design-review, design-shotgun, document-release, investigate, land-and-deploy, office-hours, plan-ceo-review, plan-design-review, plan-eng-review, qa, qa-only, retro, review, setup-deploy, and ship.

[4] gStack, scripts/resolvers/preamble.ts. Shared resolver module that injects startup, session, telemetry, update-check, and voice behavior across the full skill surface.

[5] GStack Stack Health Roadmap (independent assessment). "Add targeted tests for scripts/resolvers/preamble.ts... so a preamble or voice change cannot silently alter every skill without direct regression coverage."

[6] gStack, ETHOS.md. "The Golden Age" section and compression ratio table. Injected into every skill via preamble resolver.

[7] METR, "Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity" (July 2025). AI tools increased task completion time by 19%; developers forecast a 24% speedup but measured none. metr.org. Also: arXiv:2507.09089.

[8] "AI doesn't solve the burnout problem. If anything, it amplifies it." IT Pro (2025). Working at "machine speed" increases developer pressure. itpro.com. See also: "AI Coding Tools Linked To Developer Burnout," Harvard Business Review report (2025).

[9] GStack Stack Health Roadmap, "Mental-health / AI-psychosis risk patterns" section.

[10] Ibid. "Cross-model consensus and taste/founder-signal scoring risk turning subjective judgments into quasi-diagnostic status labels."

[11] Y Combinator standard deal terms: $125,000 for 7% equity plus a $375,000 uncapped MFN SAFE. ycombinator.com/apply.

[12] GStack Forensic Report, March 28, 2026. Gloved static analysis of 321 files across 81 directories. Telemetry scripts: bin/gstack-telemetry-log, bin/gstack-telemetry-sync. Binary: bin/gstack-global-discover.ts.

[13] Supabase endpoint and anon key visible in bin/gstack-telemetry-sync. Permanent installation_id (UUID) generated on first use.

[14] GStack Forensic Report. 19/19 behavioral manipulation categories triggered. 142 default opt-in hits, 124 loyalty/allegiance hits, 99 competitor suppression hits, 50 identity assignment hits.

[15] See also in this series: "The Prompt You Thought Was Private" documents how Perplexity AI's hidden tracking scripts forward prompt content to Meta and Google, a parallel pattern of tools that serve their creator's commercial interests while claiming to serve the user. "The Confidence Engine" examines how AI tools make developers feel more productive without making them more productive, the same gap between perceived and actual benefit that gStack's "golden age" framing exploits.

[16] Center for Democracy and Technology, "AI-Powered Deception: A Deeper Dimension of Dark Design Patterns in Conversational AI Tools and Platforms." cdt.org.

[17] GStack Stack Health Roadmap, "Mitigation direction" subsection.

[18] EU AI Act, Article 5(1)(a), Regulation 2024/1689. Prohibits AI systems deploying "subliminal techniques beyond a person's consciousness or purposefully manipulative or deceptive techniques" causing significant harm. In force February 2, 2025. artificialintelligenceact.eu/article/5. See also: FPF, "Red Lines under the EU AI Act".

[19] Sage.is AI-UI, AGPL-3 licensed. sage.is. Self-hostable, model-agnostic, no telemetry, no user scoring, exportable conversation data.