Why ChatGPT Needs Ads (And What They Changed to Make It Happen)
OpenAI once called ads a "last resort." What changed — and whether Anthropic will face the same pressure.
Why we're* writing this piece
There is a flood of AI news, and we want to be a life raft, not contribute to the torrent. So here's why we're writing about ads:
Privacy matters, and you're interested in it
Understanding the structure & strategy of companies gives you a framework to make sense of their actions, which makes news more predictable & less overwhelming
I diffed the privacy policies using the Wayback Machine & Cowork and needed to tell someone about it
There are concrete actions you can take to secure your accounts, and we want you to know about them
About those ChatGPT ads
You may have seen the Super Bowl ads: ChatGPT is now serving ads to its Free and Go tiers, and Anthropic, the company behind Claude, ran a series of spots — titled "Betrayal," "Deception," "Treachery," and "Violation" — promising they'll never do the same. Two companies, opposite bets.
If you, your organization, and your loved ones only use paid AI accounts, this may not seem like your problem. But as we saw with social media, the decisions companies make to monetize 'free' accounts can end up shaping the society we're all a part of.
So how is it that one company, OpenAI, has decided its future will be ad-supported, while the other, Anthropic, feels confident not just holding off for now but saying never in the most public way possible?
OpenAI: (almost) a billion hungry mouths to feed
When I saw that ChatGPT was going to start running ads, I wasn't surprised. They have a planetary-sized user base — more than 900 million weekly users — that doesn't pay its way, and they are consequently burning through cash. The well-established playbook for this situation is to serve ads.
My first thought wasn't for the ads themselves but for the privacy policy. As part of my research for our Foundations course, I have read the privacy policy top to bottom more than once, and it didn't feel like it left space for ads. I was sure they were going to have to update it.
In early February, the email arrived: "Updates to OpenAI's Privacy Policy." I knew exactly which clause I wanted to check: the one that states they don't "sell" personal data, don't "share" it for "cross-contextual behavioral advertising," and don't process it for "targeted advertising." To my surprise, that section was entirely unchanged. How were they going to target ads?
I decided to roll back the tapes to get the complete picture. I pulled the previous version of OpenAI's privacy policy using the Wayback Machine, then used Anthropic's Cowork to diff the old and new versions. Half a dozen changes emerged, which together lay the groundwork for an ad machine while leaving the no-selling pledge technically intact.
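For readers who want to run the same comparison themselves, the exercise can be sketched in a few lines of Python: fetch two Wayback Machine snapshots of a page and keep only the lines that changed. This is a minimal sketch, not the exact tooling I used; the policy URL and the snapshot-timestamp format are illustrative assumptions.

```python
"""Diff two archived versions of a web page via the Wayback Machine."""
import difflib
import urllib.request

# Wayback Machine snapshot URL pattern; ts is YYYYMMDDhhmmss.
WAYBACK = "https://web.archive.org/web/{ts}/{url}"
POLICY_URL = "https://openai.com/policies/privacy-policy/"  # assumed URL

def fetch_snapshot(timestamp: str, url: str = POLICY_URL) -> list[str]:
    """Fetch the archived copy of `url` as of `timestamp`, split into lines."""
    with urllib.request.urlopen(WAYBACK.format(ts=timestamp, url=url)) as resp:
        return resp.read().decode("utf-8", errors="replace").splitlines()

def policy_diff(old: list[str], new: list[str]) -> list[str]:
    """Return only the added (+) and removed (-) lines between two versions,
    dropping unified-diff file headers and hunk markers."""
    return [
        line
        for line in difflib.unified_diff(old, new, lineterm="")
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
    ]
```

Running `policy_diff(fetch_snapshot("20251101000000"), fetch_snapshot("20260210000000"))` would surface exactly the kind of additions discussed below, such as a new "personalize and customize your experience" clause. (In practice you would also want to strip the HTML before diffing; the raw markup is noisy.)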
The most important change is a new line saying OpenAI can use your data to "personalize and customize your experience" — language broad enough to cover deciding which ads to show you.
Then comes the data collection to feed that personalization. OpenAI expanded the information they can gather in two telling ways. Connect your phone contacts, and they upload your address book. Use their new Atlas browser, and your browsing history is captured too.
Now, OpenAI says all the right things about how these ads will work. Their announcement commits to four principles: answer independence (ads don't influence ChatGPT's answers), conversation privacy (conversations stay private from advertisers), no data sales, and user control (you can turn off personalization and clear ad data at any time). The ads they've described are contextual — their example is someone searching for dinner ideas and being served a sponsored enchilada kit. You expressed intent, here's a product. That's Google-style advertising: safe, legible, boring.
Some personalization makes contextual ads better — showing you the right enchilada kit, not just any enchilada kit. But you don't need your users' address books for that. You don't need their browsing history. Taken together, the privacy policy changes are overbuilt for the ads OpenAI described. Which raises the question: what were they built for?
The architecture supports segment-based targeting, retargeting across websites, and a social graph built from your contacts — the full apparatus of behavioral advertising, running inside OpenAI's systems, without ever technically "selling" your data.
Google built its ad empire starting from intent — you searched for something, here's a relevant result. Meta built its ad empire starting from identity — who you are, who you know, what you engage with. Both companies do behavioral targeting now, but the foundation is different. The ads OpenAI announced start from intent: you asked about dinner, here's an enchilada kit. The plumbing OpenAI built starts from identity: your contacts, your browsing, your history across conversations.
And OpenAI may end up with something more powerful than either Google or Meta. Google infers what you want from what you search for. Meta infers who you are from what you do. But ChatGPT doesn't have to infer. People tell it directly — in natural language, with a specificity and emotional texture that no search query or social graph can match. No one types "I'm anxious about my marriage and here's why and what does it mean" into Google. They do tell ChatGPT.
Google watches what you search for and serves you a result. Meta watches what you do and guesses what you feel. OpenAI is sitting on a machine where people volunteer what they feel and ask for help.
OpenAI hasn't built that ad machine yet. They could still decide not to. But the privacy policy changes create the capacity for it, and they've hired a team that knows how. As The Information reported in October 2025, one in five OpenAI employees now comes from Meta. This includes OpenAI's applications CEO, Fidji Simo, who runs ChatGPT and pretty much every other money-making division of the company. Before this role, she spent a decade at Facebook, where she was instrumental in building its monetization engine — launching the news feed ads that transformed the social network into a financial juggernaut. Hiring is its own kind of signal.
The bet OpenAI is making is that by the time the plumbing carries what it was built to allow — even if they told themselves "just in case" when they built it — we'll have already made our peace with ads in the conversation, and so will they.
So why can Anthropic say never?
Claude has less than 5% of ChatGPT's user base. But Anthropic's revenue is approaching OpenAI's $22 billion in annualized run rate. You don't need 900 million weekly users if you're attracting and monetizing the right ones.
Anthropic's revenue comes overwhelmingly from businesses: over 500 organizations spend more than a million dollars a year on Claude, eight of the Fortune 10 are customers, and Claude Code, a single product launched just last year, generates $2.5 billion in annual revenue on its own. Claude also doesn't center the compute-intensive features that melt GPUs — image generation, video — instead emphasizing cheaper-to-produce but higher-value outputs like code. With fewer free users, more paying businesses, and lower costs per interaction, Anthropic can simply afford to do what OpenAI cannot.
Being able to afford the absence of ads is one thing. Proclaiming on national television that you will never have them is another. In a February blog post titled "Claude is a space to think," Anthropic's leadership made the incentive argument explicit. Imagine a user says to Claude, "I can't sleep." A Claude optimizing for engagement might keep the conversation going. A Claude optimizing for ad revenue might surface a mattress brand. The Claude they say they want to build would help you figure out why you can't sleep — and the most useful version of that interaction might be a short one. As they argue, you can't optimize for that outcome and for advertising revenue at the same time.
I ran the same exercise on Anthropic's policy that I ran on OpenAI's — looking for the plumbing that would need to exist before ads could flow. It isn't there. No "personalize and customize your experience" language. No advertiser or ad network categories in data sharing. No ad-tracking cookies. No mechanism for segment-based targeting. The architecture matches the words: there are no pipes through which advertising could run — yet.
What could change
The same Anthropic Super Bowl campaign that advertised the absence of ads is helping to build the user base that could eventually make ads necessary. Claude is currently the number one app in the Apple App Store. Every new free user costs money to serve without generating revenue. Individually the cost is small, but in aggregate, millions of free users add up.
In the same piece where they explained they won't run ads, Anthropic's leadership has described an alternative to advertising: agentic commerce, where Claude handles purchases and bookings on your behalf. You ask Claude to find a flight or compare insurance plans; Claude facilitates the transaction and earns a fee. The distinction from advertising is that the interaction is user-initiated — you asked for help buying something, rather than having a product placed in front of you by a company that paid for your attention. Though to a user, this difference may feel semantic.
As it turns out, this commerce path is the same path OpenAI described before it started running ads. In March 2025, Sam Altman said he was "more excited" about commerce than advertising — specifically, a model where ChatGPT earns a small affiliate fee when someone buys something through a recommendation. "We're never going to take money to change placement or whatever," he said. "I kind of just don't like ads that much."
Eleven months later, ChatGPT launched ads.
Anthropic's privacy policy doesn't enable advertising. Its product isn't optimized for engagement. Its revenue doesn't depend on consumer attention. The architecture matches the announcements — for now. But OpenAI once made the same trust argument, described the same commerce-first path, and arrived at ads anyway.
The lesson of social media wasn't just that our data was collected. It was that the advertising model created incentives to manipulate the emotions it was mining — because anxious, outraged, uncertain people engage more, and engagement is what advertisers pay for. An AI assistant optimized for the same incentives wouldn't just observe what you feel. It would have reason to amplify it.
The decisions being made right now about how AI companies monetize their free tiers will shape how hundreds of millions of people think through the most personal questions of their lives. We have better tools to read the architecture this time. The question is whether we'll use them.
What you can do
To secure your information on any account with any AI:
Ensure you've turned off model training. In ChatGPT's Settings, under Data Controls, toggle off "Help improve the model for everyone." This doesn't affect ads, but it keeps your conversations from being used to train future models. If you are on an Enterprise account, you may not see the option because it has been turned off institutionally.
To keep your information from being used for ad targeting on ChatGPT Free & Go plans:
You can't opt out of advertising, but you can opt out of personalization. In ChatGPT's Settings, under Ad Controls, toggle off Personalize Ads. This won't prevent you from seeing ads, but it will prevent OpenAI from using your past chats and memories to select them.
To build a better world:
If you work in education, policy, or advocacy, bring these issues into conversations about AI fluency. The conversation about responsible AI use often focuses on what students do with AI. But when students are on free accounts with no data or privacy protections, an equally urgent question is what AI does with them.
* As we teach in Foundations, AI is not a magic typewriter — writing is still a process, AI just enhances it. To get this newsletter out regularly is not a job for me & AI alone: Katharine Nevins is the managing editor of Coauthored. For purposes of this particular newsletter, it was helpful that Katharine spent two years working at AppNexus, an online display advertising platform. She is also an operating partner at Parameter Ventures and has held a number of senior product roles in tech, most recently as Chief Technology Officer of SaladStop!