Why Are People Deleting ChatGPT in 2026? (295% Uninstall Spike Explained)
Pentagon contracts, presidential politics, and a court order that means your deleted chats were never actually deleted — here's the full story behind the biggest AI backlash of 2026.
People are deleting ChatGPT in 2026 primarily because OpenAI signed a deal with the U.S. Department of Defense in late February 2026. This triggered a 295% spike in app uninstalls in a single day, a 700,000-person #QuitGPT boycott, and a mass migration to Claude.
[Image: Claude's ranking on the US App Store — dethroning ChatGPT]
Something unusual is happening in the world of AI. ChatGPT — for years the dominant, near-synonymous face of consumer artificial intelligence — is facing a user revolt. People are uninstalling the app, cancelling paid subscriptions, and posting farewell screenshots on social media. The numbers are dramatic. But the reasons behind them are more interesting than a simple tech spat.
This is about war contracts, surveillance fears, political donations, and a court order that quietly changed what "delete" means when you talk to an AI.
The Pentagon Deal That Started It All
The breaking point came when OpenAI CEO Sam Altman announced a new partnership with the United States Department of Defense. To much of ChatGPT's user base — a mix of students, writers, developers, and professionals — this felt like a line crossed. The announcement triggered an immediate and measurable response: app uninstall data from market intelligence firm Sensor Tower showed ChatGPT mobile uninstalls surging by 295% in a single day.
The backlash wasn't purely about the abstract idea of AI in the military. Users also pointed to OpenAI's concurrent lobbying efforts to prevent state-level AI regulation — a move critics argued would give federal agencies, and by extension the current administration, far greater control over how AI tools are deployed domestically.
"OpenAI has been lobbying for AI to go unregulated at the state level — critics say this would give the Trump administration ultimate control over AI implementation."
— Analysis of the QuitGPT movement
The #QuitGPT Movement
Discontent crystallised into an organised campaign. A decentralised boycott movement calling itself "QuitGPT" spread across Reddit, Instagram, and a dedicated website, encouraging users to cancel subscriptions and migrate to alternatives. The grievances were multiple and interconnected.
The QuitGPT Grievances, Summarised
- Military AI use: The DoD deal and its implications for autonomous weapons and surveillance systems.
- Political donations: OpenAI leadership made donations to a pro-Trump super PAC, alienating a large segment of the user base.
- Immigration enforcement: Reports that AI tools were being used by federal agencies, including ICE, in immigration enforcement operations.
- Deregulation push: OpenAI's lobbying against state-level AI oversight.
By the time the campaign peaked, organisers claimed over 700,000 users had committed to the boycott — a number that, even if partially inflated, represents a remarkable moment of consumer pushback against a technology company.
Your Deleted Chats Were Never Actually Deleted
Alongside the political drama, a more quietly alarming development emerged from the US court system. In the ongoing lawsuit between The New York Times and OpenAI, a federal judge issued a preservation order in May 2025 — requiring ChatGPT conversations to be retained indefinitely, even when users press delete.
This suspended OpenAI's standard 30-day deletion policy for most users. In other words, anyone who had conversations with ChatGPT and trusted that deleting them removed them from OpenAI's servers was mistaken. Those conversations are now preserved as potential legal evidence, with no clear timeline for when — or if — they will ever be deleted.
For privacy-conscious users, this was the final straw.
Where Are They Going? Claude.
The beneficiary of this exodus has been Claude, the AI assistant built by Anthropic. In the days following the peak of the ChatGPT backlash, Claude surged to the top of the US App Store — a first for the product, and the first time an AI assistant had overtaken ChatGPT in American download charts.
The contrast users point to is deliberate and visible. Anthropic refused to enter into a comparable deal with the Pentagon, and its publicly stated usage policies explicitly prohibit Claude from being used in autonomous weapons systems or in the mass surveillance of citizens. For a user base newly sensitised to questions of military and government AI use, this distinction carries real weight.
Whether the shift is permanent — or whether it represents a temporary protest that fades as memories of the controversy do — remains to be seen. Tech history is full of boycotts that burned bright and then fizzled. But the scale of this one, and the number of compounding factors behind it, suggests something more durable may be underway.
What This Means for the AI Industry
The ChatGPT backlash is, at its core, a story about trust. For years, AI companies operated in a kind of friction-free zone: users were fascinated enough by the technology that questions about corporate ethics, government partnerships, and data practices rarely broke through to mainstream attention. That may be changing.
Users are now paying attention to where AI companies take their money, who they partner with, and what happens to the data those conversations generate. The "just use the product" era of AI adoption may be giving way to something more demanding: an era where AI companies are held to the same scrutiny as any other powerful institution in public life.
OpenAI built its reputation on the promise of making AI beneficial for humanity. That promise is now being put to a vote — one uninstall at a time.
