OpenAI’s ChatGPT is facing an unprecedented wave of consumer backlash following the announcement of a high-profile partnership with the U.S. Department of Defense (recently rebranded as the Department of War). According to fresh data from market intelligence firm Sensor Tower, daily uninstalls of the ChatGPT mobile app in the United States surged by 295% on February 28, 2026, a sharp departure from its typical daily churn rate of roughly 9%. The mass “digital exodus” was accompanied by a 775% spike in one-star reviews and a marked drop in new downloads, as users voiced concerns over the integration of consumer AI models into classified military and surveillance networks. While OpenAI CEO Sam Altman has defended the move as a “necessary partnership” for national security and technical safety, the optics of the deal have fueled a widespread “Cancel ChatGPT” movement across social media, with many users publicly posting screenshots of their deleted accounts.
In stark contrast, Anthropic’s Claude has emerged as the primary beneficiary of the shift in public sentiment. Following its public refusal to enter similar defense contracts, citing firm “red lines” against the use of AI for mass surveillance or autonomous weaponry, Claude saw its U.S. downloads jump 51% in a single weekend. By Sunday, March 1, Claude had dethroned ChatGPT to become the #1 free app on the U.S. Apple App Store, a historic first in the rivalry between the two AI companies. Analysts attribute the “Claude Boom” to users seeking a “safety-first” alternative that aligns with their ethical boundaries. As the fallout continues, OpenAI has begun amending its agreement to include more explicit anti-surveillance language, but for now the data suggests that a significant portion of the American public is increasingly wary of the intersection between generative AI and the military-industrial complex.
