I canceled before this, but I definitely can’t see myself renewing chatgpt because of this and so much other shadiness.
I just don’t want them to succeed anymore, and I don’t think there’s really any world where they regain my trust.
And to be clear, I don’t expect my actions to make a difference; I would just feel dirty now that they have gotten into bed with this administration. Plus I should probably assume I’d have zero privacy now too…
I'm one of the people that canceled my OpenAI Plus subscription after Sam Altman demonstrated his lack of integrity last week.
I like to think I'm doing something, but honestly I am not sure that it would make a difference if literally every consumer canceled their subscription at this point. They now have an "in" with the slush fund that we call "US military contracting", which is probably worth more than what they were going to make from all the people who canceled their accounts. Not to mention their bribes to our president, meaning that the markets can always be rigged in their favor.
My impression is that lots of companies that deal exclusively with the US govt are doing fine, but it doesn't seem like they'd draw the best talent or become the biggest companies. If that's OpenAI's fate, I'm okay with it.
I canceled my sub a while back and figured hey - worst case, I saved some money (since I subscribe to multiple AI services). Regardless, casting your one vote with your money is important.
I guess what frustrates me is how much people seem to have amnesia with anything involving tech. In a week or two this will fade from the collective consciousness, and OpenAI will be unaffected and they'll release new features and people will start subscribing again.
I mean, look at all the reputational damage that has happened with Microsoft and Google throughout the years; it's so common that it's basically ignored now.
I'm not saying they're going to blow up next month, but OpenAI is substantially overleveraged on being a "wave of the future" company where all the smartest people want to work and all the best other companies want to do business with. I don't think a world where they become Palantir 2 can support their current capital expenditures.
Maybe they could try and rebrand as a "lifestyle brand" like Palantir has been. https://www.wired.com/story/palantir-wants-to-be-a-lifestyle...
I was on the fence before last week happened, but that really sealed it for me.
I'm glad I was able to export all my data, but they made me wait 24 hours nearly-on-the-dot to get it.
Wonder how many folks didn't bother waiting.
Same here. And unfortunately I agree.
I generally don’t have faith that many consumer boycotts will work, but boy is it ever easy to switch away from OpenAI.
Should have done it earlier [0], when no one cared and the signs were all there.
Anyway this boycott is going to fail. Why? Because Claude was used for not only the Venezuela operation [1] but also the one in Iran [2].
Remember Anthropic never objected to military use in foreign operations, but only domestic. So if the government never made that request to Anthropic, there wouldn't be any outrage, and military use would continue with the (illegal) war in Iran still using Anthropic as a vendor.
So no different to how P̶a̶l̶a̶n̶t̶i̶r̶ Anthropic is already used in the Middle East.
[0] https://www.theguardian.com/technology/2025/jun/17/openai-mi...
[1] https://www.theguardian.com/technology/2026/feb/14/us-milita...
[2] https://www.cbsnews.com/news/anthropic-claude-ai-iran-war-u-...
The only thing that will save OpenAI is a miracle. The deals only prolong the pain. Just end it already. Nobody wants their products. We want affordable RAM and SSD.
What I find surprising is that I knew Altman is a horrible human being and a terrible steward of the AI revolution, yet I still got disappointed.
Unless ~800k users cancel, it is still a net positive for them, and that is with the current, relatively small contract they have. The reality is that the money is in B2B.
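Rough arithmetic behind a figure like ~800k (the contract value here is an assumption for illustration, not a reported number):

```python
# Hypothetical break-even estimate: how many Plus cancellations would it
# take to offset an assumed government contract? Inputs are illustrative.
contract_value_per_year = 200_000_000  # assumed $200M/yr contract
plus_price_per_month = 20              # ChatGPT Plus list price, $/mo
revenue_per_user_per_year = plus_price_per_month * 12  # $240/yr

breakeven_cancellations = contract_value_per_year / revenue_per_user_per_year
print(f"{breakeven_cancellations:,.0f} cancellations to break even")
# → 833,333 cancellations to break even
```

Under those assumptions, somewhere north of 800k paying subscribers would have to cancel before the consumer side outweighs one mid-sized contract, which is the point the comment is making.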
According to QuitGPT[0], as of now, 2.5 million people have done so, which they base on "website signatures, share counts on social media, and credible app usage data". I don't know how accurate this is, but it seems like a rather high number.
[0] https://quitgpt.org/
They don't claim that's 2.5M cancellations of paid subscriptions. It's "taken action". It's vague for a reason
Related:
How do I cancel my ChatGPT subscription?
https://news.ycombinator.com/item?id=47190997
OpenAI – How to delete your account
https://news.ycombinator.com/item?id=47193478
I regret ever having signed up, and not only because of this. They also never clearly state that they don't analyze, read, or use my prompts - it freaks me out that some derived or "anonymized" version of my prompt may be saved or even used for training other models.
No offense meant - unfortunately tone doesn't carry well in text - but what did you think was happening there? The concerns you've expressed seem like things that were clearly happening for all users, no matter what, from the beginning.
Genuine question - what’s wrong with the military leveraging modern technologies? They have been doing it for ages, and thanks to that process we now have, for example, the internet.
Or what’s inherently wrong with catering to a military customer?
What’s going on here?
Nothing. And there’s nothing wrong with Anthropic sticking to their principles.
What is wrong, however, is the retaliatory move to label Anthropic as a supply chain risk. Further, the timing of it all stinks.
Everyone smells a rat, not just me. Ever heard of “ask the audience”? The crowd is right a HUGE majority of the time.
A lot of people won’t deal with a customer that may be using their product to inflict bodily harm or kill someone. A lot of people are also displeased with the current head of the Department of Defense and with his actions, e.g. that he allegedly ordered people killed after their boat had been destroyed.
Politics with a sprinkle of hypocrisy, got it.
I'm confused. Sam writes "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement." How is this different from what Anthropic wanted?
As far as I can tell, and I'm sure someone will correct me if I'm wrong, the OpenAI version was: we'll let the government do whatever the law permits. Whereas Anthropic seemed to say: we're not going to cross those lines, even if they are permitted by law.
An important distinction, as there are currently no laws against autonomous weapons, and surveillance has been widespread and unhindered by judicial oversight for quite a while now.
So I guess the question is: when Sam says "and we put them into our agreement," what exactly was in the agreement? I distrust Sam. But if you were to trust his statements, it seems like he's saying he implemented, in his agreement with the government, the safeguards Anthropic wanted.
He's trying to make it sound that way, but in the legal domain the devil lies in the details.
It seems that the government wanted to use Claude for mass analysis of commercially obtained data on Americans and Anthropic wouldn't let them (source: https://www.theatlantic.com/technology/2026/03/inside-anthro... ).
The DoD kept asking for contract changes that would, at minimum, soften the legalese into something more permissive, but Anthropic stood their ground.
Sam Altman probably agreed to it, while using language like "we have technical means of oversight and the same red lines as Anthropic". But in reality they will allow the DoD to do what Anthropic wouldn't.
See this for more information: https://www.lesswrong.com/posts/PBrggrw4mhgbksoYY/a-tale-of-...
Hegseth declared quite unofficially (on X) that any company that does business with the US military must cease all business interactions with Anthropic. If that actually becomes an official thing, that will be worth billions lost to Anthropic, whereas a ChatGPT boycott by a few million people (who probably weren't using it that much anyway) is mostly a drop in the bucket.
Imagine if OpenAI fails one day and sells to a company like Palantir. What would happen to all of the sensitive data that they're sitting on?
Most regular people don't care and will continue to use ChatGPT.
ChatGPT has 900 million users now, across continents, and they don't care about this DoW deal.
Yeah, but OpenAI's market share is down to 45% from 69% a year ago, they are #4 on LLM Arena behind Claude, Gemini, and Grok, they are heavily loss-making, and winning may depend on having the best researchers, who are likely put off.
But the 10M who care will spend their money elsewhere. That is already something - for OpenAI, and also for the competitors these people will switch to.
Earlier: https://news.ycombinator.com/item?id=47230990
Anthropic are really struggling with the influx of new users; there are alerts multiple times per day.
What a great problem to have. As a startup I’d love to be in that position.
Here's a reminder that before you cancel you can go to Settings -> Data Controls -> Export Data to keep your chats.
> more than 1.5 million people have taken action, either by cancelling subscriptions, sharing boycott messages on social media, or signing up via quitgpt.org
That's just silly compared to their user base and won't have any effect.
I mean, if Chinese AI companies are constantly distilling the latest Anthropic models, and those companies are closely tied to the CCP/PLA, aren't Anthropic models already being used for military purposes?