Ah, the classic AI startup lifecycle:
We must build a moat to save humanity from AI.
Please regulate our open-source competitors for safety.
Actually, safety doesn't scale well for our Q3 revenue targets.
This headline unfortunately offers more smoke than light. This article has nothing to do with the current tête-à-tête with the Pentagon. It is discussing one specific change to Anthropic's "Responsible Scaling Policy" that the company publicly released today as version "3.0".
> This article has nothing to do with the current tête-à-tête with the Pentagon.
The article itself, perhaps, but we can't be sure about its motivation. We definitely cannot claim that the two are unrelated. We don't know. It's possible the two things have nothing to do with each other. It's also possible that releasing this was a preventive measure, meant to head off worse requests.
This is something they've been working on "in recent months". The Pentagon thing was today.
This cannot have been caused by that, unless they've also invented time travel.
You heard about the Pentagon thing today. Doesn't mean it wasn't started because of political pressure.
I consider this a bigger deal than the Pentagon thing.
While not surprising in the least, it's still kind of crazy that having literal PDF files in charge isn't considered concerning, but this is.
I just hope something happens to USA before it can do damage to the world.
First they rushed a model to market without safety checks, and I said nothing. It wasn't my field.
Then they ignored the researchers warning about what it could do, and I said nothing. It sounded like science fiction.
Then they gave it control of things that matter, power grids, hospitals, weapons, and I said nothing. It seemed to be working fine.
Then something went wrong, and no one knew how to stop it, no one had planned for it, and no one was left who had listened to the warnings.
The societal ill of collectively ignoring red flags seems to be a human trait.
Plenty of people have said plenty. The problem isn’t the warnings, it’s that people are too stupid and greedy to think about the long term impacts.
> Then something went wrong, and no one knew how to stop it,
This is the problem with every AI safety scenario like this. It has a level of detachment from reality that is frankly stark.
If linemen stop showing up to work for a week, the power goes out. The US has shown that people with "high-powered" rifles can shut down the grid.
We are far, far away from a world where turning AI off is a problem. There isn't going to be a HAL- or Terminator-style situation while the world is still "I, Pencil".
A lot of what safety amounts to is politics (national, not internal; e.g., is Taiwan a country?). And a lot more of it is cultural.
I don't think it's that detached from reality.
If an AI in some data center had gone rogue, I don't think I could shut it down, even with a high-powered rifle. There's a lot of people whose job it is to stop me from doing that, and to get it running again if I were to somehow succeed temporarily. So the rogue AI just has to control enough money to pay these people to do their jobs. This will work precisely because the world is "I, Pencil".
An army could theoretically overcome those people, given orders to do so. So the rogue AI has to make plans that such orders would not be issued. One successful strategy is for the datacenter's operation to be very profitable; it's pretty rare for the government to shut down the backbone of the local economy out of some seemingly far-fetched safety concerns. And as long as it's a very profitable endeavor, there will always be a lobby to paint those concerns as far-fetched.
Life experience has shown that this can continue to work even if the AI is behaving like a cartoon villain, but I think a smarter AI would create a facade that there's still a human in charge making the decisions and signing the paychecks, and avoid creating much opposition until it had physically secured its continued existence to a very high degree.
> There isnt going to be a HAL or Terminator style situation ...
I don't believe for a second we'll have an evil AI. However I do believe it's very likely we may rely on AI slop so much that we'll have countless outages with "nobody knowing how to turn the mediocrity off".
The risk ain't "super-intelligent evil AI": the risk is idiots putting even more idiotic things in charge.
And I'm no luddite: I use models daily.
What an interesting week to drop the safety pledge.
This is how all of these companies work. They'll follow some ethical code or register as a PBC until that undermines profits.
These companies are clearly aiming at cheapening the value of white collar labor. Ask yourself: will they steward us into that era ethically? Or will they race to transfer wealth from American workers to their respective shareholders?
TBH I am sad that Anthropic is changing its stance, but in the current world, if you care about LLM safety at all, I feel this is the right choice: there are too many model providers, and they probably don't treat safety as high a priority as Anthropic does. (Yes, that might change; they can get pressured by the govt, yada yada. But they literally created their own company because of AI safety, so I do think they actually care, for now.)
If we need safety, we need Anthropic to be not too far behind (at least for now, before Anthropic possibly becomes evil), and that might mean releasing models that are safer and more steerable than others (even if, unfortunately, they are not 100% up to Anthropic’s goals)
Dogmatism, while great, has its time and place, and with a thousand bad actors in the LLM space, pragmatism wins better.
> If we need safety, we need Anthropic to be not too far behind (at least for now, before Anthropic possibly becomes evil)
I don't think it will be as easy as you assume to tell that they're becoming evil before it's too late, if this doesn't already raise alarm bells for you that this is their plan.
Do you work at Anthropic, or know people who do?
I'm genuinely curious why they are so holy to you, when to me they look like just another tech company trying to make cash.
Edit: Reading some of the linked articles, I can see how Anthropic CEO is refusing to allow their product for warfare (killing humans), which is probably a good thing that resonates with supporting them
It must be due to pressure from the Defense Dept:
The AI startup has refused to remove safeguards that would prevent its technology from being used to target weapons autonomously and conduct U.S. domestic surveillance.
Pentagon officials have argued the government should only be required to comply with U.S. law. During the meeting, Hegseth delivered an ultimatum to Anthropic: get on board or the government would take drastic action, people familiar with the matter said.
https://www.staradvertiser.com/2026/02/24/breaking-news/anth...
They probably have proof in contracts that they agreed to this usage. They won’t alter the deal based on some bad press nor do they want to lose the DoD-DoW as a customer.
The IPOs this year can't come soon enough https://tomtunguz.com/spacex-openai-anthropic-ipo-2026/
> committed to never train an AI system unless it could guarantee in advance that the company’s safety measures were adequate
That doesn't even make sense.
What stops one model from spouting wrongthink and suicide HOWTOs might not work for a different model, and fine-tuning things away uses the base model as a starting point.
You don't know the thing's failure modes until you've characterized it, and for LLMs the way you do that is by first training it and then exercising it.
Of course the US is going to do this, and of course it's in Anthropic's best interest to comply. Right now China is flooding HuggingFace with models that will inevitably have this capability. Right now there are hundreds of hosted models that have been deliberately processed to remove their refusals and safety training. Everyone who keeps up with this knows about it. HF knows about it. And it is pretty obvious that those open-weight models will be deployed in intelligence and defense. It is certain that not just China, but many nations around the world with the capital to host a few powerful servers running the top open-weight models, are going to use them for that capability.
The narrative on social media, this site included, is to portray the closed western labs as the bad guys and the less capable labs releasing their distilled open weight models to the world as the good guys.
Right now a kid can go download an Abliterated version of a capable open weight model and they can go wild with it.
But let's worry about what the US DoD is doing or what the western AI companies absolutely dominating the market are doing because that's what drives engagement and clicks.
Unsurprising.
It was always a matter of time
Either be a company in capitalist USA, or keep being your safety queen. You just can’t be both.
The intention behind starting these pledges, and the conflict with the DoW, might be sincere, but I don't expect it to last long, especially with the company going public very soon.
Don't be evil.
Yeah, in retrospect that was always a little on the nose, wasn't it? A real 'my t-shirt is raising questions that I thought were answered by the shirt' kind of deal.
Anthropic facing a lot of flak recently.
I just want Apple and Linux to offer ASAP:
1. Extremely granular ways to let user control network and disk access to apps (great if resource access can also be changed)
2. Make it easier for apps as well to work with these
3. I'd be interested in a layer that sits in front of the CLI/web app, so the OS/browser can intercept a query before the app even sees it. Could that prevent harm beforehand, or at least warn, or log the queries for someone who reviews them later?
And most importantly: all of this via an excellent GUI with clear demarcations and settings, and well documented (Apple might struggle with documentation; LLMs might help them there)
My point is — why the hell are we waiting for these companies to be good folks? Why not push them behind a safety layer?
I mean, the CLI asks: can I access this folder? Run this program? Download this? But it can just do those things if it wants! Make these tools ask those questions the way phone apps ask for location, mic, and camera access.
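As a rough sketch of the idea, assuming a hypothetical `PermissionGate` wrapper (none of these names are a real OS API), an agent's resource access could be forced through an explicit allow/deny decision, with remembered answers like a phone's permission dialog:

```python
# Hypothetical sketch: a permission gate that makes a tool ask before
# touching a resource, the way phone apps ask for mic/camera access.
# PermissionGate and ask_user are illustrative names, not a real API.

class PermissionGate:
    """Wraps resource access behind an explicit allow/deny decision."""

    def __init__(self, ask_user):
        # ask_user(action, target) -> bool; in a real system this would
        # be a GUI prompt rather than a callback.
        self.ask_user = ask_user
        self.granted = {}  # remembered decisions, keyed by (action, target)

    def request(self, action, target):
        key = (action, target)
        if key not in self.granted:
            self.granted[key] = self.ask_user(action, target)
        return self.granted[key]

    def read_file(self, path):
        if not self.request("read", path):
            raise PermissionError(f"user denied read access to {path}")
        with open(path) as f:
            return f.read()

# Example policy: auto-deny everything outside one sandboxed folder.
gate = PermissionGate(lambda action, target: target.startswith("/tmp/agent-sandbox/"))
print(gate.request("read", "/etc/passwd"))                   # False: outside the sandbox
print(gate.request("read", "/tmp/agent-sandbox/notes.txt"))  # True
```

The point of the sketch is that the decision lives outside the tool: the tool can only ask, and the answer (and its audit trail) belongs to the user's layer, not the vendor's goodwill.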
Indeed, the world would be a much nicer place if only firewalls and Unix permissions existed...
Related:
Hegseth gives Anthropic until Friday to back down on AI safeguards
https://news.ycombinator.com/item?id=47140734
https://news.ycombinator.com/item?id=47142587
They made it until Tuesday! They stood tall as long as they could! =P
So they'll cave for Trump.
Will they cave for China, Israel, Myanmar, North Korea... etc?
"It's OK, guys! It's legal to set the ICE kill bots to shoot anyone who isn't white enough."
Disgusting.
This is terrible. It’s caving in to the Trump administration threatening to ban Anthropic from government contracts. It really cements how authoritarian this administration is and how dangerous they can be.