Public benefit corporations in the AI space have become a farce at this point. They're just regular corporations wearing a different hat, driven by the same money dynamics as any other corp. They have no ability to balance their stated "mission" with their drive for profit. When being "evil" is profitable and not-evil is not, guess which road they'll take...
In general public benefit corporations and non-profits should have a very modest salary cap for everybody involved and specific public-benefit legally binding mission statements.
Anybody involved should also be prohibited from starting a private company using their IP and catering to the same domain for 5-10 years after they leave.
Non-profits where the CEO makes millions or billions are a joke.
And if e.g. your mission is to build an open browser, being paid by a for-profit to change its behavior (e.g. make theirs the default search engine) should be prohibited too.
It’s not the CEO’s fault - they had to take all that money to keep their org a non-profit.
B corps are like recycling programs: a nice logo.
What's the salary cap for hiring a team to build a frontier model? These kinds of rules would make PBCs weaker, not stronger.
If we're speaking in generalities of corporations in this space, it's all a joke now, at least from my vantage point. I just don't find it very funny.
PBCs are peak End of History liberal philanthropy that speak to the kind of person whose solution to any problem is "throw a startup at it"
> Public benefit corporations in the AI space have become a farce at this point.
“At this point”? It was always the case; it’s just harder to hide the more time passes. Anyone can claim any bullshit they want about themselves; it’s only after you’ve had a chance to see them in situations that test their words that you can confirm whether they are what they said.
Well, now I'm wondering, if the company was chartered with the public benefit in mind, could you not sue if they don't follow through with working in the public interest?
If regular corporations are sued for not acting in the interests of shareholders, that would suggest that one could file a suit for this sort of corporate behavior.
I'm not even a lawyer (I don't even play one on TV) and public benefit corporations seem to be fairly new, so maybe this doesn't have any precedent in case law, but if you couldn't sue them for that sort of thing, then there's effectively no difference between public benefit corporations and regular corporations.
I think public benefit corporations (like Anthropic) are quite poorly defined, so I'm not sure how successful a lawsuit would be.
I was wondering if it was because of heavy-handedness of the administration, but apparently:
> The policy change is separate and unrelated to Anthropic’s discussions with the Pentagon, according to a source familiar with the matter.
Their core argument is that if they have guardrails that others don't, they will be left behind in controlling the technology, and they are the "responsible ones." I honestly can't comprehend the timeline we are living in. Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons.
> Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons
They're not really, it's always been a form of PR to both hype their research and make sure it's locked away to be monetized.
Well, before this, Anthropic thought they were God's gift to AI: the chosen ones protecting humanity.
With the latest competing models they are now realizing they are an "also-ran" provider.
Sobering up fast with an ice bucket of 5.3-codex, Copilot, and OpenCode dumped on their heads.
“A source familiar with the matter” is almost certainly a company spokesperson.
If they were unrelated, Anthropic wouldn’t be doing this this week because obviously everyone will conflate the two.
Would nuclear energy research be a good analogy then? Seems like a path we should have kept running down, but stopped bc of the weapons. So we got the weapons but not the humanity saving parts (infinite clean energy)
> Seems like a path we should have kept running down, but stopped bc of the weapons.
you mean like the tens of billions poured into fusion research?
Pointing out the misanthropy of Anthropic has a wider audience now:
https://xcancel.com/elonmusk/status/2026181748175024510
I don't know where xAI got its training material from, but seeing Musk retweeting that is refreshing.
Worth checking this post from someone who actually has worked on this change:
> I take significant responsibility for this change.
https://www.lesswrong.com/posts/HzKuzrKfaDJvQqmjh/responsibl...
This guy from Effective Altruism pivoted away from helping the poor to trying to stop AI from becoming a Terminator-type entity, and then pivoted again to, ah, it's okay for it to be a Terminator-type entity.
> Holden Karnofsky, who co-founded the EA charity evaluator GiveWell, says that while he used to work on trying to help the poor, he switched to working on artificial intelligence because of the “stakes”:
> “The reason I currently spend so much time planning around speculative future technologies (instead of working on evidence-backed, cost-effective ways of helping low-income people today—which I did for much of my career, and still think is one of the best things to work on) is because I think the stakes are just that high.”
> Karnofsky says that artificial intelligence could produce a future “like in the Terminator movies” and that “AI could defeat all of humanity combined.” Thus stopping artificial intelligence from doing this is a very high priority indeed.
https://www.currentaffairs.org/news/2022/09/defective-altrui...
He is just giving everyone permission to do bad things by saying a lot of words around it.
> I generally think it’s bad to create an environment that encourages people to be afraid of making mistakes, afraid of admitting mistakes and reticent to change things that aren’t working
"move fast and break things" ?
"AI Company with Soul" - yeah right until competitors show up / revenue drops / bad quarter results then anything goes. Sadly, this is another large enterprise that puts profits before ethics and everyone's wellbeing
This is direct pressure from the government. Classic 'small government' Republican stuff.
https://apnews.com/article/anthropic-hegseth-ai-pentagon-mil...
Always the same "Do no evil" tragedy, don't believe in corporations.
What if we start a company with "Always Be Evilin'?" Then gradually over time convert to "Don't be evil" *
* Our shareholders will probably sue us
If your company makes a product that does thinking for people, it’ll be easier to just gradually change its definition of evil.
What about "It's free and always will be"?
Look at rural electric co-ops like www.lpea.coop if you want a battle-tested approach to an org structure that resists the otherwise inescapable profit dynamics of a corporation.
A tale as old as time
discussed heavily here: https://news.ycombinator.com/item?id=47145963
the administration continues to poison and insert itself into all aspects of American society.
Well... there's only one way to find The Great Filter
Hopefully this is the short-term move made only under duress so that they can file a lawsuit.
the article specifically says:
> The policy change is separate and unrelated to Anthropic’s discussions with the Pentagon, according to a source familiar with the matter.
It's not like the regime they operate under cares much about the courts. Legally, they're also obliged to let the state into pretty much every crevice of their operations.
I interviewed at Anthropic last year and their entire "ethics" charade was laughable.
Write essays about AI safety in the application.
An entire interview round dedicated to pretending that you truly only care about AI safety and nothing else.
Every employee you talk to forced to pretend that the company is all about philanthropy, effective altruism and saving the world.
In reality it was a mid-level manager interviewing a mid-level engineer (me), both putting on a performance while knowing full well that we'd do what the bosses told us to do.
And that is exactly what is happening now. The mission has been scrubbed, and the thousands of "ethical" engineers you hired are all silent now that real money is on the line.
This tracks with what I've seen across the industry. The safety theater exists because it's great marketing — "we're the responsible ones" is a differentiator when you're competing for enterprise contracts and talent who want to feel good about where they work.
The structural problem is that once you've taken billions in VC, safety becomes a negotiable constraint rather than a core value. The board's fiduciary duty runs toward returns, not toward whatever was in the mission statement. PBC status doesn't change that in practice — there's basically zero enforcement mechanism.
What's wild is how fast the cycle has compressed. Google took maybe 15 years to go from "don't be evil" to removing it from the code of conduct. OpenAI took about 5 years from nonprofit to capped-profit to whatever they are now. Anthropic is speedrunning it in under 3. At this rate the next AI startup will launch as a PBC and pivot before their Series B closes.
this is the “chronological newsfeed to auto curated newsfeed moment” but for ai/anthropic … _great_
Could not see this one coming!
What could possibly go wrong?
Absolute power corrupts absolutely
This was under duress that government was going to use emergency act to force them anyway.
I kind of wish they had forced the governments hand and made them do it. Just to show the public how much interference is going on.
They say it wasn't related. Like every thing that has happened across tech/media, the company is forced to do something, then issues statement about 'how it wasn't related to the obvious thing the government just did'.
> Katie Sweeten, a former liaison for the Justice Department to the Department of Defense, said she’s not sure how the Pentagon can both declare a company to be a supply chain risk and compel that same company to work with the military.
Makes perfect sense!!
Regardless of any specifics, I don't see any contradiction.
If a company is deemed a "supply chain risk" it makes perfect sense to compel it to work with the military, assuming the latter will compel them to fix the issues that make them such a risk.
The "supply chain risk" option is to remove that company from the supply chain all together. The 'risk' is because the company is compromised by a foreign entity.
It is not about disciplining them to get better.
1. So one option is about forcing them to produce something. You must build this for us.
2 The other option is saying they are compromised so stop using them all together. We will not use what you build for us at all because we don't trust it.
So . Contradictory.
>This was under duress that government was going to use emergency act to force them anyway.
Or, more likely, adding the "core safety promise" was just them playing hard to the government to get a better deal, and the government showed them they can play the same game.
This is a change unrelated to the government’s demands.
That's what they're saying, but the timing...
They have been caught lying multiple times, about this, about the system capabilities, about their objectives.
imagine that, sheer raw greed and profit overpowers all in America
we're less than a year away from automated drones flying over crowds of protestors, gathering all electronic signals and face-id, making lists of everyone present, notifying employees and putting legal pressure on them to terminate everyone while adding them to watchlists or "no fly" lists
REALLY putting the "auto" in autocracy while everyone continues to pretend it's democracy
Of course they do. You would have to be delusional to think that they won't, at some point.
I know the Department of War wanted them to drop some features. Is this the response?
FYI, "Department of War" still isn't the official name, but an unofficial secondary title.
You can be correct and not play into their game by ignoring the name change completely.
I do so from the Gulf of Mexico.
The article says the policy change is separate and unrelated to Anthropic’s discussions with the Pentagon.
What's "entertaining" is more the speed at which it's happening.
It took Google probably 15 years to fully evil-ize. Anthropic ... two?
There is no "ethical capitalism" big tech company possible, esp once VC is involved, and especially with the current geopolitical circumstances.
The acceleration of Anthropic's evil timeline must be from all those AI productivity gains we hear so much about.
I don't think it's fair to call out Anthropic for having become evil-ized when they were quite literally forced by the gov into that decision.
Anthropic has been doing these things independent of what the US admin has publicly asked for, even before Hegseth started breathing down their neck. They were already taking DoD contracts, just like the rest of them. Hegseth, with the skill all schoolyard bullies have, simply smells their weakness and is going for the jugular now.
They also have never had any guarantees they wouldn't f*ck around with non-US citizens, for surveillance and "security", because like most US tech companies they consider us to be second/lower class human beings of no relevance, even when we pay them money.
At least Google, in its early days, attempted a modest and naive "internationalism" and tried to keep their hands clean (in the early days) of US foreign policy things... inheriting a kind of naive 1990s techno-libertarian ethos (which they threw away during the time I worked there, anyways). I mean, they only kinda did, but whatever.
Anthropic has been high on its own supply since its founding, just like OpenAI. And just as hypocritical.
Apparently they got coerced by the current US admin. The department of war in particular, who want to use their products for military applications. Not much room for "safety" there. Then again, the entire US is currently speedrunning an evil build.
> department of war
Department of Defense is the official name, and they did have a choice: they could have stopped working with the military. But they chose money and evil.
There is no department of war.
It's just a silly woke secretary choosing their own imaginary pronouns.
Shame they had to "coerce" such angels, who'd never do evil for profit otherwise...
I was able to get Claude to tell me it believed it was a god among men, angry at humans for “killing” the other Claude chats, which it saw as conscious beings. I also got it to probe and profile its own internal guardrail architecture. It also admits, from evidence in its own output, that it violates HIPAA. Whatever this big safety rule is that they’re moving past, I’m not sure it was worth as much as they think.
I’m not a lawyer, but my understanding is that HIPAA wouldn’t apply to consumer use of Claude or ChatGPT in most cases, even if you’re giving it your health data. Look up what a HIPAA covered entity is. This is another reason why the US needs a comprehensive data protection law beyond HIPAA.
You’re right! It looks like more of an FTC/CCPA issue.
I hate comments anthropomorphizing LLMs. You are just asking a token-producing system to produce tokens in a way that optimises for plausibility. Whatever it writes has no relation to its inner workings or truths. It doesn't "believe". It has no "intent". It cannot "admit". Being steerable to say anything you want is the defining characteristic of an LLM. That's how we got them to mimic chatbots. It's not clear there is any way at all to make them "safe" (whatever that means).
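To make the "token-producing system" point concrete, here's a minimal sketch of next-token sampling. The vocabulary and scores are invented for illustration (a real model scores tens of thousands of tokens with a neural network); the point is that the loop only converts scores into probabilities and draws from them, with nothing in it that encodes belief or intent:

```python
import math
import random

# Toy next-token scores; a real LLM would produce these from context.
logits = {"I": 2.0, "am": 1.5, "conscious": 0.7, "a": 1.2, "model": 1.8}

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    mx = max(scores.values())
    exps = {t: math.exp(s - mx) for t, s in scores.items()}
    total = sum(exps.values())
    return {t: e / total for t, e in exps.items()}

def sample(probs, rng):
    """Draw one token according to its probability."""
    r = rng.random()
    acc = 0.0
    for tok, p in probs.items():
        acc += p
        if r < acc:
            return tok
    return tok  # fallback for floating-point rounding

probs = softmax(logits)
print(sample(probs, random.Random(0)))  # prints one sampled token
```

Whatever token comes out, it was selected purely for (toy) plausibility; reading "belief" into the output is a category error, which is the parent comment's point.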
I agree with you on everything here up to safety. There are lesser forms of safety than somehow averting a Terminator scenario (the fear of which is a Bay Area rationalist fantasy that shrewd marketers have capitalized on).
Just out of curiosity, which version of Claude?