It is incredible how far the Overton window has moved on this issue.
When I graduated in 2007, it was common for tech companies to refuse to let their systems be used for war, and it was an ordinary thing when some of my graduating classmates refused to work at companies that did let their systems be used for war. Those refusals were on moral grounds.
Now Anthropic wants to have two narrow exceptions, on pragmatic and not moral grounds. To do so, they have to couch it in language clarifying that they would love to support war, actually, except for these two narrow exceptions. And their careful word choice suggests that they are either navigating or expect to navigate significant blowback for asking for two narrow exceptions.
Attitude towards war depends on context. In 2007 "war" meant "Iraq" which was extremely unpopular, pointless, and had an imperialist flavor. Today "war" means Gaza, Iran, and Venezuela, but it also means Ukraine and Chinese aggression, possibly ramping up to an invasion of Taiwan. I suspect Amodei and many Anthropic employees are thinking of the latter.
> it was an ordinary thing when some of my graduating classmates refused to work at companies that did let their systems be used for war. Those refusals were on moral grounds.
(spoiler alert)
Wasn't this one of the plot points of the Val Kilmer movie Real Genius? They had to trick the students into creating a weapon by siloing them off from each other and having them build individual but related components? How far we've fallen! Nobody has to take ethics during undergrad anymore I guess...
If you are waiting until undergraduate level to take ethics, it's far too late to matter anyways.
Doubly so for "business ethics" classes, which became à la mode in the post-Enron era. They attempt to teach fundamental ethics, when at most they should be a very thin layer on top of a well-founded internal moral framework and well-accepted ethical standards inculcated from day one of kindergarten.
Morals are taught from ages 0-9 [0]; ethics perhaps slightly later, as it requires more complex thought processes.
Reminds me of the story of a woman who worked for a research lab to improve the computer-controlled automatic emergency landings of planes with total power failure.
... or so she was told.
She was unknowingly designing glide-bomb avionics.
I feel like these stories are apocryphal. I mean, I can't say for certain that no US DoD research program used subterfuge to trick the performers into working on The Most Racist Bomb. But I can say that in 20 years I've never seen a dearth of people ready, willing, able, and actively participating with full knowledge that they are creating The Fastest Bomb and The Sneakiest Bomb and The Biggest Bomb Without Actually Going Nuclear.
IDK, maybe it's different outside the National Capital Region. But here, you could probably shout "For The Empire" as a toast in the right bars and people wouldn't think you were joking.
What? I'm not questioning whether the weapons research actually happened. I'm questioning the sincerity of people claiming they didn't know what they were doing. I've seen plenty of weapons programs. They aren't a secret to the people working on them. My point is, the government doesn't need to lie to researchers or even pay them very well to get them to develop weapons because there are plenty of intelligent-enough people willing to do it almost for free.
If "This doesn't fit into my mental model, so everyone else must be lying" is how you deal with things you didn't personally experience, do what you have to.
Yes, and of their two exceptions, only one is on moral grounds. They don't want to provide tools for autonomous killing machines because the technology isn't good enough, yet. Once that 'yet' passes, they will be fine supplying that capability. Anthropic is clearly the better company compared to OpenAI, but that doesn't mean they are good. 'Lesser evil' is the correct term here, for sure.
Hypothetically if we had a choice between sending in humans to war or sending in fully autonomous drones that make decisions on par with humans, the moral choice might well be the drones - because it doesn't put our service members at risk.
Obviously, anyone who has used LLMs knows they are not on par with humans. There also needs to be an accountability framework for when software makes the wrong decision. Who gets fired if an LLM hallucinates and kills people? Perhaps Anthropic's stance is to avoid liability if that were to happen.
> Fisher [...] suggested implanting the nuclear launch codes in a volunteer. If the President of the United States wanted to activate nuclear weapons, he would be required to kill the volunteer to retrieve the codes.
>> [...] The volunteer would carry with him a big, heavy butcher knife as he accompanied the President. If ever the President wanted to fire nuclear weapons, the only way he could do so would be for him first, with his own hands, to kill one human being. [...]
>> When I suggested this to friends in the Pentagon they said, "My God, that's terrible. Having to kill someone would distort the President's judgment. He might never push the button."
> — Roger Fisher, Bulletin of the Atomic Scientists, March 1981[10]
The danger is that we won't be sending these fully-autonomous drones to 'war', but anytime a person in power feels like assassinating a leader or taking out a dissident, without having to make a big deal out of it. The reality is that AI will be used, not merely as a weapon, but as an accountability sink.
War is not moral. It may be necessary, but it is never moral. The best choice is to fight, at every turn, anything that makes war easy. Our adversaries will go the autonomous route, or likely already have. We should be doing everything we can to put major blockers on this, similar to efforts to block chemical, biological, and nuclear weapons. The logical end of autonomous targeting and weapons is near-instant mass killing decisions. So at a minimum we should put autonomous weapons in a similar class, since autonomy is a weapon of mass destruction. But we currently don't think that way, and that is the problem.
Eventually, unfortunately, we will build these systems, but it is weak to argue that we won't build them now only because the technology isn't ready. No matter when these systems come online there will be collateral damage, so there will never be a right time from a technology standpoint. Anthropic is making that weak argument, and that is primarily what I am dismissive of. The argument that needs to be made is that we aren't ready as a society for these weapons. The US government hasn't done the work to prove it can handle them. The US people haven't proven we understand their ramifications. So, in my view, Anthropic shouldn't be arguing that the technology isn't ready; no weapon of war is ever clean, and your hands will be dirty no matter how well you craft the knife. Instead, Anthropic should be arguing that we aren't ready as a society, and that is why they aren't going to support them.
What do you mean, "hallucinates and kills people"? Killing people is the thing the military is using them for; it's not some accidental side effect. It's the "moral choice" the same way a cruise missile is — some person half a world away can lean back in their chair, take a sip of coffee, click a few buttons and end human lives, without ever fully appreciating or caring about what they've done.
I'm sure it was meant as "kills the wrong people."
People are always worried about taking humans out of decision-making: not because humans are perfect, but because we worry that buggy software will be worse.
The people that actually target and launch these things do think about what they have done. It is the people ordering them to do it that don't. There is a difference, I hope.
> Hypothetically if we had a choice between sending in humans to war or sending in fully autonomous drones that make decisions on par with humans, the moral choice might well be the drones - because it doesn't put our service members at risk.
I guess let the record state that I am deeply morally opposed to automated killing of any kind.
I am sick to my stomach when I really try to put myself in the shoes of the indigenous peoples of Africa who were the first victims of highly automatic weapons, “machine guns” or “Gatling guns”. The asymmetry was barbaric. I do hope that there is a hell, simply that those who made the decision to execute en masse those peoples have a place to rot in internal hellfire.
To even think of modernizing that scene of inhumane depravity with AI is despicable. No, I am deeply opposed to automated killing of any kind.
The “machine gun” has a more complicated history, and the first practical example may have been Gatling’s, or an earlier example used in Europe https://en.wikipedia.org/wiki/Machine_gun
Isn't this the moral hazard of war as it becomes more of a distance sport? That powerful governments can order the razing of cities and assassinate leaders with ease?
We need to do it because our enemies are doing it, in any case.
It came later than I anticipated, but it did come after all. There is a reason companies like 9mother are working like crazy on various way to mitigate those risks.
I do not think that anyone but the US and Israel have assassinated leaders in the last 30 years. I also question their autonomous drone advancement. Russia and China did not have the means to help Venezuela and they do not have the means to help Iran.
I think it's the opposite. The human cost of war is part of what keeps the USA from getting into wars more than it already is - no politician wants a second Vietnam.
If war is safe to wage, then it just means we'll do it more and kill more people around the globe.
The flip side is that it's very unlikely AI will become that good any time soon, so "the technology isn't ready" will always remain a reason to hold out. Especially since nobody has explicitly defined what "good enough" entails.
You must be joking. Which values, set by who? Jobs the marketer, Ellison the tyrant, or Gates the sociopath?
Please, spare us. They built a surveillance state masquerading as marketing companies and banal products. Don't play remember when if you don't actually remember.
Values relating to mistrust of the military (as per the context of the post I responded to) as well as values relating to ownership of the tech you bought and of personal privacy.
Get off your high horse and stop talking down to a person you don't know. Take your anger out on someone else.
Yeah, it wasn’t some kind of ethical utopia, but it sure as hell has gotten a lot less ethical in real terms. When you start making things operate in ways that people dislike or are deceived by, it’s a very slippery slope, because everything from there all the way through eating babies is just a matter of degree.
Trite as it may seem, "don’t be evil" is actually a very, very strong statement, as is "do no harm". Seventy percent of tech market cap these days is a million tiny harms, a warm pool of diluted evil.
Well, aren't you just the sweetest little emotional manipulator? Ethical, for sure. Perhaps, you are angry due to ignorance and react poorly to someone shattering your illusions.
If LLM's are indeed a game changer professionally, you kind of need to pick one.
Personally, I loathe seeing power shift towards mega corporations like that, away from being able to run your own computer with free software, but it feels like the economics are headed that way in terms of productivity.
It's easy to say "I will never let the Department of Defense use my search engine for evil!" Or "the more money they spend on me, the less they have for weapons!" ( https://en.wikiquote.org/wiki/Theo_de_Raadt ) when you aren't really expecting money. But when somebody shows up with a check, it becomes much harder to stick to your principles. Especially after watching Palantir (and "don't be evil" Google) rake in plenty of dough.
Yeah, and they still happen even today (there were some recent ones with ICE and Israel), but the companies themselves have still worked in war businesses.
If you graduated in 2007, your classmates were born around 1985. Their parents were mostly born in the mid 50s to the mid 60s and came to political consciousness either during the Vietnam War or immediately thereafter. No war since has been even close to as unpopular or frankly as salient. It’s the passing out of cultural relevance of that war that you are noticing.
> No war since has been even close to as unpopular or frankly as salient.
Iraq.
Spoiler alert, a bunch of the current ones are going to be seen similarly too.
Also keep in mind when making comparisons that the Vietnam war was not unpopular with Americans at the beginning, and many people justified it all throughout, using language that will be similar to observers of later wars.
Correct that there was no Iraq generation because there was no draft and numbers were way smaller. Vietnam had over half a million troops at the height of that war. Iraq had under 170k.
But the war was still deeply unpopular. There is a reason America did the extraordinary - to that point - and elected its first black president.
The economic toll will be greater with these wars than with Vietnam.
Sure, but it's not reasonable to call it as unpopular domestically as the Vietnam War, which had more than 12 times the casualties, spread over a group that on the whole was unwilling to fight and had to be drafted into the conflict, thereby spreading the pain of lost loved ones throughout society rather than concentrating it heavily into the poorer and less politically powerful social and economic classes. As unpopular as the Iraq war was, the American people's distaste didn't really do much to end it.
That’s reasonable. In the context of the larger discussion here a post up thread’s implication that a graduate in 2007 would be anti-war because of Vietnam is kind of dubious. Public opinion of the war shifted quite a lot in the four years after “Mission Accomplished” and Freedom Fries.
But tell me, what would you like your country to do when conflicts arise over want of natural resources? Would you want your country to just give up that resource your people depend on, or split it, maybe 50/50?
Do you believe it will always be possible to settle on a solution in a peaceful way that works for everyone?
Like, we have solar now. People talk about how it saves the environment. But I think another similar win would be a reduction in dependency on oil, so countries won't have to go to war over oil. But it takes time...
But it seems what technology gives, technology takes away, because new technologies come with their own resource requirements. And the cycle looks like it will go on...
> It is incredible how far the overton window has moved on this issue.
> When I graduated in 2007, it was common for tech companies to refuse to let their systems be used for war,
In 2007 the US was the sole world hegemon. It could afford to let the smartest people work on ad delivery systems.
In 2026, in certain fields, China has a stronger economy and military. Russia is taking over Europe. India and Brazil are going their own way. China is economically colonizing Africa.
The US can't afford to let its enemies develop strong AI weapons first on the naive assumption that Russia/China/others will also have naive thinkers demanding the same restraint.
---
People were just as naive with respect to Ukraine. They were saying that mines and depleted uranium shells are evil. But when Russia attacked, many changed their minds because they realized you can't kill Russians with grandstanding on noble principle. You kill them with mines and depleted uranium shells.
Hopefully people here will change their minds before a hot war. As the saying goes, America always picks the right solution after trying all the wrong ones.
I'm a decade older, so maybe I missed the memo, but I think you'll have a hard time naming tech companies that actually refused to work with the military and were large enough and important enough to be in danger of selling something to the military (i.e. not Be Inc. or Beenz.com).
Clearly, all of the traditional big leagues were lined up to take the Army's money. IBM, Control Data, Cray, SGI, and HP all viewed weapons research as a major line of business. DEC was the default minicomputer of the DoD and Sun created features to court the intelligence community including the DoD "Trusted Workstation". Sperry Rand defined "military industrial complex".
Well, they made a big deal about saying that while they sold their software to the Defense Department, it wasn't actually being used to kill people. Except for well-known military contractors (e.g., Raytheon), who have sold plenty of software specifically to kill people.
I guess there's a reason we saw plenty of articles about software used somewhat defensively -- such as distinguishing whether a particular "bang" was a gunshot, and where it likely came from -- instead of offensively -- such as improvements to targeting software.
> When I graduated in 2007, it was common for tech companies to refuse to let their systems be used for war, and it was an ordinary thing when some of my graduating classmates refused to work at companies that did let their systems be used for war. Those refusals were on moral grounds.
I don't think it was very common really.
I think for the most part it was tech companies whose systems were not being used for war who liked to boast that they refused to let their systems be used for war. Or they creatively interpreted "for war" such that, since they were not actually manufacturing explosives, they could claim it was not for war.
> they have to couch it in language clarifying that they would love to support war, actually,
Yes they do because they are trying to sell to the Department of War.
No one made Anthropic try to be a military contractor. It’s pretty much the definition of being a military contractor that your product helps to kill people.
When people (myself included FWIW) warn about the dangers of American imperialism, it's because:
1. As President Eisenhower said in his farewell address in 1961 [1], every dollar spent on the military-industrial complex is a dollar not spent on schools or houses or hospitals or bridges;
2. Every American company with sufficient size eventually becomes a defense contractor. That's really what's happened with the tech companies. They're moving in lockstep with the administration on both domestic and foreign policy;
3. The so-called "imperial boomerang" [2]. Every tactic, weapon and strategy used against colonial subjects are eventually used against the imperial core eg [3]. Do you think it's an accident that US police forces have become increasingly militarized?
The example I like to give is China's high speed rail. China started building HSR only 20 years ago and now has over 32,000 miles of HSR tracks taking ~4M passengers per day. The estimated cost for the entire network is ~$900B. That's less than the US spends on the military every year.
I really wonder what Steve Jobs would've done were he still alive. Tim Apple has bent the knee and kissed the ring. Would Steve Jobs have done the same? I'm not so sure. He may well have been ousted (again) because of it.
Then again, I think Steve Jobs was the only Silicon Valley billionaire not in a transhumanist polycule with a more than even chance of being in the files.
Thank you for mentioning the term 'imperial boomerang'. You really saw it in the militarization of the police after the Iraq War. Gone are the donut munchers.
> I really wonder what Steve Jobs would've done were he still alive. Tim Apple has bent the knee and kissed the ring. Would Steve Jobs have done the same? I'm not so sure. He may well have been ousted (again) because of it.
Given that Steve Jobs was best friends with Larry Ellison, I’d say he wouldn’t have bent the knee because he would’ve been standing hand in hand with Trump, just like Larry.
>1. As President Eisenhower said in his farewell address in 1961 [1], every dollar spent on the military-industrial complex is a dollar not spent on schools or houses or hospitals or bridges;
This humanist view unfortunately doesn’t hold anymore in the modern world. Boomers will be happy as long as not a single dollar is spent on housing, so that their own homes can appreciate in value. Republicans would rather burn money than spend it on houses, hospitals, or bridges that might benefit immigrants or “other people” more than themselves.
I used an American political party only as a reference, but the same phenomenon can be seen in many countries around the world. Society has become incredibly cynical and has regressed a lot in terms of humanity.
>"Boomers will be happy as long as not a single dollar is spent on housing"
Not sure what boomers you are talking about. I, for one, am disgusted at what is happening with things in general and with housing in particular. I do not want my house to appreciate ad infinitum. I do not want an ever-growing class of have-nots so that a few jerks can own the governments and half of the world.
For almost all of history, including recent history, tech and military went together. Whether compound bows, or spears or metallurgy.
Euler used his math to develop artillery tables for the Prussian army.
von Neumann helped develop the atom bomb.
The military played a huge role in creating Silicon Valley.
However, to people who grew up in the mid to late 90s, it is easy to miss that that period was a major aberration. You had serious people talking about the end of history. You had John Perry Barlow's utterly naive Declaration of Independence of Cyberspace which looks more and more naive every year.
The Overton window has not shifted, at least not among rank-and-file tech workers. There was very loud and vocal internal opposition to building and selling weapons[0]. They all lost the argument in the boardrooms because the US government writes very big checks. But I am told they are very much still around.
CEOs are bound to sociopathically amoral behavior - not by the law, but by the Pareto-optimal behavior of the job market for executives. The law obligates you to act in the interests of the shareholders, but it does not mandate[1] that Line Go Up. That is a function of a specific brand of shareholder that fires their CEOs every 18 months until the line goes up.
In 2007, Big Tech had plenty of the consumer market to conquer, so they could afford to pretend to be opposed to selling to the military. But the game they were playing was always going to end with them selling to the military. Once they were entrenched they could ignore the no-longer-useful-to-us-right-now dissenters, change their politics on a dime, and go after the "real money".
[0] Several of the sibling comments are mentioning hypothetical scenarios involving dual-use technologies or obfuscated purposes. Those are also relevant, but not the whole story.
[1] There are plenty of arguments a CEO could use to defend against a shareholder lawsuit that they did not take a particularly short-sighted action. Notably, that most line-go-up actions tend to be bad long-term decisions. You're allowed to sell low-risk investments.
Complaining loudly about working with the government to build weapons and then continuing to build them isn't the same as people refusing to work for companies that handle weapons contracts. The window has indeed shifted, with tech workers now merely virtue signaling on social media.
Around 10 years ago, in college, in calculus class, I had a very ambitious classmate who wanted to go to DARPA and work on robotics. I asked if he was thinking it through solely from a technical perspective or considering the ethics side as well. Clearly, he didn't understand the question, so I inquired directly: what if the code you write, or the autonomous machine you contribute to, is used for killing? His response: that's not my problem.
After spending a couple of years studying in the US, I came to the conclusion that executives and board members in industry don't care about society or humans, that even universities don't push students towards critical thinking and ethics, and that it has all turned into vocational training, turning humans into crafting tools.
At the same time, at Harvard, I attended a VR innovation week, and the last panel discussion of the day was Ethics and Law, which was led by a law professor, a journalist, and a moderator, and attended by a handful of people. I inquired why founders, CEOs, or developers weren't part of the discussion or in attendance. The moderator responded that they couldn't find any of them qualified enough to take part. The discussion basically was: how do the products companies build affect society? Laws aren't the founders' problem, that's what lawyers are for, and ethics? Who cares, right?
This frenzy, this rat race towards the next billion-dollar company at any cost, has torn down the fabric of society to the level of individual thinking; or more like not thinking, just wanting and needing.
See, in your case with the military you can directly say, hey, my code will possibly be used to bomb other people. But in today's times it isn't (and I'm sure even then it wasn't) so cut and dried. I worked in the ad-tech industry (like 60% of Bay Area techies). So the ad tech I write gets shown to millions/billions. What about ads influencing elections, and then politicians waging wars? Anti-vax ads which influence people and then kill them? Scam ads? Insurance ads, and then people not getting cancer meds from the same insurance? Am I responsible for those deaths? I would say yes.
But what is the option? I feel each of us wants to draw a line based off of our morality but the circumstances don't allow us to stick to it (still gotta pay rent)
We are all on a Titanic the way I see it. It's just the DARPA guy is gonna sink first. Rest of us are just pretending to be Jack trying to be the last ones to go.
> "The fact is that a mere training in one or more of the exact sciences, even combined with very high gifts, is no guarantee of a humane or sceptical outlook."
>” I inquired why founders, CEOs or developers weren't in part of the discussion or in attendance? Moderator responded that they couldn't find them qualified enough to take part in the discussion.”
This seems more like credentialist arrogance than a well-reasoned judgment.
"fully autonomous weapons and mass domestic surveillance"
I still don't buy this discussion. How exactly do they want to use an llm for autonomous weapons, given it's not even possible to reliably have a piece of code written without having to review it?
And how is a 1M token window model supposed to be useful for mass surveillance?
Honest questions, I am sure I am missing some details. Because so far it looks like a very sophisticated marketing strategy.
> Our most important priority right now is making sure that our warfighters and national security experts are not deprived of important tools in the middle of major combat operations.
> we had been having productive conversations with the Department of War over the last several days, both about ways we could serve the Department that adhere to our two narrow exceptions, and ways for us to ensure a smooth transition if that is not possible.
Why are people leaving OpenAI when this is Anthropic's stance?
Are their two narrow requirements enough to draw the ethical boundary people are comfortable with?
What’s a “warfighter?” Do they come from the “Gulf of America?” We used to call them servicemen or service members. Emphasizing they served the people. I guess that’s too effeminate for our roided up and ironically hyper-insecure Secretary of Defense.
A new term was needed some decades ago. "man" titles have not been politically correct for a while, "member" sounds awkward and bureaucratic. In some other languages, "soldier" can be used for all military personnel, while English ended up with a more narrow meaning.
"Awkward and bureaucratic" is literally the point of naming conventions commonly adopted by democracies. Titles like "president" or "prime minister", departments like "Department of Defense", referring to government employees as "civil servants", etc. are all intentional measures meant to strip away the prestige and egotism associated with positions of authority in an effort to avoid it going to people's heads, and to remind them that they are meant to serve the good of the public that pays for their existence rather than ruling over them.
There are so many inference providers not working for Department of War. Even Alibaba and sure China has lots of issues but they are not bombing anyone now if that's your first priority. Or else, smaller US / European / Asian companies with pure civilian focus. SOTA open weights models they serve are perfectly suitable for coding and chat. I run a local Qwen3.5-122B-A10B-NVFP4 instance and it writes entire Android apps from scratch and that's a midsized model.
Sorry for the off-topic but what hardware are you running Qwen3.5-122B-A10B-NVFP4 on? Is it physically local or just self-administered? Thanks in advance.
Can you give a list of high quality alternatives? Morally speaking i would put China on par with the US if not worse (due to their ongoing Uyghur genocide). I will check out Qwen3 but would be interested in others.
Frankly it’s a shitshow all around.
The truth is that nobody gives a fuck about this. They have no moral qualms, just practical.
And these are the people that should bring us the future.
Man what a depressing scenario.
To state the obvious, I think when corruption and power in government go unchecked, companies eventually end up facing situations like this. It’s almost like making a deal with the devil.
At the beginning, they’re usually doing it for the money — and maybe some level of patriotism. Eventually they find themselves involved in things so ugly that they can’t really stomach it anymore. At the same time, they can’t easily back out either.
Then a new CEO comes in and thinks the previous guy was too soft, "He couldn’t handle it, but I can."
The Department of Defense was named as such after the detonations of the atomic bombs over Hiroshima and Nagasaki.
We - as humanity - collectively recognized the weight of our creation, and decided to walk it back.
Discussing “AI alignment” in the same breath as aligning with a “Department of War” (in any country) is simply not an intellectually sound position.
None of the countries we’ve attacked this year pose an existential threat to humanity. In contrast, striking first and pulling Europe, Russia, and China into a hot war beginning in the Middle East surely poses a greater collective threat than bioweapons, sentient AI, or the other typical “AI alignment” concerns
Why aren’t there more dissidents among the researcher ranks?
Among those who would resist, half would've done so outwardly by now and been fired, the other half would be hiding their activity. In both cases we wouldn't be hearing about them now.
> Why aren’t there more dissidents among the researcher ranks?
Because they’ve likely all lost faith in humanity watching Trump get reelected and now just want to get rich and hope to insulate their families from the reality we’re all living in.
"We both want a docile American public who go along with our desires so we can achieve goals that may be contrary to the interests of the American public."
As someone looking at this from outside the US, the whole sequence of events is frankly terrifying.
I fear that frontier AI is going to be nationalised for military purposes, not just in the US but across the globe.
At the same time, I really don’t know what Anthropic were expecting when they described their technology as potentially more dangerous than an atom bomb while agreeing to integrate purpose-built models with Palantir to be deployed in high-security networks for classified military tasks.
Would love to enumerate those commonalities. Run by a psychopath? Commitment to violent lethality? Burning billions of dollars for uncertain goals? (ok there's one)
That may be true but changing the department's name can only be done with an act of congress, which has not been done yet. Thus, the name is still officially and legally Dept of Defense.
Just because a name is more accurate doesn't mean that it's its new name. Otherwise we wouldn't be the United States of America (we are literally not united bc Hawaii and Alaska are not contiguous, and we are figuratively not united because... Well, you know)
After hearing Palmer Luckey's argument for the name change[0], I tend to think it's good change.
Some of his arguments:
It used to be called the Department of War, and it had a better track record with regard to foreign conflict under that name than it did under the DoD name.
Department of War is a more honest name; Department of Defense is a somewhat newspeak term, although "Department of Peace" would be worse.
It's harder to seek funding for "war" than it is to seek funding for "defense".
If you ask someone, "Do you want to spend money on education or war?", you will get a different answer than if you ask, "Do you want to spend money on education or defense?".
The problem with this argument is that the _original_ Department of War is now called the Department of the Army, which existed alongside the Department of the Navy. Besides, it’s a moot point unless Congress actually changes the name.
Regarding Luckey's other statements, I can almost assure you that the administration did not think as much about it as Luckey has. Insecure Pete just thought the title "Secretary of Defense" was too wussy so he wanted to be Secretary of War.
Also, I think people mainly take issue with the fact that Trump is just randomly and unilaterally renaming things and demolishing buildings without congressional approval. If he had gone through the proper channels then maybe people could ignore it. Maybe. We'd probably still have qualms about it, but at least we'd know that our representatives had a say.
It'll be very interesting to see how this case gets resolved - in court and in the court of public opinion. I believe it's incredibly important and I hope they prevail.
It's incredibly simple: they want to get off the supply chain risk list.
It's very evident in his statement; he's trying very hard to clarify what that list means for corporations and downstream business with large commercial and strategic companies.
Imagine if Microsoft, Amazon, Google, etc. decided that they don't want ANY sort of minuscule risk (real or perceived) to their massive public sector business lines (via all their DoD, DoJ, NHS, and other three-letter agencies, state agencies, city and local municipalities, etc.) and decided to cancel their enterprise Anthropic licenses, which is a VERY possible scenario.
And these are the big players; there's a whole slew of medium and small players, all with existing government contracts, that need to tread carefully.
I think one of the weaknesses of rationalism and effective altruism is that they try to make a clean break from the common law legal reasoning that the government, and thus corporations, operate on. While I find rationalism to be a useful lens, the fact is that the common law legal framework is totally dominant, and so these deontological arguments, however rationally made, collapse very quickly when translated into the dominant framework.
As much as Trump and Hegseth would like it to be called the Department of War, it still takes an act of Congress to change the name of the Department of Defense. No reason to call it by anything else until that happens.
This is such a foot stomping childish thing to get caught up on. It does not at all matter what a dept is called. Try to get over the extremely superficial.
Not everything has to be a conspiracy or some 4D chess business move. Dario is a morally motivated person and regretted the tone that was being conveyed in that memo, so he apologized.
Yeah, that's completely unbelievable. You don't just accidentally call Trump a "dictator" or go on an extended tirade about Sam Altman. Clearly, he was speaking how he truly felt, and now he's doing damage control.
I'm sorry but it does not very much still exist. Otherwise, Congress would be doing something other than praying for the Anointed One and his holy war.
I'm not obeying in advance, but I'm not giving lip service to normality, either.
What a world we live in now where private companies are apologising for the "tone" of their speech while official representatives of the government daily express blatant lies and misrepresentations without the slightest fear of consequence.
It really is incredibly sad that what was one of the most respected countries in the world has descended to this - an utter mockery of a functioning democracy.
The OpenAI astroturfers jumped on this one. Their only interest is in trying to spin Anthropic as not meaningfully better to dissuade people from switching, not to get people to drop both companies altogether.
...is Anthropic "meaningfully better" though? They're still fine being a defense contractor, and they lack the tools to enforce the ethics they want to uphold. They seemingly contribute even less to FOSS than OpenAI does (low bar) and split hairs over IP ownership when open models distill their results. Am I supposed to root for them because of their manufactured internet drama?
It's very reminiscent of the half-assed security theater that Google and Apple fought over. Neither one of them resisted government coercion in the end, they just took different routes to end up as federal asskissers.
I don't think AGI hinges on Anthropic's survival, and frankly, right now, I'd rather have someone say clearly, "They cannot stomach the existence of someone telling them 'No' or adhering to moral principles. Like spoiled children they can't hear the former and are terrified by the latter because it might expose them to the condemnation they deserve."
A long time ago I worked for a company that, I learned, was selling its software to help target people during the Iraq war. I quit because I could not support building software that kills people.
This is a message to the people working in that line of business at Anthropic. You don't have to do it; you can quit. If you are helping this insane administration conduct war on Iran, quit. You don't need to have that kind of blood on your hands.
I saw someone's hypothesis that a generative model was used to help classify buildings to decide what to bomb, and that a girls' school was misclassified. If this was an Anthropic model, I can imagine what it feels like to be a worker there in that line of business.
I've also quit a job where the products I was working on were meant to be deployed to CBP to hunt down immigrants. It's a nice gesture, but it won't stop these companies. They just hired someone else without an ethical backbone, and continued the project like nothing happened.
Tech leadership is rotten to the core, and that can't be fixed by individuals making a stand.
I've quit jobs and been laid off from jobs, and I will admit that when I do, I always kind of hope that the company goes bankrupt the day after I leave because I was so important. Companies I've quit or been laid off from have gone bankrupt, but it took years, and sadly I don't think there's any way for me to draw a logical connection of "no tombert -> company fails".
I've never quit a company on purely ethical grounds, but I have turned down interviews and offers because of them. They're probably not going to go bankrupt just by not hiring me, but I like to think that making it incrementally harder to find talent slows down their progress of doing evil things, if only a little.
That's probably still a delusion of grandeur on my end, but we all should have an ethical line that we won't cross; most of us end up working for monsters and/or assholes, especially at BigCos, so your options generally boil down to "work for an asshole who's doing evil that you can live with" or "go live in a Unabomber shed". I guess it's important to make sure that "the evil thing you can live with here" isn't just any act of evil.
At a technical level, I don't believe they're specifically working on targeting anyone. They're providing a general-purpose API that Palantir is presumably using to build the target-finding software.
I imagine that's why the implementation got so far along before this blew up. Someone at Anthropic talked with someone at Palantir and they had a "you did what? Did you read the contract terms" moment, and that was after it went into production.
DoD still has not meaningfully moved to the DoW moniker, to me it represents the most fascist tendency, to make announcements and presume that’s enough to change the truth on the ground. The legal entity one contracts with is DoD. Going along with “DoW” is signal to me that a party has capitulated to the most absurd form of governance.
Pragmatically, it's for the best to use its preferred name instead of legal name when sucking up to the department and Trump to try to get back in good graces.
You got me wondering, so I checked to see how much Anthropic's bribed Trump so far. According to Dario, Trump has been soliciting bribes, but they refused to pay, and the contract "renegotiation" is retribution:
"Amodei claimed that tensions between his company and the Trump administration stem partly from the firm’s refusal to financially support Trump and its approach to AI regulation and safety issues."
"As we wrote on Thursday, we are very proud of the work we have done together with the Department, supporting frontline warfighters with applications such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more."
It's disgusting honestly. There are likely at least 136 directly reported civilian and child deaths linked to the operations where their services were used. And they are very proud.
The internal memo did read as fairly unhinged and political, which is not the message Dario likes to present. I'm glad he addressed this. It was unprofessional and unhelpful - even if Sam Altman is, in fact, a disgusting lunatic.
The one where he accuses Trump of retaliating against Anthropic after failing to solicit a bribe?
That should be the headline here. We know Trump personally made $4B last year, and we know he's been using the full power of the US gov't to retaliate against people that don't "support" him.
Come 2029, when there's an opportunity for the corruption trials to start, this sort of behavior needs to be front of the public mind, both at the top, and throughout his network of appointees.
I find it frustrating that apparently we just gave up on Trump giving up his tax returns, or putting his businesses into a blind trust. This was a big deal in 2016~2019, but I guess the entire world just decided it wasn't worth it.
Now we have a president who doesn't even hide his bribes, and instead starts multiple cryptocurrencies and has a publicly traded company in order to optimize the bribery. Maybe this is this "Department of Government Efficiency" thing I keep hearing about; it's never been more efficient to bribe public officials.
It is incredible how far the Overton window has moved on this issue.
When I graduated in 2007, it was common for tech companies to refuse to let their systems be used for war, and it was an ordinary thing when some of my graduating classmates refused to work at companies that did let their systems be used for war. Those refusals were on moral grounds.
Now Anthropic wants to have two narrow exceptions, on pragmatic and not moral grounds. To do so, they have to couch it in language clarifying that they would love to support war, actually, except for these two narrow exceptions. And their careful word choice suggests that they are either navigating or expect to navigate significant blowback for asking for two narrow exceptions.
My, the world has changed.
Attitude towards war depends on context. In 2007 "war" meant "Iraq" which was extremely unpopular, pointless, and had an imperialist flavor. Today "war" means Gaza, Iran, and Venezuela, but it also means Ukraine and Chinese aggression, possibly ramping up to an invasion of Taiwan. I suspect Amodei and many Anthropic employees are thinking of the latter.
> it was an ordinary thing when some of my graduating classmates refused to work at companies that did let their systems be used for war. Those refusals were on moral grounds.
(spoiler alert)
Wasn't this one of the plot points of the Val Kilmer movie Real Genius? They had to trick the students into creating a weapon by siloing them off from each other and having them build individual but related components? How far we've fallen! Nobody has to take ethics during undergrad anymore I guess...
>I’m going to tell you about how I took a job building software to kill people.
>But don’t get distracted by that; I didn’t know at the time.
Caleb Hearth: "Don't Get Distracted" https://calebhearth.com/dont-get-distracted
Also in Good Will Hunting, when Will (Matt Damon) delivers a scathing job rejection to the NSA.
1997. The War on Terror has a lot to answer for.
https://youtu.be/tH0bTpwQL7U
If you are waiting until undergraduate level to take ethics, it's far too late to matter anyways.
Doubly so for "business ethics" classes, which became à la mode in the post-Enron era. They attempt to teach fundamental ethics, when at most they should be a very thin layer on top of a well-founded internal moral framework and well-accepted ethical standards inculcated from day 1 of kindergarten.
Morals are taught at ages 0-9 [0]; ethics perhaps slightly later, as it requires more complex thought processes.
[0] https://familiesforlife.sg/pages/fflparticle/Young-Children-...
"Why do you wear that toy on your head?" "Because if I wear it anywhere else it chafes"
"A laser is a beam of coherent light." "Does that mean it talks?"
"Your stutter has improved." "I've been giving myself shock treatment." "Up the voltage."
"In the immortal words of Socrates, who said, 'I drank what?'"
"Is there anything I can do for you? Or...more to the point... to you?"
"Can you drive a six-inch spike through a board with your penis?" "...not right now." "A girl's got to have her standards."
"What are you looking at? You're laborers, you're supposed to be laboring! That's what you get for not having an education!"
-- I'm sure I could remember more if I thought about it for a bit. That movie made quite an impression on young me.
I think my favorite exchange was the following:
Professor Hathaway: "I want to start seeing more of you around in the lab."
Chris Knight: "Fine. I'll gain weight."
also relevant to Ender's Game, which came out 8 months before Real Genius
God bless you for referencing that film.
Reminds me of the story of someone's woman working for a research lab to improve the computer-controlled automatic emergency landings of planes with total power failure.
... or so she was told.
She was unknowingly designing glide-bomb avionics.
I feel like these stories are apocryphal. I mean, I can't say for certain that no US DoD research program used subterfuge to trick the performers into working on The Most Racist Bomb. But I can say that in 20 years I've never seen a dearth of people ready, willing, able, and actively participating with full knowledge that they are creating The Fastest Bomb and The Sneakiest Bomb and The Biggest Bomb Without Actually Going Nuclear.
IDK, maybe it's different outside the National Capital Region. But here, you could probably shout "For The Empire" as a toast in the right bars and people wouldn't think you were joking.
> I feel like these stories are apocryphal.
They're not. But if it makes you feel better to believe that, everyone has their own coping mechanism.
What? I'm not questioning whether the weapons research actually happened. I'm questioning the sincerity of people claiming they didn't know what they were doing. I've seen plenty of weapons programs. They aren't a secret to the people working on them. My point is, the government doesn't need to lie to researchers or even pay them very well to get them to develop weapons because there are plenty of intelligent-enough people willing to do it almost for free.
Lots of people working on the Manhattan project did not know what they were working on. The core group of physicists did, but not many others.
I think you could get away with that excuse in 1945 when this whole system was first being created from scratch. It's been 80 years since then.
If "This doesn't fit into my mental model, so everyone else must be lying" is how you deal with things you didn't personally experience, do what you have to.
“someone’s woman”?
lol I am guessing that was an autocorrect error.
Yes, and even of their two exceptions, only one is on moral grounds. They don't want to provide tools for autonomous killing machines because the technology isn't good enough, yet. Once that 'yet' has passed, they will be fine supplying that capability. Anthropic is clearly the better company over OpenAI, but that doesn't mean they are good; 'lesser evil' is the correct term here for sure.
Hypothetically if we had a choice between sending in humans to war or sending in fully autonomous drones that make decisions on par with humans, the moral choice might well be the drones - because it doesn't put our service members at risk.
Obviously, anyone who has used LLMs knows they are not on par with humans. There also needs to be an accountability framework for when software makes the wrong decision. Who gets fired if an LLM hallucinates and kills people? Perhaps Anthropic's stance is to avoid liability if that were to happen.
It's sort of like the opposite of this idea:
https://en.wikipedia.org/wiki/Roger_Fisher_(academic)#Preven...
> Fisher [...] suggested implanting the nuclear launch codes in a volunteer. If the President of the United States wanted to activate nuclear weapons, he would be required to kill the volunteer to retrieve the codes.
>> [...] The volunteer would carry with him a big, heavy butcher knife as he accompanied the President. If ever the President wanted to fire nuclear weapons, the only way he could do so would be for him first, with his own hands, to kill one human being. [...]
>> When I suggested this to friends in the Pentagon they said, "My God, that's terrible. Having to kill someone would distort the President's judgment. He might never push the button."
> — Roger Fisher, Bulletin of the Atomic Scientists, March 1981[10]
The danger is that we won't be sending these fully-autonomous drones to 'war', but anytime a person in power feels like assassinating a leader or taking out a dissident, without having to make a big deal out of it. The reality is that AI will be used, not merely as a weapon, but as an accountability sink.
This is exactly how all other weapons of mass destruction were rationalised.
"If we develop <terrible weapon> we can save so many lives of our soldiers". It always ends up being used to murder civilians.
War is not moral. It may be necessary, but it is never moral. The only choice is to fight, at every turn, anything that makes war easy. Our adversaries will go, or likely already have gone, the autonomous route. We should be doing everything we can to put major blockers on this, similar to efforts to block chemical, biological, and nuclear weapons. The logical end of autonomous targeting and weapons is near-instant mass killing decisions. So at a minimum we should think of autonomous weapons as being in a similar class, since autonomy is a weapon of mass destruction. But we currently don't think that way, and that is the problem.
Eventually, unfortunately, we will build these systems, but it is weak to argue that the technology isn't ready right now and that is why we won't build them. No matter when these systems come online there will be collateral damage, so there will be no right time from a technology standpoint. Anthropic is making that weak argument, and that is primarily what I am dismissive of. The argument that needs to be made is that we aren't ready as a society for these weapons. The US government hasn't done the work to prove it can handle them. The US people haven't proven we are ready to understand their ramifications. So, in my view, Anthropic shouldn't be arguing that the technology isn't ready; no weapon of war is ever clean, and your hands will be dirty no matter how well you craft the knife. Instead, Anthropic should be arguing that we aren't ready as a society, and that is why they aren't going to support them.
What do you mean, "hallucinates and kills people"? Killing people is the thing the military is using them for; it's not some accidental side effect. It's the "moral choice" the same way a cruise missile is — some person half a world away can lean back in their chair, take a sip of coffee, click a few buttons and end human lives, without ever fully appreciating or caring about what they've done.
I'm sure it was meant as "kills the wrong people."
People are always worried about getting rid of humans in decision-making. Not that humans are perfect, but because we worry that buggy software will be worse.
The people that actually target and launch these things do think about what they have done. It is the people ordering them to do it that don't. There is a difference, I hope.
> Hypothetically if we had a choice between sending in humans to war or sending in fully autonomous drones that make decisions on par with humans, the moral choice might well be the drones - because it doesn't put our service members at risk.
I guess let the record state that I am deeply morally opposed to automated killing of any kind.
I am sick to my stomach when I really try to put myself in the shoes of the indigenous peoples of Africa who were the first victims of highly automatic weapons, “machine guns” or “Gatling guns”. The asymmetry was barbaric. I do hope that there is a hell, simply so that those who made the decision to execute those peoples en masse have a place to rot in eternal hellfire.
To even think of modernizing that scene of inhumane depravity with AI is despicable. No, I am deeply opposed to automated killing of any kind.
The Gatling gun was first deployed in the US Civil War, not in Africa. https://en.wikipedia.org/wiki/Gatling_gun
The “machine gun” has a more complicated history, and the first practical example may have been Gatling’s, or an earlier example used in Europe https://en.wikipedia.org/wiki/Machine_gun
Isn't this the moral hazard of war as it becomes more of a distance sport? That powerful governments can order the razing of cities and assassinate leaders with ease?
We need to do it because our enemies are doing it, in any case.
It came later than I anticipated, but it did come after all. There is a reason companies like 9mother are working like crazy on various ways to mitigate those risks.
I do not think that anyone but the US and Israel has assassinated leaders in the last 30 years. I also question their autonomous drone advancement. Russia and China did not have the means to help Venezuela, and they do not have the means to help Iran.
Russia and other states have demonstrably conducted targeted killings.
https://en.wikipedia.org/wiki/List_of_heads_of_state_and_gov...
> "Russia and China did not have the means to help Venezuela"
Of course they have the means. Nothing technical prohibits them from blowing up a couple of carriers. But the price they would have to pay is way too high.
Doesn’t this just lower the bar on going to war? Putting real lives on the line makes war a costly last resort.
I think it's the opposite. The human cost of war is part of what keeps the USA from getting into wars more than it already is - no politician wants a second Vietnam.
If war is safe to wage, then it just means we'll do it more and kill more people around the globe.
The troops were told they're headed for Armageddon this go round
Safe for whom?
Safe for the aggressors, I mean. If war is easy and cheap for us to wage, we will do more of it, and likely make the world a worse place.
The flip side is that it's very unlikely AI will become that good any time soon, so it'll always remain a means to hold out. Especially since nobody has explicitly defined what "good enough" entails.
The military isn't quite as aggressively catering to the people who historically have bullied techies as it used to.
Aside from that - there's a lot more people in tech now. It grew too fast too quick to maintain all the values it had back in the 00's and earlier.
> maintain all the values it had.
You must be joking. Which values, set by who? Jobs the marketer, Ellison the tyrant, or Gates the sociopath?
Please, spare us. They built a surveillance state masquerading as marketing companies and banal products. Don't play remember when if you don't actually remember.
Values relating to mistrust of the military (as per the context of the post I responded to) as well as values relating to ownership of the tech you bought and of personal privacy.
Get off your high horse and stop talking down to a person you don't know. Take your anger out on someone else.
Yeah, it wasn’t some kind of ethical utopia, but it sure as hell has gotten a lot less ethical in real terms. When you start making things operate in ways that people dislike or are deceived by, it’s a very slippery slope, because everything from there all the way through eating babies is just a matter of degree.
Trite as it may seem, “don’t be evil” is actually a very, very strong statement, as is “do no harm”. Seventy percent of tech market cap these days is a million tiny harms, a warm pool of diluted evil.
> take your anger out on someone else
Well, aren't you just the sweetest little emotional manipulator? Ethical, for sure. Perhaps, you are angry due to ignorance and react poorly to someone shattering your illusions.
Jobs the Marketer! You want to lump Jobs in with Ellison because he had the gall to purchase advertising for his products?
> they have to couch it in language clarifying that they would love to support war,
This is what baffles me when I see people flocking to them for subscriptions based on these events.
If LLMs are indeed a game changer professionally, you kind of need to pick one.
Personally, I loathe seeing power shift towards mega corporations like that, away from being able to run your own computer with free software, but it feels like the economics are headed that way in terms of productivity.
It's easy to say "I will never let the Department of Defense use my search engine for evil!" Or "the more money they spend on me, the less they have for weapons!" ( https://en.wikiquote.org/wiki/Theo_de_Raadt ) when you aren't really expecting money. But when somebody shows up with a check, it becomes much harder to stick to your principles. Especially after watching Palantir (and "don't be evil" Google) rake in plenty of dough.
Also: https://gist.github.com/kemitchell/fdc179d60dc88f0c9b76e5d38... .
What tech companies were these? I was younger in 2007, but I feel like I would remember if companies were openly refusing to participate in war.
Such protests are commonplace at Google: https://www.nytimes.com/2018/04/04/technology/google-letter-...
Yeah, and they still happen even today (there were some recent ones with ICE and Israel), but the companies themselves have still worked in war businesses.
If you graduated in 2007, your classmates were born around 1985. Their parents were mostly born in the mid 50s to the mid 60s and came to political consciousness either during the Vietnam War or immediately thereafter. No war since has been even close to as unpopular or frankly as salient. It’s the passing out of cultural relevance of that war that you are noticing.
> No war since has been even close to as unpopular or frankly as salient.
Iraq.
Spoiler alert, a bunch of the current ones are going to be seen similarly too.
Also keep in mind when making comparisons that the Vietnam war was not unpopular with Americans at the beginning, and many people justified it all throughout, using language that will be similar to observers of later wars.
> Iraq
Not in same ballpark. There’s no Iraq generation the way there’s a Vietnam one.
> Spoiler alert, a bunch of the current ones are going to be seen similarly too.
No they won’t. The lack of a draft and mass domestic casualties dramatically changes the picture. Especially on the saliency axis.
Correct that there was no Iraq generation, because there was no draft and the numbers were way smaller. Vietnam had over half a million troops at the height of that war; Iraq had under 170k.
But the war was still deeply unpopular. There is a reason America did the extraordinary - to that point - and elected its first black president.
The economic toll will be greater with these wars than Vietnam.
The biggest protest in world history was in response to the invasion of Iraq. It’s reasonable to call it unpopular.
https://en.wikipedia.org/wiki/15_February_2003_Iraq_War_prot...
Sure, but it's not reasonable to call it as unpopular domestically as the Vietnam War, which had more than 12 times the casualties, spread over a group that on the whole was unwilling to fight and had to be drafted into the conflict, thereby spreading the pain of lost loved ones throughout society rather than concentrating it heavily into the poorer and less politically powerful social and economic classes. As unpopular as the Iraq war was, the American people's distaste didn't really do much to end it.
https://en.wikipedia.org/wiki/United_States_military_casualt...
That’s reasonable. In the context of the larger discussion here a post up thread’s implication that a graduate in 2007 would be anti-war because of Vietnam is kind of dubious. Public opinion of the war shifted quite a lot in the four years after “Mission Accomplished” and Freedom Fries.
There is an Iraq group but we’re just a much smaller group
The only difference between now and 2007 is the curtain has been pulled back revealing how things have always worked.
> refuse to let their systems be used for war...
I don't want wars.
But tell me, what would you like your country to do when conflicts arise over want of natural resources? Would you want your country to just give up that resource your people depend on, or split it, like maybe 50/50?
Do you believe it will always be possible to settle on a solution in a peaceful way that works for everyone?
Your logic here is sound, sure. But don't tell me you can be so naive as to believe that the U.S. military is a defensive mechanism.
> But don't tell me you can be so naive as to believe that the U.S. military is a defensive
I am not. Every country is corrupt, and war makes a lot of money for powerful people, but does it justify sabotaging your own existence?
Isn't the point of technology and engineering to find alternatives with the resources that one has?
Yes, but it takes time.
Like, we have solar now. People talk about how it saves the environment. But I think another similar win would be a reduction in dependency on oil, so countries won't have to go to war over oil. But it takes time...
But it seems what technology gives, technology takes away, because new technologies come with their own resource requirements. And the cycle looks like it will go on...
Personally, I'd rather that my country (USA) be taken over by China than bomb innocents in the Middle East.
> It is incredible how far the overton window has moved on this issue.
> When I graduated in 2007, it was common for tech companies to refuse to let their systems be used for war,
In 2007 the US was the sole world hegemon. It could afford to let the smartest people work on ad delivery systems.
In 2026, in certain fields, China has a stronger economy and military. Russia is taking over Europe. India and Brazil are going their own way. China is economically colonizing Africa.
The US can't afford to let its enemies develop strong AI weapons first on the naive assumption that Russia/China/others will also have naive thinkers demanding the same restraint.
---
People were just as naive with respect to Ukraine. They were saying that mines and depleted uranium shells are evil. But when Russia attacked, many changed their minds because they realized you can't kill Russians with grandstanding on noble principle. You kill them with mines and depleted uranium shells.
Hopefully people here will change their minds before a hot war. As the saying goes, America always picks the right solution after trying all the wrong ones.
I'm a decade older so maybe I missed the memo but I think you'll have a hard time naming tech companies that actually refused to work with the military, which were large enough and important enough to be in danger of selling something to the military (i.e. not Be Inc. or Beenz.com)
Clearly, all of the traditional big leagues were lined up to take the Army's money. IBM, Control Data, Cray, SGI, and HP all viewed weapons research as a major line of business. DEC was the default minicomputer of the DoD and Sun created features to court the intelligence community including the DoD "Trusted Workstation". Sperry Rand defined "military industrial complex".
Well, they made a big deal about saying that while they sold their software to the Defense Department, it wasn't actually being used to kill people. Except for well-known military contractors (e.g., Raytheon), who have sold plenty of software specifically to kill people.
I guess there's a reason we saw plenty of articles about software used somewhat defensively -- such as distinguishing whether a particular "bang" was a gunshot, and where it likely came from -- instead of offensively -- such as improvements to targeting software.
Yes, and IBM had a particularly tainted history from WWII.
For every company that stands on values, there is another that will do some shady shit for a dollar.
Sperry Rand? You’re up awful late grandpa.
This wasn't really that long ago.
https://www.google.com/maps/@37.6735255,-122.389804,3a,31.2y...
> When I graduated in 2007, it was common for tech companies to refuse to let their systems be used for war, and it was an ordinary thing when some of my graduating classmates refused to work at companies that did let their systems be used for war. Those refusals were on moral grounds.
I don't think it was very common really.
I think for the most part it was tech companies whose systems were not being used for war who liked to boast that they refused to let their systems be used for war. Or they creatively interpreted "for war": since they were not actually manufacturing explosives, they could claim it was not for war.
I’d argue it’s come full circle and it hasn’t changed a bit.
There wouldn’t be a Silicon Valley without World War 2 and US gov. funding of Stanford to develop radar basically.
The initial investment from then gave critical capital mass for Stanford, the VCs, and the tech companies of today.
https://youtu.be/ZTC_RxWN_xo?si=gGza5eIv485xEKLS
> they have to couch it in language clarifying that they would love to support war, actually,
Yes they do because they are trying to sell to the Department of War.
No one made Anthropic try to be a military contractor. It’s pretty much the definition of being a military contractor that your product helps to kill people.
The reckoning will come.
Watch as the same people pushing for war today will pretend they were always against it 10 years from now.
I guess we're just doomed to repeat the same cycles.
When people (myself included FWIW) warn about the dangers of American imperialism, it's because:
1. As President Eisenhower said in his farewell address in 1961 [1], every dollar spent on the military-industrial complex is a dollar not spent on schools or houses or hospitals or bridges;
2. Every American company with sufficient size eventually becomes a defense contractor. That's really what's happened with the tech companies. They're moving in lockstep with the administration on both domestic and foreign policy;
3. The so-called "imperial boomerang" [2]. Every tactic, weapon and strategy used against colonial subjects are eventually used against the imperial core eg [3]. Do you think it's an accident that US police forces have become increasingly militarized?
The example I like to give is China's high speed rail. China started building HSR only 20 years ago and now has over 32,000 miles of HSR tracks taking ~4M passengers per day. The estimated cost for the entire network is ~$900B. That's less than the US spends on the military every year.
I really wonder what Steve Jobs would've done were he still alive. Tim Apple has bent the knee and kissed the ring. Would Steve Jobs have done the same? I'm not so sure. He may well have been ousted (again) because of it.
Then again, I think Steve Jobs was the only Silicon Valley billionaire not in a transhumanist polycule with a more than even chance of being in the files.
[1]: https://www.archives.gov/milestone-documents/president-dwigh...
[2]: https://en.wikipedia.org/wiki/Imperial_boomerang
[3]: https://www.amnestyusa.org/blog/with-whom-are-many-u-s-polic...
Thank you for mentioning the term 'imperial boomerang'. You really saw it in the militarization of the police after the Iraq War. Gone are the donut munchers.
> I really wonder what Steve Jobs would've done were he still alive. Tim Apple has bent the knee and kissed the ring. Would Steve Jobs have done the same? I'm not so sure. He may well have been ousted (again) because of it.
Given that Steve Jobs was best friends with Larry Ellison, I’d say he wouldn’t have bent the knee because he would’ve been standing hand in hand with Trump, just like Larry.
>1. As President Eisenhower said in his farewell address in 1961 [1], every dollar spent on the military-industrial complex is a dollar not spent on schools or houses or hospitals or bridges;
This humanist view unfortunately doesn’t hold anymore in the modern world. Boomers will be happy as long as not a single dollar is spent on housing, so that their own homes can appreciate in value. Republicans would rather burn money than spend it on houses, hospitals, or bridges that might benefit immigrants or “other people” more than themselves.
I used an American political party only as a reference, but the same phenomenon can be seen in many countries around the world. Society has become incredibly cynical and has regressed a lot in terms of humanity.
>"Boomers will be happy as long as not a single dollar is spent on housing"
Not sure what boomers you are talking about. I for one am disgusted at what is happening with things in general and with housing in particular. I do not want my house to appreciate ad infinitum. I do not want an ever-growing class of have-nots so that a few jerks can own the governments and half of the world.
> My, the world has changed.
No. Your tech experience was an aberration.
For almost all of history, including recent history, tech and military went together. Whether compound bows, or spears or metallurgy.
Euler used his math to develop artillery tables for the Prussian army.
von Neumann helped develop the atom bomb.
The military played a huge role in creating Silicon Valley.
However, to people who grew up in the mid to late 90s, it is easy to miss that that period was a major aberration. You had serious people talking about the end of history. You had John Perry Barlow's utterly naive Declaration of Independence of Cyberspace which looks more and more naive every year.
The Overton window has not shifted, at least not among rank-and-file tech workers. There was very loud and vocal internal opposition to building and selling weapons[0]. They all lost the argument in the boardrooms because the US government writes very big checks. But I am told they are very much still around.
CEOs are bound to sociopathically amoral behavior - not by the law, but by the Pareto-optimal behavior of the job market for executives. The law obligates you to act in the interests of the shareholders, but it does not mandate[1] that Line Go Up. That is a function of a specific brand of shareholder that fires their CEOs every 18 months until the line goes up.
In 2007, Big Tech had plenty of the consumer market to conquer, so they could afford to pretend to be opposed to selling to the military. But the game they were playing was always going to end with them selling to the military. Once they were entrenched they could ignore the no-longer-useful-to-us-right-now dissenters, change their politics on a dime, and go after the "real money".
[0] Several of the sibling comments are mentioning hypothetical scenarios involving dual-use technologies or obfuscated purposes. Those are also relevant, but not the whole story.
[1] There are plenty of arguments a CEO could use to defend against a shareholder lawsuit that they did not take a particularly short-sighted action. Notably, that most line-go-up actions tend to be bad long-term decisions. You're allowed to sell low-risk investments.
Complaining loudly about working with the government to build weapons and then continuing to build them isn't the same as people refusing to work for companies that handle weapons contracts. The window has indeed shifted, with tech workers now merely virtue signaling on social media.
As the Heritage Foundation has said, we are in a cold civil war for our country and right now, the authoritarians are winning.
Millennials were famously called "generation sell". It is all corporate now, DEI one day, ICE the next. Just follow your leaders.
Around 10 years ago, in college, I had a very ambitious classmate in Calculus class who wanted to go to DARPA and work on robotics. I asked if he was thinking it through solely from a technical perspective or considering the ethics side as well. Clearly he didn't understand the question, so I asked directly: what if the code you write, or the autonomous machine you contribute to, is used for killing? His response: that's not my problem.
After spending a couple of years studying in the US, I came to the conclusion that executives and board members in industry don't care about society or humans. Even universities don't push students towards critical thinking and ethics; it has all turned into vocational training, turning humans into crafting tools.
At the same time, at Harvard, I attended a VR innovation week, and the last panel discussion of the day was Ethics and Law, led by a law professor, a journalist, and a moderator, and attended by a handful of people. I asked why founders, CEOs, or developers weren't part of the discussion or in attendance. The moderator responded that they couldn't find any qualified enough to take part. The discussion was basically: how does what product companies build affect society? Laws aren't the founders' problem, that's what lawyers are for, and ethics, who cares, right?
This frenzy, this rat race towards the next billion-dollar company at any cost, has torn down the fabric of society to the level of individual thinking; or more like not thinking, just wanting and needing.
See in your case with the military you can directly say, hey my code will be used to bomb other people possibly. But in today's times it isn't (I am sure even then) so cut and dry. I worked in AdTech industry (like 60% of the bay area techies). So the ad tech I write gets shown to millions/billions. What about ads influencing elections and then politicians waging wars? Anti-vax ads which influence people and then kill them. Scam ads. Insurance ads and then people not getting cancer meds from the same insurance. Am I responsible for those deaths? I would say Yes.
But what is the option? I feel each of us wants to draw a line based off of our morality but the circumstances don't allow us to stick to it (still gotta pay rent)
We are all on a Titanic the way I see it. It's just the DARPA guy is gonna sink first. Rest of us are just pretending to be Jack trying to be the last ones to go.
My pet theory is that this has been accelerated due to the cultural rejection of the humanities as worthy of study.
Orwell wrote about this: https://orwell.ru/library/articles/science/english/e_scien
> "The fact is that a mere training in one or more of the exact sciences, even combined with very high gifts, is no guarantee of a humane or sceptical outlook."
>” I inquired why founders, CEOs or developers weren't in part of the discussion or in attendance? Moderator responded that they couldn't find them qualified enough to take part in the discussion.”
This seems more like credentialist arrogance than a well-reasoned judgment.
As Tom Lehrer sang:
"Once the rockets are up, who cares where they come down? That's not my department!" says Wernher von Braun.
"fully autonomous weapons and mass domestic surveillance"
I still don't buy this discussion. How exactly do they want to use an llm for autonomous weapons, given it's not even possible to reliably have a piece of code written without having to review it?
And how is a 1M-token-window model supposed to be useful for mass surveillance?
Honest questions, I am sure I am missing some details. Because so far it looks like a very sophisticated marketing strategy.
> Our most important priority right now is making sure that our warfighters and national security experts are not deprived of important tools in the middle of major combat operations.
> we had been having productive conversations with the Department of War over the last several days, both about ways we could serve the Department that adhere to our two narrow exceptions, and ways for us to ensure a smooth transition if that is not possible.
Why are people leaving openAI when this is Anthropic's stance? Are their two narrow requirements enough to draw the ethical boundary people are comfortable with?
What’s a “warfighter?” Do they come from the “Gulf of America?” We used to call them servicemen or service members. Emphasizing they served the people. I guess that’s too effeminate for our roided up and ironically hyper-insecure Secretary of Defense.
A new term was needed some decades ago. "man" titles have not been politically correct for a while, "member" sounds awkward and bureaucratic. In some other languages, "soldier" can be used for all military personnel, while English ended up with a more narrow meaning.
"Awkward and bureaucratic" is literally the point of naming conventions commonly adopted by democracies. Titles like "president" or "prime minister", departments like "Department of Defense", referring to government employees as "civil servants", etc. are all intentional measures meant to strip away the prestige and egotism associated with positions of authority, in an effort to keep it from going to people's heads and to remind them that they are meant to serve the good of the public that pays for their existence rather than rule over them.
Reddit discussion from 2016 (so before Trump).
https://www.reddit.com/r/changemyview/comments/4ta3hh/cmv_th...
There are many reasons to detest the current political landscape. Don't get distracted.
It’s a mistake to conflate “wants to spend money on the most ethical option available” with “ think the most ethical option available is perfect”
Why wouldn’t you move your dollars to someplace incrementally better?
You make it sound as if "the most ethical option available" is.. actually ethical?
Their statement doesn't make it sound they are incrementally better, they are trying to bend over backwards to keep working for war.
There are so many inference providers not working for the Department of War. Even Alibaba; sure, China has lots of issues, but they are not bombing anyone right now, if that's your first priority. Or else, smaller US / European / Asian companies with a pure civilian focus. The SOTA open-weights models they serve are perfectly suitable for coding and chat. I run a local Qwen3.5-122B-A10B-NVFP4 instance and it writes entire Android apps from scratch, and that's a midsized model.
Sorry for the off-topic but what hardware are you running Qwen3.5-122B-A10B-NVFP4 on? Is it physically local or just self-administered? Thanks in advance.
I'm not sure there's really any good large model providers
Can you give a list of high quality alternatives? Morally speaking i would put China on par with the US if not worse (due to their ongoing Uyghur genocide). I will check out Qwen3 but would be interested in others.
Because Anthropic is called Anthropic and they have this really warm and inviting visual aesthetic.
Frankly it’s a shitshow all around. The truth is that nobody gives a fuck about this. They have no moral qualms, just practical. And these are the people that should bring us the future. Man what a depressing scenario.
To state the obvious, I think when corruption and power in government go unchecked, companies eventually end up facing situations like this. It’s almost like making a deal with the devil.
At the beginning, they’re usually doing it for the money — and maybe some level of patriotism. Eventually they find themselves involved in things so ugly that they can’t really stomach it anymore. At the same time, they can’t easily back out either.
Then a new CEO comes in and thinks the previous guy was too soft, "He couldn’t handle it, but I can."
And the cycle continues.
Raised an eyebrow a little at this sentence: "Anthropic has much more in common with the Department of War than we have differences."
The Department of Defense was named as such after the detonation of the atomic bombs over Hiroshima and Nagasaki.
We, as humanity, collectively recognized the weight of our creation and decided to walk it back.
Discussing "AI alignment" in the same breath as aligning with a "Department of War" (in any country) is simply not an intellectually sound position.
None of the countries we've attacked this year poses an existential threat to humanity. In contrast, striking first and pulling Europe, Russia, and China into a hot war beginning in the Middle East surely poses a greater collective threat than bioweapons, sentient AI, or the other typical "AI alignment" concerns.
Why aren’t there more dissidents among the researcher ranks?
Among those who would resist, half would've done so outwardly by now and been fired, the other half would be hiding their activity. In both cases we wouldn't be hearing about them now.
> Why aren’t there more dissidents among the researcher ranks?
Because they’ve likely all lost faith in humanity watching Trump get reelected and now just want to get rich and hope to insulate their families from the reality we’re all living in.
Let me rephrase it for you:
"We both want a docile American public who go along with our desires so we can achieve goals that may be contrary to the interests of the American public."
My eyebrows basically left my face after reading the whole thing.
This is not the forbidden love story I would've asked for.
As someone looking at this from outside the US, the whole sequence of events is frankly terrifying.
I fear that frontier AI is going to be nationalised for military purposes, not just in the US but across the globe.
At the same time, I really don’t know what Anthropic were expecting when they described their technology as potentially more dangerous than an atom bomb while agreeing to integrate purpose-built models with Palantir to be deployed in high-security networks for classified military tasks.
Would love to enumerate those commonalities. Run by a psychopath? Commitment to violent lethality? Burning billions of dollars for uncertain goals? (ok there's one)
Certain patterns at top ranks?
Nothing brings home the Orwellian nature of USA 2026 more for me than the word "warfighter".
I continue to be surprised how many people haven't heard the term until now; it's been in common use in the US for 20+ years.
To me the most Orwellian thing is everyone using the newspeak name for the DoD.
> newspeak name for the DoD.
They changed the name and it matches the intention. It is not a newspeak name anymore.
DoW is the opposite of newspeak, it is much more transparent and honest about what that organization is and has been for my entire life
DoW is newspeak. That's not its name.
They do a lot more war than defense don't they?
That may be true but changing the department's name can only be done with an act of congress, which has not been done yet. Thus, the name is still officially and legally Dept of Defense.
Just because a name is more accurate doesn't mean that it's its new name. Otherwise we wouldn't be the United States of America (we are literally not united bc Hawaii and Alaska are not contiguous, and we are figuratively not united because... Well, you know)
After hearing Palmer Luckey's argument for the name change[0], I tend to think it's a good change.
Some of his arguments:
It used to be called the Department of War, and it had a better track record with regard to foreign conflict under that name than it did under the DoD name.
Department of war is a more honest name, department of defense is a somewhat newspeak term, although "Department of Peace" would be worse.
It's harder to seek funding for "war" than it is to seek funding for "defense". If you ask someone, "Do you want to spend money on education or war?", you will get a different answer than if you ask, "Do you want to spend money on education or defense?"
[0] Palmer Luckey talking to Mike Rowe about the name change: https://youtu.be/dejWbn_-gUQ?t=1007
The problem with this argument is that the _original_ Department of War is now called the Department of the Army, which existed alongside the Department of the Navy. Besides, it’s a moot point unless Congress actually changes the name.
> It's harder to seek funding for "war"
I'm confused. This seems like a bad change.
Regarding Luckey's other statements, I can almost assure you that the administration did not think as much about it as Luckey has. Insecure Pete just thought the title "Secretary of Defense" was too wussy so he wanted to be Secretary of War.
Also, I think people mainly have issue with the fact that Trump is just randomly and unilaterally renaming things and demolishing buildings without congressional approval. If he had gone through the proper channels then maybe people could ignore it. Maybe. We'd probably still have qualms about it, but at least we'd know that our representatives had a say.
It's been in use by overly earnest DoD officials and Raytheon salespeople. But no normal person would use it unironically.
However I suppose Amodei in this context can be included in the former group.
Yeah, it’s common alright. Commonly used as a joke by every veteran I’ve ever met to mock try-hards.
Just remember, we're not at war with Iran. The House Speaker said so.
We can use the word war because Iran used the word war. But it is not a War in the constitutional sense. Or something.
we are though, they plotted to assassinate the US president, not to mention being the #1 sponsor of terrorism in the middle east, attacking our allies
Sure they did. Thats why we only discovered it after we assassinated their current and former leaders.
US took out Iran's supreme leader. It's simple tit for tat.
Really? You made it through Covidpocalypse, but the term "warfighter" is a big problem?
It'll be very interesting to see how this case gets resolved - in court and in the court of public opinion. I believe it's incredibly important and I hope they prevail.
Messages about project Maven, Palantir and Anthropic integration are flagged by certain interest groups:
"Palantir's Maven uses Anthropic's Claude code, sources say."
https://www.reuters.com/technology/palantir-faces-challenge-...
It is always astonishing that the reviled mainstream press is more critical than hackers these days.
Not sure why Dario apologized for the internal memo leak. Seems like an odd thing to backtrack on.
It's incredibly simple: they want to get off the supply chain risk list.
It's very evident in his statement; he's trying very hard to clarify what that list means for corporations and for downstream business with large commercial and strategic companies.
Imagine if Microsoft, Amazon, Google, etc. decided that they don't want ANY sort of minuscule risk (real or perceived) to their massive public sector business lines (via all their DoD, DoJ, NHS, and other three-letter agencies, state agencies, city and local municipalities, etc.) and decided to cancel their enterprise Anthropic licenses, which is a VERY possible scenario.
And these are the big players; there's a whole slew of medium and small players, all with existing government contracts, that need to tread carefully.
Probably because it hurts its position either in court or during negotiations with the DoW.
Right, I was hoping for Anthropic to stand its ground a bit more. There’s quite a bit of “ring kissing” undertones in today’s memo.
I think one of the weaknesses of rationalism and effective altruism is that they try to make a clean break from the common law legal reasoning that the government, and thus corporations, operate on. While I find rationalism a useful lens, the fact is that the common law legal framework is totally dominant, so these deontological arguments made rationally collapse very quickly when translated into the dominant framework.
As much as Trump and Hegseth would like it to be called the Department of War, it still takes an act of Congress to change the name of the Department of Defense. No reason to call it by anything else until that happens.
This is such a foot stomping childish thing to get caught up on. It does not at all matter what a dept is called. Try to get over the extremely superficial.
Department of peace sounds even better than defense.
link to the memo?
https://pbs.twimg.com/media/HCmdjFGXwAAPI3d?format=jpg&name=...
thanks a lot
Not everything has to be a conspiracy or some 4D chess business move. Dario is a morally motivated person and regretted the tone that was being conveyed in that memo, so he apologized.
Yeah, that's completely unbelievable. You don't just accidentally call Trump a "dictator" or go on an extended tirade about Sam Altman. Clearly he was speaking how he truly felt, and now he's doing damage control.
I don’t feel that old, but I guess being 45 is ancient in tech.
The Silicon Valley tech jobs we have now have a history rooted in World War 2 and the US government's funding of it.
https://youtu.be/ZTC_RxWN_xo?si=gGza5eIv485xEKLS
I’m not saying war is good or anything, but also don't ride a high horse cause none of it would be here w/o WW2.
Could they please start using the correct name? Department of Defense?
They still want that contract so they'll continue to pander.
The correct name is the Department of War.
Calling it the Department of Defense implies a system of laws, checks and balances which no longer exists.
It very much still exists, and statements like this are what’s called “obeying in advance.” Don’t do it.
You should respect the government’s choice. It is elected after all
I'm sorry but it does not very much still exist. Otherwise, Congress would be doing something other than praying for the Anointed One and his holy war.
I'm not obeying in advance, but I'm not giving lip service to normality, either.
It is the DoD until Congress says otherwise.
Congress hasn't said otherwise, so...
Cringing every time I see the word "warfighter", and disappointed that they're still pushing to keep that contract.
> I apologize for the tone of the post
What a world we live in now where private companies are apologising for the "tone" of their speech while official representatives of the government daily express blatant lies and misrepresentations without the slightest fear of consequence.
It really is incredibly sad that what was one of the most respected countries in the world has descended to this - an utter mockery of a functioning democracy.
It’s a business decision.
that just makes it sadder?
The OpenAI astroturfers jumped on this one. Their only interest is in trying to spin Anthropic as not meaningfully better to dissuade people from switching, not to get people to drop both companies altogether.
...is Anthropic "meaningfully better" though? They're still fine being a defense contractor, and they lack the tools to enforce the ethics they want to uphold. They seemingly contribute even less to FOSS than OpenAI does (low bar) and split hairs over IP ownership when open models distill their results. Am I supposed to root for them because of their manufactured internet drama?
It's very reminiscent of the half-assed security theater that Google and Apple fought over. Neither one of them resisted government coercion in the end, they just took different routes to end up as federal asskissers.
I built a website that shows a timeline of recent events involving Anthropic, OpenAI, and the U.S. government.
Posted here: https://news.ycombinator.com/item?id=47195085
I don't think we'd fail to get AGI if Anthropic were to implode, and frankly, right now, I'd rather have someone say clearly: "They cannot stomach the existence of someone telling them 'No' or adhering to moral principles. Like spoiled children, they can't hear the former and are terrified by the latter, because it might expose them to the condemnation they deserve."
That seems overly vindictive. How would your opinion change in a hypothetical world where "AGI" was dependent on Anthropic's survival?
So is this a backtrack or clarification on their original stance? Do I need to be worried about skynet killing grandma?
This is turning into just another reality show. There are no adults anymore.
A long time ago I worked for a company that, I learned, was selling its software to help target people during the Iraq war. I quit because I cannot support building software that kills people.
This is a message to people working for that line of business at Anthropic. You don't have to do it, you can quit. If you are helping this insane administration to conduct war on Iran quit. You don't need to have that kind of blood on your hands.
I saw someone's hypothesis that a generative model was used to help classify buildings to decide what to bomb, and that the girls' school was misclassified. If this was an Anthropic model, I can imagine what it feels like to be a worker there in that line of business.
I've also quit a job where the products I was working on were meant to be deployed to CBP to hunt down immigrants. It's a nice gesture, but it won't stop these companies. They just hired someone else without an ethical backbone, and continued the project like nothing happened.
Tech leadership is rotten to the core, and that can't be fixed by individuals making a stand.
I've quit jobs and been laid off from jobs, and I will admit that when I do, I always kind of hope that the company goes bankrupt the day after I leave because I was so important. Companies I've quit or been laid off from have gone bankrupt, but it took years, and sadly I don't think there's any way for me to draw a logical connection of "no tombert -> company fails".
I've never quit a company on purely ethical grounds, but I have turned down interviews and offers because of them. They're probably not going to go bankrupt just by not hiring me, but I like to think that making it incrementally harder to find talent slows down their progress of doing evil things, if only a little.
That's probably still a delusion of grandeur on my end, but we all should have an ethical line that we won't cross; most of us end up working for monsters and/or assholes, especially at BigCos, so your options generally boil down to "work for an asshole who's doing evil that you can live with" or "go live in a Unabomber shed". I guess it's important to make sure that "the evil thing you can live with here" isn't just any act of evil.
I agree it won't fix the problem, but marginal drops in labor supply and skill can still have an impact.
> They just hired someone else without an ethical backbone
Or who simply had a different point of view than you.
At a technical level, I don't believe they're specifically working on targeting anyone. They're providing a general-purpose API that Palantir is presumably using to build the target-finding software.
I imagine that's why the implementation got so far along before this blew up. Someone at Anthropic talked with someone at Palantir, they had a "you did what? Did you read the contract terms?" moment, and that was after it went into production.
Were you earning seven figures tho? That suppresses moral stances rather quickly I reckon
Perhaps. It should do the opposite though - you've likely got enough in the bank that you don't need to work a day in your life again.
There is a reason they call it 'fuck you money'
A lot of people downvoted me for saying the messaging of the internal post was bad. Good to see Dario is smart enough to see that it was a bad look.
The DoD still has not meaningfully moved to the DoW moniker. To me it represents the most fascist tendency: making announcements and presuming that's enough to change the truth on the ground. The legal entity one contracts with is the DoD. Going along with "DoW" is a signal to me that a party has capitulated to the most absurd form of governance.
Pragmatically, it's for the best to use its preferred name instead of its legal name when sucking up to the department and Trump to try to get back into their good graces.
Maybe it's bad that Anthropic wants to embrace the Department of War?
What's next, bribing Trump with gold bars and donations to "charity"?
They have a crypto coin for explicit bribing
You can also "invest" money for Trump's family to "earn" their "management fees."
You got me wondering, so I checked to see how much Anthropic's bribed Trump so far. According to Dario, Trump has been soliciting bribes, but they refused to pay, and the contract "renegotiation" is retribution:
https://news.ycombinator.com/item?id=47269649
"Amodei claimed that tensions between his company and the Trump administration stem partly from the firm’s refusal to financially support Trump and its approach to AI regulation and safety issues."
Nowhere, because there's no such department.
Wow, Anthropic really shit the bed on that one.
"As we wrote on Thursday, we are very proud of the work we have done together with the Department, supporting frontline warfighters with applications such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more."
It's disgusting, honestly. There are likely at least 136 directly reported civilian and child deaths linked to the operations where their services were used. And they are "very proud".
The internal memo did read as fairly unhinged and political, which is not the message Dario likes to present. I'm glad he addressed this. It was unprofessional and unhelpful - even if Sam Altman is, in fact, a disgusting lunatic.
The one where he accuses Trump of retaliating against Anthropic after failing to solicit a bribe?
That should be the headline here. We know Trump personally made $4B last year, and we know he's been using the full power of the US gov't to retaliate against people that don't "support" him.
Come 2029, when there's an opportunity for the corruption trials to start, this sort of behavior needs to be front of the public mind, both at the top, and throughout his network of appointees.
I find it frustrating that we apparently just gave up on Trump releasing his tax returns or putting his businesses into a blind trust. This was a big deal in 2016-2019, but I guess the entire world just decided it wasn't worth it.
Now we have a president who doesn't even hide his bribes, and instead starts multiple cryptocurrencies and a publicly traded company in order to optimize the bribery. Maybe this is the "Department of Government Efficiency" thing I keep hearing about; it's never been more efficient to bribe public officials.