I'd prefer to see board (or executive) level signatories over lay employees -- the people who can enforce enterprise policy rather than just voice their opinions -- but this is encouraging to see nonetheless.
I can't help but notice that Grok/X is not part of this initiative, though. I realize that frontier models are really coming from Anthropic, OpenAI, and Google, but it feels like someone is going to give in to these demands.
It's incredible how quickly we've devolved into full-blown sci-fi dystopia.
The problem with forcing public policy on companies is that companies are ultimately made from individuals, and surely you can’t force public policy down people’s throats.
I’m sure nothing good can come of strong-arming some of the brightest scientists and engineers the U.S. has. Such a waste of talent trying to make them bend to the government’s wishes… instead of actually fostering innovation in the very competitive AI industry.
I don't see how public policy is being "forced" on anyone here? It seems like the system is working as intended: government wants to do X; company A says "I won't allow my product to be used for X"; government refuses to do business with company A. One side thinks the government should be allowed to dictate terms to a private supplier, the other side thinks the private supplier should be allowed to dictate terms to the government. Both are half right.
You can argue that the government refusing to do any business with company A is overreach, I suppose, but I imagine that the next logical escalation in this rhetorical slapfight is going to be the government saying "we cannot guarantee that any particular use will not include some version of X, and therefore we have to prevent working with this supplier"...which I sort of see?
Just to take the metaphor to absurdity, imagine that a maker of canned tomatoes decided to declare that their product cannot be used to "support a war on terror". Regardless of your feelings on wars on terror and/or canned tomatoes, the government would be entirely rational to avoid using that supplier.
I think the bigger insanity here is the labeling of a supply chain risk. It prohibits DoD agencies and contractors from using Anthropic services. It'd be one thing if the DoD simply didn't use Anthropic. It's another when it actively attempts to isolate Anthropic for political reasons.
> It prohibits DoD agencies and contractors from using Anthropic services. It'd be one thing if the DoD simply didn't use Anthropic.
This is literally the mechanism by which the DoD does what you're suggesting.
Generally speaking, the DoD has to do procurement via competitive bidding. They can't just arbitrarily exclude vendors from a bid, and playing a game of "mother may I use Anthropic?" for every potential government contract is hugely inefficient (and possibly illegal). So they have a pre-defined mechanism to exclude vendors for pre-defined reasons.
Everyone is fixated on the name of the rule (and to be fair: the administration is emphasizing that name for irritating rhetorical reasons), but if they called it the "DoD vendor exclusion list", it would be more accurate.
What? I'm not completely familiar with bidding procedures, but don't they usually have requirements? Why not just list a requirement of unrestricted usage? Or state: we require models to be available for AI murder drones or whatever. Anthropic then can't bid, and there's no need to designate them a supply chain risk.
> The Department of War is threatening to […] Invoke the Defense Production Act to force Anthropic to serve their model to the military and "tailor its model to the military's needs"
This issue is about more than the government blacklisting a company for government procurement purposes.
From what I understand, the government is floating the idea of compelling Anthropic — and, by extension, its employees — to do as the DoD pleases.
If the employees’ resistance is strong enough, there’s no way this will serve the government’s interests.
The President is crashing out on X because a company didn’t do what they wanted. “Forcing” is not a binary. Do you seriously believe that the government’s behavior here is acceptable and has no chilling effect on future companies?
The EU and UK are a long way from attracting top AI talent purely on opportunity and monetary terms.
Not to mention the UK is arguably further down the mass surveillance pipeline than the US. They’ve always had more aggressive domestic intelligence surveillance laws, which was made clear during the Snowden years; they’ve had Flock-style cameras forever, and an anti-encryption law gets pitched seemingly yearly.
I’d imagine most top engineers would rather try to push back on the US executive branch overreach than move. At least for the time being.
For sure we’re not currently attracting the talent. There’s more to that than just money, but money is a significant factor. When it comes to compensation, AI is too broad a category to have a meaningful debate. Hardware or software or mathematics or what kind of person? Etc.
I’m not gonna dispute the UK being further down some parts of the road.
Not sure what you’d count as top engineers, but I know enough that have been asking about and moving to the UK/EU that it’s been a noticeable reversal of the historic trends. Also, a major slowdown of these kinds of people in the UK/EU wanting to move to the US.
I’m not an AI engineer but it’s not hard to imagine why some bright talent would want to work at the most exciting AI companies in the US while also making 3-10x what they’d make in Europe.
Ideology is easy to throw around for internet comments but working on the cutting edge stuff next to the brightest minds in the space will always be a major personal draw. Just look at the Manhattan project, I doubt the primary draw for all of those academics was getting to work on a bomb. It was the science, huge funding, and interpersonal company.
See my other comments around here. This idea that salaries in the US are so much higher than Europe for all these top AI roles just isn’t true. Even the big American companies have been opening offices in places like London to hire the top talent at high salaries.
This also isn’t hypothetical. I know top-talent engineers and researchers that have moved out of the USA in the last 12 months due to the political climate (which goes beyond just the AI topics).
And you might want to read a few books on the Manhattan project and the people involved before you use that analogy. I don’t think it’s particularly strong.
You seem to have a very ill-informed view of UK/EU salaries in this particular sector. And also: yeah, people take salary hits to go do things they believe in (this is, like, the entire premise of the underpaid American startup founder model). It should come as no surprise that people are willing to forgo pay for reasons other than building their own business or making themselves personally wealthy.
Do the UK and Europe have hardware manufacturing for those researchers to work with once the US imposes GPU export restrictions on them at the first whiff of competition/threat?
And the US can’t realistically stop our well-funded homegrown AI Hardware startups from manufacturing with TSMC. This is part of why there’s funding from the EU to develop Sovereign AI capabilities, currently focused on designing our own hardware. We’re nothing like as far behind as you might expect in terms of tech, just in terms of scale.
Also, while US export restrictions might make things awkward for a short while, it wouldn’t stop European innovation. The chips still flow, our own hardware companies would scale faster due to demand increase, and there’s the adage about adversity being the parent of all innovation (or however it goes).
You mean because of the international sanctions that needed Taiwanese, British and Dutch support to be effective?
Or because of the revoked processor design licenses from the British company Arm (which is still UK headquartered… despite being NASDAQ listed and largely owned by Japanese firm SoftBank)?
Or perhaps you think the US could stop us using the 12nm fabs being built by TSMC on European soil? Or could stop us manufacturing RISC-V-based chips (Swiss-headquartered technology)?
The US is weak in digital-logic silicon fabrication and it knows it. That’s why it’s been so panicked about Intel and been trying to get TSMC to build fabs on US soil. They’re pouring tens of billions of dollars into trying to claw back ownership and control of it, but it’s not like Europe or China or others are standing still on it either.
The EUV and other factory equipment everyone's using is predominantly European. High-end testing tools used in R&D are largely European.
The fabs aren't, and that is no small thing. The tech stack is there though.
It's pretty tiresome that the HN audience keeps assuming Europe doesn't have "tech" because it doesn't have Facebook. Where do you think all the wealth comes from? Europe is all over everyone's R&D and supply chain.
I agree. And even if those workers stay in the U.S., there’s absolutely no guarantee that they’ll do their best to favor the government’s interests — quite the opposite, if anything.
At the end of the day it’s a matter of incentives, and good knowledge work can’t simply be forced out of people that are unwilling to cooperate.
Among other consequences, if Anthropic ends up being killed it’s going to be just another nail in the coffin of trust in America.
Companies who subscribed will find themselves without an important tool because the president went on a rant, and might wonder if it’s safe to depend on other American companies.
"We hope our leaders will..." I realize things are moving quickly, and the stakes are high here, but thinking about what happens if the hopes are not met might be a next step.
Yeah, it's a nice gesture, but having watched Google handle the protests in recent years and their culture inching a step closer to Amazon, I do not foresee their leadership being swayed by employee resistance. They'll either quietly sign an agreement and discreetly implement it, or they will go scorched earth on their employees again.
Why are employees (at least the anonymous ones) trusting the creators of this website? What if it was set up by someone who wanted to gather a list of all the dissidents who would silently protest or leave the companies or whatever? Do you know whom you are going to hold accountable if it turns out these folks don't delete your verification data, or share it with your employer, or worse?
Also, another warning to anonymous users: it's a little bit naive to trust the "Google Forms" verification option more than the email one, given such employers probably have the ability to monitor anything you do on your device, even if it's just loading the form. And, in Google's case, they could obviously see what forms you submitted on the servers, too. If you're anonymous, you might as well use the alternate verification option.
Anyway - I'm not claiming it's likely that the website creator is malicious, but surely it's not beyond question? The website authors don't even seem to be providing others with the verification that they are themselves asking for.
P.S. I fully realize that pointing these things out might itself make fewer people sign the form, which may be unfortunate, but it seems worth a mention.
I've gathered that the dispute is over Anthropic's two red lines: mass surveillance and fully autonomous weapons. Is there any information (or rumors even) about what the specific request was? I can't believe the government would be escalating this hard over "we might want to do autonomous weapons in the vague, distant future" without a concrete, immediate request that Anthropic was denying.
Even if there was a desire for autonomous weapons (beyond what Anduril is already developing), I would think it would go through a standard defense procurement procedure, and the AI would be one of many components that a contractor would then try to build. It would have nothing to do with the existing contract between Anthropic and the Dept of War.
My understanding is that it’s about the contract allowing Anthropic to refuse service when they deem a red line has been crossed. Hegseth and friends probably don’t want any discussion to even start about whether a red line may be in the process of being crossed, and having to answer to that. They don’t want the legality or ethicality of any operation to be under Anthropic’s purview at all.
I think you're right, this isn't about a specific request but about defense contractors not getting to draw moral red lines. Palmer Luckey's statement on X/Twitter reflects the same idea: https://x.com/PalmerLuckey/status/2027500334999081294
The thinking seems to be that you can't have every defense contractor coming in with their own, separate set of red lines that they can adjudicate themselves and enforce unilaterally. Imagine if every missile, ship, plane, gun, and defense software builder had their own set of moral red lines and their own remote kill switch for different parts of your defense infrastructure. Palmer would prefer that the President wield these powers through his Constitutional role as commander-in-chief.
It’s about punishing a company that is not complying. It’s a show of force to deter any future objections on moral grounds from companies that want to do business with the US gov.
This is why you can't gatekeep AI capabilities. They will eventually be taken from you by force.
It's time to open-source everything. Papers, code, weights, financial records. Do all of your research in the open. Run 100% transparent labs so that there's nothing to take from you. Level the playing field for good and bad actors alike, otherwise the bad actors will get their hands on it while everyone else is left behind. Start a movement to make fully transparent AI labs the worldwide norm, and any org that doesn't cooperate is immediately boycotted.
Stop comparing AI capabilities to nuclear weapons. A nuke cannot protect against or reverse the damage of another nuke. AI capabilities are not like nukes. General intelligence should not be in the hands of a few. Give it to everyone and the good will prevail.
Build a world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human, not a corporation or government (which are machiavellian out of necessity). This is humanity's best chance at survival.
You never actually say that part, unless it's "It will eventually be taken from you by force" which doesn't seem applicable to this situation or this site?
I'm referring to the current situation. How is it not applicable? I think the government wants to eventually nationalize these companies and we have to stop them.
Open Source here is not enough as hardware ownership matters. In an open source world, you and I cannot run the 10 trillion param model, but the data center controllers can.
I agree. We will need hardware ownership as well eventually. But the earlier you open-source, the more you slow down the centralization because people will be more likely to buy hardware to run stuff at home and that gives hardware companies an opening to do the right thing.
A "world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human" would be a world in which people could easily create humanity-ending bioweapons. I would love to live in a less vulnerable world, and am working full time to bring about such a world, but in the meantime what you describe would likely be a disaster.
There are plenty of physical and legal barriers to creating a bioweapon and that's not going to change if everyone becomes smarter with AI. And even if we really somehow end up in a world where everyone has a lab at home and people can easily create viruses, they can also easily create vaccines and anti-virals. The advancements in medicine will outpace bioweapons by a lot because most people are afraid of bioweapons.
Intelligence itself is not dangerous unless only a few orgs control it and it's aligned to those orgs' values rather than human values. The safety narrative is just "intelligence for me, but not for thee" in disguise.
There mostly aren't physical barriers. Unlike nukes, where you need specific materials and equipment that we can try to keep tabs on, bioweapons can be made entirely with materials and equipment that would not be out of place in an academic or commercial lab. The largest limitation is knowledge, and the barriers there are falling quickly.
Symmetry is not guaranteed. If someone creates a deadly pathogen with a long pre-symptomatic period (which we know is possible, since HIV works this way) it could infect essentially everyone before discovery. Yes, powerful AI would likely rapidly speed up the process of responding to the threat after detection, especially in designing countermeasures, but if we don't learn about the threat in time we lose.
There are people today who could create such a pathogen, but not many. Widespread access to powerful AI risks lowering the bar enough that we get overlap between "people who want to kill us all" and "people able to kill us all".
This is not a gotcha argument, this is what I work full time on preventing: https://naobservatory.org The world must be in a position to detect attacks early enough that they won't succeed, and we're not there yet.
This is just not thinking clearly. There are bad things that are asymmetric in character, dramatically easier to do than to mitigate. There’s no antidote or vaccine to nuclear weapons.
This is exactly the thinking that has characterized responses to new sources of power through history, and has been consistently used to excuse hoarding of that power. In the end, enlightenment thinking has largely won out in the western world, and society has prospered as a result.
Centralizing power is dangerous and leads to power struggles and instability.
It is not easy to create weapons. Why do you think the physical and legal barriers that exist today that prevent you from acquiring equipment and creating nuclear weapons will go away when everyone becomes smarter?
I'd prefer something akin to the Biological Weapons Treaty which prohibits development, production and transfer. If you think it isn't possible you have to tell me why the bioweapons convention was successful and why it wouldn't be in the case of AI.
The point I would make: there are historical examples of international cooperation that work at least for some lengths of time. This is a good thing, a good tool to strive for, albeit difficult to reach.
There might be a small percentage of people nihilistic enough to want to unleash a truly devastating bioweapon, but basically everyone wants what AI has to offer.
I think that's a key difference as well.
And how would a treaty like that be enforced? Every country has legitimate uses for GPUs, to make a rendering farm or simulations or do anything else involving matrix operations.
All of the technology involved, in more or less the configuration needed to make your own ChatGPT, is dual use.
Because bio-weapons labs take more to run than a workstation PC under your desk with a good graphics card, both in equipment, materials, and training. It's hard to outlaw the use of linear algebra and matrix multiplications.
If they actually wanted to do something they wouldn’t have sat back and funded Republican political campaigns because they were pissed about the head of the ftc under Biden.
But they didn’t. They gave millions to this guy and now they’re feigning ignorance, or change, or whatever this is.
We shouldn't be scammed by people who intend to get back on the Trump train once they've gotten what they want. But if someone's willing to openly oppose the Trump regime, even out of self-interest, I'm happy to let them feign as much ignorance as they'd like. If his power isn't broken the details of who resisted him when won't matter.
> This is why you can't gatekeep AI capabilities. They will eventually be taken from you by force.
Some form of US AI lab nationalization is possible, but it hasn't happened yet. We'll see. Nationalization can take different forms, not to mention various arrangements well short of it.
I interpret the comment above as a normative claim (what should happen). It implies the nationalization threat forces the decision by the AI labs. No. I will grant that it has influence, in the sense that AI labs have to account for it.
The book "On Tyranny: 20 lessons from the 20th century" by the historian Timothy Snyder is an excellent read for these times. The very first lesson is "Do not obey in advance". It's about how authoritarian power often doesn't need to force compliance, people simply bend the knee in anticipation of being forced. This simply emboldens the authoritarians to go further.
I've been disappointed to see many businesses and institutions obeying in advance recently. I hope this moment wakes up the tech community and beyond.
We all knew AI had the potential to be extremely powerful, and we all pursued it anyway. What did we think would happen? The government/military always takes control of the most powerful/dangerous systems. If you work for a defense contractor or under ITAR then you already know this.
The right way to deal with this is political - corporate campaign contributions and lobbying. You're not going to be able to fight the military if they think they need something for national security.
Sam Altman tells staff at an all-hands that OpenAI is negotiating a deal with the Pentagon, after Trump orders the end of Anthropic contracts - https://news.ycombinator.com/item?id=47188698
Both of the automated verification methods depend on Google servers, and Google can almost certainly retrieve that data if they want to, regardless of whether the signers or verifiers delete it.
You're assuming a lot about Elon's ability to assemble and execute a process competently. They will probably end up hiring people off this list and firing them later.
I think what is much more interesting is what OpenAI and Google will do. There's probably some threshold of signatories where the companies in question do not fire everyone when they decide they want the DoD's business, the question will be how many people have to sign to cross it... and will enough people sign.
I don't think Google would bat an eye at firing 500 people to secure a DoD contract, but would they fire 5,000?
That's what taking a stand looks like... if any of these employees lose their job, they are welcome to come crash at my place for as long as they would like; they will have a roof over their head and I will cook them 3 meals a day.
Would you like to see this extended globally? Could such a spirit exist multinationally? It’s asking a lot, because you’d be asking for a lot of courage from places like China, India, Russia, Middle East … anywhere that’s not Europe basically.
Well yes, but context matters here and this is the US government's decision to take with a US-based company.
While I understand why it matters for folks affiliated with prominent AI companies in particular to sign this, the more the American people stand together, the more pressure I think that puts on our government to act responsibly.
Idealistic and naive? Probably. But sometimes grassroots efforts do spark change, and it's high time the people of the USA start living up to the first word in our country's name.
Anyways, to answer your question directly: I welcome all the fine people of the world everywhere to join in what this open letter stands for.
Unfortunately, it's abundantly clear to many of us Americans that the current administration doesn't care what we think, never mind what people outside our country do. So I'll just start with the group that this department (in theory) is supposed to represent.
Imagine if a gun manufacturer sold a gun that you couldn't use against X or Y country. Private companies imposing such demands on our military should not be respected. Having weapons that can randomly detect a false positive and shut themselves down because they think you are using them wrong is a feature I would never want built in.
I have also been against these terms of service restricting usage of AI models. It is ridiculous that these private companies get to dictate what I can or can't do with the tools. No other tools work like this. Every other tool is governed by the legal system which the people of the country have established.
> Imagine if a gun manufacturer sold a gun that you couldn't use against X or Y country.
The point here, of course, being that Anthropic is very specifically claiming to not be a gun manufacturer, and Hegseth's response is that the DoD (W?) will force Anthropic to build guns.
We have international laws and rules of war. We have weapon treaties (well, some of them are expiring). Sure, not everyone is signatory, or even follow the conventions they have ratified, but at least having these things in place makes it even remotely possible to categorize and document violations and start processes towards rulebreakers and antihumanist actions.
So I looked into what they cooked up in 2023, plus which countries signed it (scroll down to a link to the actual text). It's an extraordinarily pathetic text. Insulting even.
We will not be divided! United in obeying only orders from woke governments, be it on gender ideology, "misinformation", "fact checking" or takedowns, cancellations, blackouts and bans.
They've already been using Signal - a "commercial" app, meaning it's not meant to be used like that - for top-secret (or at least highly sensitive) military communications during the military strikes on Yemen. If that was fake, I apologise; I was deceived. I wouldn't be surprised if things turned out that way again, to be honest. That's something to be expected, actually (IMO).
I see comments like this all the time on HN, including between community members. Why are you showing up now? Altman may be former YC and friends with Paul Graham, but he’s nevertheless a public figure and does plenty to deserve ridicule.
Are we allowed, for example, to call Trump an insecure man with orange skin and tiny hands? Is that a violation of our allowed speech?
>After famed investor Marc Andreessen met with government officials about the future of tech last May, he was “very scared” and described the meetings as “absolutely horrifying.” These meetings played a key role on why he endorsed Trump, he told journalist Bari Weiss this week on her podcast.
>What scared him most was what some said about the government’s role in AI, and what he described as a young staff who were “radicalized” and “out for blood” and whose policy ideas would be “damaging” to his and Silicon Valley’s interests.
>He walked away believing they endorsed having the government control AI to the point of being market makers, allowing only a couple of companies who cooperated with the government to thrive. He felt they discouraged his investments in AI. “They actually said flat out to us, ‘don't do AI startups like, don't fund AI startups,” he said.
...
keep making petitions, watch the whole thing burn to the ground when Trump decides to channel the Biden ideas in this field.
We should care because if they win they empower others to stand up as well, and not just in the area of AI safety. Courage is contagious, and whatever else you think of Anthropic, they’re showing real courage here.
Yeah, I find it funny how we're now defending these AI companies, when they're clearly still an enemy of the working class.
They've made it incredibly clear their plans are to disenfranchise labor, and usher in a world of God knows what with their technologies. Their stand on mass surveillance seems a bit like a red herring: cool, they won't let their tools be used for war fighting, but they'll continue to attack their fellow working class?
All three of these companies are spending hundreds of millions to psyop decision makers across every industry into giving your salary to them. Get out of here with "We will not be divided". OpenAI, Google, and Anthropic employees are not friends of labor and should not use our phrases... or they'd sabotage and/or quit.
And why is there no mention of how we caught OpenAI being used in government dashboards through Persona only two weeks ago, dashboards that were directly connected to intelligence organizations and tools to identify whether you are a politician or high-profile person? OpenAI has been complicit in this since last January, when 4o was the first model that qualified for "top secret operations".
(kind of weird how 4o went onto cause a bunch of people to go literally insane and commit crazy acts of violence yet is allowed to be used in the most sensitive aspects of government.. nothing to see here).
I’m reluctant to score an organization as just one data point on a one dimensional line. (Some won’t even do that; they reduce it to a single bit.)
Instead, I look at specific actions in context. What Anthropic did today was “amazing” to a first approximation in my eyes. Yes, even if it was not purely altruistic. (The curator of TED talks about this principle in a recent book, by the way.)
At the same time, I can gesture at other actions they’ve done that are not good. This is not inconsistent.
Anthropic has enough investment money and enough additional investor interest that they can ride this out longer than this administration. It won’t be good for business, of course, but it’s not the end of their world.
> it will just be perfect proof that you cannot be both moral and successful in the US.
I hate this situation as much as anyone, but it’s a unique, first-of-its-kind challenge. I don’t think it’s generalizable to anything.
The only way they survive is if their board fires the CEO and they bend the knee. The other option is that they are given the green light to sell to one of the US Government's trusted partners: Microsoft/Oracle/X.
Good luck with that. I just don't see either Google or OpenAI listening to their employees on this. They might have their own reasons for not wanting to help build Skynet, but if they don't, I'm sure those employees can readily be replaced with somebody more compliant.
So big tech wants to court Trump with millions in donations and now that the big bully they supported is bullying them.. we’re supposed to feel some kind of sympathy? Am I missing something here? Why did Anthropic get involved with the military in the first place?
Anthropic appears to be situating themselves as the "ethical AI" in the mindspace of, well, anyone paying attention. But I am still trying to figure out where exactly Hegseth, or anyone in DoW, asked Anthropic to conduct illegal domestic spying or launch a system that removes HITL kill chains. Is this all just some big hypothetical that we're all debating (hallucinating)? This[1] appears to be the memo that may (or may not) have caused Hegseth and Dario to go at each other so hard, presumably over this paragraph:
>Clarifying "Responsible AI" at the DoW - Out with Utopian Idealism, In with Hard-Nosed Realism. Diversity, Equity, and Inclusion and social ideology have no place in the DoW, so we must not employ AI models which incorporate ideological "tuning" that interferes with their ability to provide objectively truthful responses to user prompts. The Department must also utilize models free from usage policy constraints that may limit lawful military applications. Therefore, I direct the CDAO to establish benchmarks for model objectivity as a primary procurement criterion within 90 days, and I direct the Under Secretary of War for Acquisition and Sustainment to incorporate standard "any lawful use" language into any DoW contract through which AI services are procured within 180 days. I also direct the CDAO to ensure all existing AI policy guidance at the Department aligns with the directives laid out in this memorandum.
So, the "any lawful use" language makes me think that Dario et al. have a basket of uses in their minds that they feel should be illegal, but are not currently, and they want to condition further participation in this defense program on not being required to engage in activity that they deem ought to be illegal.
It is no surprise that the government is reacting poorly to this. Without commenting on the ethics of AI-enabled surveillance or non-HITL kill chains, which are fraught, I understand why a department of government charged with making war is uninterested in debating this as terms of the contract itself. Perhaps the best place for that is Congress (good luck), but to remind: the adversary that these people are all thinking about here is the PRC, who does not give a single shit about anyone's feelings on whether it's ethical or not to allow a drone system to drop ordnance on its own.
In this case I think the opponents made a huge mistake by calling themselves Department of War, and it's something that can be exploited.
Department of Defense was the actual lie, the newspeak term. They were not really defending anything, they were using military power globally for pursuing economic interests. However, it was easy to convince people that the whole endeavor was a good thing, because defending your country against the baddies is good, and you should support anyone doing that (otherwise you'd be a traitor!). Thank you for your service (defending us).
On the other hand, the term Department of War is hard to sell, because most people don't want to participate in a war or support someone who wants to start one. Thank you for your service... invading other countries? killing and raping innocents? ransacking resources?
This is an irrelevant detail, but if I'd read the title "Department of Defense vs. Meta", I'd first think Meta is leaking confidential info to other countries. However, if I'd read "Department of War vs. Meta", I'd think Meta doesn't want to promote an unnecessary war.
It's rather amusing that this is the proverbial 'red line', not y'know, everything else this administration has been tearing up and running roughshod over. Maybe this would've been less of an issue if companies were more proactive about this bullshit in the first place?
That's why it's hard for me to feel bad about companies suddenly finding themselves on the receiving end. They dug their grave inch by inch and are suddenly surprised when they get shoved into it.
My take is that none of the AI companies really care (companies can't care), they just realize that if they go down that road, public opinion will be so vehemently against AI in all forms that it will be regulated out of viability by the electorate.
Also, if AI exists, AI will be used for war. The AI company employees are kidding themselves if they think otherwise, and yet they are still building it (as opposed to resigning and working on something else), because in the end, money is the only true God in this world.
Anthropic does not object to its use for war. In fact Anthropic explicitly allows its semi-autonomous use in war, e.g. for identifying targets. They just won't permit its use for full autonomous war, yet, because they don't believe it's safe enough.
I'd prefer to see board (or executive) level signatories over lay employees -- the people who can enforce enterprise policy rather than just voice their opinions -- but this is encouraging to see nonetheless.
I can't help but notice that Grok/X is not part of this initiative, though. I realize that frontier models are really coming from Anthropic, OpenAI, and Google, but it feels like someone is going to give in to these demands.
It's incredible how quickly we've devolved into full-blown sci-fi dystopia.
Is it really incredible?
Only if you're naive. I guess most here are.
Governments are paranoid, particularly about losing control and influence over their subjects. This is expected behaviour.
The problem with forcing public policy on companies is that companies are ultimately made from individuals, and surely you can’t force public policy down people’s throats.
I’m sure nothing good can come out of strong-arming some of the brightest scientists and engineers the U.S. has. Such a waste of talent trying to make them bend over to the government’s wishes… instead of actually fostering innovation in the very competitive AI industry.
I don't see how public policy is being "forced" on anyone here? It seems like the system is working as intended: government wants to do X; company A says "I won't allow my product to be used for X"; government refuses to do business with company A. One side thinks the government should be allowed to dictate terms to a private supplier, the other side thinks the private supplier should be allowed to dictate terms to the government. Both are half right.
You can argue that the government refusing to do any business with company A is overreach, I suppose, but I imagine that the next logical escalation in this rhetorical slapfight is going to be the government saying "we cannot guarantee that any particular use will not include some version of X, and therefore we have to prevent working with this supplier"...which I sort of see?
Just to take the metaphor to absurdity, imagine that a maker of canned tomatoes decided to declare that their product cannot be used to "support a war on terror". Regardless of your feelings on wars on terror and/or canned tomatoes, the government would be entirely rational to avoid using that supplier.
I think the bigger insanity here is the supply chain risk designation. It prohibits DoD agencies and contractors from using Anthropic services. It'd be one thing if the DoD simply didn't use Anthropic. It's another when it actively attempts to isolate Anthropic for political reasons.
> It prohibits DoD agencies and contractors from using Anthropic services. It'd be one thing if the DoD simply didn't use Anthropic.
This is literally the mechanism by which the DoD does what you're suggesting.
Generally speaking, the DoD has to do procurement via competitive bidding. They can't just arbitrarily exclude vendors from a bid, and playing a game of "mother may I use Anthropic?" for every potential government contract is hugely inefficient (and possibly illegal). So they have a pre-defined mechanism to exclude vendors for pre-defined reasons.
Everyone is fixated on the name of the rule (and to be fair: the administration is emphasizing that name for irritating rhetorical reasons), but if they called it the "DoD vendor exclusion list", it would be more accurate.
What? I'm not completely familiar with bidding procedures, but don't they usually have requirements? Why not just list a requirement of unrestricted usage? Or state: we require models to be available for AI murder drones or whatever. Anthropic then can't bid, and there's no need to designate them a supply chain risk.
The government declaring a domestic company as a supply chain threat is a tad more than “refusing to do business” don’t you think?
Ignore the (pre-established) name of the rule, and focus only on what it does: it allows the DoD to exclude a supplier from competitive bidding.
It stops anyone with government contracts from using Anthropic, not just those bidding on government contracts.
The latter is how the former is accomplished. Government employees cannot simply choose not to work with an otherwise winning bidder.
> The Department of War is threatening to […] Invoke the Defense Production Act to force Anthropic to serve their model to the military and "tailor its model to the military's needs"
This issue is about more than the government blacklisting a company for government procurement purposes.
From what I understand, the government is floating the idea of compelling Anthropic — and, by extension, its employees — to do as the DoD pleases.
If the employees’ resistance is strong enough, there’s no way this will serve the government’s interests.
I mean, the Secretary of War cannot act any other way, to be honest. It's just a fucked-up situation.
The government is doing far more than “refusing to do business” here.
The President is crashing out on X because a company didn’t do what they wanted. “Forcing” is not a binary. Do you seriously believe that the government’s behavior here is acceptable and has no chilling effect on future companies?
> I’m sure nothing good can come out of strong-arming some of the brightest scientists and engineers the U.S. has
And where would they emigrate? Russia? China? UAE? :-)
The UK and Europe welcome the US Footgun Operation. Plenty of opportunities for those top researchers and engineers over here.
The EU (which is not the same as Europe), is also looking a bit sharper on AI regulation at the moment (for now… not perfect but sharper etc etc).
The EU and UK is a long way from attracting top AI talent purely from opportunity and monetary terms.
Not to mention UK is arguably further down the mass surveillance pipeline than the US. They’ve always had more aggressive domestic intelligence surveillance laws which was made clear during the Snowden years, they’ve had flock style cameras forever, and they have an anti encryption law pitched seemingly yearly.
I’d imagine most top engineers would rather try to push back on the US executive branch overreach than move. At least for the time being.
For sure we’re not currently attracting the talent. There’s more to it than just money, but money is a significant factor. When it comes to compensation, AI is too broad a category to have a meaningful debate: hardware or software or mathematics or what kind of person? Etc.
I’m not gonna dispute the UK being further down some parts of the road.
Not sure what you’d count as top engineers, but I know enough that have been asking about and moving to the UK/EU that it’s been a noticeable reversal of the historic trends. Also, a major slowdown of these kinds of people in the UK/EU wanting to move to the US.
> The EU and UK is a long way from attracting top AI talent purely from opportunity and monetary terms.
Which is why people are talking about this -- it's about ideology now.
You may personally be motivated solely by money. Not everybody is you.
I’m not an AI engineer but it’s not hard to imagine why some bright talent would want to work at the most exciting AI companies in the US while also making 3-10x what they’d make in Europe.
Ideology is easy to throw around for internet comments but working on the cutting edge stuff next to the brightest minds in the space will always be a major personal draw. Just look at the Manhattan project, I doubt the primary draw for all of those academics was getting to work on a bomb. It was the science, huge funding, and interpersonal company.
See my other comments around here. This idea that salaries in the US are so much higher than Europe for all these top AI roles just isn’t true. Even the big American companies have been opening offices in places like London to hire the top talent at high salaries.
This also isn’t hypothetical. I know top-talent engineers and researchers that have moved out of the USA in the last 12 months due to the political climate (which goes beyond just the AI topics).
And you might want to read a few books on the Manhattan project and the people involved before you use that analogy. I don’t think it’s particularly strong.
To make 1/10th the salary they're making now?
You seem to have a very ill-informed view of UK/EU salaries in this particular sector. And also: yeah, people take salary hits to go do things they believe in (this is, like, the entire premise of the underpaid American startup founder model). It should come as no surprise that people are willing to forgo pay for reasons other than just building their own business / making themselves personally wealthy.
That much?
No, of course not.
Do the UK and Europe have hardware manufacturing for those researchers to work with once the US imposes GPU export restrictions on them at the first whiff of competition/threat?
Yes.
And the US can’t realistically stop our well-funded homegrown AI Hardware startups from manufacturing with TSMC. This is part of why there’s funding from the EU to develop Sovereign AI capabilities, currently focused on designing our own hardware. We’re nothing like as far behind as you might expect in terms of tech, just in terms of scale.
Also, while US export restrictions might make things awkward for a short while, it wouldn’t stop European innovation. The chips still flow, our own hardware companies would scale faster due to demand increase, and there’s the adage about adversity being the parent of all innovation (or however it goes).
> And the US can’t realistically stop our well-funded homegrown AI Hardware startups from manufacturing with TSMC
See what happened to Russian Baikal production on TSMC
You mean because of the international sanctions that needed Taiwanese, British and Dutch support to be effective?
Or because of the revoked processor design licenses from the British company Arm (which is still UK headquartered… despite being NASDAQ listed and largely owned by Japanese firm SoftBank)?
Or perhaps you think the US could stop us using the 12nm fabs being built by TSMC on European soil? Or could stop us manufacturing RISC-V-based chips (Swiss-headquartered technology)?
The US is weak in digital-logic silicon fabrication and it knows it. That’s why it’s been so panicked about Intel and been trying to get TSMC to build fabs on US soil. They’re pouring tens of billions of dollars into trying to claw back ownership and control of it, but it’s not like Europe or China or others are standing still on it either.
The GPUs and AIUs aren't being manufactured in the US.
The EUV and other factory equipment everyone's using is predominantly European. High-end testing tools used in R&D are largely European.
The fabs aren't, and that is no small thing. The tech stack is there though.
It's pretty tiresome that the HN audience keeps assuming Europe doesn't have "tech" because it doesn't have Facebook. Where do you think all the wealth comes from? Europe is all over everyone's R&D and supply chain.
I sometimes wonder whether people realise which country ASML is based in, and which country their major suppliers are in (e.g. optics: Germany)
I agree. And even if those workers stay in the U.S., there’s absolutely no guarantee that they’ll do their best to favor the government’s interests — quite the opposite, if anything.
At the end of the day it’s a matter of incentives, and good knowledge work can’t simply be forced out of people that are unwilling to cooperate.
Well that's quite a leap to make. Plenty of room in between those options.
Among other consequences, if Anthropic ends up being killed it’s going to be just another nail in the coffin of trust in America.
Companies who subscribed will find themselves without an important tool because the president went on a rant, and might wonder if it’s safe to depend on other American companies.
"We hope our leaders will..." I realize things are moving quickly, and the stakes are high here, but thinking about what happens if the hopes are not met might be a next step.
Yeah, it's a nice gesture, but having watched Google handle the protests in recent years and their culture inching a step closer to Amazon, I do not foresee their leadership being swayed by employee resistance. They'll either quietly sign an agreement and discreetly implement it, or they will go scorched earth on their employees again.
Tech leaders are a joke
Needs a union. With strikes and all that jazz.
Why are employees (at least the anonymous ones) trusting the creators of this website? What if it was set up by someone who wanted to gather a list of all the dissidents who would silently protest or leave the companies or whatever? Do you know whom you are going to hold accountable if it turns out these folks don't delete your verification data, or share it with your employer, or worse?
Also, another warning to anonymous users: it's a little bit naive to trust the "Google Forms" verification option more than the email one, given such employers probably have the ability to monitor anything you do on your device, even if it's just loading the form. And, in Google's case, they could obviously see what forms you submitted on the servers, too. If you're anonymous, you might as well use the alternate verification option.
Anyway - I'm not claiming it's likely that the website creator is malicious, but surely it's not beyond question? The website authors don't even seem to be providing others with the verification that they are themselves asking for.
P.S. I fully realize that raising these concerns might itself make fewer people sign the form, which may be unfortunate, but it seems worth a mention.
I've gathered that the dispute is over Anthropic's two red lines: mass surveillance and fully autonomous weapons. Is there any information (or rumors even) about what the specific request was? I can't believe the government would be escalating this hard over "we might want to do autonomous weapons in the vague, distant future" without a concrete, immediate request that Anthropic was denying.
Even if there was a desire for autonomous weapons (beyond what Anduril is already developing), I would think it would go through a standard defense procurement procedure, and the AI would be one of many components that a contractor would then try to build. It would have nothing to do with the existing contract between Anthropic and the Dept of War.
What, then, is this really about?
My understanding is that it’s about the contract allowing Anthropic to refuse service when they deem a red line has been crossed. Hegseth and friends probably don’t want any discussions to even start, about whether a red line may be in the process of being crossed, and having to answer to that. They don’t want the legality or ethicality of any operation to be under Anthropic’s purview at all.
I think you're right, this isn't about a specific request but about defense contractors not getting to draw moral red lines. Palmer Luckey's statement on X/Twitter reflects the same idea: https://x.com/PalmerLuckey/status/2027500334999081294
The thinking seems to be that you can't have every defense contractor coming in with their own, separate set of red lines that they can adjudicate themselves and enforce unilaterally. Imagine if every missile, ship, plane, gun, and defense software builder had their own set of moral red lines and their own remote kill switch for different parts of your defense infrastructure. Palmer would prefer that the President wield these powers through his Constitutional role as commander-in-chief.
> My understanding is that it’s about
What is "it" in your comment?
The refusal to sign a contract with Anthropic, or their designation as a supply chain risk?
I was answering “What, then, is this really about?” By “this”, presumably they meant “the dispute”.
It’s about punishing a company that is not complying. It’s a show of force to deter any future objections on moral grounds from companies that want to do business with the US gov.
This is why you can't gatekeep AI capabilities. It will eventually be taken from you by force.
It's time to open-source everything. Papers, code, weights, financial records. Do all of your research in the open. Run 100% transparent labs so that there's nothing to take from you. Level the playing field for good and bad actors alike, otherwise the bad actors will get their hands on it while everyone else is left behind. Start a movement to make fully transparent AI labs the worldwide norm, and any org that doesn't cooperate is immediately boycotted.
Stop comparing AI capabilities to nuclear weapons. A nuke cannot protect against or reverse the damage of another nuke. AI capabilities are not like nukes. General intelligence should not be in the hands of a few. Give it to everyone and the good will prevail.
Build a world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human, not a corporation or government (which are machiavellian out of necessity). This is humanity's best chance at survival.
> This is why you can't gatekeep AI capabilities.
What is why?
You never actually say that part, unless it's "It will eventually be taken from you by force" which doesn't seem applicable to this situation or this site?
I'm referring to the current situation. How is it not applicable? I think the government wants to eventually nationalize these companies and we have to stop them.
What use are weights without the hardware to run them? That's the gate. Local AI right now is a toy in comparison.
Nukes are actually a great example of something also gated by resources. Just having the knowledge/plans isn't good enough.
> hardware to run them
Costs a few hundred thousand per server, it's a huge expense if you want it at your home but a rounding error for most organizations.
You're buying what exactly for a few hundred thousand? And running what model on it? To support how many users? At what TPS?
I run local models on Mac studios and they are more than capable. Don’t spread fud.
You're spreading fud. There's nothing you can run locally that's on par with the speed/intelligence of a SOTA model.
You may be correct about the level of models you can actually run on consumer hardware, but it's not fud and you're being needlessly aggressive here.
Open Source here is not enough as hardware ownership matters. In an open source world, you and I cannot run the 10 trillion param model, but the data center controllers can.
I agree. We will need hardware ownership as well eventually. But the earlier you open-source, the more you slow down the centralization because people will be more likely to buy hardware to run stuff at home and that gives hardware companies an opening to do the right thing.
Sure, but we could have Hetzners and OVHs who just provide the compute for whatever model we want to run.
A "world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human" would be a world in which people could easily create humanity-ending bioweapons. I would love to live in a less vulnerable world, and am working full time to bring about such a world, but in the meantime what you describe would likely be a disaster.
There are plenty of physical and legal barriers to creating a bioweapon and that's not going to change if everyone becomes smarter with AI. And even if we really somehow end up in a world where everyone has a lab at home and people can easily create viruses, they can also easily create vaccines and anti-virals. The advancements in medicine will outpace bioweapons by a lot because most people are afraid of bioweapons.
Intelligence itself is not dangerous unless only a few orgs control it and it's aligned to those orgs' values rather than human values. The safety narrative is just "intelligence for me, but not for thee" in disguise.
There mostly aren't physical barriers. Unlike nukes, where you need specific materials and equipment that we can try to keep tabs on, bioweapons can be made entirely with materials and equipment that would not be out of place in an academic or commercial lab. The largest limitation is knowledge, and the barriers there are falling quickly.
On your second point, see my response to oceanplexian below: https://news.ycombinator.com/item?id=47189385
I’m tired of these bizarre hypothetical gotcha arguments. If AI can create bioweapons, it can equally create vaccines and antidotes to them.
We live in a free society. AI should be democratized like any other technology.
Symmetry is not guaranteed. If someone creates a deadly pathogen with a long pre-symptomatic period (which we know is possible, since HIV works this way) it could infect essentially everyone before discovery. Yes, powerful AI would likely rapidly speed up the process of responding to the threat after detection, especially in designing countermeasures, but if we don't learn about the threat in time we lose.
There are people today who could create such a pathogen, but not many. Widespread access to powerful AI risks lowering the bar enough that we get overlap between "people who want to kill us all" and "people able to kill us all".
This is not a gotcha argument, this is what I work full time on preventing: https://naobservatory.org The world must be in a position to detect attacks early enough that they won't succeed, and we're not there yet.
In the alternative, asymmetry is guaranteed.
When you only allow gov and big tech access to powerful AI, you create a much more dangerous and unstable world.
This is just not thinking clearly. There are bad things that are asymmetric in character, dramatically easier to do than to mitigate. There’s no antidote or vaccine to nuclear weapons.
This is exactly the thinking that has characterized responses to new sources of power through history, and has been consistently used to excuse hoarding of that power. In the end, enlightenment thinking has largely won out in the western world, and society has prospered as a result.
Centralizing power is dangerous and leads to power struggles and instability.
It is not easy to create weapons. Why do you think the physical and legal barriers that exist today that prevent you from acquiring equipment and creating nuclear weapons will go away when everyone becomes smarter?
I'd prefer something akin to the Biological Weapons Treaty which prohibits development, production and transfer. If you think it isn't possible you have to tell me why the bioweapons convention was successful and why it wouldn't be in the case of AI.
> bioweapons convention was successful
Was it successful? The jury is still out.
The point I would make: there are historical examples of international cooperation that work at least for some lengths of time. This is a good thing, a good tool to strive for, albeit difficult to reach.
Because bioweapons suck, this is why. On the other hand AI sucks too, but it has at least some use
There might be a small percentage of people nihilistic enough to want to unleash a truly devastating bioweapon, but basically everyone wants what AI has to offer.
I think that's a key difference as well.
And how would a treaty like that be enforced? Every country has legitimate uses for GPUs, to make a rendering farm or simulations or do anything else involving matrix operations.
All of the technology involved, in more or less the configuration needed to make your own ChatGPT, is dual use.
Because bioweapons labs take more to run than a workstation PC under your desk with a good graphics card, both in equipment, materials, and training. It's hard to outlaw the use of linear algebra and matrix multiplications.
The last part of your post doesn’t necessarily follow or support your argument; the corollary is “It’s hard to outlaw rna”.
Don't compare general intelligence to bioweapons. A bioweapon cannot defend against or reverse the effects of another bioweapon.
I don’t see why you think that AGI can reverse the effects of another AGI?
If it's taken by force, it will stagnate. It makes no sense at all.
The logic used in the threats is that it's a national security risk to not use Claude, but it's also a national security risk to use Claude.
We shouldn't expect these people to consider how the logic breaks down one step ahead when it never made sense in the first place.
Is TikTok stagnating in the US?
This letter and all of this is meaningless.
If they actually wanted to do something, they wouldn't have sat back and funded Republican political campaigns because they were pissed about the head of the FTC under Biden.
But they didn't. They gave millions to this guy, and now they're feigning ignorance, or change, or whatever this is.
It’s meaningless. Utterly meaningless.
Get what you pay for, I suppose.
What are you talking about? Google employees and the corporation itself in particular overwhelmingly donated to the Harris campaign.
https://www.opensecrets.org/orgs/alphabet-inc/recipients?id=...
The corporation gave millions _after_ Trump had already won. If your criticism is that, then that does not apply to the people signing.
We shouldn't be scammed by people who intend to get back on the Trump train once they've gotten what they want. But if someone's willing to openly oppose the Trump regime, even out of self-interest, I'm happy to let them feign as much ignorance as they'd like. If his power isn't broken the details of who resisted him when won't matter.
They control the compute.
> This is why you can't gatekeep AI capabilities. They will eventually be taken from you by force.
Some form of US AI lab nationalization is possible, but it hasn't happened yet. We'll see. Nationalization can take different forms, not to mention various arrangements well short of it.
I interpret the comment above as a normative claim (what should happen). It implies the nationalization threat forces the decision by the AI labs. No. I will grant it influences, in the sense that AI labs have to account for it.
When have US corporations (or simply "the US" really) ever done the right thing for humanity?
"What have the Romans ever done for us?" (https://www.youtube.com/watch?v=Qc7HmhrgTuQ)
I am not a fan of Anthropic guys, but this time I stand with it. We all should.
It is a rough precedent that the government can force private citizens to build weapons for them.
The book "On Tyranny: 20 lessons from the 20th century" by the historian Timothy Snyder is an excellent read for these times. The very first lesson is "Do not obey in advance". It's about how authoritarian power often doesn't need to force compliance, people simply bend the knee in anticipation of being forced. This simply emboldens the authoritarians to go further.
I've been disappointed to see many businesses and institutions obeying in advance recently. I hope this moment wakes up the tech community and beyond.
We all knew AI had the potential to be extremely powerful, and we all pursued it anyway. What did we think would happen? The government/military always takes control of the most powerful/dangerous systems. If you work for a defense contractor or under ITAR, then you already know this.
The right way to deal with this is political - corporate campaign contributions and lobbying. You're not going to be able to fight the military if they think they need something for national security.
Yes, take disparate sets of employees and, like, oh I don't know, unionize while you still have power.
Well, it looks like OpenAI will be working with the Pentagon: https://www.axios.com/2026/02/27/pentagon-openai-safety-red-...
My personal guess is that Sam Altman said he'd let policy violations go without a complaint and Dario Amodei said he wouldn't.
Nicely done. Hold this line — there’s got to be one somewhere.
No surprise to have not heard anything from xAI
Here's the sequence (so far) in reverse order - did I miss any important threads?
Statement on the comments from Secretary of War Pete Hegseth - https://news.ycombinator.com/item?id=47188697 - Feb 2026 (31 comments)
I am directing the Department of War to designate Anthropic a supply-chain risk - https://news.ycombinator.com/item?id=47186677 - Feb 2026 (872 comments)
President Trump bans Anthropic from use in government systems - https://news.ycombinator.com/item?id=47186031 - Feb 2026 (111 comments)
Google workers seek 'red lines' on military A.I., echoing Anthropic - https://news.ycombinator.com/item?id=47175931 - Feb 2026 (132 comments)
Statement from Dario Amodei on our discussions with the Department of War - https://news.ycombinator.com/item?id=47173121 - Feb 2026 (1527 comments)
The Pentagon Feuding with an AI Company Is a Bad Sign - https://news.ycombinator.com/item?id=47168165 - Feb 2026 (33 comments)
The Pentagon threatens Anthropic - https://news.ycombinator.com/item?id=47154983 - Feb 2026 (125 comments)
US Military leaders meet with Anthropic to argue against Claude safeguards - https://news.ycombinator.com/item?id=47145551 - Feb 2026 (99 comments)
Hegseth gives Anthropic until Friday to back down on AI safeguards - https://news.ycombinator.com/item?id=47142587 - Feb 2026 (128 comments)
Sam Altman tells staff at an all-hands that OpenAI is negotiating a deal with the Pentagon, after Trump orders the end of Anthropic contracts - https://news.ycombinator.com/item?id=47188698
Still won't be profitable.
I'm missing the actual letter. I think part of the content is hidden behind some JavaScript. Can someone post it?
They should be collecting signatures from employees at xAI. I think they're the most likely to fill the space left by Anthropic.
xAI has already announced they are 100% in
https://x.ai/news/us-gov-dept-of-war
All the more reason to collect their employees' signatures.
This kind of screams desperation, but I guess that's what happens when you're niche AI.
niche is a polite way to put it
Bot-ique Mechahitler.
Everyone knows anyone who signs this from xAI will be a former employee by tomorrow.
My guess is their HR is already monitoring it with instant termination processes in place.
You can sign the form anonymously.
Both of the automated verification methods depend on Google servers, and Google can almost certainly retrieve that data if they want to, regardless of whether the signers or verifiers delete it.
You're assuming a lot about Elon's ability to assemble and execute a process competently. They will probably end up hiring people off this list and firing them later.
I think what is much more interesting is what OpenAI and Google will do. There's probably some threshold of signatories past which the companies in question won't fire everyone when they decide they want the DoD's business; the question is how many people have to sign to cross it, and whether enough people will.
I don't think Google would bat an eye at firing 500 people to secure a DoD contract, but would they fire 5,000?
There is a specific kind of person that joins xAI over the other companies and it is definitely not a moral one.
How is posting on this website with your full name not career suicide?
That's what taking a stand looks like... if any of these employees lose their job, they are welcome to come crash at my place for as long as they would like; they will have a roof over their head and I will cook them 3 meals a day.
Not all tech employers are total weenies who would refuse to hire someone for taking this stance.
Most are, but not all.
I'd love to see this extended to any American regardless of past/present employment with Google or OpenAI
Would you like to see this extended globally? Could such a spirit exist multinationally? It’s asking a lot, because you’d be asking for a lot of courage from places like China, India, Russia, the Middle East… anywhere that’s not Europe, basically.
Well yes, but context matters here and this is the US government's decision to take with a US-based company.
While I understand why it matters for folks affiliated with prominent AI companies in particular to sign this, the more the American people stand together, the more pressure I think that puts on our government to act responsibly.
Idealistic and naive? Probably. But sometimes grassroots efforts do spark change, and it's high time the people of the USA start living up to the first word in our country's name.
Anyways, to answer your question directly: I welcome all the fine people of the world everywhere to join in what this open letter stands for.
Unfortunately, it's abundantly clear to many of us Americans that the current administration doesn't care what we think, never mind what people outside our country do. So I'll just start with the group that this department (in theory) is supposed to represent.
Imagine if a gun manufacturer sold a gun that you couldn't use against X or Y country. Private companies imposing such demands on our military should not be respected. Having weapons that can randomly detect a false positive and shut themselves down because they think you are using them wrong is a feature I would never want built in.
I have also been against these terms of service restricting usage of AI models. It is ridiculous that these private companies get to dictate what I can or can't do with the tools. No other tool works like this. Every other tool is governed by the legal system that the people of the country have established.
It sounds like you think that Anthropic is the first company regulating the use of their product. This is not a novelty whatsoever.
No, but I find it obnoxious as an end user.
Taking principled stands should absolutely be respected.
I can respect a stance while simultaneously calling out how much I dislike it.
> Imagine if a gun manufacturer sold a gun that you couldn't use against X or Y country.
The point here, of course, being that Anthropic is very specifically claiming to not be a gun manufacturer, and Hegseth's response is that the DoD (W?) will force Anthropic to build guns.
Not using Claude only weakens the state. Just don’t oblige
Does this mean there is a non-zero chance we will get some kind of Grok + Chinese model mix that's used across the entire US military? Ironic, isn't it?
This was a brave, heartwarming read. Thank you to the teams
The primary purpose of these products is mass surveillance; why else would they be allowed to be built?
Stand your ground.
Don't tread on me
Ironically the flag flown mostly by the people who voted for this tyranny.
They should reprint it to say "Step on me Daddy."
There's a good one going around with the Anthropic logo replacing the snake
https://bsky.app/profile/verdverm.com/post/3mfuuogxjpk2b
We have international laws and rules of war. We have weapon treaties (well, some of them are expiring). Sure, not everyone is a signatory, or even follows the conventions they have ratified, but at least having these things in place makes it even remotely possible to categorize and document violations and start processes against rulebreakers and antihumanist actions.
So I looked into what they cooked up in 2023, plus which countries signed it (scroll down to a link to the actual text). It's an extraordinarily pathetic text. Insulting even.
https://www.state.gov/bureau-of-arms-control-deterrence-and-...
We will not be divided! United in obeying only orders from woke governments, be it on gender ideology, "misinformation", "fact checking" or takedowns, cancellations, blackouts and bans.
No problem! The DoD^HW will just use DeepSeek!
(I wish this were a joke)
They've already been using Signal - which is a "commercial" app, meaning it's not meant to be used like that - for top-secret (or at least highly sensitive) military communications during the military strikes on Yemen. If that was fake, I apologise, I was deceived. I wouldn't be surprised if things turned out that way again, to be honest. That's something to be expected, actually (IMO).
Aren't they using the Israeli version of Signal which backs up messages because the law requires it?
Pretty sure I remember that from the fumble
The legal name of the department is still the Department of Defense. The "Department of War" is a preferred name by the administration.
Identity-affirming care now includes avoiding the DoD's deadname. What a world.
They are after the models without post training guardrails.
It's good that there are still empathic humans in the decision and build chain when it comes to AI systems...
[flagged]
Personal attacks aren't allowed here.
Perhaps you don't owe AI tycoons whose names start with A better, but you owe this community better if you're participating in it.
https://news.ycombinator.com/newsguidelines.html
I see comments like this all the time on HN, including between community members. Why are you showing up now? Altman may be former YC and friends with Paul Graham, but he’s nevertheless a public figure and does plenty to deserve ridicule.
Are we allowed, for example, to call Trump an insecure man with orange skin and tiny hands? Is that a violation of our allowed speech?
Hegseth shared a Trump tweet a few hours ago saying they're going to quit doing business with Anthropic.
https://x.com/i/status/2027487514395832410
December 14, 2024
>After famed investor Marc Andreessen met with government officials about the future of tech last May, he was “very scared” and described the meetings as “absolutely horrifying.” These meetings played a key role on why he endorsed Trump, he told journalist Bari Weiss this week on her podcast.
>What scared him most was what some said about the government’s role in AI, and what he described as a young staff who were “radicalized” and “out for blood” and whose policy ideas would be “damaging” to his and Silicon Valley’s interests.
>He walked away believing they endorsed having the government control AI to the point of being market makers, allowing only a couple of companies who cooperated with the government to thrive. He felt they discouraged his investments in AI. “They actually said flat out to us, ‘don't do AI startups like, don't fund AI startups,” he said.
...
keep making petitions, watch the whole thing burn to the ground when Trump decides to channel the Biden ideas in this field.
Use the feedback forms within their platforms to let the companies know your thoughts
I hope Anthropic will survive this. If they don’t it will just be perfect proof that you cannot be both moral and successful in the US.
Who cares whether the "company" survives? I've seen this movie. A few of them in fact. We're on the chopping block here, lol.
We should care because if they win they empower others to stand up as well, and not just in the area of AI safety. Courage is contagious, and whatever else you think of Anthropic, they’re showing real courage here.
Yeah, I find it funny how we're now defending these AI companies, when they're clearly still an enemy of the working class.
They've made it incredibly clear their plans are to disenfranchise labor, and to welcome in a world of God knows what with their technologies. The stand they're making on mass surveillance seems like a bit of a red herring: cool, they'll stop letting their tools be used for war fighting, but they'll continue to attack their fellow working class?
All three of these companies are spending hundreds of millions to psyop decision makers across every industry to give your salary to them. Get out of here, with "We will not be divided" OpenAI, Google and Anthropic employees are not friends of labor and should not use our phrases.. or they'd sabotage and or quit.
And why is there no mention of how we caught OpenAI being used in government dashboards through Persona only two weeks ago, dashboards that were directly connected to intelligence organizations and to tools that identify whether you are a politician or a high-profile person? OpenAI has been complicit in this since last January, when 4o became the first model that qualified for "top secret operations".
(Kind of weird how 4o went on to cause a bunch of people to go literally insane and commit crazy acts of violence, yet is allowed to be used in the most sensitive aspects of government... nothing to see here.)
I’m reluctant to score an organization as just one data point on a one dimensional line. (Some won’t even do that; they reduce it to a single bit.)
Instead, I look at specific actions in context. What Anthropic did today was “amazing” to a first approximation in my eyes. Yes, even if it was not purely altruistic. (The curator of TED talks about this principle in a recent book, by the way.)
At the same time, I can gesture at other actions they’ve taken that are not good. This is not inconsistent.
Most survive by bending. See e.g. Google and surveillance a decade ago.
Either way, the bribes will flow like wine, the message has been sent loud and clear
Anthropic has enough investment money and enough additional investor interest that they can ride this out longer than this administration. It won’t be good for business, of course, but it’s not the end of their world.
> it will just be perfect proof that you cannot be both moral and successful in the US.
I hate this situation as much as anyone, but it’s a unique, first-of-its-kind challenge. I don’t think it’s generalizable to anything.
>> you cannot be both moral and successful in the US.
I assumed the use of massive scraped datasets, with copyrighted material and without consent, to train large AI models, had already established this.
Many people don’t think there is a moral case against training a model on copyrighted data without obtaining a license to do that specifically.
The only way they survive is if their board fires the CEO and they bend the knee. The other option is that they are given the green light to sell to one of the US Government's trusted partners: Microsoft/Oracle/X.
Good luck with that. I just don't see either Google or OpenAI listening to their employees on this. They might have their own reasons for not wanting to help build Skynet, but if they don't, I'm sure those employees can readily be replaced with somebody more compliant.
So big tech wants to court Trump with millions in donations and now that the big bully they supported is bullying them.. we’re supposed to feel some kind of sympathy? Am I missing something here? Why did Anthropic get involved with the military in the first place?
This whole episode is very bizarre.
Anthropic appears to be situating themselves as the "ethical AI" in the mindspace of, well, anyone paying attention. But I am still trying to figure out where exactly Hegseth, or anyone in DoW, asked Anthropic to conduct illegal domestic spying or launch a system that removes HITL kill chains. Is this all just some big hypothetical that we're all debating (hallucinating)? This[1] appears to be the memo that may (or may not) have caused Hegseth and Dario to go at each other so hard, presumably over this paragraph:
>Clarifying "Responsible AI" at the DoW - Out with Utopian Idealism, In with Hard-Nosed Realism. Diversity, Equity, and Inclusion and social ideology have no place in the DoW, so we must not employ AI models which incorporate ideological "tuning" that interferes with their ability to provide objectively truthful responses to user prompts. The Department must also utilize models free from usage policy constraints that may limit lawful military applications. Therefore, I direct the CDAO to establish benchmarks for model objectivity as a primary procurement criterion within 90 days, and I direct the Under Secretary of War for Acquisition and Sustainment to incorporate standard "any lawful use" language into any DoW contract through which AI services are procured within 180 days. I also direct the CDAO to ensure all existing AI policy guidance at the Department aligns with the directives laid out in this memorandum.
So, the "any lawful use" language makes me think that Dario et al have a basket of uses in their minds that they feel should be illegal, but are not currently, and they want to condition further participation in this defense program on not being required to engage in such activity that they deem ought be illegal.
It is no surprise that the government is reacting poorly to this. Without commenting on the ethics of AI-enabled surveillance or non-HITL kill chains, which are fraught, I understand why a department of government charged with making war is uninterested in debating this as terms of the contract itself. Perhaps the best place for that is Congress (good luck), but to remind: the adversary that these people are all thinking about here is the PRC, which does not give a single shit about anyone's feelings on whether it's ethical to allow a drone system to drop ordnance on its own.
[1] https://media.defense.gov/2026/Jan/12/2003855671/-1/-1/0/ART...
You’re kinda already conceding to some of your opponents points when you use legally invalid names like “Department of War”
I appreciate the sentiment but don’t preconcede to your opposition by using their framing.
In this case I think the opponents made a huge mistake by calling themselves Department of War, and it's something that can be exploited.
Department of Defense was the actual lie, the newspeak term. They were not really defending anything, they were using military power globally for pursuing economic interests. However, it was easy to convince people that the whole endeavor was a good thing, because defending your country against the baddies is good, and you should support anyone doing that (otherwise you'd be a traitor!). Thank you for your service (defending us).
On the other hand, the term Department of War is hard to sell, because most people don't want to participate in a war or support someone who wants to start one. Thank you for your service... invading other countries? killing and raping innocents? ransacking resources?
This is an irrelevant detail, but if I'd read the title "Department of Defense vs. Meta", I'd first think Meta is leaking confidential info to other countries. However, if I'd read "Department of War vs. Meta", I'd think Meta doesn't want to promote an unnecessary war.
I'm disappointed Anthropic made this mistake as well.
"He will not divide us!"
What's that, a little speaker?
I miss those times :(
Club Penguin was a gem. Now all we get are Roblox.
It's rather amusing that this is the proverbial 'red line', not, y'know, everything else this administration has been tearing up and running roughshod over. Maybe this would've been less of an issue if companies had been more proactive about this bullshit in the first place?
That's why it's hard for me to feel bad about companies suddenly finding themselves on the receiving end. They dug their grave inch by inch and are suddenly surprised when they get shoved into it.
[flagged]
Please don't fulminate on HN. The guidelines make it clear we're trying for something better here. https://news.ycombinator.com/newsguidelines.html
My take is that none of the AI companies really care (companies can't care), they just realize that if they go down that road, public opinion will be so vehemently against AI in all forms that it will be regulated out of viability by the electorate.
Also, if AI exists, AI will be used for war. The AI company employees are kidding themselves if they think otherwise, and yet they are still building it (as opposed to resigning and working on something else), because in the end, money is the only true God in this world.
Anthropic does not object to its use for war. In fact Anthropic explicitly allows its semi-autonomous use in war, e.g. for identifying targets. They just won't permit its use for full autonomous war, yet, because they don't believe it's safe enough.