It’s funny to me how many progressive people I know and am friends with, from marginalized demographics (trans, gay, Latino, Black), work at these AI companies.
They still have faded Bernie stickers on their cars, organize for No Kings, say “fuck SF, I’m in the east bay for life, fuck tech” - and then make seven figures Monday through Friday by supporting the death of society and democracy.
I don’t dare say anything though, because “money is money” and the bay is expensive... but I do sure as shit judge every single person I know who joined OAI, Anthropic, Google, and Meta.
Preach. The hypocrisy is startling. I think people started at these companies maybe years ago with "good intentions" and were willing to turn a blind eye. But now, given just how glaringly clear it is, I don't think it is really excusable anymore. To be clear, people can work wherever they want, including these companies, but what kills me is the hypocrisy. They are pathological liars to themselves if they somehow think they aren't complicit.
I mean no harm in saying what I said, I love my friends. I just can’t stomach the hypocrisy, it’s what the companies are preying and feeding off of.
My friends are incredibly bright and good at what they do, it’s why they all have the roles they have. It makes me sad (and frustrated) knowing they are lured in by enough money dangling in front of them that makes them swallow their souls and identity, while fuelling the fire in the same breath.
I have a deep amount of respect and gratitude for my friends (and anyone else) who choose to work at non-profits and more ethical, mission-based companies for less. I hate how much these AI companies and roles are offering people; it has completely forced lots of gifted people into a war machine.
Do you suspect there is any chance they are fully independent adult human beings with full agency, who have looked at the pros and cons, and chosen to make the choices they did with clear eyes? Do you think there's any context that might square their choices with their own internal principles that don't make them hypocrites? I mean these as real questions. For "friends you love" you really seem to take a dim view of their intelligence.
One of humanity's greatest weaknesses is cognitive dissonance. People can convince themselves of just about anything. And in some ways intelligence is a burden here. A fool will just do something with a reason of 'f you, that's why.' It's only the clever man who will bother to rationalize the villain into the hero, and we're great at it. An interesting thought experiment is to ask people if they'd be willing to push a button that would randomly kill a person somewhere in the world for a million dollars. They'd have no direct accountability themselves and their action would be unknown to anybody else.
People will rationalize themselves into declaring this moral even though it is obviously one of the most overtly immoral actions possible. One friend of mine, a rather intelligent guy otherwise, even tried to construct a utilitarian argument that he'd donate some percent of his 'earnings' to life-saving charities, meaning he'd be saving more lives on net. He had no response to the fact that if everybody thought and behaved the same way, the entirety of humanity would cease to exist.
I’ll be honest and say it’s made me question and reposition a number of these friendships. Some joined well before we knew how negatively AI would affect and impact society, some joined in recent years because they were offered 2x their already high comp package, and others will take any job they can get (who, admittedly, I judge far less, as I know they just need to survive in a HCOL city).
My dim view is more of the AI companies being absurdly overvalued, with more money than they know what to do with, which feeds down into compensation packages, which lure in “innocent” individuals who can’t say no. It’s not been a healthy market to be vulnerable in; most companies outside AI just aren’t getting the same funding and can’t compete at all - and it’s a shit storm.
I made another comment above. People contain multitudes. Different contexts, different choices, not everyone is in a box defined by the viewer's world view. You can't really know what's going on with someone else, in their heads, in their context, so give them some grace. Instead, this person's "friends" are "hypocrites" who were "lured" into their choices. It's very condescending. I am suggesting the poster re-examine their own views on other people in light of this.
You're missing the point. They're just observing the contrast between what their friends publicly say (fuck tech, no kings) and what they spend their workweek in service of.
It's pretty simple: if these friends would take a non-society-destroying job at equal pay (who wouldn't?) then their values aren't driving the decision, money is. Fine, that's a choice adults get to make. But then own it, and justify it without getting defensive.
> Any AI researcher who continues to work here is morally compromised.
Arguably it's exactly the opposite. In the same way we ask billionaires to pay their taxes because the regulatory regime is what allowed them the structure to make their billions in the first place, the national security of the country the AI researchers are in is what allows them to make a vast salary to work on interesting, leading edge capabilities like AI. They should feel obligated to help the military.
This all works if you assume that any action the government takes must be “lawful”. The assumption here is that the Pentagon is obeying the law and any unlawful use would go through normal reporting / violation channels - same as any illegal order or violation or whistleblower report.
The Pentagon does not want Google or anyone else deciding what they can and cannot use their AI for. They’re saying we won’t break the law, and that should be enough for you - pinky swear!
And that seems to be enough for Google. Though I might request some auditing capability that is agentic to verify rather than take them at their word.
Next step: is Google FEDRAMP’d yet for this and for classified enclaves? Or do they also go through Palantir’s AI vehicle?
That's presumably the trick, and it's not a subtle one; it's why the article puts it in quotes in the headline. Google gets to claim that it stood up for principles because it boldly insisted that the government obey the law, and the government will claim that whatever it decides to do is lawful. It's the same as what OpenAI did, except not handled buffoonishly.
And since the court has no way to physically force anything - that's the executive branch's function (it's right there in the name) - lawful has no meaning whatsoever if it's the executive branch that wants to break the law.
Especially concerning given how creative the executive branch can be about what laws mean. With little oversight, it seems guaranteed that this will be used for unlawful activities (despite whatever tortured argument some lawyer has put into a memo somewhere).
No it doesn't at all. Private corporations shouldn't be telling the government what it can and can't do. That's the job of the people. You want private corporation overriding your vote?
Please! That ship sailed a long time ago. Sure, tell your congressman, who is most likely bribed (lobbying is bribing, let's use the real words) by the same companies to accept the deal. The courts can try, but who is going to enforce it when the people above say it's fine?
It kind of reminds me of a mix of Skynet in Terminator and Minority Report. But nowhere near as interesting. More annoying than anything else.
I am kind of mad at James Cameron here. Skynet was evil but interesting. Real life controlled by Google is evil but not interesting - it is flat-out annoying.
The classified aspect is probably the most concerning. How can I write my representative (and expect a form letter response six weeks later) if I don't know what I'm objecting to or even if I should be objecting?
How well does this hold up in terms of legal scrutiny when previous actions indicate that the Pentagon would retaliate against Google if they didn't accept this "lawful use only" farce?
Could Google back out of this agreement later by arguing that they were coerced?
Not trying to suggest that Google would be opposed to doing evil, but curious about how solid this agreement would be in practice.
Having your work used by the govt in ways you disagree with feels similar to having your taxes used in ways you disagree with.
When you pay taxes you have no say in the bombs acquired with them or where they are dropped. The latter, though, doesn't seem to provoke the same pushback.
You answered your own implicit question. You have a choice who you sell your work to; you don't have a choice what your taxes do. It seems pretty straightforward why the former elicits more pushback. The government forces you to pay taxes; it doesn't force you to build it tools of surveillance or weapons.
The fundamental problem with these "agreements" is that they are utterly nonsensical as written. Google has one idea of what "lawful" means; the Pentagon most definitely has a vastly different interpretation, meaning "whatever we want". These companies make these agreements because they do not understand (whether deliberately or out of genuine unfamiliarity with the intelligence sector) that when the intelligence community says "we will only use this for lawful purposes," what they are really telling you is something very, very different. With entities like the Pentagon, your agreements should define what "lawful" actually means and leave as few ambiguities as you can manage. Ideally you'd leave zero ambiguities, but I'm not sure that's achievable in practice.
> We remain committed to the private and public sector consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight.
And so begins the lying to our faces. The public and private consensus (including from your own employees!) is that it should not be used for those things at all, regardless of “human oversight.”
So the rest of the world is fine to spy on; it's the domestic part they don't agree with. So go on, destroy lives all around the world, helping the powers that be build the fascist state. It's fine to use Gemini to tell them what building to blow up; it's fine for Gemini to wrongly identify people and cause hundreds or thousands of deaths by telling the military who to attack.
I've had the unfortunate experience of working at a startup that started courting some autonomous weapons companies and HOLY SHIT were they the bottom of the barrel. Levels of incompetence you wouldn't believe, just good ol' boys who wanted to play with energetics. Then the company I was working for also hemorrhaged all their top engineers because they found the work unsettling.
The takeaway is that your refusal to assist these shitheads does have an impact, they have to pay more for talent and they have a much harder time courting good talent.
It's pretty funny how these guys are all becoming some kind of internet version of, like, Halliburton. It seems pretty desperate. B2C and B2B applications didn't pan out I guess?
The thing is we're in a new Cold War, and most of our adversaries have gotten the memo and most of us ... haven't. Yes, becoming a new Halliburton is a rational move if you see the board right now. I don't like it even one tiny bit.
It's one of two identified uses for AI that is profitable today: writing code and blowing up schools. They are desperate to show the market that the technology is anything more than a money pit.
Doubtful it will even get that far; the DoJ will simply draft an appropriate fig-leaf memo with a predetermined conclusion and the government will plow on ahead.
They simply say they have that memo. Who knows whether they even drafted it for real? And if anyone starts looking, Gemini can quickly draft one itself. Nice!
Unsurprising from Google, but still bad. If Google has no right to object to a particular use, this is equivalent in practice to "any use, lawful or not".
Lawful is meaningless in the context of the Trump administration. Should Google waver (which they won't), they'll be declared a supply chain risk or otherwise bullied into submission.
Google holds immense power in their position. Trump can make their life very difficult but Google can make life for Trump very difficult as well. They have no need to kneel, they are choosing to.
You don't think Google having control over the most used email, most used browser, most used search engine, most used video website, and most used phone OS gives them immense power?
As a big critic of the OpenAI deal, this kinda sounds like a nothingburger to me. Of course Google doesn't get a veto on operational decisions, no customer would ever agree to such a thing. The problem with OpenAI was that they took advantage of Anthropic standing their ground to wedge their way in, which was both bad on its own terms and raises serious concerns about whether they're being honest on the real terms of the deal.
Capital and Big Tech have always been opportunistic enablers, not principled actors. Corporate Values have always been nothing but internal propaganda. "Don't be evil", what a farce.
I don’t get why this is always such a controversial topic. Should we be decrying Microsoft for selling the DoD/DoW Microsoft Office? They could use Excel or PowerPoint to plan a strike package.
When my sister and I would play monopoly as kids, we had lost the manual so whenever we didn’t like the outcome of whatever happened, we would make up rules about what was right. Technically then, it was very easy stay compliant while still being able to do well because we could rewrite the rules.
Also, since I was older I feel like I was able to get away with those redefinitions a lot more often…
The word "lawful" always seems to get dragged out when people in power are doing some especially heinous rulemaking, like throwing a hissy fit over a single company trying to voluntarily draw a line at domestic surveillance and fully automated killchains.
The big reason it's "obvious" when tech megacorps do it is because big tech is new to the game and doesn't have an existing regulatory capture system already up and running and legitimized like medical, civil engineering, energy, agriculture, chemical, etc, do.
If this were 3M making nasty stuff for Northrop to put in bombs and drop on brown people or Exxon scheming up something bad in Alaska or bulldozing a national park for solar panels or some other legacy BigCo doing slimy things that are in the interests of them and the government but against the interest of the public they'd have 40yr of preexisting trade group publications, bought and paid for academic and media chatter, etc, etc, that they could point to and say "look, this is fine because the stuff we paid into in advance to legitimize these sorts of things as they come up says it is" though obviously they'd use very different words.
> If this were 3M making nasty stuff for Northrop to put in bombs and drop on brown people or Exxon scheming up something bad in Alaska or bulldozing a national park for solar panels or some other legacy BigCo doing slimy things that are in the interests of them and the government but against the interest of the public they'd have 40yr of preexisting trade group publications, bought and paid for academic and media chatter, etc, etc, they could point to and say "look, this is fine because the stuff we paid into in advance to legitimize these sorts of things as they come up says it is" though obviously they'd use very different words.
My friend, this paragraph needed some periods. I could not follow what you were trying to say - but it seemed interesting enough to consider retyping.
Good comment, and I agree lol
I read it twice (admittedly quickly) but couldn't grasp the point even though I felt like it was there.
Who could have seen this one coming. From yesterday: https://www.cbsnews.com/news/google-ai-pentagon-classified-u... ("Hundreds of Google workers urge CEO to refuse classified AI work with Pentagon").
Any AI researcher who continues to work here is morally compromised.
Why is it morally wrong for a US citizen to work with their government?
The acts of the government being wrong in an upsetting number of cases would be a big reason.
Because, we have pretty convincing historical precedent that 'just following orders' does not work as a defense when your government does something indefensible.
It’s not, but legal is not the same as ethical.
For a long time, and probably still, it was legal for the US to torture enemy combatants. It was never ethical.
If you add to that the very broad limits of what the current administration considers "legal" (as in "pretty much anything we want to do"), I can understand feeling uneasy as a Google employee...
You’d need some shared ethical/moral framework to make that claim, which doesn’t really seem to exist anymore
working to directly advance a product used substantially to oppress people via surveillance or war crimes, when you have many other choices, is immoral. easy.
What makes you think that Google's AI experts are US citizens?
It’s not morally wrong per se, but just because you are working with your government does not mean what you’re doing is necessarily moral.
Just because you are working with your government does not mean what you’re doing is necessarily immoral, either.
In a logical or mathematical sense, sure, but when it's the US government and a huge surveillance-tech company it's pretty necessarily immoral (at least in an American context where harming liberty is immoral - other cultures disagree).
Correct. It depends. For example, it might depend on what the collaboration is likely to result in. Perhaps it would be more likely to be moral if there were some boundaries in place, like "no mass domestic surveillance" or "no fully autonomous weapons".
Because the US government currently believes it is legal to blow up civilian drug traffickers and wage war without congressional approval. So at some point, yes, collaboration is immoral.
The US military has deployed fully autonomous weapons since at least 1979, and potential adversaries are now doing the same. For better or worse that ship has sailed.
Look, a dumb bomb is a fully autonomous weapon once it's launched. Let's be real: an LLM making decisions on who to target and when and where to launch munitions represents a meaningful change in our concept of autonomous weapons.
So we are wrong to express any opposition or desire to maybe raise the bar here? Aren’t we supposed to be “the good guys”? Or should we just accept a role as the menace of the world, wildly throwing its weight around whenever we have an unscrupulous president?
Those questions are moot. There are situations where it's simply impossible to have a human in the loop because reaction time is too slow or the environment is too dangerous or communication links are unreliable. Russia is deploying fully autonomous weapons to attack Ukraine today and they will be selling those weapons (or licensing the technology) to their allies. There is no option to stop. And let's please not have any nonsense suggestions that we can somehow convince Russia / China / Iran / North Korea to sign a binding, enforceable treaty banning such weapons: that's never going to happen.
So we shouldn’t try is what I’m hearing. Is that accurate?
Who said otherwise? Clearly it’s about facilitating specific acts by the government. Why are y’all acting like it was so wildly broad? No one said “working with the government is inherently immoral.”
Literally the parent comment:
>Any AI researcher who continues to work here is morally compromised.
At OpenAI, doing this kind of work with the federal government. That is clearly what they are saying. You stripped all context from the discussion.
You’re looking for the least defensible, worst interpretation of their comment.
Hegseth bombed a girls school in Iran last month. I think it's fair to doubt the moral worth of anyone assisting this admin.
I don't think that was intentional, but invading countries while trying to distract them with negotiations, randomly assassinating leaders and hoping everything just turns out well, threatening to "destroy civilizations", targeting bridges and more, all while aiding and abetting Israel which is intentionally destroying pharmaceutical, educational, and other such civilian institutions is all 100% intentional.
In some ways worse than bombing the school was the effort to implicitly deny it. The school was near a military facility, and itself was a military facility in the past. US intelligence screwed up. They should have simply acknowledged what happened and why. Their response just reeked of cowardice and malice at the highest level.
It's ok, they weren't Christian girls, so of course they're in hell now. ...where Pete will go!
Hey, I think I'm starting to get how this organized religion thing works. Maybe I'll join a few to make sure I go to allllll the good places
I'm dripping with sarcasm here, but as far as I know that's actually what macho Pete believes. He believes he blew those girls to hell with god's own fury. Fuck you, Pete, fuck you.
You should probably give this a second look:
https://news.ycombinator.com/newsguidelines.html
If speaking vigorously in defense of morality is wrong, I guess that's something I'll just have to live with.
Is it your position that any collateral damage in war is unacceptable and makes the one who caused that harm forever evil? Or that the whole world should adopt pacifism so that war is no longer practiced at all.
If the former, this places a huge incentive on dictatorships like Iran to use the very easy strategy of co-locating all military targets with schools, hospitals, etc. so that any attack on them by anyone is automatically immoral.
I don’t automatically think everything the US has done (either in Iran this year or in history) is good, best, righteous btw. But positions like yours seem to take for granted that it’s never okay to wage any kind of war.
Set aside for a moment whether it’s safe to classify the Islamic Republic as a truly evil regime.
I don’t want to tempt Godwin’s Law, but after seeing how the Left in the US and Europe rallied to the cause of supporting Hamas, I don’t think modern-day “progressives” have the courage to do anything to counter truly bad actors besides asking them nicely to stop. I’d love to see someone from that political alignment explain where their red lines are, past which they’d morally support a military attack - and yes, even one where we can be nearly certain innocents will also be hurt or killed.
Are you intentionally lumping in all civic service in one moral bucket? Is working at the post office morally equivalent to developing panopticon technology to suppress protest and track citizens?
Idk about morality, but it’s certainly a way to stop dystopian mass surveillance nightmares if everyone capable of building one refuses.
So if you live in the US and don’t want one government agency in the US to have this power (that is ambiguous under current law), one way you can try to avoid it is by refusing to sell it to them and urging others to do the same.
It’s a long shot sure, but it certainly seems more effective than hoping the legislature wakes up and reins in the executive these days.
Given most government policies and direct engagement in all kinds of monstrosities over the last millennium, there is really no reason to limit the case to the USA, indeed.
Thankfully Russia, China, etc have the same qualms as we do in the United States and will refuse to send their brightest engineers to work on weapons so they don't become "morally compromised"!!!
I don't think the long-term game theory of race to the bottom works out quite how you think.
"Our enemies would have no qualms building a weapon that will end life on earth! We better build it first because we're the good guys!"
This was the same logic that was used when building nuclear weapons, and many of the scientists involved in that tried to find a different path (most notably Niels Bohr). I think we would be in a much better world if they had been successful. It's good that we're trying again w/ LLMs.
I don't know if you're being sarcastic (sounds like you are!) but indeed a lot of engineers left Russia after the war in Ukraine started, as they didn't want to be drafted and didn't want to contribute to the war effort in some way, even indirectly. Of course, many stayed or even willingly help. See how many engineers from Iran work abroad too, for moral and other reasons.
The point is - this happens everywhere, it's not just some weird western thing.
Why is it morally compromising to work with the military of the country you live in?
I'm not anti-military as a rule but... c'mon. Opinions on the US military vary.
In extremis, were the people working for Pol Pot just good patriots with no moral culpability?
We could surely at least agree that there are cases where working for the military of your home country doesn't fully excuse you from your actions.
In fact, I think international tribunals have existed which operated on just those principles.
We can all agree that working for the Nazi government’s military would be morally compromising, right?
You propose that other governments militaries would not be so compromising. Seems reasonable.
But the question then becomes, what is the operative distinction between the two?
I agree that it is immoral to obey some laws. Which ones are you saying are immoral here?
That's what the 7 figure salaries are for.
It’s funny to me how many progressive people I know and am friends with, from marginalized demographics (trans, gay, Latino, Black), who work at these AI companies.
Still have faded Bernie stickers on their cars, No Kings organizers, “fuck SF I’m in the east bay for life fuck tech” - and you all make 7 figures Monday - Friday by supporting the death of society and democracy.
I don’t dare say anything though because “money is money”, the bay is expensive..but I do sure as shit judge every single person I know who joined OAI, Anthropic, Google, and Meta.
Preach. The hypocrisy is startling. I think people started at these companies maybe years ago with "good intentions" and are willing to turn a blind eye. But now, given just how glaring clear it is, I don't think it is really excusable anymore. To be clear, people can work wherever they want including these companies but what kills me is the hypocrisy. They are pathological liars to themselves if they somehow think they aren't complicit.
Agreed. Just shows that big money doesn't dilute small character.
I would suggest looking inwards if this is how you really feel.
I mean no harm in saying what I said, I love my friends. I just can’t stomach the hypocrisy, it’s what the companies are preying and feeding off of.
My friends are incredibly bright and good at what they do, it’s why they all have the roles they have. It makes me sad (and frustrated) knowing they are lured in by enough money dangling in front of them that makes them swallow their souls and identity, while fuelling the fire in the same breath.
I have a deep amount of respect and gratitude for my friends (and anyone else) who chooses to work at non-profits, and more ethical - mission based companies for less. I hate how much these AI companies and roles are offering people, it’s completely forced lots of gifted people into a war machine.
Do you suspect there is any chance they are fully independent adult human beings with full agency, who have looked at the pros and cons, and chosen to make the choices they did with clear eyes? Do you think there's any context that might square their choices with their own internal principles that don't make them hypocrites? I mean these as real questions. For "friends you love" you really seem to take a dim view of their intelligence.
One of humanity's greatest weaknesses is cognitive dissonance. People can convince themselves of just about anything. And in some ways intelligence is a burden here. A fool will just do something with a reason of 'f you, that's why.' It's only the clever man that will even bother to rationalize the villain into the hero, and we're great at it. An interesting thought experiment is to ask people if they'd be willing to push a button that would randomly kill a person somewhere in the world for a million dollars. They'd have no direct accountability themselves and their action would be unknown to anybody else.
People will rationalize themselves into declaring this moral even though it is obviously one of the most overtly immoral actions possible. One friend I have, a rather intelligent guy otherwise, was even trying to create a utilitarian argument that he'd donate some percent of his 'earnings' to life-saving charities, meaning he'd be saving more life on net. The fact that if everybody thought and behaved the same way, the entirety of humanity would cease to exist, was a consideration he didn't have a response for.
I’ll be honest and say it’s made me question and reposition some of my friendships with a number of these friends. Some joined well before we knew the fallout of how AI has affected and impacted society negatively, some have joined in recent years because they were offered 2x their currently already high comp package, and others will take any job they can get (who, admittedly, I judge far less as I know they are just needing to survive in a HCOL city).
My dim view is more on the AI companies being absurdly overvalued, with too much money to know what to do with, which feeds downwards into compensation packages, which lure in “innocent” individuals who can’t say no. It’s not been a healthy market to be vulnerable in; most companies outside AI just aren't getting the same funding or can't compete at all - and it’s a shit storm.
I'm curious what is that you're suggesting, exactly.
I made another comment above. People contain multitudes. Different contexts, different choices, not everyone is in a box defined by the viewer's world view. You can't really know what's going on with someone else, in their heads, in their context, so give them some grace. Instead, this person's "friends" are "hypocrites" who were "lured" into their choices. It's very condescending. I am suggesting the poster re-examine their own views on other people in light of this.
You're missing the point. They're just observing the contrast between what their friends publicly say (fuck tech, no kings) and what they spend their workweek in service of.
It's pretty simple: if these friends would take a non-society-destroying job at equal pay (who wouldn't?) then their values aren't driving the decision, money is. Fine, that's a choice adults get to make. But then own it, and justify it without getting defensive.
> Any AI researcher who continues to work here is morally compromised.
Arguably it's exactly the opposite. In the same way we ask billionaires to pay their taxes because the regulatory regime is what allowed them the structure to make their billions in the first place, the national security of the country the AI researchers are in is what allows them to make a vast salary to work on interesting, leading edge capabilities like AI. They should feel obligated to help the military.
Is it any less moral than surveilling your neighbors and/or turning your neighbors against each other with social media?
This all works if you assume that any action the government takes must be “lawful”. The assumption here is that the Pentagon is obeying the law and any unlawful use would go through normal reporting / violation channels - same as any illegal order or violation or whistleblower report.
The Pentagon does not want Google or anyone else deciding what they can and cannot use their AI for. They’re saying we won’t break the law, and that should be enough for you - pinky swear!
And that seems to be enough for Google. Though I might request some agentic auditing capability to verify, rather than take them at their word.
Next step: is Google FEDRAMP’d yet for this and for classified enclaves? Or do they also go through Palantir’s AI vehicle?
Who defines "lawful" if Google and the Pentagon disagree?
> The classified deal apparently doesn’t allow Google to veto how the government will use its AI models.
Seems concerning?
That's presumably the trick, and it's not a subtle one; it's why the article puts it in quotes in the headline. Google gets to claim that it stood up for principles because it boldly insisted that the government obey the law, and the government will claim that whatever it decides to do is lawful. It's the same as what OpenAI did except not handled buffoonishly.
Lawful is presumably defined in the usual, common sense, i.e. we can do whatever the f we want until a court physically forces us not to.
And since the court has no way to physically force anything - that's the executive branch's function, (it's right there in the name) - lawful has no meaning whatsoever if it's the executive branch that wants to break the law.
And the Pentagon has historically gotten away with damn near everything even in the judicial branch by appealing to national security.
"who watches watchmen"
question as old as time itself
Especially concerning given how creative the executive branch can be when it comes to what laws mean. With little oversight, it seems guaranteed that it will be used for unlawful activities (despite whatever tortured argument some lawyer will have put into a memo somewhere).
No it doesn't at all. Private corporations shouldn't be telling the government what it can and can't do. That's the job of the people. You want a private corporation overriding your vote?
> Private corporations shouldn't be telling the government what it can and can't do.
So Google can't tell the government it needs a warrant to perform a search? Google can't sue over something the government did?
It's Google's product they want to buy.
I'm talking about lawful as it's written in the terms.
But Google isn't, apparently, permitted to object "that's not lawful".
And again, it's Google's product. Why can't they set conditions? If I pay Google to host my email, I'm still subject to their policies.
Just follow the orders, man!
don't worry about the people getting sent to camps. it's lawful so it's okay.
now follow orders.
There's big air quotes energy in their statement
Google should never be determining what is lawful or not.
One thing is sure, they don't have international law in mind...
This has to be one of the strangest "debates" in history.
Congress and the courts obviously.
If you think there's a hole in the law tell your congressman, don't, for some reason, try and put Google or any Ai company above the government.
> Congress and the courts obviously.
The first is fully neutered. The second is far too slow.
"Nothing unlawful" needing to be in the contract is inherently concerning, as it's typically the default, assumed state of such a thing.
"follow the law" in contracts IMO is there to be able to claim a "breach of contract" by one party.
Please! That ship sailed a long time ago. Sure, tell your congressman, who is most likely bribed (lobbying is bribing, let's use the real words) by the same companies to accept the deal. The courts can try, but who is going to enforce it when the people above say that it's fine?
It kind of reminds me of a mix of Skynet in Terminator and Minority Report. But nowhere near as interesting. More annoying than anything else.
I am kind of mad at James Cameron here. Skynet was evil but interesting. Real life controlled by Google is evil but not interesting - it is flat out annoying.
The classified aspect is probably the most concerning. How can I write my representative (and expect a form letter response six weeks later) if I don't know what I'm objecting to or even if I should be objecting?
Why would you write a letter if you don't know what you're objecting to or even if you should be objecting?
Can't I object to not knowing?
No, that's what classified means.
Surely I can complain about overclassification of things that should not be classified?
And the Pentagon determines the law?
How well does this hold up in terms of legal scrutiny when previous actions indicate that the Pentagon would retaliate against Google if they didn't accept this "lawful use only" farce?
Could Google back out of this agreement later by arguing that they were coerced?
Not trying to suggest that Google would be opposed to doing evil, but curious about how solid this agreement would be in practice.
there is 0 reason that the definitions of 'lawful' for the purposes of these agreements should be classified.
There's a reason, you just won't like it.
One observation.
Having your work used by the government in ways you disagree with feels similar to having your taxes used in ways you disagree with.
When you pay taxes, you have no say in the bombs acquired with that money or where they are dropped. The latter, though, doesn't seem to provoke the same push back.
> When you pay taxes you have no say in the bombs acquired with that and where they are dropped.
Vote in elections, local and general.
you answered your own implicit question. You have a choice who you sell your work to, you don't have a choice what your taxes do. Seems pretty straight forward why the former elicits more push back. The government forces you to pay taxes it doesn't force you to build them tools of surveillance or weapons.
If the feds are a sufficiently large market, your viability as a business might depend on keeping them happy.
btw i am not making a judgement call on the ai usage issue itself, just saying that this and taxes are more equivalent than it might seem
The fundamental problem with these "agreements" is that they are utterly nonsensical as written. Google has one idea of "lawful" and what it means; the Pentagon most definitely has a vastly different interpretation meaning "whatever we want". These companies make these agreements because they do not understand (either deliberately or just by the factor of them not understanding the intelligence sector) that when the intelligence community says "we will only use this for lawful purposes," what they are really telling you is something very, very different. With entities like the Pentagon your agreements should probably both define what "lawful" really means and should provide as few ambiguities as you can manage. Ideally you'd provide zero ambiguities but I'm not sure that's achievable in practice.
> We remain committed to the private and public sector consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight.
And so begins the lying to our faces. The public and private consensus (from your own employees!) is that it should not be used for those things at all, regardless of “human oversight.”
I hate this part: `domestic mass surveillance`
So the rest of the world is fine to spy on; it's the domestic part they don't agree with. So go on, destroy lives all around the world, helping the powers that be build the fascist state. It's fine to use Gemini to tell them what building to blow up; it's fine for Gemini to wrongly identify people and cause hundreds or thousands of deaths by telling the military who to attack.
Refusing to participate WORKS.
I've had the unfortunate experience of working at a startup that started courting some autonomous weapons companies and HOLY SHIT were they the bottom of the barrel. Levels of incompetence you wouldn't believe, just good ol' boys who wanted to play with energetics. Then the company I was working for also hemorrhaged all their top engineers because they found the work unsettling.
The takeaway is that your refusal to assist these shitheads does have an impact, they have to pay more for talent and they have a much harder time courting good talent.
Is Iran already a vibe war, or are those just coming?
Reminder that this administration has some absolute howler theories about what constitutes lawful behavior[1].
[1] https://www.nytimes.com/2025/09/20/us/politics/tom-homan-fbi...
Snakes. All of them
It's pretty funny how these guys are all becoming some kind of internet version of, like, Halliburton. It seems pretty desperate. B2C and B2B applications didn't pan out I guess?
The thing is we're in a new Cold War, and most of our adversaries have gotten the memo and most of us ... haven't. Yes, becoming a new Halliburton is a rational move if you see the board right now. I don't like it even one tiny bit.
It's one of two identified uses for AI that is profitable today: writing code and blowing up schools. They are desperate to show the market that the technology is anything more than a money pit.
Huh. I never realized the T-800 runs on Android. Makes sense, I guess.
Will lawful use be determined in secret courts a la NSA and FISA?
Doubtful it will even get that far, the DoJ will simply draft an appropriate fig leaf memo with a predetermined conclusion and the government will simply plow on ahead.
https://en.wikipedia.org/wiki/Torture_Memos
They simply say they have that memo. Who knows whether they even drafted it for real? And if anyone starts looking, Gemini can quickly draft one itself. Nice!
Don't be silly.
"When the president does it, that means that it is not illegal." - Richard Nixon
Also the Supreme Court, half of Congress, and apparently something like 40% of the American populace.
They signed a contract for any lawful use?? Can you sign a contract with the US government for some unlawful use??
And that is newsworthy because unlawful use is normal?
Unsurprising from Google, but still bad. If Google has no right to object to a particular use, this is equivalent in practice to "any use, lawful or not".
Lawful is meaningless in the context of the Trump administration. Should Google waver (which they won't), they'll be declared a supply chain risk or otherwise bullied into submission.
Google holds immense power in their position. Trump can make their life very difficult but Google can make life for Trump very difficult as well. They have no need to kneel, they are choosing to.
what immense power?
You don't think Google having control over the most used email, most used browser, most used search engine, most used video website, and most used phone OS gives them immense power?
Do no evil. Well, don't do anything illegal, at least. I mean, let's not do anything other than whatever we wish at the moment.
As a big critic of the OpenAI deal, this kinda sounds like a nothingburger to me. Of course Google doesn't get a veto on operational decisions, no customer would ever agree to such a thing. The problem with OpenAI was that they took advantage of Anthropic standing their ground to wedge their way in, which was both bad on its own terms and raises serious concerns about whether they're being honest on the real terms of the deal.
"don't be evil"
What a handy word "lawful".
One source: https://www.reuters.com/technology/google-signs-classified-a... (https://news.ycombinator.com/item?id=47931336)
https://archive.ph/FyzNS
The beginning of Skynet 6.0.
There's a lot of money in genocide.
See also: https://en.wikipedia.org/wiki/IBM_and_the_Holocaust
Capital and Big Tech have always been opportunistic enablers, not principled actors. Corporate Values have always been nothing but internal propaganda. "Don't be evil", what a farce.
I don’t get why this is always such a controversial topic. Should we be decrying Microsoft for selling the DoD/DoW Microsoft Office? They could use Excel or PowerPoint to plan a strike package.