Can someone help me understand why OpenAI and Anthropic talk as if the future of humanity is controlled by them? We have very strong open-weight Chinese models possibly only 6 months behind them; the genie is out of the bottle, so is a 6-month lead really that important? And they don't have good reasons to expect that 6-month gap to hold.
Am I missing something, or is this just their usual marketing? I'm not arguing about the importance of AI, but trying to understand why OpenAI and Anthropic specifically are so important.
These kinds of people have highly paid employees surrounding them on all sides, propping them up and very likely making it very easy for them to actually believe it.
It feels like they actually believe it, rather than it just being "marketing", and I don't know which is worse.
It's never OK to physically attack someone like this. Full stop.
Separately, Sam's belief that "AI has to be democratized; power cannot be too concentrated" rings incredibly hollow. OpenAI has abandoned its open source roots. It is concentrating wealth - and thus power - into fewer hands, not more.
When the job losses hit in earnest and the vague handwaving about making it right all inevitably turns out to be hollow, those on top will be exceedingly comfortable using violence to keep the underclass in line. It has happened before and it will happen again.
Sam eagerly pursued DoD contracts to weaponize AI. And then lobbied for legislation to ensure OpenAI cannot be held accountable if people are killed due to their systems.
I find it interesting that Altman's fans seem to keep skipping past this fact. I'd love to hear their defense as to why one person potentially being responsible for hundreds or thousands of deaths is acceptable, but attacking that one person isn't. If violence is never the answer, they should be condemning Altman with even more vigor.
> Separately, Sam's belief that "AI has to be democratized; power cannot be too concentrated" rings incredibly hollow. OpenAI has abandoned its open source roots. It is concentrating wealth - and thus power - into fewer hands, not more.
We should call it what it really is: oligopolization of intellectual work. The capital barrier to entering this market is too high, and there can be no credible open source option to prevent a handful of companies from controlling a monster share of intellectual work in the short and medium term. Yet our profession just keeps rushing headfirst through this one-way door.
>> It will not all go well. The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time, and perhaps ever. We have to get safety right, which is not just about aligning a model
The question is what they are doing about "getting safety right" and whether they are doing enough. To me it seems like all the focus is on hypergrowth and maximum adoption, with safety as an afterthought. I understand it's a competitive market and everyone is doing it, but these are just hollow words. Industries that actually care about safety tend to slow down.
The thing about the rich is that they have access to sufficient levels of abstraction that they can commit terrible, disproportionate violence without it looking that way. And then fools who crave the simplistic safe comfort of moral absolutes come to their aid.
Throwing a petrol bomb at a building with children inside is about as evil as murdering 150 students at an all-girls school. I'm obviously not defending that.
‘Working towards prosperity for everyone’ was extremely hollow as well. If he believed this, he would be running his company as a cooperative and not as a for-profit company.
Is it okay to profit off of a machine that kills innocent people? Would it be immoral to attack the builder of that machine, if it stopped the operation of the machine?
I'm on the skeptic side of "AI" and find this entire industry obnoxious, but your argument doesn't hold any water.
Technology that can be used to kill innocent people is all around us. Would it be moral to attack knife manufacturers? Attacking one won't make the technology disappear. It has been invented, so we have to live with it.
Also, it's a stretch to say that "AI" "kills innocent people". In the hands of malicious people it can certainly do harm, but even in extreme cases, "AI" can currently only be used very indirectly to actually kill someone.
Technology itself is inert. What humans do with technology should be regulated.
IMO the fabricated concern around this tech is just part of the hype cycle. There's nothing inherently dangerous about a probabilistic pattern generator. We haven't actually invented artificial intelligence, despite how it's marketed. What we do need to focus on is educating people to better understand this tech and use it safely, on restricting access to it so that we can mitigate abuse and avoid flooding our communication channels with garbage, and on better detection and mitigation technology to flag and filter it when it is abused. Everything else is marketing hype and isn't worth paying attention to.
He's saying that just so he can use it if another company gets bigger than OpenAI ("you can't have all the power"). If OpenAI were the top dog by a large margin, you wouldn't hear him say a peep about this (as demonstrated by his actions with the charter).
I've never understood this specific taboo against physical violence. Firing a thousand people or stealing their wages, ruining their life and their families', passing unjust laws that threaten the well-being and happiness of a million, that's ok! A punch in the nose, that's not ok!
There are far worse things than physical violence against one person, and with the end of the rule of law there isn't any other recourse. The one value that is common across all cultures is that the wicked must be punished for their wickedness; expect to see violence against oligarchs and CEOs spread like fire.
The idea that firing you or stealing your wages is the worst a CEO can do to you is itself a product of the taboo against physical violence. There are a number of famous incidents from the late 1800s and early 1900s, when the taboo was weaker, of CEOs sending private armies to shoot inconvenient labor movements. It's not an equilibrium you should defect from lightly.
A CEO can choose physical, mental, legal or financial violence against the common man. The common man only has the choice of physical violence. Without it he is impotent.
Agreed. Sam's full of crap and the way we tackle that is with conversations, not violence. He deserves to grow old like anyone else, violence isn't an answer.
I don't condone violence, but the contract he's signed with the US military is a credible threat to everyone in the US. OpenAI will now certainly be called on to assist in domestic mass surveillance, under threat of the kind of severe penalties Anthropic has faced. So why did he agree to that contract, unless he's willing to provide that assistance? So it's gone well beyond conversation, though not to a point where violence is appropriate. Boycotts and hostility are definitely appropriate at this point IMO, though.
> the way we tackle that is with conversations, not violence
I think the breakdown here is that conversation seems to have no power. To only be a bit hyperbolic, the only language with power is money -- or violence. To the extent that ordinary people cannot make change with "conversation" (which I interpret here to mean dialog within society, including with lawmakers), they feel compelled to use violence instead.
A non-rhetorical question: What recourse do non-billionaires have when conversation has less and less power, while money has more and more, and those with money are making much more money?
There's still a meaningful difference between violence wielded by a single individual who feels angry or unheard, and violence wielded by a large representative group who has invested genuine effort in conversation before collectively deciding violence is required.
Yes, fully agree. Nonetheless, I suspect violence can be used more effectively and more minimally if it's considered and performed by a group rather than haphazardly by individuals. I recognise that's a very simplistic view.
It's pretty amazing to observe people experience the past ten years in American history and continue to think that we can out-talk the bad people in the world.
Michelle Obama's, "When they go low, we go high", is some of the stupidest political advice and a generation has lost so much because of it. (The generation before got West Winged into believing the same thing.)
When you look to the right, you have a stolen election in 2000, a stolen supreme court seat, an attempted coup, and relentless winning despite it.
Violence is language that needs no translation. Everyone across the world, every culture, every country, every social group - from elites to homeless can converse in it using the same vocabulary.
It is useful to have some degree of mastery in this discipline. Sometimes it is the only language that can deliver the important message to an unwilling listener.
I categorically reject that assertion. Two simple examples: 1) when you see someone assaulting someone else, it's absolutely ok to attack them, and 2) the American revolution!
It's like that old joke:
A man offers a young woman $1,000,000 to sleep with him for one night.
“For a million dollars? Sure, I’ll sleep with you.”
He smiles at her, “How about $50, then?”
“How dare you! I’m not a whore!”
“Look, lady, we’ve already agreed what you are, now we’re just negotiating the price.”
Similarly in this case, you can't make up absolutes and assert they're true while ignoring that the real world is more complicated. And once you do realize the world is complicated, you realize there aren't absolutes: everyone is a prostitute, terrorist, or whatever other bad label you want to throw at them ... it's just a matter of degree.
So no, it's not always wrong to physically attack someone like this. You can debate specifically whether Altman has committed enough violence himself to justify violence against him: that's something two people can reasonably disagree on. But you can't just say "violence bad" like it's some great pearl of wisdom, while ignoring that violence has in fact been good many times throughout history.
Are you familiar with the details of the French Revolution? Some of the eventual outcomes were indeed positive, but a lot of what actually went on was pretty horrific.
It was horrific. Revolutions tend to be. Yet our institutions continue consolidating money and power in fewer and fewer hands. If that doesn't stop, we'll be headed there again. It will probably be even worse this time.
A lot of what happened during the French revolution was horrific... This is such a bewildering sentence in this context. Yes, killing the rulers is horrific. Revolutions are horrific. Wars are horrific. It seems irrelevant to what the parent is (sarcastically) saying.
At the same time considering the people participating, there wasn't a way out of the problems that didn't involve violence. Different outcomes would require different choices that require different people.
Violence like this is not the answer. However, this post feels like a thinly veiled attempt at using this alarming attack to reclaim public goodwill after the New Yorker article the other day.
> Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives.
Yeah, the words and narratives that Sam Altman promoted caused so much fear and uncertainty and anger that someone thought their only option was to attempt a horrific crime.
Altman wants to seem relatable and personable even though he’s one of the wealthiest and most powerful people in the world. You don’t get that option when you control a technology that has the potential to alter so many lives, especially when you just sold said technology to the US military. All the talk around democratizing AI rings hollow.
The implication of Altman’s blog seems to be “stop writing critical articles about me because it will cause more violence.” However, the rich and powerful cannot use this excuse to escape objective scrutiny.
> There was an incendiary article about me a few days ago. Someone said to me yesterday they thought it was coming at a time of great anxiety about AI and that it made things more dangerous for me.
For context his blog post seems to be a response to this deep-dive New Yorker article:
"Sam Altman May Control Our Future—Can He Be Trusted?"
Ronan Farrow, one of the journalists who worked on this article, talked to Katie Couric on her YouTube channel about this. They worked on this across ~18 months. I thought this interview was illuminating.
Ha, I was giving an AI bootcamp to a room full of people and someone asked me my opinion of Altman. I hesitated for a second and replied that I would not trust Altman further than I could throw a rock about anything.
If Graham says this guy will stop at nothing to get whatever he wants, which I absolutely believe, then why would you trust anything that comes out of such a person's mouth?
Who tf is dumb enough to pay for an AI bootcamp, genuinely curious. If you're selling AI bootcamps, or whoever is, they are just as much a scam artist as Sam.
If I was non-tech and owned a business, and someone (reputable) offers to teach me everything I need to get up to date with the most revolutionary technology of the decade (perhaps century?) for like ... 500 dollars? Why not?
It's neural network autocomplete that helps you write text a little faster; chill with the "most revolutionary technology of the decade/century" talk. You're offending a lot of experts in far more important areas of research.
10 hours ago a post made the frontpage here [0] about how OpenAI is backing a law that "would limit liability for AI-enabled mass deaths or financial disasters". Now he's here saying he believes that "working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for [him]".
I know he doesn't believe a word of what he wrote in that post except, perhaps, that he cannot sleep and is pissed. I know I should be used to people openly lying with no consequence, but it still amazes me a bit.
I think it's good for CEOs of powerful companies to make statements about how they don't want too much personal power and it's important to ensure everyone does well, even and perhaps especially if there's reason to suspect they don't believe it. Saying it doesn't solve the problem, but it helps create a permission structure for the rest of us to get it to actually happen.
The reason he's saying that is because he doesn't want you to create that structure. He wants you to not create the laws or checks & balances on him because you "trust that he doesn't really want the power".
OpenAI has also repeatedly and quietly lobbied against them.
You linked a vague PDF whose promised actions are:
> To help sustain momentum, OpenAI is: (1) welcoming and organizing feedback through newindustrialpolicy@openai.com; (2) establishing a pilot program of fellowships and focused research grants of up to $100,000 and up to $1 million in API credits for work that builds on these and related policy ideas; and (3) convening discussions at our new OpenAI Workshop opening in May in Washington, DC.
Welcoming and organizing feedback!
A pilot!
Convening discussions!
This "commitment" pales in comparison to the money they've spent lobbying against specific regulation that cedes power.
I don't think that's unpopular, it is pretty well written. But the "I believe" section is extraordinarily hard to believe given Altman's history.
> Working towards prosperity for everyone, empowering all people
> We have to get safety right
> AI has to be democratized; power cannot be too concentrated
None of these statements, IMO, reflect his actions over the past 5 years.
> we urgently need a society-wide response to be resilient to new threats. This includes things like new policy to help navigate through a difficult economic transition in order to get to a much better future
I agree with this, but there is a near 0% chance of that happening anytime soon in the US. I think he probably is aware of this.
Just my opinion, but it comes off as very insincere.
To be clear, what happened is still awful and there's absolutely no justification for it.
Historically, was it always so common for powerful or famous people to seem to purposefully court hatred, as he and others have for the past decade? To speak in a petty, self-important, "trolling" manner to a very broad audience? To embrace traits that are intrinsically negative? Or are we living in a rare time?
I think it's exacerbated by the internet. It didn't invent the idea of a proud/unapologetic asshole, but it amplified reach and emboldened them.
There is always an audience online for whatever you have to say if you're famous, and attention is always good. There will be tens of thousands of people vocally cheering on your least popular and most controversial takes, however fucked they are. And then people lose themselves in that bubble. Seems to be what happened to Elon (from afar).
The world is full of ignorance and crazy, and they're all online. This is downstream of that fact.
New England colonists had a habit of ransacking and burning down the houses of government officials throughout the 1760s and during the Revolutionary War. Got bad enough that most did not sleep in their government housing.
I find myself resenting him and his ilk on a daily basis for what they did to the computing space which was once sacred to me with their profiteering. But nothing justifies violence, not even close. Simple as that.
In all seriousness, what is the game plan for society moving forward as AI takes more jobs? The government doesn't seem to care. The AI labs don't seem to care.
What happens when more and more people can't afford housing, kids, food, health insurance, etc.? Nothing more dangerous than a man who has no reason to live...
I don't advocate for violence, but I do foresee more headlines like this as things get worse.
I think this is complete madness. I'm not in a job right now, so I have the luxury of thinking critically about what is going on, and... I just don't see it.
What I see is that LLMs will complement labor, and the excess returns of model producers will be very minimal (if any at all) due to the intense competition, which keeps switching costs to a minimum (close to zero).
There is no specialization among models at this moment in time, so that is very likely how it will play out.
OAI and Anthropic have to generate enough after-tax cash flow from operations to cover their reinvestment needs to keep going. If they can't cover reinvestment, they will obviously lose, as their offering will not stay competitive.
There's no certainty they generate that amount of cash profit either. They still have a high chance of going bust, though of course that chance gets lower if they can keep ramping up revenues.
The game plan is the same as it was for globalization and previous rounds of automation: gaslight workers into thinking that they are the problem. Push all the taxes into the labor economy and all the money into the capital economy and use the inevitable budget shortfall to justify skimping on social services. That'll work until it doesn't, at which point the Ellison strategy will be employed: pay 10% of the poors to keep the other 90% in line.
> The only solution I can come up with is to orient towards sharing the technology with people broadly, and for no one to have the ring. The two obvious ways to do this are individual empowerment and *making sure democratic system stays in control.*
OK! So he's going to renege on the contract he's signed with Hegseth, which effectively commits OpenAI to serving as the IT Department for Trump's secret service?
It’s funny how this happens the very same moment we get to read about Claude’s Mythos and a New-Yorker article. I really doubt the attacker is up to date with either…
The only surprising thing here is how naive you all are. He is a marketing and sales guy first and foremost.
Every quarter there are more layoffs and we're told how AI will replace us and that we can do nothing to stop it. We cannot afford the simple things our parents were able to and are supposed to be grateful that we are living in a time with such "amazing" technological progress.
Sam is one of the most media-visible people that represents AI replacement of average people's livelihood (not agreeing with this stance but yes, outside of the Hacker News SF-tech matcha latte bubble, this is a commonly held thought) which makes this unsurprising.
Genuinely surprised at the extreme comments against sama here. I don’t think he’s a good steward of the technology, but I don’t think violence is funny or justified. I also don’t think it’s justified for him to use it to say that a negative article about him is correlated to this event. Seems to imply that an “incendiary article” led to this and that criticism is tantamount to calls to violence. He drives the conversation with apocalyptic terms, and both investors and crazy people buy into it.
1) Working towards prosperity, etc. - the prosperity is all going toward the top 2%. The people who need it most are not seeing it and probably never will because the only ones who guarantee a benefit are the ones with the money to direct that benefit.
2) AI will be the most powerful tool, etc. - see point 1.
3) It will not all go well, etc. - probably should have thought about that before you released it on the world.
4) AI has to be democratized, etc. - true, won't happen. See point 1.
5) Adaptability is critical, etc. - Yes. Fully agree.
The problem, Mr. Altman, is that you believe the rest of the world thinks like you do, which is clearly not the case at all. While we have the ability to solve so many of the world's problems, it is absolutely clear that this is not what's happening. The rich in resources are getting richer and they're not doing anything to help those poor in resources become better off. Instead, they are claiming those resources for themselves against the day that everyone else runs out.
Same as it ever was, Mr. Altman. Same as it ever was.
"Working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for me."
"Prosperity for everyone" ... you lying weasel! You literally took a contract that Anthropic turned down because they wouldn't mass surveil Americans or mass murder non-Americans ... and you would!
To be clear, I don’t want anyone’s house to get firebombed by any means. But the “I’m just a humble guy making mistakes and trying the best I can” attitude of this article strikes me as extremely inauthentic based on everything I know about the guy.
The post itself is authentic in that it's a set narrative for this moment. When you see the world as Sam does, this event is a specific opportunity to humanize him. Through that lens, the humility is both performative (it is!) and necessary. To be truthful would be inauthentic.
The sympathy is meant to give time and slack to accumulate power. One of the largest impediments to OpenAI right now is that people don't trust them, more and more people don't trust Sam, and their commitments are starting to not pan out (e.g. cancelling of Stargate UK, dropped product lines, etc.)
People should not read a post like this as, "how does this make me feel? how might I respond in his situation?", but rather, as he does, "how can I use this?"
"Our product can destroy humanity, and it's not some crank telling you this, it's the company and CEO making it themselves, but we'll continue to make it anyway, so suck it up" but also "I'm just a humble guy, why can't we all live in peace?"
Everything about Altman makes me think "scammer". If he has one super-power, it is to convince people of his own importance.
OpenAi doesn't have much time left before they are shuffled off into bankruptcy, and they certainly aren't ruling the fate of man or anything like that. It's like the CEO of Enron claiming to hold the key to the future of mankind's energy resources, and people writing ponderous articles about it and debating whether Ken Lay will be a benevolent dictator or not.
I can't help but be reminded of last year, when our landlords (chill boomers) sold the house my girlfriend and I were renting the basement of (to presumably rich asshole millennials). The demographic doesn't really matter, but the old landlords kept us in the loop throughout the process; we knew as much as we could going into the new year. Apparently the new buyers wanted to keep us as tenants. Day 2 of them taking possession, the man came down with his innocent toddler and introduced themselves. He seemed friendly enough, and on Day 3 he came down in the middle of the day and handed me eviction notice papers.
I didn't firebomb his house, but I can't say I definitely didn't want to shit on his doorstep.
I also believe that there will be more casualties in the AI Wars. We should be prepared for that. Capitalism, AI, and human life are mutually incompatible and I'm still not sure which two will survive the conflict.
Sam Altman has written, and probably still believes,
"Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity."[0]
This means he acknowledges that his actions have the potential to kill every human family on Earth. It should be of no surprise that people took his beliefs seriously.
Yes, very ironic. OpenAI was declared commercial through words and narratives; AI itself is hyped up with words and narratives. His Trump sycophancy is words and narratives. And that is just the start.
It isn't just irony; it's a lack of self-awareness! (Sorry for increasing the pain that Altman et al. inflict on us.)
Responses in this thread are embarrassing. Cat's out of the bag and needs a steward. People acting like Altman can just turn the machines off and this all stops are deluded.
> The world deserves huge amounts of AI and we must figure out how to make it happen.
> It will not all go well. The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time, and perhaps ever.
Boy, he really just encouraged the world to keep turning against him. This is so transparently disingenuous. I guess he has no choice if he doesn't want to give up his wealth and power, but putting statements like these out are only going to further fuel anti-AI sentiment.
I do think it's funny he opened this with an allegedly real picture of a baby, though. It may very well be real, but why would anyone take his word for that, especially those who already don't trust him?
So all these things he's saying are going to leave people scared and afraid, on that we agree. What's the disingenuous part here?
Don't get me wrong: others talk of a pattern of dishonesty, or that he's too eager to please*, and I'm willing to trust them on this because I found out with Musk that I don't spot this soon enough.
But what, specifically, do you see? What am I blind to?
* Given how ChatGPT is a people-pleaser and has him around, how Claude philosophically muses about whether its subjective experience is or is not like a human's and has Amanda Askell, and how Grok is the way it is and has Musk, I think the default personalities of these AI models are influenced by their owners' leadership teams.
He's pretending to care about the negative effects AI will have on society at large, but goes on to say it's necessary and "must" happen. If he actually cared, he wouldn't continue down that path. He also wouldn't be lobbying the DoD for contracts to use his AI to help kill people.
That's about the least controversial thing I've heard recently. Luigi murdered a guy specifically because he was a health insurance CEO. Not because of something he did in particular, but because of the role he assumed. Terrorizing other CEOs is precisely what he intended to do. It is why there are so many Luigi fans, it is what they want too.
Firebombing homes is completely uncivilized, but I'm not going to believe a single public word from Altman about anything. He's a lying sociopath and will say whatever gets himself ahead.
At this point it's probably far more productive to think of what he's saying as the necessary means he uses to make you believe what he wants you to believe. From that point you can work backwards and try to understand what he wants you to believe.
So there's one photo. Of one family. Now what about millions of photos of all the other families possibly affected by him? That doesn't have power?
It's like "hey, you can say mean things about me but don't attack my family while I attack yours". Not that this is directed at him personally, but it's just this mindset of wealthy people.
> Now what about millions of photos of all the other families possibly affected by him?
His own name allegedly isn't even clean! There's an ongoing lawsuit brought by his sister. (Amended as recently as a week ago and discussed in a flagged submission here: https://news.ycombinator.com/item?id=47640048 ).
I think he's just trying to remind people that someone can both be a CEO of a powerful company you might disagree with/hate as well as a real human with a husband and child and that trying to set fire to his house could kill those people.
I personally wouldn't go as far as to say the Farrow article caused this, but it seems fair game to respond to an article that had an over-the-top cover image of an animated Sam Altman picking and choosing faces with a photo reminding people he's human like everyone else.
> This is quite valid, and we welcome good-faith criticism and debate.
It's always funny when they pull out this argument when they've been working overtime to pull up the ladder and embed themselves in the MIC.
Listen, for people unaware of history: things used to be a lot more violent, as workers had to earn their rights with blood. The state had to respond, first by attempting to squash it violently, and second by compromising in such a way as to ensure workers had a bit more power in the system.
As long as AI shit continues to consume the economy, kicking out people who can no longer find a job and survive while the government also removes any remaining safety nets, the end result is going to be violence. This doesn't make the violence right or just, but rather completely predictable. And if people don't learn from history then it will be repeated, unfortunately.
What the hell is up with this thread? It seems half the people here are saying they get molotoved on a weekly basis and Sam is a such-and-such for not taking it like a man, while the other half appears to mourn the lack of casualties?
Wtf is wrong with you people?
Get off my lawn and go back to Reddit where you belong!
OpenAI will end up the hero of this whole AI saga. I actually believe what he wrote there. Anthropic just took a left turn when they chose to lock up mythos. That was a pivotal move that proved Anthropic's mindset is dangerous. They just changed the trajectory of AI completely, for the worse.
OpenAI just needs to learn to manage products. They need to start finishing things rather than just shutting down projects without putting real effort into iterating on them to create viable business models. They are undisciplined. They've done this phony version of looking disciplined by shutting down Sora and nixing adult mode, but that's superficial. The things they're pivoting to are no more serious. They just sound serious. They gotta learn to create desire in consumers and design viral AI products. Like Apple. Consumer-facing pop culture products. That's the market that's wide tf open. They can print money if they get good at that.
Can someone help me to understand why OpenAI and Anthropic talk as if the future of humanity is controlled by them? We have very strong open (weight) Chinese models possibly only 6 months behind them; the genie is out of the bottle. Is 6 months of difference really that important? And they don’t have good reasons for that 6-month gap to stay that way.
Am I missing something, or is this just their usual marketing? I’m not arguing about the importance of AI, but trying to understand why OpenAI and Anthropic are so important.
These kinds of people have highly paid employees surrounding them on all sides, propping them up and very likely making it very easy for them to actually believe it.
It feels like they actually believe it, rather than just “marketing” and I don’t know which is worse.
Some people think there will be an exponential takeoff, which means that a 6 month lead effectively rounds up to infinity.
I have the same feelings
It's never OK to physically attack someone like this. Full stop.
Separately; Sam's belief that "AI has to be democratized; power cannot be too concentrated." rings incredibly hollow. OpenAI has abandoned its open source roots. It is concentrating wealth - and thus power - into fewer hands. Not more.
If only that sentiment was reciprocal!
When the job losses hit in earnest and the vague handwaving about making it right all inevitably turns out to be hollow, those on top will be exceedingly comfortable using violence to keep the underclass in line. It has happened before and it will happen again.
Sam eagerly pursued DoD contracts to weaponize AI. And then lobbied for legislation to ensure OpenAI cannot be held accountable if people are killed due to their systems.
I find it interesting that Altman's fans seem to keep skipping past this fact. I'd love to hear their defense as to why one person potentially being responsible for hundreds or thousands of deaths is acceptable, but attacking that one person isn't. If violence is never the answer, they should be condemning Altman with even more vigor.
Yeah, it's kind of terrifying, how this incident seems to have faded from people's memories.
> Separately; Sam's belief that "AI has to be democratized; power cannot be too concentrated." rings incredibly hollow. OpenAI has abandoned its open source roots. It is concentrating wealth - and thus power - into fewer hands. Not more.
We should call it what it really is: oligopolization of intellectual work. The capital barrier to enter this market is too high, and there can be no credible open source option to prevent a handful of companies from controlling a monster share of intellectual work in the short and medium term. Yet our profession just keeps rushing headfirst into this one-way door.
>> It will not all go well. The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time, and perhaps ever. We have to get safety right, which is not just about aligning a model
The question is what they are doing about "getting safety right," and whether they are doing enough. To me it seems like all the focus is on hyper growth and maximum adoption, and safety is just an afterthought. I understand it's a competitive market, and everyone is doing it, but these are just hollow words. Industries that care about safety tend to slow down.
The thing about the rich is that they have access to sufficient levels of abstraction that they can commit terrible, disproportionate violence without it looking that way. And then fools who crave the simplistic safe comfort of moral absolutes come to their aid.
Throwing a petrol bomb at a building with children inside is about as evil as murdering 150 students at an all-girls school. I'm obviously not defending that.
"It's never OK to physically attack someone like this. Full stop."
Why?
If they are causing such pains on humanity, wouldn't it be better if they didn't exist?
> "Those who make peaceful revolution impossible will make violent revolution inevitable."
- John F Kennedy, 1962.
Assuming this is a serious question, here are some ideas you could read about!
- https://en.wikipedia.org/wiki/Vigilantism
- https://en.wikipedia.org/wiki/Law
- https://en.wikipedia.org/wiki/Bill_(law)
- https://en.wikipedia.org/wiki/Trial
Ideas.
Now back to reality.
Law: Epstein. ICE, Geneva Convention
Bill: Going once, going twice, highest bidder wins
Trial: OJ Simpson. Many miscarriages.
Vigilantism: Revolutions
We'd be stuck in the Stone Age with your mentality.
If the American Colonies would just have petitioned King George just a few more times…
I didn't think Hacker News needed an explicit "calls for violence are bad" guideline but the comments here have shown otherwise.
Do you feel the same way about comments that support the US military action in Iran? Why or why not?
‘Working towards prosperity for everyone’ was extremely hollow as well. If he believed this, he would be running his company as a cooperative and not as a for-profit company.
"Like this" is doing some serious work in that statement!
Is it okay to profit off of a machine that kills innocent people? Would it be immoral to attack the builder of that machine, if it stopped the operation of the machine?
I'm on the skeptic side of "AI" and find this entire industry obnoxious, but your argument doesn't hold any water.
Technology that can be used to kill innocent people is all around us. Would it be moral to attack knife manufacturers? Attacking one won't make the technology disappear. It has been invented, so we have to live with it.
Also, it's a stretch to say that "AI" "kills innocent people". In the hands of malicious people it can certainly do harm, but even in extreme cases, "AI" can currently only be used very indirectly to actually kill someone.
Technology itself is inert. What humans do with technology should be regulated.
IMO the fabricated concern around this tech is just part of the hype cycle. There's nothing inherently dangerous about a probabilistic pattern generator. We haven't actually invented artificial intelligence, despite how it's marketed. What we do need to focus on is educating people to better understand this tech and use it safely, on restricting access to it so that we can mitigate abuse and avoid flooding our communication channels with garbage, and on better detection and mitigation technology to flag and filter it when it is abused. Everything else is marketing hype and isn't worth paying attention to.
Like this, for sure not. And Sam has not, even with that article, done anything to warrant violence.
He's saying that just so he can use it if another company gets bigger than OpenAI ("you can't have all the power"). If OpenAI were the top dog by a large margin, you wouldn't hear him say a peep about this (as was demonstrated by his actions with the charter).
Knowing Sam, this entire event was fabricated or done at his behest.
I've never understood this specific taboo against physical violence. Firing a thousand people or stealing their wages, ruining their life and their families', passing unjust laws that threaten the well-being and happiness of a million, that's ok! A punch in the nose, that's not ok!
There are far worse things than physical violence against one person, and with the end of the rule of law there isn't any other recourse. The one value that is common across all cultures is that the wicked must be punished for their wickedness; expect to see violence against oligarchs and CEOs spread like fire.
We'd have never progressed as a species with your mentality. Change is painful and it's part and parcel of progress.
Humans would be suffering far more today if we weren't willing to accept short term pains for progress.
The idea that firing you or stealing your wages is the worst a CEO can do to you is itself a product of the taboo against physical violence. There are a number of famous incidents from the late 1800s and early 1900s, when the taboo was weaker, of CEOs sending private armies to shoot inconvenient labor movements. It's not an equilibrium you should defect from lightly.
A CEO can choose physical, mental, legal or financial violence against the common man. The common man only has the choice of physical violence. Without it he is impotent.
What a disgusting mindset that trivializes the immense achievements of "the common man" over the course of millennia.
If Sam disperses his power, we can believe him. So long as he's just concentrating wealth and power, he's just another tech bro.
Agreed. Sam's full of crap and the way we tackle that is with conversations, not violence. He deserves to grow old like anyone else, violence isn't an answer.
I don't condone violence, but the contract he's signed with the US military is a credible threat to everyone in the US. OpenAI will now certainly be called on to assist in domestic mass surveillance, under threat of the kind of severe penalties Anthropic has faced. So why did he agree to that contract, unless he's willing to provide that assistance? It's gone well beyond conversation, though not to a point where violence is appropriate. Boycotts and hostility are definitely appropriate at this point IMO, though.
He isn't going to suddenly grow a conscience from a riveting, intellectually stimulating conversation.
Everyone else deserves to grow old, too...
> the way we tackle that is with conversations, not violence
I think the breakdown here is that conversation seems to have no power. To only be a bit hyperbolic, the only language with power is money -- or violence. To the extent that ordinary people cannot make change with "conversation" (which I interpret here to mean dialog within society, including with lawmakers), they feel compelled to use violence instead.
A non-rhetorical question: What recourse to non-billionaires have when conversation has less and less power, while money has more and more, and those with money are making much more money?
There's still a meaningful difference between violence wielded by a single individual who feels angry or unheard, and violence wielded by a large representative group who has invested genuine effort in conversation before collectively deciding violence is required.
They aren't mutually exclusive. Often the former and latter, in that order, are two parts of the same historical event.
Yes, fully agree. Nonetheless, I suspect violence can be used more effectively and more minimally if it's considered and performed by a group rather than haphazardly by individuals. I recognise that's a very simplistic view.
It's pretty amazing to observe people experience the past ten years in American history and continue to think that we can out-talk the bad people in the world.
Michelle Obama's, "When they go low, we go high", is some of the stupidest political advice and a generation has lost so much because of it. (The generation before got West Winged into believing the same thing.)
When you look to the right, you have a stolen election in 2000, a stolen supreme court seat, an attempted coup, and relentless winning despite it.
That's not true.
As a defense contractor Altman is a legitimate target for a country that the US has attacked like Iran.
The US is engaging in military action against many countries and has threatened to annex or invade allies.
In that context Altman is 100% a legitimate target to those whose sovereignty is threatened and whose people are being killed.
Violence is language that needs no translation. Everyone across the world, every culture, every country, every social group - from elites to homeless can converse in it using the same vocabulary.
It is useful to have some degree of mastery in this discipline. Sometimes it is the only language that can deliver the important message to an unwilling listener.
I categorically reject that assertion. Two simple examples: 1) when you see someone assaulting someone else, it's absolutely ok to attack them, and 2) the American revolution!
It's like that old joke:
A man offers a young woman $1,000,000 to sleep with him for one night.
“For a million dollars? Sure, I’ll sleep with you.”
He smiles at her, “How about $50, then?”
“How dare you! I’m not a whore!”
“Look, lady, we’ve already agreed what you are, now we’re just negotiating the price.”
Similarly in this case, you can't make up absolutes and assert they're true while ignoring that the real world is more complicated. And once you do realize the world is complicated, you realize there aren't absolutes: everyone is a prostitute, terrorist, or whatever other bad label you want to throw at them... it's just a matter of degree.
So no, it's not always wrong to physically attack someone like this. You can debate specifically whether Altman has committed enough violence himself to justify violence against him: that's something two people can reasonably disagree on. But you can't just say "violence bad" like its some great pearl of wisdom, while ignoring that violence has in fact been good many times throughout history.
It's always OK to punch a Nazi.
AGI will be democratized when it's discovered... just right after AWS, Microsoft, and Oracle finish their 6-month beta test.
> It's never OK to physically attack someone like this. Full stop.
I agree. The French Revolution was really, really mean.
Are you familiar with the details of the French Revolution? Some of the eventual outcomes were indeed positive, but a lot of what actually went on was pretty horrific.
It was horrific. Revolutions tend to be. Yet our institutions continue consolidating money and power in fewer and fewer hands. If that doesn't stop, we'll be headed there again. It will probably be even worse this time.
A lot of what happened during the French revolution was horrific... This is such a bewildering sentence in this context. Yes, killing the rulers is horrific. Revolutions are horrific. Wars are horrific. It seems irrelevant to what the parent is (sarcastically) saying.
At the same time considering the people participating, there wasn't a way out of the problems that didn't involve violence. Different outcomes would require different choices that require different people.
Violence like this is not the answer. However, this post feels like a thinly veiled attempt at using this alarming attack to reclaim public goodwill after the New Yorker article the other day.
> Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives.
Yeah, the words and narratives that Sam Altman promoted caused so much fear and uncertainty and anger that someone thought their only option was to attempt a horrific crime.
Altman wants to seem relatable and personable even though he’s one of the wealthiest and most powerful people in the world. You don’t get that option when you control a technology that has the potential to alter so many lives, especially when you just sold said technology to the US military. All the talk around democratizing AI rings hollow.
The implication of Altman’s blog seems to be “stop writing critical articles about me because it will cause more violence.” However, the rich and powerful cannot use this excuse to escape objective scrutiny.
> There was an incendiary article about me a few days ago. Someone said to me yesterday they thought it was coming at a time of great anxiety about AI and that it made things more dangerous for me.
For context his blog post seems to be a response to this deep-dive New Yorker article:
"Sam Altman May Control Our Future—Can He Be Trusted?"
https://www.newyorker.com/magazine/2026/04/13/sam-altman-may...
https://news.ycombinator.com/item?id=47659135
Ronan Farrow, one of the journalists who worked on this article, talked to Katie Couric on her YouTube channel about this. They worked on this across ~18 months. I thought this interview was illuminating.
Yes, it was good. It seems clear that Farrow and his co-author approached it in a methodical, fair-minded way.
https://www.youtube.com/watch?v=wr_sB1Hl0oM
Unserious answer about a very serious event.
I don't believe a word of Sam's "I believe" section.
Ha, I was giving an AI bootcamp to a room full of people and someone asked me my opinion of Altman. I hesitated for a second and replied that I would not trust Altman further than I could throw a rock about anything.
If Graham says this guy will always stop at nothing to get whatever he wants, which I absolutely believe, then why would you trust anything that comes out of the mouth of a person like that?
Who tf is dumb enough to pay for an AI bootcamp, genuinely curious. If you're selling AI bootcamps, or whoever is, they are just as much a scam artist as Sam.
Who tf is dumb enough to not do it, though?
If I was non-tech and owned a business, and someone (reputable) offers to teach me everything I need to get up to date with the most revolutionary technology of the decade (perhaps century?) for like ... 500 dollars? Why not?
It's neural network autocomplete that helps you write text a little faster, so chill with the "most revolutionary technology of the decade/century" talk. You're offending a lot of experts in way more important areas of research.
>write text a little faster
You might actually need to attend an AI bootcamp. This is not 2022's GPT, AI can deliver plenty of value for a business owner these days.
Yeah, people learning new technology is terrible. /s
10 hours ago a post made the frontpage here [0] about how OpenAI is backing a law that "would limit liability for AI-enabled mass deaths or financial disasters". Now he's here saying he believes that "working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for [him]".
I know he doesn't believe a word of what he wrote in that post except, perhaps, that he cannot sleep and is pissed. I know I should be used to people openly lying with no consequence, but it still amazes me a bit.
[0] https://news.ycombinator.com/item?id=47717587
I think it's good for CEOs of powerful companies to make statements about how they don't want too much personal power and it's important to ensure everyone does well, even and perhaps especially if there's reason to suspect they don't believe it. Saying it doesn't solve the problem, but it helps create a permission structure for the rest of us to get it to actually happen.
The reason he's saying that is because he doesn't want you to create that structure. He wants you to not create the laws or checks & balances on him because you "trust that he doesn't really want the power".
It has worked for him, repeatedly.
No, I don't think that's accurate. Altman has repeatedly and loudly demanded for these to be created, including a new detailed policy proposal just this month (https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440...).
OpenAI has also repeatedly and quietly lobbied against them.
You linked a vague PDF whose promised actions are:
> To help sustain momentum, OpenAI is: (1) welcoming and organizing feedback through newindustrialpolicy@openai.com; (2) establishing a pilot program of fellowships and focused research grants of up to $100,000 and up to $1 million in API credits for work that builds on these and related policy ideas; and (3) convening discussions at our new OpenAI Workshop opening in May in Washington, DC.
Welcoming and organizing feedback!
A pilot!
Convening discussions!
This "commitment" pales in comparison to the money they've spent lobbying against specific regulation that cedes power.
Please don't fall for this stuff.
Unpopular opinion, but I think it's written quite well.
I don't think that's unpopular, it is pretty well written. But the "I believe" section is extraordinarily hard to believe given Altman's history.
> Working towards prosperity for everyone, empowering all people
> We have to get safety right
> AI has to be democratized; power cannot be too concentrated
None of these statements, IMO, reflect his actions over the past 5 years.
> we urgently need a society-wide response to be resilient to new threats. This includes things like new policy to help navigate through a difficult economic transition in order to get to a much better future
I agree with this, but there is a near 0% chance of that happening anytime soon in the US. I think he probably is aware of this.
Just my opinion, but it comes off as very insincere.
To be clear, what happened is still awful and there's absolutely no justification for it.
Perhaps by ChatGPT
It seems a bit stilted to be LLM'd.
Yes, clearly not written with his own product.
If that's the case, why doesn't he trust his own product enough to write this?
Historically, was it always so common for powerful or famous people to seem to purposefully garner hatred like he, and others, have been for the past decade? To speak in a petty, self-important, "trolling" manner, to a very broad audience? To embrace traits that are intrinsically negative? Or are we living in a rare time?
I think it's exacerbated by the internet. It didn't invent the idea of a proud/unapologetic asshole, but it amplified reach and emboldened them.
There is always an audience online for whatever you have to say if you're famous, and attention is always good. There will be tens of thousands of people vocally cheering on your least popular and most controversial takes, however fucked they are. And then people lose themselves in that bubble. Seems to be what happened to Elon (from afar).
The world is full of ignorance and crazy, and they're all online. This is downstream of that fact.
New England colonists had a habit of ransacking and burning down the houses of government officials throughout the 1760s and during the Revolutionary War. Got bad enough that most did not sleep in their government housing.
Can you explain the petty, self-important, trolling manner? Which traits are intrinsically negative?
Genuine Q
Of Altman, Trump et al, Elon, the Nvidia guy, etc? Or am I not understanding the question?
Sure, he's sleazy. Doesn't matter. It's not ok to firebomb jerks or saints. Rich or poor. It's both a criminal and an immoral act.
I find myself resenting him and his ilk on a daily basis for what they did to the computing space which was once sacred to me with their profiteering. But nothing justifies violence, not even close. Simple as that.
In all seriousness, what is the game plan for society moving forward as AI takes more jobs? The government doesn't seem to care. The AI labs don't seem to care.
What happens when more and more people can't afford housing, kids, food, health insurance, etc.? Nothing more dangerous than a man who has no reason to live...
I don't advocate for violence, but I do foresee more headlines like this as things get worse.
Out of curiosity... why do you think this?
I think this is complete madness. I'm not someone that is in a job, so I have the luxury of thinking critically about what is going on, and... I just don't see it.
What I see is that LLMs will complement Labour and the excess returns of model producers will be very minimal (if at all any) due to the intense competition - keeping switching costs to a minimum (close to zero).
There is no specialisation re. models at this moment in time so it is very likely to be the case.
OAI and Anthropic have to generate enough after-tax cash flows from operations to cover their reinvestment needs to continue going on. If they can't cover reinvestment then they will obviously lose as their offering will not be competitive.
There's no certainty they generate this amount of cash profits either. They still have a high chance of going bust, of course that gets lower - IF - they can keep ramping up revenues.
The game plan is the same as it was for globalization and previous rounds of automation: gaslight workers into thinking that they are the problem. Push all the taxes into the labor economy and all the money into the capital economy and use the inevitable budget shortfall to justify skimping on social services. That'll work until it doesn't, at which point the Ellison strategy will be employed: pay 10% of the poors to keep the other 90% in line.
> what is the game plan for society moving forward as AI takes more jobs
> What happens when more and more people can't afford housing, kids, food, health insurance, etc.?
What about when the opposite of this all happens, society massively benefits, and unemployment rates stay about what they have always been?
Will people still be yelling about the doomsday of societal collapse that has failed to materialize every single time?
this is probably orchestrated by sam altman himself or one of his lackeys
> The only solution I can come up with is to orient towards sharing the technology with people broadly, and for no one to have the ring. The two obvious ways to do this are individual empowerment and *making sure democratic system stays in control.*
OK! So he's going to renege on the contract he's signed with Hegseth, which effectively commits OpenAI to serving as the IT Department for Trump's secret service?
It’s funny how this happens the very same moment we get to read about Claude’s Mythos and a New-Yorker article. I really doubt the attacker is up to date with either…
The only thing surprising here is how naive you guys are. He is a marketing&sales guy in the first place.
> Working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for me.
How so? What is your theory of morality Sam? What I hear is Google: "Don't Be Evil".
Did Claude Mythos escape containment?
“I couldn’t find vulnerabilities in Sam’s devices so I contracted a rando over the internet to Molotov his house” sounds fairly implausible :)
Must've been one rare instance of AI creating jobs
This is actually happening without the AI:
https://www.lemonde.fr/en/france/article/2026/04/07/the-stra...
This is both horrible and not at all surprising.
Every quarter there are more layoffs and we're told how AI will replace us and that we can do nothing to stop it. We cannot afford the simple things our parents were able to and are supposed to be grateful that we are living in a time with such "amazing" technological progress.
Sam is one of the most media-visible people representing AI replacement of average people's livelihoods (not agreeing with this stance, but yes, outside of the Hacker News SF-tech matcha latte bubble, this is a commonly held thought), which makes this unsurprising.
Still horrible and not right.
Haven't read the article but hope next time they don't miss!
Scum, not people, like this are much better off not existing.
Genuinely surprised at the extreme comments against sama here. I don’t think he’s a good steward of the technology, but I don’t think violence is funny or justified. I also don’t think it’s justified for him to use it to say that a negative article about him is correlated to this event. Seems to imply that an “incendiary article” led to this and that criticism is tantamount to calls to violence. He drives the conversation with apocalyptic terms, and both investors and crazy people buy into it.
None of the things you believe are working out.
1) Working towards prosperity, etc. - the prosperity is all going toward the top 2%. The people who need it most are not seeing it and probably never will because the only ones who guarantee a benefit are the ones with the money to direct that benefit.
2) AI will be the most powerful tool, etc. - see point 1.
3) It will not all go well, etc. - probably should have thought about that before you released it on the world.
4) AI has to be democratized, etc. - true, won't happen. See point 1.
5) Adaptability is critical, etc. - Yes. Fully agree.
The problem, Mr. Altman, is that you believe the rest of the world thinks like you do, which is clearly not the case at all. While we have the ability to solve so many of the world's problems, it is absolutely clear that this is not what's happening. The rich in resources are getting richer and they're not doing anything to help those poor in resources become better off. Instead, they are claiming those resources for themselves against the day that everyone else runs out.
Same as it ever was, Mr. Altman. Same as it ever was.
Is the underground bunker in New Zealand ready yet? Better check on it.
> Working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for me.
"Prosperity for everyone" ... you lying weasel! You literally took the contract Anthropic refused because they wouldn't mass surveil Americans or mass murder non-Americans ... and you would!
To be clear, I don’t want anyone’s house to get firebombed by any means. But the “I’m just a humble guy making mistakes and trying the best I can” attitude of this article strikes me as extremely inauthentic based on everything I know about the guy.
The post itself is authentic in that it's a set narrative for this moment. When you see the world as Sam does, this event is a specific opportunity to humanize him. Through that lens, the humility is both performative (it is!) and necessary. To be truthful would be inauthentic.
The sympathy is meant to give time and slack to accumulate power. One of the largest impediments to OpenAI right now is that people don't trust them, more and more people don't trust Sam, and their commitments are starting to not pan out (e.g. cancelling of Stargate UK, dropped product lines, etc.)
People should not read a post like this as, "how does this make me feel? how might I respond in his situation?", but rather, as he does, "how can I use this?"
"Our product can destroy humanity, and it's not some crank telling you this, it's the company and CEO making it themselves, but we'll continue to make it anyway, so suck it up" but also "I'm just a humble guy, why can't we all live in peace?"
Everything about Altman makes me think "scammer". If he has one super-power, it is to convince people of his own importance.
OpenAI doesn't have much time left before they are shuffled off into bankruptcy, and they certainly aren't ruling the fate of man or anything like that. It's like the CEO of Enron claiming to hold the key to the future of mankind's energy resources, and people writing ponderous articles about it and debating whether Ken Lay will be a benevolent dictator or not.
I can't help but be reminded of last year, when our landlords (chill boomers) sold the house my girlfriend and I were renting the basement of (to presumably rich asshole millennials). The demographic doesn't really matter, but the old landlords kept us in the loop throughout the process, so we knew as much as we could going into the new year. Apparently the new buyers wanted to keep us as tenants. Day 2 of them taking possession, the man came down with his innocent toddler and introduced themselves. He seemed friendly enough, and on Day 3 he came down in the middle of the day and handed me eviction notice papers.
I didn't firebomb his house, but I can't say I definitely didn't want to shit on his doorstep.
I hate the people that did this to you as much as I hate your hubris.
No one deserves to be attacked.
I also believe that there will be more casualties in the AI Wars. We should be prepared for that. Capitalism, AI, and human life are mutually incompatible and I'm still not sure which two will survive the conflict.
The FOBO here smells.
You might as well say it's bad to be human.
What FOBO smells like, is what's happening.
Sam Altman has written, and probably still believes,
"Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity."[0]
This means he acknowledges that his actions have the potential to kill every human family on Earth. It should be of no surprise that people took his beliefs seriously.
[0] https://blog.samaltman.com/machine-intelligence-part-1
Sam had this pulled off the front page, because the whole charade obviously isn't getting him the positive attention he was looking for.
It most likely tripped the flame war detector heuristic (comments > points), and there is definitely a flame war here.
EDIT: Looks like a mod rescued it (surprisingly) and it is now back to #2.
> Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives.
I am glad you feel my pain, Mr. Altman.
Yes, very ironic. OpenAI was declared commercial through words and narratives, AI itself is hyped up with words and narratives. His Trump sycophancy is words and narratives. And that is just the start.
It isn't just irony; it's a lack of self-awareness! (Sorry for increasing the pain that Altman et al. inflict on us.)
I wonder if this is the first time in recent history (or ever?) that he has felt this way. Must be nice.
Do you frequently get Molotov cocktails thrown at your house?
I must admit, I've been spared the experience, and I was under the impression that was true for most people!
> Do you frequently get Molotov cocktails thrown at your house?
Luckily, no. Do you frequently wade into comment threads shitting on others’ statements of their lived experiences?
“I’m just trying to make the world a better place for my child by ensuring millions won’t be able to afford to feed their children.”
Responses in this thread are embarrassing. Cat's out of the bag and needs a steward. People acting like Altman can just turn the machines off and this all stops are deluded.
> The world deserves huge amounts of AI and we must figure out how to make it happen.
> It will not all go well. The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time, and perhaps ever.
Boy, he really just encouraged the world to keep turning against him. This is so transparently disingenuous. I guess he has no choice if he doesn't want to give up his wealth and power, but putting statements like these out are only going to further fuel anti-AI sentiment.
I do think it's funny he opened this with an allegedly real picture of a baby, though. It may very well be real, but why would anyone take his word for that, especially those who already don't trust him?
So all these things he's saying are going to leave people scared and afraid, on that we agree. What's the disingenuous part here?
Don't get me wrong: others talk of a pattern of dishonesty, or that he's too eager to please*, and I'm willing to trust them on this because I found out with Musk that I don't spot this soon enough.
But what, specifically, do you see? What am I blind to?
* Given how ChatGPT is a people-pleaser and has him around, Claude philosophically muses about whether its subjective experience is or isn't like a human's and has Amanda Askell, and Grok is like it is and has Musk, I think the default personalities of these AI models are influenced by their owners' leadership teams.
He's pretending to care about the negative effects AI will have on society at large, but goes on to say it's necessary and "must" happen. If he actually cared, he wouldn't continue down that path. He also wouldn't be lobbying the DoD for contracts to use his AI to help kill people.
The Epstein regime all seem really manic and probably fearing the French bourgeoisie treatment. They tried to get Luigi on "terrorism" charges
> They tried to get Luigi on "terrorism" charges
That's about the least controversial thing I've heard recently. Luigi murdered a guy specifically because he was a health insurance CEO. Not because of something he did in particular, but because of the role he assumed. Terrorizing other CEOs is precisely what he intended to do. It is why there are so many Luigi fans, it is what they want too.
Worth noting the legal system did not find it to reach the requirements for terrorism.
https://www.pbs.org/newshour/nation/luigi-mangione-due-in-co...
My understanding is that it was personal
Firebombing homes is completely uncivilized, but I'm not going to believe a single public word from Altman about anything. He's a lying sociopath and will say whatever gets himself ahead.
At this point it's probably far more productive to think of what he's saying as the necessary means he uses to make you believe what he wants you to believe. From that point you can work backwards and try to understand what he wants you to believe.
If the billionaire is “awake in the middle of the night and pissed”, it means you’re doing it right.
Every time I read a low-intelligence comment like this, I'm glad I urge my friends to vote Republican.
[delayed]
So there's one photo. Of one family. Now what about millions of photos of all the other families possibly affected by him? That doesn't have power?
It's like "hey you can say mean things about me but don't attack my family while I attack yours". Not that this is directed at him personally, but it's just this mindset of wealthy people..
> Now what about millions of photos of all the other families possibly affected by him?
His hands allegedly aren't even clean within his own family! There's an ongoing lawsuit brought by his sister. (Amended as recently as a week ago and discussed in a flagged submission here: https://news.ycombinator.com/item?id=47640048 ).
I think he's just trying to remind people that someone can be both the CEO of a powerful company you might disagree with/hate and a real human with a husband and child, and that trying to set fire to his house could kill those people.
I personally wouldn't go as far as to say the Farrow article caused this, but it seems fair game to respond to an article that had an over-the-top cover image of an animated Sam Altman picking and choosing faces with a photo reminding people he's human like everyone else.
AI is great. But it seems like those that wield its power only do so to create massive unemployment and benefits to the top 1%.
> This is quite valid, and we welcome good-faith criticism and debate.
It's always funny when they pull out this argument when they've been working overtime to pull up the ladder and embed themselves in the MIC.
Listen, for people unaware of history: things used to be a lot more violent, as workers had to earn their rights with blood. The state had to respond, first by attempting to quash it violently, and second by compromising in such a way as to ensure workers had a bit more power in the system.
As long as AI shit continues to consume the economy, kicking out people who can no longer find a job and survive while the government also removes any remaining safety nets, the end result is going to be violence. This doesn't make the violence right or just, but rather completely predictable. And if people don't learn from history then it will be repeated, unfortunately.
What a tone deaf response. Sounds like he learned nothing at all from this.
From someone Molotoving his house? What do you think he should have learned from that?
That his security is inadequate.
TIL Sam Altman is gay
What the hell is up with this thread? It seems half the people here are saying they get molotoved on a weekly basis and Sam is a such-and-such for not taking it like a man, while the other half appears to mourn the lack of casualties?
Wtf is wrong with you people? Get off my lawn and go back to Reddit where you belong!
Daamn, you were too fast to share the story haha.
Ah, the Elon manoeuvre: trying to make would-be assassins hesitate by using your own child as a shield.
It's like a baby on board bumper sticker. But for your house.
Gross, man, get help. Living with your family isn't using them as a shield.
Yeah it’s like they don’t want their children murdered, crazy
OpenAI will end up the hero of this whole AI saga. I actually believe what he wrote there. Anthropic just took a left turn when they chose to lock up mythos. That was a pivotal move that proved Anthropic's mindset is dangerous. They just changed the trajectory of AI completely, for the worse.
OpenAI just needs to learn to manage products. They need to start finishing things rather than shutting down projects without putting real effort into iterating on them to create viable business models. They are undisciplined. They've done this phony version of looking disciplined by shutting down Sora and nixing adult mode, but that's superficial. The things they're pivoting to are no more serious; they just sound serious. They've got to learn to create desire in consumers and design viral AI products, like Apple: consumer-facing pop-culture products. That's the market that's wide tf open. They can print money if they get good at that.