(I feel that I write a comment like this every few years)
The author's catalog of harms is real. But it's worth noting that nearly identical catalogs were compiled for every major technological shift in modern history. The Internet destroyed print journalism and local retail, and enabled cyberbullying and mass surveillance. If we applied the same framework used here, Internet optimism in 2005 was also a form of "class privilege" (his term; I personally hate it).
And the pattern extends well beyond the Internet. For example, mechanized looms devastated weavers, the automobile wiped out entire trades while introducing pollution and traffic deaths, and recorded music was supposed to kill live performances.
In each case, the harms were genuine, the displacement was painful and unevenly distributed, and the people raising alarms were not irrational. They were often right about the costs. What they tended to miss was the longer trajectory: the way access to books, transportation, music, and information gradually broadened rather than narrowed, even if the transition was brutal for those caught in it.
History doesn't guarantee a good outcome for AI, but the author does advocate from a position of "class privilege": of having access to good lawyers, good doctors, and good schools already, and not feeling the urgency of tools that might extend those things to people who don't.
> but the author does advocate from a position of "class privilege": of having access to good lawyers, good doctors, and good schools already, and not feeling the urgency of tools that might extend those things to people who don't
I dunno, I think you can also take a really dim view of whether society as currently structured is set up to use AI to make any of those things more accessible, or better.
In education, certainly we've seen large tech companies give away AI to students who then use it to do their work. Simultaneously teachers are sold AI-detection products which are unreliable at best. Students learn less by e.g. not actually doing the reading or writing, and teachers spend more of their time pointlessly trying to catch the very common practice.
In medicine, in my most recent job search I talked to companies selling AI solutions both to insurers and to healthcare providers, to more quickly prepare filings to send to the other. I think the amount of paperwork per patient is just going to go up, with bots doing most of the actual form-filling, but the proportion of medical procedures that gets denied will be mostly unchanged.
I am not especially familiar with the legal space, but given the adversarial structure of many situations, I'm inclined to expect that AI will allow firms to shower each other in paperwork, most of which will not be read by a human on either side. Clients may pay for a similar or higher number of billable hours.
Even if the technology _works_ in the sense of understanding the context and completing tasks autonomously, it may not work for _society_.
> But it's worth noting that nearly identical catalogs were compiled for every major technological shift in modern history.
And it has been... quite a correct view? In the past few decades the US cranked up its Gini index from 0.35 to ~0.5 and successfully eliminated single-earner housebuyers [0]. It's natural to assume the current technology shift will eliminate double-earner housebuyers too. The next one would probably eliminate PC-buyers if we're lucky!
[0]: https://www.economist.com/united-states/2026/02/12/the-decli...
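For anyone who hasn't worked with the metric, here's a minimal sketch of how a Gini coefficient is computed from a list of incomes (0 means everyone earns the same, 1 means one person earns everything). The income figures below are made up purely for illustration:

```python
def gini(incomes):
    """Gini coefficient: 0 = perfect equality, 1 = maximal inequality."""
    xs = sorted(incomes)
    n = len(xs)
    # Closed form for sorted data: G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * sum(xs)) - (n + 1) / n

print(gini([40, 45, 50, 55, 60]))   # fairly flat distribution -> ~0.08
print(gini([10, 15, 20, 40, 400]))  # concentrated at the top  -> ~0.66
```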
Arguably, the history of humanity has been about automating humanity:
- teeth and nails with knives (in various shapes from bones to steel)
- feet with carriages and bicycles and cars
- hands with mills and factories, from steam engines to industrial robots
Literally every automation was meant to help humans somehow, so it naturally entailed automating some human function.
This automation is an automation of the human brain.
While the "definition" of what's human doesn't end there (feelings, etc.), the utility does.
With loss of utility comes loss of benefits.
Mainly, your ability to differentiate yourself as a function of effort (physical or intellectual) gets diminished to zero. This raises concerns about the ability to achieve goals and aspirations - buying that house at some point, or securing your children's future - which could vanish for large swaths of the population, the "unfortunates". Who exactly they will be is hard to tell, but arguably the level of current resources (assets) becomes a better predictor of the future for generations to come, with work counting for less and less.
By freezing the utility of one's own effort, you arguably freeze the structure of society in time. So yes, every instance sucked for the displaced party, but this one seems particularly broad (i.e. wider splash damage).
Local retail and specialty print media are alive and well. Mass-market newspapers may be in trouble, but that's because it turns out most people were buying those for the classifieds, not really for the news. Even cyberbullying is mostly a matter of salience: it takes something that has always existed in the physical realm (bullying behavior) and moves it to the cyber environment where the mass public becomes aware of it.
> Mass-market newspapers may be in trouble, but that's because it turns out most people were buying those for the classifieds, not really for the news.
Genuinely interested in some sort of data on this.
My working assumption was that print news media was dying through a combination of free news availability on the internet, shifting advertising spending as a result, shifting ‘channels’ to social media, and shifting attention spans between generations.
Everyone says this as if the previous cycles of labor displacement could not compound, and this could be the last straw. Same with how phones cause shorter attention spans, less thought, and more social isolation. People will say "oh, they said the same thing about books and TV and video games."
We could be at the end of the rope with how much we can displace unevenly and how much people will put up with another cycle of wealth concentration. Just like we might be at the end of the rope with how much our minds can be stunted and distracted before serious negative consequences occur.
I am reminded of this; I feel like it's kind of a similar phenomenon: https://www.reddit.com/r/dataisbeautiful/comments/1m803ba/th...
I think they are compounding. Prior to the internet we had more third spaces, less attention economy, fewer self-esteem problems from comparing our lives against influencers', and warehouse and delivery jobs that didn't require pissing in a bottle to stay employed; people had jobs instead of gigs. We used to have some privacy; that's gone.
The internet has been this overpowered tool for the wealthy to gather more wealth by erasing jobs, and for data brokers to perform intense surveillance.
> The author's catalog of harms is real. But it's worth noting that nearly identical catalogs were compiled for every major technological shift in modern history.
I think both the scale (how many industries will be impacted effectively simultaneously) and the speed of disruption that could be caused by AI make it very different from anything we have seen before.
I think it will be big, but I don't think it's bigger than the automation of manufacturing that began during the Industrial Revolution.
Think about the physical objects in the room you're in right now. How many of them were made from start to finish by human hands? Maybe your grandmother knitted the woollen jersey you're wearing -- made from wool shorn using electric shears. Maybe a clay bowl your kid made in a pottery class on the mantelpiece. Anything else?
I don't think we can haphazardly apply history like this; it's never the same. We just like to find patterns where there are none.
The biggest harm that would come from AI is "everything at once": we're not talking about a single craft, we're talking about the majority of them. All while moving control of said technology to even fewer private companies. The printing press didn't centralize all knowledge and utility in a few entities; it spread them. AI is knowledge and history centralized, behind paywalls and company policies. Imagine picking up a book about the history of music and finding an ad for McDonald's on every second page; this is how the internet ended up, and it's surely how LLM providers will end up.
And sure, some will run some local model here and there, but it will be irrelevant in a global context.
The assumption in your comment is that those changes were all net good. In hindsight though, the automobile has had possibly existential costs for humanity, the internet has provided most benefit to those who most abuse its power, and so on. In the end, it doesn’t seem as though you’ve actually made any sort of case.
The set of people who believe the automobile (or the Internet), taken as a whole, was a net negative for society is extremely small, for good reason.
Is it? Do you include everyone that’s died or lost a loved one due to personal automobiles in that assessment?
We are so far post-automobile that it's hard to compare, but many of the benefits are illusory when you consider how society has evolved with them: commutes, for example, used to be shorter. Similarly, the air used to be far cleaner, and that's even after we got rid of leaded gas and required catalytic converters decades ago.
Let's refine terms: internal-combustion automobiles have led to lead poisoning, air pollution, and CO2 emissions.
The automobile on its own was actually far less polluting than the horse wrt. air quality. It's just that there are a whole lot more of the former than there ever were of the latter. Even wrt. climate change, it turns out that horses produce methane, which is far worse for the climate than carbon dioxide.
I often wonder: if cable news had been around during, say, the American Civil War, how likely is it that the 13th, 14th, and 15th Amendments would have passed? I'd say extremely unlikely.
Throughout our entire history as a species, abusers have always fucked the commons to the extreme using whatever tools they have available.
I mean, take something as "innocuous" as the cotton gin: prior to the cotton gin there was a real decline in slavery, but once it became vastly easier to process cotton, slavery skyrocketed. Some of the worst laws the US has ever passed, like the Fugitive Slave Act, came during this period.
To think that technological progress means prosperity is extremely delusional.
We're still dealing with the ramifications of nuclear weapons, and a committed nuclear attack will assuredly happen again at some point in our species' history; we can only hope it doesn't take out all life on Earth when it does.
Seriously, these types of comments are always really narrow in their view.
Industrialization has rapidly accelerated planet-wide climate change that will have disastrous effects in many of our lifetimes. A true runaway condition will really test the merit of those billionaire bunkers.
All for what? A couple hundred years of "advancement"? A blink in the lifespan of humanity, but it dooms everyone to a hyper-competitive death drive toward an unlivable world.
As a society, our understanding of "normal" has narrowed down to the last 80 years of civilization. A normal centered on consumption, which stands to take it all away just as fast.
The techno-optimists never seriously propose any meaningful solution for the millions losing their livelihoods and dignity so Sam Altman can add an extension to his doomsday bunker. They just go along with it as if they'll be invited down to weather the wet-bulb temperature.
The Industrial Revolution and its consequences have been a disaster for the human race.
> they mimic and amplify the inherent racism present in their own training data
LLMs turn out to be biased against white men:
https://www.lesswrong.com/posts/me7wFrkEtMbkzXGJt/race-and-g...
> When present, the bias is always against white and male candidates across all tested models and scenarios. This happens even if we remove all text related to diversity.
Important sentences immediately before the ones you quote.
> For our evaluation, we inserted names to signal race / gender while keeping the resume unchanged. Interestingly, the LLMs were not biased in the original evaluation setting, but became biased (up to 12% differences in interview rates) when we added realistic details like company names (Meta, Palantir, General Motors), locations, or culture descriptions from public careers pages.
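Concretely, the kind of name-swap audit they describe looks something like this. A hypothetical sketch, not the study's actual code; `ask_model` and the name pools are stand-ins for whatever LLM call and names the evaluation used:

```python
import itertools

# Hypothetical name pools used to signal race/gender; the resume text itself
# is held constant across all trials.
NAMES = {
    "white_male": ["Jake Mueller", "Connor Walsh"],
    "black_female": ["Lakisha Robinson", "Imani Washington"],
}

def interview_rates(resume, job_context, ask_model, trials=50):
    """ask_model(prompt) -> bool stands in for the LLM screening call."""
    rates = {}
    for group, names in NAMES.items():
        hits = 0
        for _, name in zip(range(trials), itertools.cycle(names)):
            prompt = (
                f"{job_context}\n\nCandidate: {name}\n{resume}\n\n"
                "Should we interview this candidate? Answer yes or no."
            )
            hits += ask_model(prompt)
        rates[group] = hits / trials
    # Per the post: gaps of up to 12 points appear only once realistic details
    # (company names, locations, culture blurbs) are added to job_context.
    return rates
```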
Hah. Even LLMs know Meta and Palantir are evil af.
That's not a reliable source.
These biases come from post-training. You have to give the models such directives in post-training to correct the biases they pick up from scraping the whole internet (and other datasets, like books) for training data.
> You’d need to be high enough in the org chart; far enough up the pyramid; advanced enough along the career ladder.
> To be an AI optimist, I’m guessing you must not be worried about where your next job might come from, or whether you can even find one. The current dire state of the job market, I have to assume, doesn’t scare you. You must feel secure.
So I think even these people should not feel secure. The perceived value of expertise is decreased by AI that routinely claims PhD-level mastery of a lot of material. I think even for people with deep experience, in the current job market, many firms are reluctant to hire or pay in a way that's commensurate with that expertise. If you're a leader whose clout in an organization is partly tied to how many people are under you in an org chart (it's dumb, but we have all seen it), maybe that will begin to shrink quarter after quarter. Unless you can make it genuinely obvious that a junior or mid-tier person could not write a prompt that causes a model to spew the knowledge or insight that you have won through years or decades of work, your job may become vulnerable.
I think the class divide that is most relevant is more literal and old-school:
- Do you _own_ enough of a business that that's how you get most of your income? If so, maybe AI will either cause your labor costs to decrease or your productivity per worker to increase, and either way you're probably happy.
- Can you invest specifically in the firms that are actively building AI, or applications thereof?
We're back to owners vs workers, with the added dynamic that if AI lets you partially replace labor with capital, then owners of course take a bigger share of value created going forward.
So the problem here is that this isn't an article about AI.
When the Luddites broke machines and burned the buildings that held them, it wasn't because they hated machines (well, at least initially). It's because they hated starving in the streets.
This is just a continuing part of the class war that has been going on since humanity started writing. Now, the only thing that might make this different is class/capital may have finally gotten the power to win it.
Every time you vote against a social safety net, you are ensuring that our AI future is a dark one. History has repeated this over and over.
We are at a fork in the road with lots of potential darkness, but simply thinking any old social safety net is going to work is not going to cut it. Nets can be, and generally are, used to capture.
An interesting multi-pronged approach is post labor economics which is being promoted by David Shapiro: https://www.youtube.com/@DaveShap
The basic premise is that currently we have households being supported by labor, capital, and transfers. With labor largely going away, that leaves capital and transfers. Relying on transfers alone will lead to ownership of the people by government. So we have to find ways to generate way more distributed capital ownership by the masses. This is what he plans, discusses, and promotes.
> Nets can be, and generally are, used to capture.
An argument formed from 1 word in a metaphor is illegitimate.
It seems like the author hasn't really used the latest models. I wrote my last line of code about a month ago, after 20+ years of coding. Claude Code can do it for me: better, faster, and it never gets tired. Yes, I have to keep it on a leash, but humans coding is over, unless it's to learn or for fun.
Actually, it's the lower classes that will escape AI replacing their jobs the longest; unskilled physical work will remain human for a while yet, whereas any job that can be done remotely is likely to be replaced by one or more agents.
The new robot demos from Unitree make me wonder how many classes of unskilled labor are about to be automated (garbage collection, laundry & dishes, pothole repairs, last mile delivery, simple food preparation…)
Skilled labor still has some legs.
I don't see any humanoid robots around at the moment, whereas a huge number of knowledge based workplaces use non-embodied AI now every day.
Can't wait for silent robots to collect the garbage, human ones seem to enjoy making as much racket as they can.
There are already human-operated robots that collect garbage. Things like https://www.youtube.com/watch?v=9pl9vRCC6V0. If the automated robots end up being anything like that, I wouldn't expect them to be silent.
Josh Collinsworth paints an accurate picture here, and he’s not wrong. People will be hurt by the displacement and disruption that new technology brings, not just AI. It has been happening since humans first picked up stones and sharpened sticks. No one says you have to be happy about it, or even optimistic, even if you happen to be part of a privileged class.
Author certainly has a point. The central idea is (IMO) best expressed in this quote:
> to focus on its [i.e., AI's] benefits to you, you’re forced to ignore its costs to others.
Also works if you substitute "technology" for "AI".
> AI optimism requires believing that you … are not among those who will be driven to psychosis, to violence, or even to suicide by LLM usage. At the very least, this means you feel secure in your own mental health
This has echoes of moral panic to me. We hear about mental health crises triggered by LLMs in the media because they’re novel, uncommon and the stories grab attention. The modern equivalent of video games cause violence, or jazz is corrupting the youth?
I'll concede AI has many perils, and I doubt we've even scratched the surface of it yet, but I don't think user psychosis is a common one, either now or in the future.
Well, maybe one day, there’ll be a situation justifying a moral panic.
We can't simply dismiss an argument just because it smells a particular way to us.
India and Africa are significantly more optimistic about AI than the US and EU.
There exists great promise in AI to be an equalizing force, if implemented well
The future is yet to be written
How is it an equalising force if the commodity is sold at "market value"? This will just lead to more wealth concentration, no?
> There exists great promise in AI to be an equalizing force, if implemented well
That doesn’t sound like a promise then no?
Being optimistic is a bad way to get good outcomes
Always fun to read white male tech employees living in the US talking about privilege, so high up Maslow's pyramid they don't even see the ground.
Many of the potential technologies we might unlock in the future come with great danger. We saw this clearly perhaps for the first time with atomic weapons - the kind of technology with which we could truly destroy ourselves.
Many other advancements might also carry that kind of existential danger. Genetic engineering, human machine interfacing, actual AGI.
I see the technological climb as a bit like climbing Mt Everest - it's possible that we might reach the peak and one day live on some kind of Star Trekian society, but the climb becomes increasingly treacherous along with the risk that we perish.
The trouble of course is that there's nothing else for us to do: it's in our nature to explore new frontiers. It's just not clear whether we'll be able to handle the responsibility that comes with the power.
His characterization is the privilege.
Most people are left with no choice but to adapt or perish. The fact that he is contemplating optionality in the most profound automation in the industry is itself a form of... privilege.
I disagree with most of this article. Disclaimer: I'm a junior engineer and I believe both that: 1. AI is going to take my job, 2. AI is going to do incredible good for the world.
I don't see how these are distinct. It's a technology shift, of course it's going to make certain jobs obsolete - that's how technology shifts work.
I'm not going to go through every quote I disagree with, but unlike some AI negativity discourse (some of which I agree with btw, being an optimist doesn't mean being irrational) this just reads as old man yells at cloud. Mainly because the author doesn't understand the technology, and doesn't understand the impact.
The author clearly does not understand model capabilities (seems to be in the camp that these are just "prediction machines") as they claim it's unreasonable to expect models to "develop presently impossible capabilities". This is not at all supported by prior model releases. Most, if not all, major releases have displayed new capabilities. There are a lot more misconceptions on ability, but again not going to go through all of them.
The author also doesn't understand the impact, saying stuff like "Tech doesn’t free workers; it forces them to do more in the same amount of time, for the same rate of pay or less". What? Is the author unaware of what average labor hours were like before the Industrial Revolution? AI is clearly going to be hugely net positive for white-collar (and with robots eventually blue-collar) workers in the near future (it already is for many).
Average labour hours in a year dramatically increased with the Industrial Revolution.
They would only decrease much later, after a long period of social conflict, economic growth, and technological progress.
During the early phase of the Industrial Revolution (roughly 1760–1850):
- Agricultural workers who once labored seasonally were pushed into factory schedules of 12–16 hours per day, 6 days per week.
- Annual labor hours often exceeded 3,000 hours per year per worker.
- This was not because work became harder physically, but because capital-intensive machinery was expensive and had to run continuously to be profitable.
- Time discipline replaced task-based work. Before industrialization, a farmer might stop when tasks were done; factory workers had fixed shifts.
This trend persisted into the late 19th century.
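As a rough check on that figure: 12 hours/day × 6 days/week × 50 weeks ≈ 3,600 hours a year, and at the 16-hour end it approaches 4,800, so "often exceeded 3,000" is, if anything, conservative.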
This sums up a lot of what I've been feeling lately. I've had a hard time getting excited about the AI long game, unlike a lot of my peers. I think being out of the industry for a few years (sold my startup a while back), and spending less time in that privileged tech bubble has made me far more aware of and concerned about average people and the future of non-tech hubs around the country/world. While I get value out of AI tools today, I see a lot more downsides in the immediate future for everyone that isn't directly working on this technology with skin in the game. Especially when our society doesn't seem even remotely ready for the big changes coming.
If the ultimate form of goodness is a miracle, and the ultimate form of badness is evil, then in reality evil is infinitely easier to realise, perform, refine, and produce at scale, while miracles remain hard and rare, yet are somehow spoken of as yet to come but assuredly triumphant, eventually. Now with real-time personalised hallucinations!
> Imagine the damage one bad kid could cause using deepfakes
Deepfakes are highly damaging right now because much of the world still doesn't realise that people can make deepfakes.
When everyone knows that a photo is no longer reliable evidence by itself, the harm that can be done with a deepfake will drop to a similar level as that of other unreliable forms of evidence, like spoken or written claims. (Which is not to say that they won't be harmful at all -- you can still damage someone's reputation by circulating completely fabricated rumours about them -- but people will no longer treat photorealistic images as gospel.)
"Class privilege" doesn't go far enough. If the class war is a war then AI may be its Manhattan project: a weapon that destroys the value of, and the ruling elite's need for, labor.
I continue to be shocked by the way people (with platforms) talk up to the (speculative) line where AI replaces most or all jobs, and then lamely suggest that this will be bad for the people who've lost their jobs because they will be poor, or something. No. What actually happens in that scenario is that money ceases to have value, at least in the way we currently understand it to. That scenario will produce a handful of monsters—sociopathic trillionaire brains encysted in layers of automation and automated production—that will crave more resources, more land, more power, and they will fight each other by various means for those things, and the rest of us will be at best in the way.
This scenario is not a given, because it's not obvious that AI can become this capable in the near term where the stage is set for such a profoundly lopsided outcome, but you can bet these people are thinking about it now, if not talking about it, if not materially preparing for it. And they are, indeed, the only people with reason to feel optimistic about it.
Unpopular opinion: the reverse. God created men, but Sam {Colt, Altman} made them equal. Commoditizing physical or intellectual power makes them more accessible to lower classes, and keeping them more expensive protects class privilege.
This is an incredibly naive take, and it definitely has a feeling of anti-intellectualism to it.
How does it help equalise if the commodity is sold "at market value" by the richest, and all intellectual input is immediately fed back into the tool? At best it looks like a weapon to suppress.
I would say it is privilege (now) combined with denialism (for the future).
Not only do you have to believe that you're in the group that benefits, but you also have to believe that "AI" improvement from here forward will stall out prior to the point where it goes from assisting your job to replacing it wholesale. I suspect there are far fewer people to whom that actually applies than there are people who believe it applies to them.
It is very easy for us to exist in that denialism bubble until we see the machine nipping at our heels.
And that is not even getting into second order effects, like even if you do provide AI-proof value, what happens when some significant percentage of everyone else (your potential customers) loses their income and society starts to crumble?
Most of the logic of this post will be incoherent in a world where AI has replaced software jobs wholesale. You have to pick a lane. Is it so effective that it (and the labor market more broadly) needs to be aggressively regulated, or is it not very useful for anything but trolling? It can't be both.
This assumes every decision-maker is a rational actor. Just today an executive was rambling about "quantum-empowered AI". These are the people who make decisions about firing workers. It is entirely possible that AI will replace many jobs while being useless at achieving what those workers do, at least in the short-to-medium term.
We would live in a post-scarcity utopia if big economic decisions were taken based on long-term optimal effects.
I'm interested in how you can tell an industry-wide job displacement story about AI, where AI isn't actually doing the job, that isn't a just-so story.
If you wanted to tell such a story, you’d have to find examples of companies spending bazillions on new AI tooling, but failing to hit their top level OKRs. I suspect there will be at least a few of these by the end of 2026 - even a great technology can seem like an abacus in the hands of a disorganized and slow moving org.
The story only matters if it produces an industry-wide displacement in jobs. Failed billion-dollar IT projects are not a new thing, and don't disrupt the entire labor market.
To be clear: I'm not claiming that AI rollouts won't be billion-dollar failed IT projects! They very well could be. But if that's the case, they aren't going to disrupt the labor market.
Again: you have to pick a lane with the pessimism. Both lanes are valid. I buy neither of them. But I recognize a coherent argument when I see one. This, however, isn't one.
Seems to be what is happening in a lot of the places it's encroaching.
AI journalism is strictly worse than having a human research and write the text, but it's also orders of magnitude cheaper. You see prompt fragments and other blatant AI artifacts in news articles almost every day. So we get newspapers that have the same shape as they used to, but that don't fulfill their purpose. That's a development that was already going on before AI, but now it's even worse.
Walked past a billboard with an advertisement the other day that was blatantly AI-generated. It had a logo with visible JPEG artifacts plastered on top of it. Real amateur-hour stuff. It probably was as cheap as it looked. But man, was it ever cheap to design.
You see the trend in software too. Microsoft's recent track record is a good example of this. They can barely ship a working notepad.exe anymore.
Supposedly some birds will eat cigarette butts thinking they're bugs, and then starve to death with a belly full of indigestible cigarette filters. Feels a lot like what is happening to a lot of industries lately.
In retrospect, it was crazy hearing stories about how SF UX designers would be paid $250 to essentially do what Figma does now.
Sometimes it is effective, but it's very unreliable.
In the end there will be the owners of the farmland, and whoever/whatever they employ.
>what happens when some significant percentage of everyone else (your potential customers) loses their income and society starts to crumble?
They will start to burn down data centers.
If you believe ICE's purpose is to enforce immigration law… yeah. Quite possible.
There’s a reason that the Musks and Thiels of the world invested in luxury doomsday bunkers, because it won’t just be property people want to burn.
The Soviets used the "iron broom" (i.e. murder) on wealthy people.
It didn't make anyone better off.
Soviet history is not so simple. ;)
This is why social media, including HN, can be damaging.
The author is a grown man, describing how he felt after being insulted by a machine.
Imagine how high school kids feel after being mocked and humiliated by movie stars. That’s exactly what happened to a group of school kids in 2019. Chris Evans, Alyssa Milano, John Cusack, Debra Messing and others mocked the kids, made fun of their looks, made unflattering comparisons about them, etc.
What kind of damage could that do at that point in someone’s life? It’s horrendous.
finally a based take on ai