> comparative advantage tells you that some human labor will remain valuable in some configuration, but nothing about the wages, number of jobs, or the distribution of gains. You can have comparative advantage and still have massive displacement, wage collapse, and concentration of returns to capital. A world where humans retain “comparative advantage” in a handful of residual tasks at a fraction of the current wages is technically consistent with Oks’ framework, but obviously is worth worrying about and is certainly not fine.
If the worst predictions about AI's effect on employment turn out to be correct, then I'd expect to see movements to force government regulation of AI. Particularly if it becomes the case that profits are accruing to a few massive corporations who run the AI.
There is no reason people have to tolerate a technology that is destructive to society, any more than they have to tolerate companies selling fentanyl at 7-Eleven.
>I'd expect to see movements to force government regulation of AI.
I agree. It will be an interesting debate to watch play out, because a) lots of end-users love using AI and will be loath to give it up, and b) advances in compute will almost certainly allow us to run current frontier models (or better) locally on our laptops and phones, which means that profits would no longer accrue to a few massive AI labs. It would also make regulating it a lot trickier, since kneecapping the AI labs would no longer effectively regulate the technology.
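For what it's worth, point (b) isn't purely hypothetical: smaller open-weights models already run entirely on-device. A rough, untested sketch of what that looks like, assuming the `ollama` Python client with a local Ollama server and an already-pulled model (the model name below is just an illustrative placeholder, not a recommendation):

    # Untested sketch: local inference with no AI lab's servers in the loop.
    # Assumes the `ollama` package (pip install ollama), a running local Ollama
    # server, and a previously pulled open-weights model; "llama3.2" is only an
    # illustrative placeholder.
    import ollama

    response = ollama.chat(
        model="llama3.2",
        messages=[{"role": "user", "content": "Explain comparative advantage in one paragraph."}],
    )
    print(response["message"]["content"])  # generated entirely on the local machine

Today that buys you something well below the frontier, but the whole loop runs on hardware you own, which is exactly what makes lab-level regulation leaky.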
That would be an interesting scenario.
I'd expect it to be more dramatic. The worst predictions are something like "50% of all jobs will be completely eliminated in two years." That's violent uprising territory, not pressure on governments to improve regulation.
Yes, and that’s the least catastrophic option. I get the sense the boosters don’t read a lot of history.
> There is no reason people have to tolerate a technology that is destructive to society,
All evidence to the contrary. Aside from the French occasionally burning down some cars, Western populations (me unfortunately included) have become remarkably relaxed about such things.
Even very extreme examples, like the government's blatant refusal to investigate absolutely horrific stuff like Epstein, get at most some mildly upset TikTok reels.
Add some aggressive lobbying by big tech and perhaps a sprinkle of Palantir population monitoring, and I don't think we'll see a refusal to tolerate at scale.
> For young software developers specifically, employment fell almost 20% from its 2022 peak.
Employment in the 2020-2022 range was highly unusual due to COVID stimulus and the resulting unprecedented hiring. Tech companies were hiring anyone they could, and after some time juniors were the only way to feed the insatiable demand for more headcount.
Comparing to that peak without taking this into account is going to be misleading.
This period was also a strange time for remote work. I’ve been remote since before then, but COVID era WFH felt like a turning point when bad behavior during remote work became normalized. That’s when we started having remote hires trying to work two jobs (and giving us half an effort / not getting their work done), and there was a rise of “quiet quitting” as a news media meme because everyone thought they could always just walk out and get a new job if they got fired for not working. We also weren’t doing juniors any favors by hiring them in high numbers without a sufficient ratio of seniors to mentor and lead them.
That also coincided with the rise of GitHub Copilot and ChatGPT. These tools were not great at the time, but if you were a junior who was over-hired into a company that didn’t have capacity to mentor you and you were working remote in the age when Reddit was promoting quiet quitting and overemployment on your feed every day, banging out PRs with GitHub Copilot for a couple hours a day and then going about your life for a $135K salary right out of college felt like you just hit the jackpot of historical confluences for work-life balance.
I saw this exact story play out at multiple companies who got burned out on the idea of hiring juniors due to the risk. Combine that with the rapid improvement of the LLM tools and the idea quickly became that you just hire seniors and treat the LLMs as juniors rather than paying another salary for them to pilot Claude Code around. The seniors had to review the Claude Code output anyway, so why not cut out the middleman?
Then add the economic downturn and the chaos of whatever this administration is doing this month and now there are so many qualified seniors on the market that hiring juniors is hard to justify. This is the part that would have happened with or without AI.
All things considered, being down only 20% from the 2022 peak seems not that bad.
This thread inspired me to write an article because from my perspective, the debate over AI and jobs is missing a crucial question: not whether employment will exist, but what holds communities together as systems erode?
https://www.robpanico.com/articles/display/the-answer-isnt-m...
> AI replaces codified knowledge – the kind of learning you get from classrooms or textbooks – but struggles with tacit knowledge, the experiential judgement that accumulates over years on the job. This is why seniors are spared and juniors are not. But Oks’ thesis treats this as reassurance: see, humans with deep knowledge still have comparative advantage! I believe this is more of a senior worker’s luxury, and the protection for “seniors” will move up and up the hierarchy over time.
Times change, the ladders you and I climbed to success may not be around in the same forms for our children. That's not new. But will there be any ladders to climb if the bottom rungs are all gone?
All the kids already wanted to be influencers; ironically, it turns out that's one of the safer career paths when AI is factored into the equation. Still not plumber or electrician safe, but the potential upsides are much higher.
AI Influencers are already making inroads. I don't think it's as safe as you think.
> ironically turns out that's one of the safer career paths
When I was doing mentoring there were dozens of young people pursuing influencer goals.
Zero of them made it anywhere.
It’s not a safe career path unless you ignore the 99.99% of influencers who don’t get traction and only look at the couple who become famous.
Yep, it's not any different than being a musician. Lots of people are good at writing/singing/playing music, very very few get anywhere with it.
AI is already taking over content generation.
Eventually it will be both created and consumed by AI, and fewer and fewer humans will be consuming it.
I will consume less of it, and have actively blocked or unsubscribed from orgs that promote it, but the generation behind us won't have these scruples.
Besides the AI angle other commenters pointed out, commercialized / "production line" influencers are already a thing and have been for years. I've seen some pretty dystopian studios being posted on social media (take that with a grain of salt because internet).
>> All the kids already wanted to be influencers; ironically, it turns out that's one of the safer career paths when AI is factored into the equation
This is quite false. It is trivial to generate UGC (user-generated content) using AI now, and the resulting short-form videos are virtually indistinguishable from the real thing.
Once demand drives electrician salaries up to $200k/year, the influencer grind will lose some of its shine
Why would demand go up?
Yes electricians are definitely safer than those of us who work in front of a computer all day, but I don’t think AI is good for them either. First of all, more young people might try to become one, potentially crowding the sector. Second, if the rest of us are poorer we’ll also spend less in housing and other things that require an electrician.
if software engineers get displaced, they will eventually drift into other jobs that benefit from people who solve problems the way software engineers do. and if the former software engineers are willing to work at rates reduced from their former positions, and if they bring better efficiencies with them, then there will be a slow cascade.
it doesn't follow that all software engineers are excellent at other work, please don't take that from my quip. but i could see the pattern, over time, being large enough to identify.
since software engineering jobs historically are very well paid, it does give some plausibility that former engineers working for less money would have this displacing effect.
it's all icky no matter what i think, maybe someone else can tell me why i'm wrong and cheer me up
> Brynjolfsson analyzed millions of ADP payroll records and found a 13% relative decline in employment for early-career workers (ages 22-25) in AI-exposed occupations since late 2022.
> So what’s the mechanism at play? AI replaces codified knowledge
Many job postings peaked in 2022 due to the pandemic. The original paper tries to account for this but falls short in my opinion.
Original paper said[1]:
> One possibility is that our results are explained by a general slowdown in technology hiring from 2022 to 2023 as firms recovered from the COVID-19 Pandemic...
> Figure A12 shows employment changes by age and exposure quintile after excluding computer occupations...
> Figure A13 shows results when excluding firms in information technology or computer systems design...
> ... These results indicate that our findings are not specific to technology roles.
Excluding computer and IT jobs is not enough in my opinion. Look at all these other occupations which had peak hiring in 2022 (a quick FRED pull, sketched at the end of this comment, makes this easy to check):
Nursing jobs in the US: https://fred.stlouisfed.org/series/IHLIDXUSTPNURS
Sales jobs in the US: https://fred.stlouisfed.org/series/IHLIDXUSTPSALE
Scientific research & development jobs in the US: https://fred.stlouisfed.org/series/IHLIDXUSTPSCREDE
Banking & finance jobs in the US: https://fred.stlouisfed.org/series/IHLIDXUSTPBAFI
[1] https://digitaleconomy.stanford.edu/app/uploads/2025/12/Cana...
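For anyone who wants to check the 2022-peak claim directly, here's a rough, untested sketch that pulls those four Indeed postings indexes from FRED and prints when each series topped out. It assumes pandas_datareader is installed; the series IDs are the ones linked above, and whether they actually peak in 2022 is exactly what this would show.

    # Untested sketch: fetch the Indeed job-postings indexes linked above from FRED
    # and report the date of each series' peak. Assumes pandas_datareader is
    # installed (pip install pandas-datareader).
    from datetime import datetime

    import pandas_datareader.data as web

    SERIES = {
        "IHLIDXUSTPNURS": "Nursing",
        "IHLIDXUSTPSALE": "Sales",
        "IHLIDXUSTPSCREDE": "Scientific R&D",
        "IHLIDXUSTPBAFI": "Banking & finance",
    }

    df = web.DataReader(list(SERIES), "fred", start=datetime(2020, 1, 1))

    for series_id, label in SERIES.items():
        peak = df[series_id].idxmax()  # date of the highest posting level in the window
        print(f"{label}: peaked {peak:%Y-%m} at index {df[series_id].max():.1f}")

If most of these top out in 2022 regardless of how "AI-exposed" they are, that's a point in favor of the post-COVID normalization story over the AI story.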
In every discussion of AI eliminating or dramatically reducing the compensation for <some large double digit percentage> of “white collar” jobs (and probably “blue collar” too), it’s unclear to me what the end state is: the vast majority of the economy works on volume. You need large numbers of people with enough money to buy your product/service. As wealth concentrates there are fewer potential buyers and economies of scale start working against producers. (And governments need people with money to tax…)
Sadly, that matters very little!
https://finance.yahoo.com/news/top-10-earners-drive-nearly-1...
If this is to be believed, regular consumer goods won't matter anymore, and instead you just cater to the wealthy.
The economy becomes a palace economy, where the money fountains are owned by a few and the loot slowly flows through rings of gatekeepers, while the outer rings are plagued by desperation and poverty despite living in the shadow of abundance. These have been common around the world and throughout history. It's the Star Wars fate.
Wealth likely won't mean much; it will be a concentration of power.
The question is if the AI will respect that concentration of power, or if it will just do its own thing.
The concentration of power will own the AI, so it will be at their whim.
The end state is economic collapse/feudalism - quite desired by various current oligarchs.
> found a 13% relative decline in employment for early-career workers (ages 22-25) in AI-exposed occupations since late 2022. For young software developers specifically, employment fell almost 20% from its 2022 peak
This is confounding AI-exposed white collar occupations with occupations that were overrepresented with extended remote work.
I am on multiple boards and that was a major factor that disincentivized new grad hiring in the US, because a new grad salary in a white collar profession in the US is a mid-career salary in the rest of the world.
AI is used as an excuse, but when polled, even most executives agree that they expect the number of employees being hired, at least in software-adjacent roles, to increase.
I cannot justify hiring a mediocre new grad in Seattle for $120k who will end up using Claude Code anyhow when I can hire an early-career employee doing something similar in Romania or India for around $20k.
The reality is a large portion of new grads and mid-career types who started their careers after 2020 are too mediocre for the TC paid.
---
Edit: pulling a comment of mine from downthread
> Why are then so many US developers still employed
Because unlike the HN hivemind, a large portion of experienced developers in the US have found ways to realistically adopt new technologies where they are relevant.
Reflexively being an AI fanatic or Luddite is stupid, but being a SWE who is able to recognize and explain the value of these tools and their limits is extremely valuable.
I can justify paying $300-400k TCs if you are not a code monkey. This means being able to architect, manage upwards, do basic design and program management, hop onto customer calls, and keep upskilling on top of writing, testing, and reviewing code.
We are not hiring SWEs to only push code. We hire SWEs in order to translate and implement business requirements into software.
A developer who has a mindset like that is worth their weight in gold, and there are still plenty of these kinds of experienced developers in the US.
> I cannot justify hiring a mediocre new grad in Seattle for $120k who will end up using Claude Code anyhow when I can hire an early-career employee doing something similar in Romania or India for around $20k.
Of course it's not justified, but I don't think it has anything to do with Claude Code or AI. It has always been true that you can hire competent programmers from Eastern Europe at a discounted price.
If you believe (whether this belief is based on reality or not) American programmers have a "better work ethic," are "easier to communicate with," or have "skin in the game," then they still have these traits in the AI era. If you don't, then you should outsource anyway.
It's also more than a little misleading to compare to the 2022 peak. Anybody who was hiring software engineers in 2020-2022 or being hired as one knows that was a wild and unsustainable period.
What, you mean a person who has only previously interacted with computers via smartphones taking a 6-month "JavaScript bootcamp" and getting a $150k/year salary on the other side isn't sustainable?
> I cannot justify hiring a mediocre new grad in Seattle for $120k who will end up using Claude Code anyhow when I can hire an early-career employee doing something similar in Romania or India for around $20k
Yes you can. Life and business is not about profit. It’s about bettering the lives of people. Make it a priority to hire American because you’re an American company.
You’re making a choice to prioritize profit (or foreign countries) over the country that you benefit from. This is an immoral and short sighted business decision, as you will eventually see a backlash from the host countries you’re effectively operating as a parasite in.
Not trying to persuade you, just laying out there are alternatives that’ll be a reality eventually. Take a look at the current political swings in Japan, Restore Britain, etc.
> You’re making a choice to prioritize profit (or foreign countries) over the country that you benefit from. This is an immoral and short sighted business decision, as you will eventually see a backlash from the host countries you’re effectively operating as a parasite in.
I have the vague sense we're far enough into e.g. offshoring that it's not purely about "profits" but about being competitive because all your competitors are doing the same thing.
But, then again, the increase in wealth inequality doesn't seem to be slowing (so profits /are/ being achieved), and I mostly think about businesses in robotics (and I don't spend that much time pondering it), where there's a lot of complexity in the stack, a need for more "manpower", and being smart with money spent is maybe /more/ important. Robotics is a smallll sliver of software dev companies... (thus, "vague sense")
>Life and business is not about profit. It’s about bettering the lives of people.
This mentality results in the grass at the Taj Mahal being cut with hand tools [0], or Japan having a whole category of "useless jobs" like elevator operators [1, 2] that simply exist to provide employment. Taken to an extreme, this is the broken-window make-work fallacy: if I smash a lot of windows, the local glazier gets paid handsomely, at the expense of everyone who had to pay for window replacements.
[0] https://www.youtube.com/shorts/wAH8jj9cm_o
[1] https://www.taipeitimes.com/News/editorials/archives/2015/06...
[2] http://www.ageekinjapan.com/elevator-operator/
Anything taken to an extreme is extreme; that includes capitalism.
We know that turning everyone and everything into a product has its own set of negative outcomes. Trying to play this off as a binary situation is a form of extremism in itself.
There is already the term Bullshit Jobs [1] for service economies like the US where huge numbers of people are employed as part of company bureaucracy rather than representing the most efficient outcome.
Simply put, trying to run a society like a business is going to make so many people unhappy that you end up with a revolution that tries to burn everything down and leads to a lot of death.
[1] https://en.wikipedia.org/wiki/Bullshit_Jobs
Are those people cutting the grass/operating the elevators happier/unhappier than they would be otherwise? (I don't know, but perhaps you do). You seem to be strongly implying that this is in some way "wrong" rather than a subjectively different view of the purpose of human existence - for what reason? (I'll ignore the glazier example as it seems quite extreme, and also comes with more obvious/specific "victims").
>Are those people cutting the grass/operating the elevators happier/unhappier than they would be otherwise?
There are numerous studies that show menial labor leads to poor mental health. Perhaps these people employed as makework automatons are happier than they would be if they had no employment whatsoever and were destitute on the street, but these are not the only two alternatives.
>I'll ignore the glazier example as it seems quite extreme, and also comes with more obvious/specific "victims"
The "victims" at the Taj Mahal/department store are the visitors/customers who have to pay slightly higher prices as a result. While not as extreme as the glazier in the broken window fallacy, the grass cutters/elevator operators exist on the exact same spectrum.
> business is not about profit.
Have you ever run a business? Literally all anyone cares about is profit. When I talk to potential investors, banks for loans, even the government for grants, all they're interested in is cash-on-hand, revenue, projections, and expenses. I have never once had a bank ask me if I was bettering the lives of my employees when applying for a loan.
I'm not saying it should be this way, or defending capitalism here, but until there's massive changes to the Western economic system... yes, businesses are about profit.
> Have you ever run a business?
yes
> Literally all anyone cares about is profit.
I would agree there's a lot of people that this is the case for
but it is not everyone
if my back was up against the wall I would rather shut down than e.g. dump PFAS into watercourses (3M style)
or fly-tip
or use AI
Capitalism really is like a disease of the mind: the idea that you absolutely have to extract as much wealth from a system as possible, and that there are no alternatives.
>>Brynjolfsson found a 13% relative decline in employment for early-career workers (ages 22-25) in AI-exposed occupations since late 2022. For young software developers specifically, employment fell almost 20% from its 2022 peak
>This is confounding AI-exposed white collar occupations with occupations that were overrepresented with extended remote work.
Yup. If you look at Brynjolfsson's actual publication [0], you'll see that the precipitous decline in hiring juniors in "AI-exposed occupations" starts in late 2022. This is when ChatGPT first came out, and far too early to see any effects of AI on the job market.
You know what else happened in late 2022? The end of ZIRP and the Section 174 amortization change, which immediately put a stop to the frantic post-COVID overhiring of bootcamp juniors just to pad headcount and signal growth. The problem with Brynjolfsson's paper is that it doesn't effectively deconvolve "AI-exposed occupations" from "ZIRP/Section 174-exposed occupations," which overlap significantly.
[0] https://digitaleconomy.stanford.edu/app/uploads/2025/11/Cana...
I am confused by your last paragraph. Is it AI or not? First three paragraphs sounded like it’s not…
It is not AI, because employees hired in Romania and the US are both expected to know how to use AI, which papers over performance issues in most cases that matter for a business (time to delivery), but I cannot justify hiring a deskilled NCG for $120k in the US.
Edit: cannot reply
> Why are then so many US developers still employed
Because unlike the HN hivemind, a large portion of experienced developers in the US have found ways to realistically adopt new technologies where they are relevant.
Reflexively being an AI fanatic or Luddite is stupid, but being a SWE who is able to recognize and explain the value of these tools and their limits is extremely valuable.
I can justify paying $300-400k TCs if you are not a code monkey. This means being able to architect, manage upwards, do basic design and program management, hop onto customer calls, and keep upskilling on top of writing and reviewing code.
We are not hiring SWEs to only push code. We hire SWEs in order to translate and implement business requirements into software.
A developer who has a mindset like that is worth their weight in gold, and there are still plenty of these kinds of experienced developers in the US.
Why are then so many US developers still employed? Some of them might be the best in the world, but they are a minority.
When I apply the brakes on my train, why doesn't it stop instantly?
The other thing is that regulations and tax related employment agreements between corporations and local governments are designed to prevent some offshoring of workers.
It's not a binary situation.
The article lays out the problems and why they are real. But I wish it would be bold and just say what’s needed to solve the issue. Without fair distribution of gains and with rapid concentration of wealth, the only viable solution in this jobless future is radically different taxation.
Speaking as a fellow job-displacement-worrier, I don’t think people have answers. But contrary to what a lot of people say, there is a ton of utility in pointing out a problem without having a solution. In this case, I think a lot of people who might have good ideas are currently under the mistaken impression that this isn’t a problem.
To the extent that I’ve heard people propose solutions, many of them have pretty big flaws:
- Retraining - AI will likely swoop in quickly and automate many of the brand new jobs it creates. Retraining also has a bit of a messy history; it was pretty ineffective at stopping the bleeding when large numbers of manufacturing jobs were offshored/automated in the past.
- “Make work” programs - I think these are pretty silly on the face of it, although something like this might be necessary in the really short term if there’s very sudden massive job loss and we haven’t figured out a solution.
- Universal Basic Income - Probably the best system I’ve heard anyone propose. However there are 3 huge issues: 1 - politically this is a huge no-go at the moment (after watching the massive Covid stimulus happen in 2020 I have a sliver of hope, but not much). 2 - Even a pretty good UBI probably wouldn’t be enough to cushion the landing for people who make a lot right now and have made financial decisions (number of kids, purchasing a house, etc) on the basis of their current salary. 3 - Even if this happens in America (presumably redistributing the wealth accruing to American AI companies) it would leave non-Americans out in the cold, and we currently have no globally powerful institution with the trust and capability to manage a worldwide UBI.
I feel like people underrate make work a bit. If you look around at our infrastructure in the US, the number of roads and bridges with flaws, decaying buildings, the lack of housing in areas...
It's clear there are some things out there that aren't economically very profitable to do but would be nice to have done. So public works programs could soak up a lot of that and turn labor power on various stuff pretty easily, I think.
Yup, there's a huge number of entirely physical/analogue ways that "many hands" could make the world a significantly nicer and more sustainable place. Public works, environmental works, having the capacity to do more than the bare minimum for the quality of the built environment - there is no shortage of things worth doing, just a shortage of things worth doing profitably.
>I feel like people underrate make work a bit.
I think those are the same people that ignored the history of the https://en.wikipedia.org/wiki/New_Deal and the massive amount of infrastructure it built in the US that we still use to this day.
A simpler answer would simply be that, if you lay someone off on the basis that an AI can replace their entire job functionality, you have to keep paying their salary dollar for dollar until they find something else to do. This incentivizes companies to try and figure out creative ways to continue using their existing workforce to maximize the value they get out of AI systems.
You’d counterbalance that - and solve the other problem - by offering massive tax relief for companies who hire junior employees. In the same way that we use tax relief to encourage real estate and infrastructure investment in underserved areas, we can use it to tip the scales of economic rationality toward continuing to employ young people with no experience or specialized expertise.
Notice that neither of these proposals requires redistribution as such (seizing wealth).
> A simpler answer would simply be that, if you lay someone off on the basis that an AI can replace their entire job functionality, you have to keep paying their salary dollar for dollar until they find something else to do.
This just incentivizes them to find a different official reason for firing. Like missed deadlines (that suddenly became shorter) or, in computing jobs, code quality (which suffers due to the reduced deadlines).
> This incentivizes companies to try and figure out creative ways to continue using their existing workforce to maximize the value they get out of AI systems.
This does nothing for the current issue of entry-level positions in the job market, where there is the most pressure from AI. It only helps people who already hold a position.
So then the corps find a way to fire you for something other than AI displacement, replace you with AI anyway, and you’re on your own. Basically identical to firing someone in a clever way that avoids having to pay unemployment, which already happens quite frequently.
I don’t understand why taxation is so off limits to this crowd. We seem to live in a death cult where avoiding a slight inconvenience to 100 people is more important than providing a decent standard of living for the other 345 million people. You can invent whatever clever little solution you want in the meantime but eventually the chickens will come home to roost.
>I don’t understand why taxation is so off limits to this crowd.
HN is filled with lots of temporarily embarrassed millionaires and many actual millionaires too. These are the ones that have bought into zero-tax, government-is-all-bad, free-market-capitalism-for-me Randian ideas without any systematic thought on how their ideas would work out in practice.
Add to this that a lot of media, and pretty much everything on TV, is owned these days by billionaires who use the news as their platform to propagandize on why they should own more of everything and become richer, so it's not exactly surprising we're in this place.
>> Universal Basic Income - Probably the best system I’ve heard anyone propose.
I can't understand how that would work. If you put an income floor under everyone, their rents and other basic bills will simply increase to eat the free money. None of the experiments on how people will use UBI have taken that into account since the experiments were on relatively few people in an area. The other issue is how to pay for it - it has to come from taxes somewhere.
Doesn't that kinda show that these services are not actually based on creating any genuine value, but are rather just parasites that squeeze as much money from their victims as they can based on the victim's income, rather than on the product they can offer?
My personal worry about UBI is that it will simply be transferred to landlords. We need to figure out how to solve the housing problem.
It is always possible to attribute whatever harms we come to as a result of some technology to one underlying issue or another, never to the technology itself—regarding any technology, be it LLMs or guns, this can be considered technically correct 100% of the time, because no technology is inherently good or bad.
That said, in the face of a particularly disastrous (and yet predictable) outcome it is not enough to call for solving such underlying issues; it is vital to solve them before we introduce the respective technology all over the place, and if that is not possible, to adjust how that technology is rolled out.
Mass extermination through famine, genocide or plague is another outcome. An Elysium earth worked by robots is a vision tech bro billionaires are rooting for and building towards.
As for your idea, I see no signs of them striving to redistribute their wealth.
Imagine wasting so much time on this AI-taking-jobs debate while US job creation is tanking due to very bad policies. Anything but tackling real issues, I guess.
> why worry about two problems when we could just worry about one?
because one is not actually a problem
The author clearly disagrees with that statement.
The US unemployment rate is really low right now. It's low by the historical standards of the US, and it's super low compared to Europe.
It's simple, if you refuse to adapt you will be replaced by those who will.
Writing code by hand is not going to be the default mode going forward. You either do the majority of your work controlling autonomous agents and reviewing their work or you get surpassed by all of your colleagues.
Are you going to be the farmer who refuses to buy a plow?
I also do not have sympathy for those who refuse to adapt. These people hold back organizations by appealing to tradition and resisting any form of change.
You can have my job I guess, I'm not going to sit at a prompt all day being a manager for a computer. It's not an appeal to tradition, I genuinely enjoy programming. Keep up that grindset young pup!
Go back to LinkedIn.
This take seems to require that models stop getting better at some capability level a little above where they are now. Is this a future event that you are very confident of?
AGI is mathematically impossible given our current electrical infrastructure constraints and basic laws, so yes it is.
We're already seeing a dramatic slowdown in relative improvements.
The gap between Sonnet 3.5 (June 2024) and Opus 4.6 (Feb 2026) is large, but it's not 1,000x. Not even 100x.
Well, I hope you're right. Good luck to us all.
If your review of AI generated code is not comparable to writing it yourself, I have some real concerns about the quality of your reviews.
Alr bro we're making software not going to war.
Now you know how these guys see everything.
Like when Trump tells Canada (paraphrasing) "You think we're going to let China eat you first and just have the scraps?"
Everything is a zero-sum game to them. They are philosophically void.
It's not about adapting. I use AI a bit in my work, not fully agentic. I've seen what it does, I've seen what it's good at.
The problem begins when people see this as a know-everything magic orb and trust it blindly for everything. It's still a pattern-matching model; it's not sentient. It should remain a tool, not something you ask to make decisions for you.
Also, I've seen people waste a massive amount of tokens to add two tabs to a line of code to fix indents. They said they didn't want to click the damn line. Bro, you typed a longer prompt and sent the complete file instead of making two keystrokes. And guess what, it took multiple attempts. It's like watching someone type google into google 3 times before typing what they want.
Not all software is simple CRUD from your standard consulting business, which makes more money the quicker something is finished. Some software runs critical, life-threatening infra everywhere. We need people who have the skills to build these systems, and thanks to the AI bros they're being discouraged from doing so starting at the school level.
Or: you'll be fine for a year or two.
Then the models will put you out of work. Nobody will need you.
We'll have a world full of largely useless humans.