People like Altman and Musk are saying that Universal Basic Income will be necessary once AI has fully automated away most jobs, but at the same time they aggressively fight against any kind of tax policy that would allow UBI to function.
I am convinced that their talk of UBI is just handwaving; they're trying to convince us that there will be a solution to the destruction of the economy as we know it, so that we'll just let them do whatever they want.
It isn't the backlash against AI that will get ugly, it will be the backlash against the ten people who suddenly own the entire world's money supply
It's the same "give me a lot of money and everything will be great for everyone!" pitch that rich guys have been running for all of human history.
I keep warning people that promising UBI and not delivering UBI both serve the same end: undermining opposition.
By the time you find out that their promises of UBI are empty it’s too late to do anything about it.
Given how resistant American voters and politicians are to any sort of welfare or social assistance, I doubt UBI would ever be possible here. Remember the backlash against "ObamaPhones" and "welfare queens"! We can't even get mandatory paid parental leave approved; UBI would be a non-starter.
Americans are fine with low taxes for billionaires and don't mind high inequality, because one of their core beliefs is that upward class mobility is achievable and they might also get rich one day.
What’s an example where they took a side on taxes?
Elon's companies famously pay very little in taxes, he spent last year attempting to gut the federal government, he complains constantly about how much he pays in taxes, and he's been very vocal about California's recent efforts to tax very wealthy people.
Google is your friend.
https://finance.yahoo.com/news/elon-musk-bashes-government-t...
Support for Trump, or even Republicans writ large, means support for reducing taxes (both estate and income) on the wealthy, while increasing them on consumers (via tariffs). Musk has been an ardent supporter of Trump.
A few days ago he tweeted something along the lines of:
“Bitches Money No Taxes Party”
I think he deleted it afterwards
Haven't really paid attention to Altman, so can't comment there, but on the Musk file I will say it is insane that anyone relies upon his future benevolence. And they do rely upon it given that America is 100% a plutocracy now and is run in the service of the ultra-rich who hold complete and utter control over government.
Musk's entire history on this planet betrays him to be a profoundly selfish individual with perilously little regard for anyone else. Musk and his ilk (Trump, Bezos, Page, Ellison, Thiel, etc) are more likely to see you ground up into Soylent Green than to offer largess like UBI.
UBI == neo-slavery
> Already, as many as a quarter of Americans seem accepting of violence as a tool for achieving political change.
I'm surprised it's only a quarter: violence as a tool for achieving political change is the entire point of the right to bear arms.
EDIT: I'm not arguing for or against political violence, just noting an apparent inconsistency between Americans' views and one of the documents that they talk about as though it's holy writ.
It's 100% accurate to say that the history of the United States is filled to the brim with political change via violence.
Some friends and I read "A People's History of the United States" a while back and were surprised at how true this is. US classroom history textbooks hold civil disobedience up as the One True Way to bring change, but it's alarming how often the backdrop of famous acts of civil disobedience was in fact incredible violence.
Our conclusion in our impromptu book club was that this made sense: why would state schools give students lots of examples of how violence against the state was an effective negotiating tool? It was extremely jarring to reconcile with the image of US history we'd been imbued with up to that point, which of course was also a reflection of our socioeconomic status at the time.
As a counterpoint, "The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants" is also taught in schools, so it's possible I'm just selectively remembering things.
I don’t condone it, but I’m also expecting it to escalate. I grew up extremely poor and remained so until I dug myself out (through an absolutely ridiculous amount of work that no one should have to do; this is not pro-bootstraps).
Every week was a struggle to eat and the cost of living has significantly increased since then.
I guess the question is what is the terminal percentage of people who can’t afford to exist?
It’s a 25% increase… for every meal missed.
Ignoring CEO predictions, can anyone point to any major revolutionary technology that had a net negative impact on quality of life and employment statistics? AI is an incredibly powerful technological shift in our way of life, but where is the net employment hit taking place? Unemployment numbers remain stable. Revolutions like this do create widening inequality while also increasing long-run productivity. Yes, inequality rises, but what you should care about is your quality of life, and that will also improve over time. There will be suffering during the transition, and there will be many who don't fare well, but this happens during every major revolution: electricity, the internet, etc. So why do people treat AI like it's a uniquely damaging phenomenon?
> There will be suffering during transition and there will be many that don’t fare well
Yeah, and the people suffering are not going to like that. And people who are merely afraid of ending up in that group won't be happy about it either.
If you put yourself in the shoes of someone suffering from AI, how comforting do you think your observations here are?
Don't ignore the transient.
The industrial revolution created a hell on Earth for workers for the better part of a century.
You are confusing the macro view with the personal. Try losing your job and being told "don't worry, your quality of life will improve over time!" Would you respond positively?
https://www.amazon.com/Blood-Machine-Origins-Rebellion-Again... has some pertinent examples.
The industrial revolution made the lives of many people into a hell. In the long term we gained, but only after those people went through periods of violence and fighting.
> AI is an incredibly powerful technological shift in our way of life but where is the net employment hit taking place? Unemployment numbers remain stable.
At this point, a lot of AI is hot air rather than a powerful shift in our way of life.
> They want to replace workers
A simple question none of the ai-doomsayers can answer... who buys anything when nobody has a job cos robots do everything?
The true AI doomsayers believe in some sort of technological singularity, which means a point after which things become so strange that the world is radically transformed.
Things like "jobs" and "careers" are so integral to society that we can't really imagine what society would be like in a world where people don't have any clear purpose. That's why you won't get a definitive answer. The whole idea of a singularity is that people don't have the faintest clue what day to day life would look like after.
We often choose to believe that a singularity can't happen, because we don't know what that even means. We can't answer the simple question. So it had better not happen; that would be very inconvenient.
I believe that AI will continue to progress. I believe that we’re going to see a fast takeoff.
That said, some people are now discussing a “societal singularity” wherein society breaks before the actual emergence of AGI. I believe this is the trajectory we are on. The question is what happens to the unemployed. Democracies will not tolerate mass permanent unemployment, as we’ve seen over and over again.
UBI is a scam, many middle class folks would be worse off under UBI than they are under the current system. They will fight to defend the economic status quo.
In the end, I think capitalism is incompatible with the emergence of AGI, and I think an aligned ASI will smash the capitalist system simply out of pure egalitarianism. (Note: I was previously a proponent of capitalism.) I think many people will die trying to defend capitalism. We’re at the beginning of the AI wars.
I’m always amazed that when I tell people I intend to retire in my 50s, they tell me that I can’t possibly mean that and actively wonder how I could possibly fill my time. It’s as if we could not possibly function as humans without the meaningless shifting of tangibles/intangibles from one place to another.
Society is so hellbent on the idea that we need our job to be our identity that it lacks the imagination for any other reality.
It’s ridiculous.
Sure, working sucks, but have you tried not working? I say this from lived experience, because I've gone through stretches of not working (intentionally). It can be challenging to find a sense of fulfillment. I know it seems counter-intuitive, but if you do succeed in your dream of retiring in your 50s, I think you'll understand what I mean when you get there.
It is indeed ridiculous. People are saying they're going to let someone else tell them what to do with their time, energy, and calendar, even if they hate doing it. The only explanation I have is that they have been letting the wrong people program them.
It can't happen. For one, if it did happen, it would mean all domains reaching singularity at once, but we know the capability curve is jagged. Each domain advances at its own speed.
Second: the more progress you make, the harder it gets, exponentially harder. Maybe Newton could advance physics by observing an apple fall; today we need space telescopes and billion-dollar particle accelerators. The more tech advances, the harder it is. Will AGI be so "super" that it cancels out the exponentials?
And third: AI progress is tied to learning signal, and we have exhausted the available data. In the last 1-2 years we have started using verified synthetic data (RLVR: reinforcement learning with verifiable rewards), but exponential difficulty is a barrier. Other domains don't even have the built-in verifiability that math and code do, so progress there will be slower. Testing a vaccine for safety takes 6 months to get 1 bit of information; that is how slow and expensive it can get in some domains. AI can't get the learning signal it needs across all domains fast enough.
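The "built-in verifiability" point can be made concrete with a toy sketch of the RLVR idea: the training signal comes from an automatic check of the model's output, not from human labels. Everything here (the `verify` function, the sample answers) is purely illustrative, not any lab's actual pipeline:

```python
# Toy illustration of RL with verifiable rewards (RLVR):
# a candidate answer is scored by a programmatic check, which is
# cheap and automatic for domains like math and code. Domains
# without such a check (e.g. vaccine safety) can't generate this
# signal anywhere near as fast.

def verify(candidate: str, expected: int) -> int:
    """Return reward 1 if the candidate answer passes the check, else 0."""
    try:
        return 1 if int(candidate.strip()) == expected else 0
    except ValueError:
        return 0  # unparseable answers earn no reward

# (model answer, ground truth) pairs for a toy arithmetic task
samples = [("4", 4), ("five", 5), ("7", 7)]
rewards = [verify(answer, truth) for answer, truth in samples]
print(rewards)  # → [1, 0, 1]
```

The asymmetry the comment describes is exactly this: where `verify` is a millisecond of computation, reward signal is abundant; where it is a six-month clinical trial, it is not.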
You're asking a question that only applies to rational actors.
Corporations exist for one purpose: to make as much money as possible. Side concerns, ranging from "not destroying the environment" to "not destroying the economy," are objectively not their goal, nor do they consider them their responsibility. Those are things "someone else" should worry about.
AI destroying all jobs is similar to a nuclear arms race; these companies don't want to eliminate everyone's ability to buy things, but they don't want to be the only entity without that ability, so ...
That is mostly true but a bit of a simplification. They exist to do what the people who have power want them to do which is not always strictly profit maximization.
A CEO may realize RTO will decrease profits but do it anyway, because it increases the power delta between him and the workers.
"not always strictly profit maximization."
Maybe in the short-term but public companies with shareholders won't allow this in any sort of long-term way right?
Not allow it? They insist upon it!
The controlling votes are all part of the same social class. They would gladly give up a small amount of profit to keep the distance between them and the workers as large as possible.
Nobody can answer that?
There are jobs AI can't easily come for... not always nice ones, but either too physically fiddly or too cheap to bother automating.
But jobs go "extinct" all the time. My ancestors going back generations were sugarhouse labourers. That job's gone, but the lineage isn't: we just do different things now.
The pattern seems pretty consistent: raise the floor (dishwashers, CNC machines, laundry), and people tend to climb to higher levels of abstraction. The real question is who captures those productivity gains, and historically, it isn't the workers.
Shoes are the classic example. Automation made them cheaper and accessible to everyone. Then, once the market was captured, mid-tier became the ceiling and anything above it got expensive again. Nobody won except the owners.
There will still be jobs. Manual jobs, the kind that break our backs and have us breathing various stuff we shouldn't (dust, fumes). Robots are difficult, and maybe not so economically viable when everyone is desperate for any job at any cost.
Why would the doomsayers be the ones who need to answer that? That’s kind of their point! It’s the AI boosters who need to answer that, and so far it’s just a big collective shrug + silence.
We shouldn't be surprised people have a negative view of AI when Altman et al. have stated on stage that the goal is to replace everyone.
It's bizarre that some of the doomsayers are AI stakeholders. It's like they don't realize that most people don't have net worth in the 7-8 figures.
I console myself with the fact that without a functioning economy, AI will implode since capital will dry up. Then all of the investment in data centers, R&D, etc. will never be recovered. Then we'll be back to rational thinking? Maybe?
They realize it, and they don't care.
Yeah, but it doesn’t implode all at once - it’s not distributed evenly.
Something like over half of the US consumption is done by the top 10%, or something insane like that. This leads me to believe that a lot more people will eat shit, before enough feel real pain.
The consumer economy only exists to extract value from common people and funnel it up the wealth ladder. If robots and AI take over all the production, you don’t need a consumer economy, the robots produce and their output directly goes to the top. The rest of us are left to starve.
Take any econ 101 course and you'll realize that this isn't a factor in the capitalist system. Capitalism is simply concerned with maximizing profit, and in this case, returning shareholder value. It's simply not in the purview of the system to think about what happens when you completely get rid of your labor force.
Envisioned another way, the future of labor might look the way it did for laborers over 100 years ago, before major industries unionized; making 'Amazon-bucks' that can only be redeemed at the 'Amazon company store'.
Fully automated luxury space fascism doesn't really need buyers. A risk of high automation/post-scarcity is that abundance exists but remains under the control of people who are not interested in justice or equality or freedom. Lots of people feel that describes the leadership of most AI tech companies.
If they don't need your labor, and they don't need you as a customer, and they don't care about you as a person... where does that leave you?
[to be clear, I think post-scarcity, even in knowledge work, is a lot further off than most ai-doomsayers or ai-worshipers who take statements from people like Altman and Musk at face value]
The answer to every question: Agents, of course! With GPU-collateralized credit or some other idiocy.
The robots will tell you what to do, you will own nothing, and you will be happy. I think that is the plan?
If you have a magic robot that builds everything you want you don’t need anyone to buy anything.
Jfc this site is the worst. Use your words instead of drive-by downvoting.
Where do the raw materials for the thing it's building (or the robot itself for that matter) come from?
From the earth. Maybe in the future space.
Finite natural resources are, by their very nature, limited.
Yes, finite things are finite. Glad we cleared that up.
Finite means not free. Who will pay?
Let’s hope so
1. If AI is like other technologies, there will be job displacement and temporary upheaval, after which new jobs will be created and prosperity increases - this is by far the best way to increase prosperity
2. If AI is so good that it is a proper superset of humans and can do all jobs humans can do, this is a huge deal and we don’t even have the vocabulary to express what would happen
I don’t foresee a third option.
The fact that no one can actually describe these "new jobs" makes #2 appear increasingly probable.
http://archive.today/8PRnh
My main problem is this:
- If (big If) AI actually replaces workers, then we have a problem, because lots of folk lost their jobs
- If AI doesn't replace workers, then we have a recession, because a lot of the US economy now sits on top of corporations betting on it. And this will tank the economy and lots of folk will lose their jobs
It feels like the only path forward is a narrow one where AI removes some jobs, but not too many, yet still enough that the (immense, disproportionate) hype that was put on it doesn't come back with a vengeance and the house of cards falls.
I don't think it's an either/or. The current AI models, as they are, improve productivity quite a bit. They're just super expensive, but the expense is being subsidized so it appears reasonable.
An alternative possibility is that the models become much cheaper and their use becomes more ubiquitous which would be helpful.
If your competitor is cutting jobs because of AI you can either race them to the bottom or you can use the humans you already have to leverage AI to expand your product offering, become more competitive, tackle more work, deliver better quality results etc. I don't see a world where AI does the work and humans sit around poor and idle.
It's probably a combination of both. Also, the "lots of folk" in the two scenarios are probably different orders of magnitude.
If either case happens, you can be sure the people most responsible will be the least affected. That is why this can get ugly. Honestly, they deserve for it to get ugly. We can't keep giving the Musks, Zuckerbergs, Altmans and co the benefit of the doubt.
There is a third and, to me, so much more likely outcome that it's not even worth talking about the other two: AI makes workers more productive and unlocks more economic activity.
Fixed pie fallacy.
Infinite pie fallacy.