I think this is far too nuanced. I am terrified by what the civilization we have known will become. People living in less advanced economies will do OK, but the rest of us not so much. We stand on the brink of a world where some wealthy people will get more wealthy, but very many will struggle without work or prospects.
A society where a large percentage have no income is unsustainable in the short term, and ultimately liable to turn to violence. I can see it ending badly. Trouble is, who in power is willing to stop it?
> Trouble is, who in power is willing to stop it?
Absolutely no one.
https://www.penguinrandomhouse.ca/books/719111/survival-of-t...
https://www.youtube.com/watch?v=pwJQEAI_KE0
I definitely recommend watching this video with Reinhold Niebuhr.
Sure, some things deteriorate, but many things improve. Talking about a net decline (or net gain) is very difficult.
Every age has its own set of problems that need to be solved.
https://www.youtube.com/watch?v=93EJJVAinRc
Yes, that's why they are in a race to build very advanced robots: to prevent the violence from being directed at them.
Gaza is kept as a testing ground for domestic spying and domestic military technology intended to be used on other groups. Otherwise they'd have destroyed it by now. Stuff like Palantir is always tested in Gaza first.
> Otherwise they'd have destroyed it by now.
About that…
That is exactly the motivation. The problem with being a billionaire is you still have to associate with poor people. But imagine a world where your wealth completely insulates you from the resentful poor.
Watch, or read "altered carbon" for a taste of that future.
How does a billionaire have to associate with poor people? They can live in a complete bubble: house in the hills, driven by a chauffeur, private jets, private islands for holidays etc...?
The people who cook for them, the people who clean for them, the ones who take care of their kids, the ones who sell them stuff or serve them in restaurants...
They have separate kitchens for the prep, the cleaners work while they’re out on the yacht, they have people to do the buying, and the restaurants they visit have very well trained staff who stay out of the way.
Also, they're not building the house or the jet, they're not growing the food... People close enough can be chosen for willingness to be sycophants and happiness to be servants. But unless you're feeding yourself from your own farm, or manufacturing your own electronics, there are limits to even a billionaire's ability to control personnel.
Unless they’re living entirely by themselves, they will always be dependent on poor people.
The poor maids and servants, the poor chauffeur, the poor chef, etc.
The fact that people can see that the singularity is basically happening, but can't imagine humanoid robots getting good rapidly, is why most people here are bad futurists.
You're delusional if you think the singularity is happening.
> the singularity is happening
[Citation needed]
No LLM is yet being used effectively to improve LLM output in exponential ways. Personally, I'm skeptical that such a thing is possible.
LLMs aren't AGI, and aren't a path to AGI.
The Singularity is the Rapture for techbros.
If you look at the rapid acceleration of progress and conclude this way, well, de Nile ain't just a river in Egypt.
Also yes LLMs are indeed AGI: https://www.noemamag.com/artificial-general-intelligence-is-...
This was Peter Norvig's take. AGI is a low bar because most humans are really stupid.
> If you look at the rapid acceleration of progress
I don’t understand this perspective. There are numerous examples of technical progress that then stalls out. Just look at batteries for example. Why is previous progress a guaranteed indicator of future progress?
> rapid acceleration
Who was it who stated that every exponential was just a sigmoid in disguise?
> most humans are really stupid.
Statistically, don't we all sort of fit somewhere along a bell curve?
If you think AGI is at hand why are you trying to sway a bunch of internet randos who don’t get it? :) Use those god-like powers to make the life you want while it’s still under the radar.
What rapid acceleration?
I look at the trajectory of LLMs, and the shape I see is one of diminishing returns.
The improvements in the first few generations came fast, and they were impressive. Then subsequent generations took longer, improved less over the previous generation, and required more and more (and more and more) resources to achieve.
I'm not interested in one guy's take that LLMs are AGI, regardless of his computer science bona fides. I can look at what they do myself, and see that they aren't, by most reasonable definitions of AGI.
If you really believe that the singularity is happening now...well, then, shouldn't it take a very short time for the effects of that to be painfully obvious? Like, massive improvements in all kinds of technology coming in a matter of months? Come back in a few months and tell me what amazing new technologies this supposed AGI has created...or maybe the one in denial isn't me.
People in power won't act out of foresight or ethics. They'll act when the cost of not acting exceeds the cost of doing something messy and imperfect.
Even that's giving them too much credit. They'll burn it all down to preserve their fragile egos.
I wonder, will the rich start hiring elaborate casts of servants including butlers, footmen, lady's maids, and so on, since they'll be the only ones with the income?
They already do and always have. They never stopped hiring butlers (who are pretty well paid BTW), chefs, chauffeurs, maids, gardeners, nannies.....
The terminology may have changed a bit, but they still employ people to do stuff for them.
One big difference is that while professional-class affluent people will hire cleaners or gardeners or nannies for a certain number of hours, they cannot (at least in rich countries) hire them as full-time live-in employees.
There are some things that are increasing. For example, employing full-time tutors to teach their kids, as rich people often used to do (say, 100 years ago). So their kids get one-to-one attention while other people's kids are in classes with many kids, and the poor have their kids in classes with a large number of kids. Interestingly, the government here in the UK is increasingly hostile to ordinary people educating their kids outside school, which is the nearest we can get to what the rich do (again, hiring tutors by the hour, and self-supply within the household).
They also hire people to manage their wealth. I do not know enough about the history to be sure, but this also seems to be a return to historical norms after an egalitarian anomaly. A lot of wealth is looked after by full-time employees of "family offices", and the impression I get from people in investment management and high-end property is that this has increased a lot in the last few decades. Incidentally, one of the questions around Epstein is why so many rich people let him take over some of the work that you would expect their family offices to handle.
As far as I can tell, the rich have never stopped employing elaborate casts of servants; these servants just go by different titles now: private chef, personal assistant, nanny, fashion consultant, etc.
They already do. In fact, we are all working in service of their power trips.
Who do you think is building the machines for the rich? All of these tech companies are nothing without the employees that build the tech.
This is what the service economy in the imperial core already is.
It's regression to the mean in action. Everything eventually collapses into oligarchy, and we will simply join the unprivileged rest in their misery. Likely with a few wars, civil or not, here and there.
> very many will struggle without work or prospects.
People always say this with zero evidence. What are some real examples of real people losing their jobs today because of LLMs? Apart from copywriters (i.e. the original human slop creators) having to rebrand as copyeditors because the first draft of their work now comes from a language model.
Bookkeepers, graphic artists.
I wouldn't let an LLM touch my business's books with a 10-foot pole.
>We stand on the brink of a world where some wealthy people will get more wealthy, but very many will struggle without work or prospects.
Brink? This has been the reality for decades now.
>A society where a large percentage have no income is unsustainable in the short term, and ultimately liable to turn to violence. I can see it ending badly. Trouble is, who in power is willing to stop it?
Nobody. They will try to channel it.
I think all signals are pretty inevitably pointing to three potential outcomes (in order of likelihood): WW3, a Soviet-style collapse of the West, or a Soviet-style collapse of the Sino-Russian bloc.
If the promise of AI is real, I think it makes WW3 a much more likely outcome: a "freed up" disaffected workforce pining for meaning and a revolutionized AI-drone-first battlefield both tip the scales in favor of world war.
Welcome to capitalism!
Besides being a bit of a shallow comment, what exactly are you implying here? That capitalism logically implies that the rich become richer? I don't think this is necessarily the case; it just needs a stronger government than what the US currently has in place (e.g. progressive taxation and strong antitrust policy seem to work fairly well in Europe).
We have a lot of people; capitalism values them at approaching zero; anything that alters that valuation (without reducing the population) is contrary to capitalism. Capitalism means the rich must get richer: they own the resources and the means of production, so they take the reward.
It comes to a point where they need an underclass to insulate them from the masses. Look how cheaply Trump bought his paramilitary, though: he only had to spend the money taken from those he's suppressing, and didn't have to reduce his own wealth one bit. The military and his new brownshirts will ensure the rich stay rich and that eventually there is massive starvation (possibly water/fuel poverty first).
Or the USA recovers its constitution, recognises climate change, and starts to do something about it.
It seems like the whole of humanity's future hinges on a handful of billionaires' megalomania, and that riding on the coattails of Trump's need to not face justice for his crimes.
Capitalism just means private citizens can own the means of production (e.g. start a business, buy stock) and earn a return on investment. It doesn’t mean only the rich must get richer. It means anyone who saves and invests their money instead of spending it gets richer.
However, capitalism is perfectly compatible with a progressive taxation system such that the rich get richer at a lesser rate than the poor get richer.
But with how compounding works, isn't this outcome inevitable in capitalism? If the strong government prevents it then the first step for the rich is to weaken or co-opt the government, and exactly this has been happening.
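A toy compounding sketch in Python (every number here is an invented assumption, purely illustrative) shows the mechanism: unless taxation pulls the net return on capital down to what wage-savers manage, the wealth ratio keeps growing.

    # Toy model of wealth compounding under different net rates of return.
    # All figures are illustrative assumptions, not data.
    def compound(principal, net_rate, years):
        """Wealth after `years` of compounding at `net_rate`."""
        return principal * (1 + net_rate) ** years

    rich, poor = 1_000_000_000, 10_000
    for tax_on_returns in (0.0, 0.5):          # 0% vs 50% tax on capital returns
        r_rich = 0.07 * (1 - tax_on_returns)   # gross 7% return on capital, taxed
        r_poor = 0.02                          # wage-saver's net growth, ~2%
        gap = compound(rich, r_rich, 30) / compound(poor, r_poor, 30)
        print(f"tax={tax_on_returns:.0%}: wealth ratio after 30y = {gap:,.0f}x")

Even the 50% tax in this toy model only slows the divergence (a net 3.5% still beats 2%); the ratio stops growing only when the net rates are equal, which is exactly why weakening the government that sets those rates is the first move.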
Isn't that what Americans call socialism?
I have deep concerns surrounding LLM-based systems in general, which you can see discussed in my other threads and comments. However in this particular article's case, I feel the same fears outlined largely predate mass LLM adoption.
If you substitute "artificial intelligence" with offshored labor (the "Actually Indo-Asians" meme moniker), you have some parallels: cheap spaghetti code that "mostly works", just written by farms of humans instead of farms of GPUs. The result is largely the same. The primary difference is that we've now subsidized (through massive, unsustainable private investment) the cost of "offshoring" to basically zero. Obviously that has its own set of problems, but the piper will need to be paid eventually...
Commercial ventures already had to care exactly to the extent that they are financially motivated by competitive forces and by regulation.
In my experience coding agents are actually better at doing the final polish and plugging in gaps that a developer under time pressure to ship would skip.
LLMs are an embodiment of the Pareto principle. Turns out that if you can get an 80% solution in 1% of the time, no one gives a shit about the remaining 20%. I agree that's terrifying. The existential AI risk crowd is afraid we'll produce gods that destroy us. The reality is that we've instead exposed a major weakness in our culture: we've trained ourselves to care nothing about quality and instead to maximize consumption.
This isn’t news really. Content farms already existed. Amusing Ourselves to Death was written in 1985. Critiques of the culture exist way before that. But the reality of seeing the end game of such a culture laid bare in the waste of the data center buildout is shocking and repulsive.
The data center buildout feels obscene when framed this way. Not because computation is evil, but because we're burning planetary-scale resources to accelerate a culture that already struggles to articulate why quality matters at all.
There isn't nearly enough AI demand to make all of these projects turn a profit.
As much as we speak about slop in the context of AI, slop as the cheap low-quality thing is not a new concept.
As lots of people seem to always prefer the cheaper option, we now have single-use plastic ultra-fast fashion, plastic stuff that'll break in the short term, brittle plywood furniture, cheap ultra-processed food, etc.
Classic software development always felt like tailor-made work to me: of course it's slow and expensive, but done by professionals it can give excellent results. Now, if you can get crappy but cheap and good-enough results, of course that'll be the preferred option for mass production.
The cream rises to the top. If someone's shit-coded program hangs and crashes frequently, in this day and age we don't have to put up with it any longer. That lazy half-assed feature that everyone knows sucks but we're forced to use anyway? The competition just vibe-coded up a hyper-specific version of that app that doesn't suck for everyone involved. We start looking at who's requiring what, what's an interface, and what's required to use it. If there's an endpoint that I can hit, but someone has a better, more polished UI that users prefer, let the markets decide.
My favorite pre-LLM thing in this area is Flighty. It's a flight-tracking app that takes available data and presents it in the best possible way. Another one is that EU border visa residency app that came through here a couple of months ago.
Standards for interchange formats have now become paramount.
API access is another place where things hinge.
AI slop is similar to the cheap tools at Harbor Freight. Before, we used to have to buy really expensive tools that were designed to last forever and perform a ton of jobs. Now we can just go to Harbor Freight and get a tool that is good enough for most people.
"80% as good" may be reframed as "100% OK for 80% of the people". It's when you are in the minority that cares about or needs that last 20% that it becomes a problem, because the 80% were subsidizing your needs by buying more than they need.
I was watching a YouTube video the other day where the guy was complaining that his website was dropping off the Google search results. Long story short, he reworded it according to advice from Gemini; the more he did it, the better it performed, but he was reflecting on how the website no longer represented him.
Soon, we'll all just be meatpuppets, guided by AI to suit AI.
I don't think craft dies, but I do think it retreats.
> 90% is a lot. Will you care about the last 10%? I'm terrified that you won't.
I feel like long before LLMs, people already didn't care about this.
If anything, software quality has been decreasing significantly, even at the "highest level" (see Windows, macOS, etc.). Are LLMs going to make it worse? I'm skeptical, because they might actually accelerate shipping bug fixes that (pre-LLMs) would have required more time and management buy-in, only to be met with "yeah, don't bother, look at the usage stats, nobody cares".
I don't think LLMs are the root cause or even a dramatic inflection point. They just tilt an already-skewed system a little further toward motion over judgment.
If it can enable very small teams to deliver big apps, I do think the quality will increase.
If slop doesn't get better, it would mean that at least I get to keep my job. In the areas where the remaining 10% don't matter, maybe I won't. I'm struggling to come up with an example of such software outside of one-off scripts and some home automation though.
The job is going to be much less fun, yes, but I won't have to learn from scratch and compete with young people in a different area (and which I will enjoy less, most likely). So, if anything slop gives me hope.
I find working with LLMs much more fun and frictionless compared to the drudgery of boring glue code or tracking down non-generalizable version-specific workarounds in GitHub issues etc. Coding LLMs let you focus on the domain of your actual problem instead of the low-level stumbling blocks that just create annoyance without real learning.
> I'm terrified that our craft will die, and nobody will even care to mourn it.
"Terrified" is a strong word for the death of any craft. And as long as there are thousands that love the craft, then it will not have died.
"terrified".... overused word. As a man I literally can't relate. I get terrified when I see a shark next to me in the ocean. I get impatient when code is hard to debug.
We're pretty good at naming fear when it has a physical trigger. We're much worse at naming the unease that comes from watching something you care about get quietly hollowed out over time. That doesn't make it melodrama, just a different category of discomfort.
Step 1: Start looking beyond your code, as the stuff beyond your code is looking at you.
It's existential dread: of being useless, of not being able to thrive.
It's being compared to a slop machine, and billionaires claiming that it's better than you are in all ways.
It's having integrity in your work, while the LLM slop machines can lie and go "You're actually right (tells more lies)".
It all comes down to LLMs serving to 'fix' the trillion-dollar problem: people's wages. Especially those of engineers, developers, medical workers, and more.
I wonder how people like you would have fared even just 100 years ago, if typing on a keyboard with your own fingers is so foundational to your identity.
I deeply hate the people that use AI to poison the music, videos, or articles that I consume. However, I really feel that it can possibly make software cheaper.
A couple of years ago, I worked for an agency as a dev. I had a chat with one of the sales people, and he said clients asked him why custom apps were so expensive when the hardware had gotten relatively cheap. He had a much harder time selling mobile apps.
Possibly, this will bring a new era of decent macOS desktop and mobile apps, not another web app that I have to run in my browser and have no control over.
>Possibly, this will bring a new era of decent macOS desktop and mobile apps, not another web app that I have to run in my browser and have no control over.
There has been no shortage of mobile apps; Apple frequently boasts that there are over 2 million of them in the App Store.
I have little doubt there will be more; whether any of the extra will be decent remains to be seen.
AI is trained on the stuff already written. Software has been taking a nosedive for ages (e.g., committing to shipping something in 6 months before one even figures out what to put in it). If anything, shit will get worse due to the deskilling being caused by AI.
> You get AI that can make you like 90% of a thing! 90% is a lot. Will you care about the last 10%? I'm terrified that you won't.
Based on the Adobe stock price, the market thinks AI slop software will be good enough for about 20% of Adobe users (or Adobe will need to make its software 20% cheaper, or most likely somewhere in between).
Interestingly, Workday, which is possibly slightly simpler software more easily replicable using coding agents, is down about the same (26%).
The bear case for Workday is not that it gets replicated as slop, but that its “user base” becomes dominated by agents.
Agents don’t care about any of Workday’s value-adds: Customizable workflows, “intuitive” experiences, a decent mobile app. Agents are happy to write SQL against a few boring databases.
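To make that concrete, here's the kind of boring query an agent is happy to run directly, skipping every UI nicety. This is a hypothetical sketch: the SQLite file, schema, and the "who's out next week?" question are all invented for illustration.

    # Hypothetical: an agent answers "who's on approved leave in the next
    # week?" straight from an HR database, no workflow UI involved.
    # Assumes an hr.db with invented tables `employees` and `pto_requests`.
    import sqlite3

    conn = sqlite3.connect("hr.db")
    rows = conn.execute(
        """
        SELECT e.name, p.start_date, p.end_date
        FROM pto_requests AS p
        JOIN employees AS e ON e.id = p.employee_id
        WHERE p.status = 'approved'
          AND p.start_date <= date('now', '+7 days')
          AND p.end_date   >= date('now')
        """
    ).fetchall()
    for name, start, end in rows:
        print(f"{name}: {start} .. {end}")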
The slop is sad but a mild irritation at most.
It's the societal-level impact of recent advances that I'd call "terrifying". There is a non-zero chance we end up with a "useless" class that can't compete against AI & machines, like at all, on any metric. And there doesn't seem to be much of a game plan for dealing with that without the social fabric tearing.
Some of us have a perfectly good game plan for that. It's called Universal Basic Income.
It's just that many powerful people have a vested interest in keeping the rest of us poor, miserable, and desperate, and so do everything they can to fight the idea that anything can ever be done to improve the lot of the poor without destroying the economy. Despite ample empirical evidence to the contrary.
> It's called Universal Basic Income.
I'd rather we democratize ownership [1]. Instead of taxing the owning class and being paid UBI peanuts, how about becoming the owning class and reaping the rewards directly?
[1] https://en.wikipedia.org/wiki/Worker_cooperative
Well that sounds less like a plan and more like a pipe dream.
We can (and should) provide for those among us who aren't able to provide for themselves, without also firing everyone in the welfare department. UBI is shit. People need to do something in order to receive money, even if the something is begging on the side of the freeway or going into the welfare office to claim benefits. Magic money from the sky is not the answer.
>What if the future of computing belongs not to artisan developers or Carol from Accounting, but to whoever can churn out the most software the fastest? What if good enough really is good enough for most people?
Sounds like the cost of everything goes down. Instead of subscription apps, we have free F-Droid apps. Instead of only the 0.1% commissioning art, all of humanity gets to commission art.
And when we do pay for things, instead of an app doing 1 feature well, we have apps that do 10 features well, with integration. (I am living this: instead of shipping software with 1 core feature, I can do 1 core feature and 6 different options for free, no change order needed.)
The future you describe seems closer to the "Carol from Accounting" future I am hoping for in the blog post. My worry is that the cost of everything goes down just enough to price out of existence all of the artists the 0.1% used to commission, without actually letting all of humanity do the same.
Meh. Slop is not the danger, because in software, quantity of lines of code does not have a quality of its own; or if it does, it is not a good quality. And bad software costs money. The problem with Temu for the West is not that the things sold there are bad. The real problem arose in the last 2-3 years, when they became good.
The Butlerian Jihad has to happen. Destroy the datacenters and give the oligarchs the French treatment!
I use AI/LLMs hard for my programming.
They allow me to do work I could never have done before.
But there’s no chance at all of an LLM one shotting anything that I aim to build.
Every single step in the process is an intensely human grind trying to understand the LLM and coax it to make the thing I have in mind.
The people who are panicking aren’t using this stuff in depth. If they were, then they would have no anxiety at all.
If only the LLM was smart enough to write the software. I wish it could. It can't, not even close.
As for web browsers built in a few hours: no. No LLM is coming anywhere near building a web browser in a few hours, unless you're talking about some super-simple, super-minimal toy with some of the surface appearance of a web browser.
This has been my experience. I tend to use chats, in a synchronous, single-threaded manner, as opposed to agents, in an asynchronous way. That’s because I think of the LLM as a “know-it-all smartass personal assistant”; not an “employee replacement.”
I just enjoy writing my own software. If I have a tool that will help me to lubricate the tight bits, I’ll use it.
Same. I hit Tab a lot because even though the system doesn't actually understand what it's doing, it's really good at following patterns. Takes off the mental load of checking syntax.
Occasionally of course it's way off, in which case I have to tell it to stfu ("snooze").
Also it's great at presenting someone else's knowledge, as it doesn't actually know facts - just what token should come after a sequence of others. The other day I just pasted an error message from a system that I wasn't familiar with and it explained in detail what the problem was and how to solve it - brilliant, just what I wanted.
> The other day I just pasted an error message from a system that I wasn't familiar with and it explained in detail what the problem was and how to solve it
That’s probably the single most valuable aspect, for me.
I'm less afraid of people using LLMs for coding well than I am of people not caring to and just shipping slop.
This is the browser engine I was alluding to in the post: https://github.com/wilsonzlin/fastrender
Our paper on removing AI slop got accepted to ICLR 2026, and it's under consideration for an Ig Nobel prize:
https://arxiv.org/abs/2510.15061
Our definition of slop (repetitive characteristic language from LLMs) is the original one as articulated by the LLM creative writing community circa 2022-2023. Folks trying to redefine it today to mean "lazy LLM outputs I don't like" should have chosen a different word.
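That definition is at least mechanically checkable. A back-of-the-envelope sketch (toy whitespace tokenization, placeholder corpora, and a placeholder 1000x threshold; not the actual pipeline from the paper):

    # Flag trigrams that are wildly over-represented in LLM text relative
    # to human text. Tokenization and threshold are toy placeholders.
    from collections import Counter

    def trigram_freqs(text):
        toks = text.lower().split()
        grams = Counter(zip(toks, toks[1:], toks[2:]))
        total = max(sum(grams.values()), 1)
        return {g: c / total for g, c in grams.items()}

    def slop_candidates(llm_text, human_text, ratio=1000.0):
        llm, human = trigram_freqs(llm_text), trigram_freqs(human_text)
        floor = 1e-9  # smoothing for trigrams absent from the human corpus
        return sorted(
            (g for g, f in llm.items() if f / human.get(g, floor) >= ratio),
            key=lambda g: -llm[g],
        )

A phrase whose per-token frequency in model text dwarfs its frequency in human text is "slop" in that original, narrow sense.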
I was disappointed that your paper devoted less than a sentence in the introduction to qualifying "slop" before spending many pages quantifying it.
The definitions you're operating under are mentioned thus:
> characteristic repetitive phraseology, termed “slop,” which degrades output quality and makes AI-generated text immediately recognizable. (abstract)
> ... some patterns occur over 1000× more frequently in LLM text than in human writing, leading to the perception of repetition and over-use – i.e. "slop". (introduction)
And that's ... it, I think. No further effort is visible towards a definition of the term, nor do the background citations propose one that I could see (I'll admit to skimming them, though I did read most of your paper--if I missed something, let me know).
That might be suitable as an operating definition of "slop" to explain the techniques in your paper, but neither your paper nor any of your citations defend it as the common definition of an established term. Your paper's not making an incorrect claim per se; rather, it's taking your definition of "slop" for granted without evidence.
That doesn't bode well for the rigor of the rest of the paper.
Like, look: I get that this is an extremely fraught and important/popular area of research, and that your approach has "antislop" in the name. That's all great; I hope your approach is beneficial--truly. But you aren't claiming a definition of slop in your paper; you're just assuming one. Then you're coming here and asserting a definition citing "the LLM creative writing community circa 2022-2023" and asserting redefinition-after-the-fact, both of which are extraordinary claims that require evidence.
Again, not only do I think that mis-definition is untrue, I also think that you're not actually defining "slop" (the irony of my emphasizing that in a not-just-x-but-y sentence is not lost on me).
I don't know which of the authors you are, but Ravid, at least, should know better: this is not how you establish terminology in academic writing, nor how you defend it.
Slop is food scraps fed to pigs. Folks trying to redefine it in 2022–2023 as "repetitive characteristic language from LLMs" should have chosen a different word.
A computer is a person employed to do arithmetic.
A Sloppy Joe is either a food item or a slur against the previous Democratic president. Checkmate.
Words expand their meanings all the time, and frankly I don't think your narrow definition of slop was ever a common one.