There is an argument to be made that the market buys bug-filled, inefficient software about as well as it buys pristine software. And one of them is the cheapest software you could make.
It's similar to the "Market for Lemons" story. In short, the market sells as if all goods were high-quality but quietly cuts quality to reduce marginal costs. The buyer cannot differentiate between high and low-quality goods before buying, so the demand for high and low-quality goods is artificially even. The cause is asymmetric information.
This is already true and will become increasingly more true for AI. The user cannot differentiate between sophisticated machine learning applications and a washing machine spin cycle calling itself AI. The AI label itself commands a price premium. The user overpays significantly for a washing machine[0].
It's fundamentally the same thing when a buyer overpays for crap software, thinking it's designed and written by technologists and experts. But IC1-3s write 99% of software, and in 99% of tech companies the one QA person is the sole mechanism for improving quality beyond "meets acceptance criteria". Occasionally, a flock of interns will perform an "LGTM" incantation in hopes of improving the software, but even that is rarely done.
> the market sells as if all goods were high-quality
The phrase "high-quality" is doing work here. The implication I'm reading is that poor performance = low quality. However, the applications people are mentioning in this comment section as low performance (Teams, Slack, Jira, etc) all have competitors with much better performance. But if I ask a person to pick between Slack and, say, a a fast IRC client like Weechat... what do you think the average person is going to consider low-quality? It's the one with a terminal-style UI, no video chat, no webhook integrations, and no custom avatars or emojis.
Performance is a feature like everything else. Sometimes, it's a really important feature; the dominance of Internet Explorer was destroyed by Chrome largely because it was so much faster than IE when it was released, and Python devs are quickly migrating to uv/ruff due to the performance improvement. But when you start getting into the territory of "it takes Slack 5 seconds to start up instead of 10ms", you're getting into the realm where very few people care.
You are comparing applications with wildly different features and UI. That's neither an argument for nor against performance as an important quality metric.
How fast you can compile, start and execute some particular code matters. The experience of using a program that performs well if you use it daily matters.
Performance is not just a quantitative issue. It leaks into everything, from architecture to delivery to user experience. Bad performance has expensive secondary effects, because we introduce complexity to patch over it like horizontal scaling, caching or eventual consistency. It limits our ability to make things immediately responsive and reliable at the same time.
> You are comparing applications with wildly different features and UI. That's neither an argument for nor against performance as an important quality metric.
I never said performance wasn't an important quality metric, just that it's not the only quality metric. If a slow program has the features I need and a fast program doesn't, the slow program is going to be "higher quality" in my mind.
> How fast you can compile, start and execute some particular code matters. The experience of using a program that performs well if you use it daily matters.
Like any other feature, whether or not performance is important depends on the user and context. Chrome being faster than IE8 at general browsing (rendering pages, opening tabs) was very noticeable. uv/ruff being faster than pip/poetry is important because of how the tools integrate into performance-sensitive development workflows. Does Slack taking 5-10 seconds to load on startup matter? To me, not really, because I have it come up on boot and forget about it until the next reboot forced by a system update. Do I use LibreOffice or Word and Excel, even though LibreOffice is faster? I use Word/Excel because I've run into annoying compatibility issues with LO enough times not to bother. LibreOffice could reduce its startup and file-load times to 10 picoseconds and I would still use MS Office, because I just want my damn documents to keep the same formatting my colleagues using MS Office set on their Windows computers.
Now of course I would love the best of all worlds; programs to be fast and have all the functionality I want! In reality, though, companies can't afford to build every feature, performance included, and need to pick and choose what's important.
The dumbest and most obvious of realizations finally dawned on me after trying to build a software startup that was based on quality differentiation. We were sure that a better product would win people over and lead to viral success. It didn’t. Things grew, but so slowly that we ran out of money after a few years before reaching break even.
What I realized is that lower costs, and therefore lower quality, are a competitive advantage in a competitive market. Duh. I’m sure I knew and said that in college and for years before my own startup attempt, but this time I really felt it in my bones. It suddenly made me realize exactly why everything in the market is mediocre, and why high quality things always get worse when they get more popular. Pressure to reduce costs grows with the scale of a product. Duh. People want cheap, so if you sell something people want, someone will make it for less by cutting “costs” (quality). Duh. What companies do is pay the minimum they need in order to stay alive & profitable. I don’t mean it never happens, sometimes people get excited and spend for short bursts, young companies often try to make high quality stuff, but eventually there will be an inevitable slide toward minimal spending.
There’s probably another name for this, it’s not quite the Market for Lemons idea. I don’t think this leads to market collapse, I think it just leads to stable mediocrity everywhere, and that’s what we have.
I don't think it's necessarily a market for lemons. That involves information asymmetry.
Sometimes that happens with buggy software, but I think in general, people just want to pay less and don't mind a few bugs in the process. Compare and contrast what you'd have to charge to do a very thorough process with multiple engineers checking every line of code and many hours of rigorous QA.
I once did some software for a small book shop where I lived in Padova, and created it pretty quickly and didn't charge the guy - a friend - much. It wasn't perfect, but I fixed any problems (and there weren't many) as they came up and he was happy with the arrangement. He was patient because he knew he was getting a good deal.
I have worked for large corporations that have foisted on their employees HR, expense reporting, time tracking and insurance "portals" so awful I had to wonder if anyone writing the checks had ever seen the product. I brought up the point several times that if my team told a customer their project was all done but it was full of as many bugs and UI nightmares as these back-office platforms, I would be chastised, demoted and/or fired.
> I had to wonder if anyone writing the checks had ever seen the product
Probably not, and that's like 90% of the issue with enterprise software. Sadly enterprise software products are often sold based mainly on how many boxes they check in the list of features sent to management, not based on the actual quality and usability of the product itself.
What you're describing is Enterprise(tm) software. Some consultancy made tens of millions of dollars building, integrating, and deploying those things. This of course was after they made tens of millions of dollars producing reports exploring how they would build, integrate, and deploy these things and all the various "phases" involved. Then they farmed all the work out to cheap coders overseas and everyone went for golf.
Meanwhile I'm a founder of startup that has gotten from zero to where it is on probably what that consultancy spends every year on catering for meetings.
Is this really tolerance and not just monopolistic companies abusing their market position? I mean workers can't even choose what software they're allowed to use, those choices are made by the executive/management class.
I have that washing machine btw. I saw the AI branding and had a chuckle. I bought it anyway because it was reasonably priced (the washer was $750 at Costco).
Even if end-users had the data to reasonably tie-break on software quality and performance, as I scroll my list of open applications, not a single one of them could be swapped out for another just because the alternative was more performant.
For example: Docker, iterm2, WhatsApp, Notes.app, Postico, Cursor, Calibre.
I'm using all of these for specific reasons, not for reasons so trivial that I can just use the best-performing solution in each niche.
So it seems obviously true that it's more important that software exists to fill my needs in the first place than it pass some performance bar.
I’m surprised by your list, because it contains 3 apps that I’ve replaced specifically due to performance issues (Docker, iTerm and Notes). I don’t consider myself particularly performance-sensitive (at home) either. So it might be true that the world is even _less_ likely to pay for resource efficiency than we think.
Except you’ve already swapped terminal for iterm, and orbstack already exists in part because docker left so much room for improvement, especially on the perf front.
> The buyer cannot differentiate between high and low-quality goods before buying, so the demand for high and low-quality goods is artificially even. The cause is asymmetric information.
That's where FOSS or even proprietary "shared source" wins. You know if the software you depend on is generally badly or generally well programmed. You may not be able to find the bugs, but you can see how long the functions are, the comments, and how things are named. YMMV, but conscientiousness is a pretty great signal of quality; you're at least confident that their code is clean enough that they can find the bugs.
Basically the opposite of the feeling I get when I look at the db schemas of proprietary stuff that we've paid an enormous amount for.
1. Sometimes speed = money. Being the first to market, meeting VC-set milestones for additional funding, and not running out of runway are all things cheaper than the alternatives. Software maintenance costs later don't come close to opportunity costs if a company/project fails.
2. Most of the software is disposable. It's made to be sold, and the code repo will be chucked into a .zip on some corporate drive. There is no post-launch support, and the software's performance after launch is irrelevant for the business. They'll never touch the codebase again. There is no "long-term" for maintenance. They may harm their reputation, but that depends on whether their clients can talk with each other. If they have business or govt clients, they don't care.
3. The average tenure in tech companies is under 3 years. Most people involved in software can consider maintenance "someone else's problem." It's like how the housing stock is in bad shape in some countries (like the UK) because the average tenure of ownership is less than 10 years: there isn't a person in the property's ownership history for whom an investment in long-term maintenance would have yielded any return. So now the property is dilapidated. And this is becoming a real nationwide problem.
4. Capable SWEs cost a lot more money. And if you hire an incapable IC who will attempt to future-proof the software, maintenance costs (and even onboarding costs) can balloon much more than some inefficient KISS code.
5. It only takes 1 bad engineering manager in the whole history of a particular piece of commercial software to ruin its quality, wiping out all previous efforts to maintain it well. If someone buys a second-hand car and smashes it into a tree hours later, was keeping the car pristinely maintained for that moment (by all the previous owners) worth it?
And so forth. What you say is true in some cases (esp where a company and its employees act in good faith) but not in many others.
What does "make in the long-term" even mean? How do you make a sandwich in the long-term?
Bad things are cheaper and easier to make. If they weren't, people would always make good things. You might say "work smarter," but smarter people cost more money. If smarter people didn't cost more money, everyone would always have the smartest people.
The thing is, to continue your food analogy: countries have set down legal rules preventing the sale of food that actively harms the consumer (expired, known poisonous, etc.).
In software, the regulations of the pre-GDPR era can be boiled down to "lol lmao", and even now I see GDPR violations daily.
I like to point out that since ~1980, computing power has increased about 1000X.
If dynamic array bounds checking cost 5% (narrator: it is far less than that), and we turned it on everywhere, we could have computers that are just a mere 950X faster.
If you went back in time to 1980 and offered the following choice:
I'll give you a computer that runs 950X faster and doesn't have a huge class of memory safety vulnerabilities, and you can debug your programs orders of magnitude more easily, or you can have a computer that runs 1000X faster and software will be just as buggy, or worse, and debugging will be even more of a nightmare.
People would have their minds blown at 950X. You wouldn't even have to offer 1000X. But guess what we chose...
Personally I think the 1000Xers kinda ruined things for the rest of us.
Am I taking crazy pills or are programs not nearly as slow as HN comments make them out to be? Almost everything loads instantly on my 2021 MacBook and 2020 iPhone. Every program is incredibly responsive. 5 year old mobile CPUs load modern SPA web apps with no problems.
The only thing I can think of that’s slow is Autodesk Fusion starting up. Not really sure how they made that so bad but everything else seems super snappy.
Slack, Teams, VS Code, Miro, Excel, Rider/IntelliJ, Outlook, and Photoshop/Affinity are all applications I use every day that take 20+ seconds to launch. My corporate VPN app takes 30 seconds to go from a blank screen to deciding whether it’s going to prompt me for credentials or remember my login, every morning. This is on an i9 with 64GB of RAM and 1Gb fiber.
On the website front, Facebook, Twitter, Airbnb, Reddit, and most news sites all take 10+ seconds to load or become functional, and their core functionality has regressed significantly in the last decade. I’m not talking about features that I prefer, but as an example: if you load two Reddit links in two different tabs, my experience has been that it’s 50/50 whether they’ll both actually load or whether one gets stuck on loading skeletons.
> are all applications I use every day that take 20+ seconds to launch.
I suddenly remembered some old Corel Draw version from around 2005, which had a loading screen enumerating the random things it was loading and computing, ending with a final message: "Less than a minute now...". It did indeed most often take less than a minute to show the interface :).
I'm on a four year old mid-tier laptop and opening VS Code takes maybe five seconds. Opening IDEA takes five seconds. Opening twitter on an empty cache takes perhaps four seconds and I believe I am a long way from their servers.
On my work machine slack takes five seconds, IDEA is pretty close to instant, the corporate VPN starts nearly instantly (although the Okta process seems unnecessarily slow I'll admit), and most of the sites I use day-to-day (after Okta) are essentially instant to load.
I would say that your experiences are not universal, although snappiness was the reason I moved to apple silicon macs in the first place. Perhaps Intel is to blame.
VS Code defers a lot of tasks to the background at least. This is a bit more visible in intellij; you seem to measure how long it takes to show its window, but how long does it take for it to warm up and finish indexing / loading everything, or before it actually becomes responsive?
Anyway, five seconds is long for a text editor; 10, 15 years ago, sublime text loaded and opened up a file in <1 second, and it still does today. Vim and co are instant.
Also keep in mind that desktop computers haven't gotten significantly faster for tasks like opening applications in the past years; they're more efficient (especially the M line CPUs) and have more hardware for specialist workloads like what they call AI nowadays, but not much innovation in application loading.
You use a lot of words like "pretty close to", "nearly", "essentially", but 10, 20 years ago they WERE instant; applications from 10, 20 years ago should be so much faster today than they were on hardware from back then.
I wish the big desktop app builders would invest in native applications. I understand why they go for web technology (it's the crossplatform GUI technology that Java and co promised and offers the most advanced styling of anything anywhere ever), but I wish they invested in it to bring it up to date.
>Anyway, five seconds is long for a text editor; 10, 15 years ago, sublime text loaded and opened up a file in <1 second, and it still does today. Vim and co are instant.
Do any of those do the indexing that causes the slowness? If not, it's comparing apples to oranges.
What timescale are we talking about? Many DOS stock and accounting applications were basically instantaneous. There are some animations on iPhone that you can't disable that take longer than a series of keyboard actions of a skilled operator in the 90s. Windows 2k with a stripped shell was way more responsive than today's systems as long as you didn't need to hit the hard drives.
The "instant" today is really laggy compared to what we had. Opening Slack takes 5s on a flagship phone and opening a channel which I just had open and should be fully cached takes another 2s. When you type in JIRA the text entry lags and all the text on the page blinks just a tiny bit (full redraw). When pages load on non-flagship phones (i.e. most of the world), they lag a lot, which I can see on monitoring dashboards.
Somehow the Xcode team managed to make startup and some features in newer Xcode versions slower than older Xcode versions running on old Intel Macs.
E.g. the ARM Macs are a perfect illustration that software gets slower faster than hardware gets faster.
After a very short 'free lunch' right after the Intel => ARM transition we're now back to the same old software performance regression spiral (e.g. new software will only be optimized until it feels 'fast enough', and that 'fast enough' duration is the same no matter how fast the hardware is).
Another excellent example is the recent release of the Oblivion Remaster on Steam (which uses the brand new UE5 engine):
On my somewhat medium-level PC I have to reduce the graphics quality in the Oblivion Remaster so much that the result looks worse than 14-year-old Skyrim (especially outdoor environments), and that still doesn't produce a stable 60Hz frame rate, while Skyrim runs at a rock-solid 60Hz and looks objectively better in the outdoors.
E.g. even though the old Skyrim engine isn't nearly as technologically advanced as UE5 and had plenty of performance issues at launch on a ca. 2010 PC, the Oblivion Remaster (which uses a "state of the art" engine) looks and performs worse than its own 14-year-old predecessor.
I'm sure the UE5-based Oblivion remaster can be properly optimized to beat Skyrim both in looks and performance, but apparently nobody cared about that during development.
You're comparing the art(!) of two different games, that targeted two different sets of hardware while using the ideal hardware for one and not the other. Kind of a terrible example.
The art direction, modelling and animation work is mostly fine. The worse look results from the lack of dynamic lighting and ambient occlusion in the Oblivion Remaster when switching Lumen (UE5's realtime global illumination feature) to the lowest setting; this results in completely flat lighting for the vegetation but is needed to get an acceptable base frame rate (it doesn't solve the random stuttering though).
Basically, the best art will always look bad without good lighting (and even baked or faked ambient lighting like in Skyrim looks better than no ambient lighting at all).
Digital Foundry has an excellent video about the issues:
> …when switching Lumen (UE5's realtime global illumination feature) to the lowest setting, this results in completely flat lighting for the vegetation but is needed to get an acceptable base frame rate (it doesn't solve the random stuttering though).
This also happens to many other UE5 games like S.T.A.L.K.E.R. 2 where they try to push the graphics envelope with expensive techniques and most people without expensive hardware have to turn the settings way down (even use things like upscaling and framegen which further makes the experience a bit worse, at least when the starting point is very bad and you have to use them as a crutch), often making these modern games look worse than something a decade old.
Whatever UE5 is doing (or rather, how so many developers choose to use it) is a mistake now and might be less of a mistake in 5-10 years when the hardware advances further and becomes more accessible. Right now it feels like a ploy by Big GPU to force people to upgrade to overpriced hardware if they want to enjoy any of these games; or rather, silliness aside, it's an attempt by studios to save resources by making the artists spend less time on faking and optimizing effects and detail that can just be brute-forced by the engine.
In contrast, most big CryEngine and idTech games run great even on mid range hardware and still look great.
I just clicked on the network icon next to the clock on a Windows 11 laptop. A gray box appeared immediately, about one second later all the buttons for wifi, bluetooth, etc appeared. Windows is full of situations like this, that require no network calls, but still take over one second to render.
It's strange: the fact that you can visibly see the buttons loading suggests they use async technology that can exploit multithreaded CPUs effectively... yet it's slower than the old synchronous UI stuff.
I'm sure it's significantly more expensive to render than Windows 3.11 - XP were - rounded corners and scalable vector graphics instead of bitmaps or whatever - but surely not that much? And the resulting graphics can be cached.
Yep. Developers make programs run well enough on the hardware sitting on our desks. So long as we’re well paid (and have decent computers ourselves), we have no idea what the average computing experience is for people still running 10yo computers which were slow even for the day. And that keeps the treadmill going. We make everyone need to upgrade every few years.
A few years ago I accidentally left my laptop at work on a Friday afternoon. Instead of going into the office, I pulled out a first generation raspberry pi and got everything set up on that. Needless to say, our nodejs app started pretty slowly. Not for any good reason - there were a couple modules which pulled in huge amounts of code which we didn’t use anyway. A couple hours work made the whole app start 5x faster and use half the ram. I would never have noticed that was a problem with my snappy desktop.
I've found so many performance issues at work by booting up a really old laptop or working remotely from another continent. It's pretty straightforward to simulate either poor network conditions or generally low performance hardware, but we just don't generally bother to chase down those issues.
Oh yeah, I didn't even touch on devs being used to working on super fast internet.
If you're on Mac, go install Network Link Conditioner and crank the download and upload speeds way down. (Xcode > Open Developer Tools > More Developer Tools... > "Additional Tools for Xcode {Version}").
A mix of both. There are a large number of websites that are inefficiently written, using up unnecessary amounts of resources. Semi-modern devices make up for that by just having a massive amount of computing power.
However, you also need to consider 2 additional factors. Macbooks and iPhones, even 4 year old ones, have usually been at the upper end of the scale for processing power. (When compared to the general mass-market of private end-consumer devices)
Try doing the same on a 4 year old 400 Euro laptop and it might look a bit different. Also consider your connection speed and latency.
I usually have no loading issue either. But I have a 1G fiber connection. My parents don't.
I think it's a very theoretical argument: we could of course theoretically make everything even faster. It's nowhere near the most optimal use of the available hardware. All we'd have to give up is squishy hard-to-measure things like "feature sets" and "engineering velocity."
> we could of course theoretically make everything even faster. It's nowhere near the most optimal use of the available hardware. All we'd have to give up is squishy hard-to-measure things like "feature sets" and "engineering velocity."
Says who? Who are these experienced people that know how to write fast software that think it is such a huge sacrifice?
The reality is that people who say things like this don't actually know much about writing fast software because it really isn't that difficult. You just can't grab Electron and the latest JavaScript React framework craze.
These kinds of myths get perpetuated by people who repeat it without having experienced the side of just writing native software. I think mostly it is people rationalizing not learning C++ and sticking to javascript or python because that's what they learned first.
One example is Office. Microsoft is going back to preloading office during Windows Boot so that you don't notice it loading. With the average system spec 25 years ago it made sense to preload office. But today, what is Office doing that it needs to offload its startup to running at boot?
I think it’s a little more nuanced than the broad takes make it seem.
One of the biggest performance issues I witness is that everyone assumes a super fast, always on WiFi/5G connection. Very little is cached locally on device so even if I want to do a very simple search through my email inbox I have to wait on network latency. Sometimes that’s great, often it really isn’t.
Same goes for many SPA web apps. It’s not that my phone can’t process the JS (even though there’s way too much of it), it’s poor caching strategies that mean I’m downloading and processing >1MB of JS way more often than I should be. Even on a super fast connection that delay is noticeable.
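The usual fix for that caching problem is content-hashed bundle names served as cache-forever, so returning visitors only re-download the small HTML shell. A minimal sketch using Python's standard-library http.server (the file extensions, hashed-name convention, and port are assumptions for illustration, not anyone's actual setup):

```python
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

class CachingHandler(SimpleHTTPRequestHandler):
    """Serve content-hashed assets (e.g. app.3f9c2a.js) as cacheable "forever",
    while the HTML shell that references them is revalidated on every load."""

    def end_headers(self):
        if self.path.endswith((".js", ".css")):
            # Safe only because the file name changes whenever the content does.
            self.send_header("Cache-Control", "public, max-age=31536000, immutable")
        else:
            self.send_header("Cache-Control", "no-cache")
        super().end_headers()

if __name__ == "__main__":
    ThreadingHTTPServer(("", 8000), CachingHandler).serve_forever()
```

With that split, the multi-megabyte bundle is downloaded once per release instead of once per visit, even on a slow connection.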
Spotify takes 7 seconds from clicking on its icon to playing a song on a 2024 top-of-the-range MacBook Pro. Navigating through albums saved on your computer can take several seconds. Double clicking on a song creates a 1/4sec pause.
This is absolutely remarkable inefficiency considering the application's core functionality (media playback) was perfected a quarter century ago.
To note, people will have wildly different tolerance to delays and lag.
On the extreme end, my retired parents don't feel the difference between 5s and 1s when loading a window or clicking somewhere. I offered to switch them to a new laptop, cloning their data, and they didn't give a damn and just opened whichever laptop was closest to them.
Most people aren't that desensitized, but for some a 600ms delay is instantaneous, while for others it's 500ms too slow.
In Carmack's Lex Fridman interview he says he knows C++ devs who still insist on using some ancient version of MSVC because it's *so fast* compared to the latest, on the latest hardware.
Correction: devs have made the mistake of turning everything into remote calls, without having any understanding as to the performance implications of doing so.
Sonos’ app is a perfect example of this. The old app controlled everything locally, since the speakers set up their own wireless mesh network. This worked fantastically well. Someone at Sonos got the bright idea to completely rewrite the app such that it wasn’t even backwards-compatible with older hardware, and everything is now a remote call. Changing volume? Phone —> Router —> WAN —> Cloud —> Router —> Speakers. Just… WHY. This failed so spectacularly that the CEO responsible stepped down / was forced out, and the new one claims that fixing the app is his top priority. We’ll see.
You’re right, and I shouldn’t necessarily blame devs for the idea, though I do blame their CTO for not standing up to it if nothing else.
Though it’s also unclear to me in this particular case why they couldn’t collect commands being issued, and then batch-send them hourly, daily, etc. instead of having each one route through the cloud.
A lot of nostalgia is at work here. Modern tech is amazing. If the old tools were actually better, people would still use them. It's not like you can't get them to work.
You are using a relatively high-end computer and mobile device. Go find a cheap x86 laptop and try doing the same. It will be extremely painful. Most of this is due to a combination of Windows 11 being absolute trash and JavaScript being used extensively in applications/websites. JavaScript is a memory hog and can be extremely slow depending on how it is written (how you deal with loops massively affects performance).
What is frustrating, though, is that until relatively recently these devices would work fine with JS-heavy apps and work really well with anything using a native toolkit.
I’m sure you know this, but a reminder that modern devices cache a hell of a lot, even when you “quit” such that subsequent launches are faster. Such is the benefit of more RAM.
I could compare Slack to, say, HexChat (or any other IRC client). And yeah, it’s an unfair comparison in many ways – Slack has far more capabilities. But from another perspective, how many of them do you immediately need at launch? Surely the video calling code could be delayed until after the main client is up, etc. (and maybe it is, in which case, oh dear).
A better example is Visual Studio [0], since it’s apples to apples.
It vastly depends on what software you're forced to use.
Here's some software I use all the time, which feels horribly slow, even on a new laptop:
Slack.
Switching channels on slack, even when you've just switched so it's all cached, is painfully slow. I don't know if they build in a 200ms or so delay deliberately to mask when it's not cached, or whether it's some background rendering, or what it is, but it just feels sluggish.
Outlook
Opening an email gives a spinner before it's opened. Emails are about as lightweight as it gets, yet you get a spinner. It's "only" about 200ms, but that's still 200ms of waiting for an email to open. Plain text emails were faster 25 years ago. Adding a subset of HTML shouldn't have caused such a massive regression.
Teams
Switching tabs on Teams has the same delayed feeling as Slack. Every interaction feels like it's waiting 50-100ms before actioning. Clicking an empty calendar slot to book a new event gives 30-50ms of what I've mentally internalised as "Electron blank-screen" but there's probably a real name out there for basically waiting for a new dialog/screen to even have a chrome, let alone content. Creating a new calendar event should be instant, it should not take 300-500ms or so of waiting for the options to render.
These are basic "productivity" tools in which every single interaction feels like it's gated behind at least a 50ms debounce waiting period, with often extra waiting for content on top.
Is the root cause network hops or telemetry? Is it some corporate antivirus stealing the computer's soul?
Ultimately the root cause doesn't actually matter, because no matter the cause, it still feels like I'm wading through treacle trying to interact with my computer.
You're probably right, I'm likely massively underestimating the time, it's long enough to be noticable, but not so long that it feels instantly frustrating the first time, it just contributes to an overall sluggishness.
They're comparing these applications to older applications that loaded instantly on much slower computers.
Both sides are right.
There is a ton of waste and bloat and inefficiency. But there's also a ton of stuff that genuinely does demand more memory and CPU. An incomplete list:
- Higher DPI displays use intrinsically more memory and CPU to paint and rasterize. My monitor's pixel array uses 4-6X more memory than my late 90s PC had in the entire machine.
- Better font rendering is the same.
- Today's UIs support Unicode, right to left text, accessibility features, different themes (dark/light at a minimum), dynamic scaling, animations, etc. A modern GUI engine is similar in difficulty to a modern game engine.
- Encryption everywhere means that protocols are no longer just opening a TCP connection but require negotiation of state and running ciphers.
- The Web is an incredibly rich presentation platform that comes with the overhead of an incredibly rich presentation platform. It's like PostScript meets a GUI library meets a small OS meets a document markup layer meets...
- The data sets we deal with today are often a lot larger.
- Some of what we've had to do to get 1000X performance itself demands more overhead: multiple cores, multiple threads, 64 bit addressing, sophisticated MMUs, multiple levels of cache, and memory layouts optimized for performance over compactness. Those older machines were single threaded machines with much more minimal OSes, memory managers, etc.
- More memory means more data structure overhead to manage that memory.
- Larger disks also demand larger structures to manage them, and modern filesystems have all kinds of useful features like journaling and snapshots that also add overhead.
IMO, the prime offender is simply not understanding fundamentals. From simple things like “a network call is orders of magnitude slower than a local disk, which is orders of magnitude slower than RAM…” (and moreover, not understanding that EBS et al. are networked disks, albeit highly specialized and optimized), or doing insertions to a DB by looping over a list and writing each row individually.
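To make the row-by-row insert point concrete, here's a minimal sketch of the two approaches, using the standard-library sqlite3 module just to stay self-contained (the table and data are made up; against a networked database, where every statement is a round trip, the gap is far larger than it is here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, payload TEXT)")
rows = [(i, f"event-{i}") for i in range(10_000)]

# Anti-pattern: one statement (and one commit) per row.
for row in rows:
    conn.execute("INSERT INTO events VALUES (?, ?)", row)
    conn.commit()

# Better: a single batched statement inside one transaction.
with conn:
    conn.executemany("INSERT INTO events VALUES (?, ?)", rows)
```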
I have struggled against this long enough that I don’t think there is an easy fix. My current company is the first I’ve been at that is taking it seriously, and that’s only because we had a spate of SEV0s. It’s still not easy, because a. I and the other technically-minded people have to find the problems, then figure out how to explain them b. At its heart, it’s a culture war. Properly normalizing your data model is harder than chucking everything into JSON, even if the former will save you headaches months down the road. Learning how to profile code (and fix the problems) may not be exactly hard, but it’s certainly harder than just adding more pods to your deployment.
Most of it was exchanged for abstractions which traded runtime speed for the ability to create apps quickly and cheaply.
The market mostly didn't want 50% faster code as much as it wanted an app that didn't exist before.
If I look at the apps I use on a day to day basis that are dog slow and should have been optimized (e.g. slack, jira), it's not really a lack of the industry's engineering capability to speed things up that was the core problem, it is just an instance the principal-agent problem - i.e. I'm not the one buying, I don't get to choose not to use it and dog-slow is just one of many the dimensions in which they're terrible.
But each vendor only develops a handful of programs and generally supports only three platforms, plus or minus one. It's so damning when I see projects reaching for Electron when they only support macOS and Windows. And software like Slack has no excuse for being this slow on anything other than a latest-gen CPU and a 1Gb internet connection.
Users only want 5% of the features of the few programs they use. However everyone has a different list of features and a different list of programs. And so to get a market you need all the features on all the programs.
> Most of it was exchanged for abstractions which traded runtime speed for the ability to create apps quickly and cheaply.
Really? While abstractions like that exist (e.g. web server frameworks, reactivity, SQL and ORMs, etc.), I would argue they aren't the abstractions that cause the most maintenance and performance issues. Those usually live in the domain/business layer of the application, often weren't something that made development any quicker, and were instead created by a developer who just couldn't help themselves.
The backend programming language usually isn't a significant bottleneck; running dozens of database queries in sequence is the usual bottleneck, often compounded by inefficient queries, inappropriate indexing, and the like.
Yep. I’m a DBRE, and can confirm, it’s almost always the DB, with the explicit caveat that it’s also rarely the fault of the DB itself, but rather the fault of poor schema and query design.
Queries I can sometimes rewrite, and there’s nothing more satisfying than handing a team a 99% speed-up with a couple of lines of SQL. Sometimes I can’t, and it’s both painful and frustrating to explain that the reason the dead-simple single-table SELECT is slow is because they have accumulated billions of rows that are all bloated with JSON and low-cardinality strings, and short of at a minimum table partitioning (with concomitant query rewrites to include the partition key), there is nothing anyone can do. This has happened on giant instances, where I know the entire working set they’re dealing with is in memory. Computers are fast, but there is a limit.
The other way the DB gets blamed is row lock contention. That’s almost always due to someone opening a transaction (e.g. SELECT… FOR UPDATE) and then holding it needlessly while doing other stuff, but sometimes it’s due to the dev not being aware of the DB’s locking quirks, like MySQL’s use of gap locks if you don’t include a UNIQUE column as a search predicate. Read docs, people!
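A hedged sketch of that lock-holding anti-pattern: `conn` here stands for any PEP 249 driver connection to a database that supports SELECT ... FOR UPDATE (e.g. a MySQL or Postgres driver), and the table, column, and helper names are invented for illustration:

```python
import time

def slow_external_call():
    time.sleep(2)  # stand-in for a fraud check, webhook, email send, etc.

def debit_holding_locks(conn, account_id, amount):
    # Anti-pattern: the row lock taken by FOR UPDATE is held across the slow
    # call, so every other transaction touching this row queues behind it.
    cur = conn.cursor()
    cur.execute("SELECT balance FROM accounts WHERE id = %s FOR UPDATE", (account_id,))
    slow_external_call()
    cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s",
                (amount, account_id))
    conn.commit()

def debit_short_transaction(conn, account_id, amount):
    # Better: do the slow work outside the transaction and keep the
    # lock-holding window as short as possible.
    slow_external_call()
    cur = conn.cursor()
    cur.execute("SELECT balance FROM accounts WHERE id = %s FOR UPDATE", (account_id,))
    cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s",
                (amount, account_id))
    conn.commit()
```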
It seems to me most developers don't want to learn much about the database and would prefer to hide it behind the abstractions used by their language of choice. I can relate to a degree; I was particularly put off by SQL's syntax (and still dislike it), but eventually came to see the value of leaning into the database's capabilities.
Certain ORMs such as Rails's ActiveRecord are part of the problem because they create the illusion that local memory access and DB access are the same thing. This can lead to N+1 queries and similar issues. The same goes for frameworks that pretend that remote network calls are just a regular method access (thankfully, such frameworks seem to have become largely obsolete).
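The classic shape of that N+1 problem, sketched with the standard-library sqlite3 module (table and column names are invented for illustration; ORMs generate the first pattern silently when you loop over a collection and touch a lazy-loaded association):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
""")

# N+1 pattern: one query for the list, then one more query per element.
titles_by_author = {}
for author_id, name in conn.execute("SELECT id, name FROM authors"):
    rows = conn.execute(
        "SELECT title FROM posts WHERE author_id = ?", (author_id,)
    ).fetchall()
    titles_by_author[name] = [title for (title,) in rows]

# Single round trip: let the database do the join.
joined = conn.execute("""
    SELECT a.name, p.title
    FROM authors AS a LEFT JOIN posts AS p ON p.author_id = a.id
""").fetchall()
```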
> If dynamic array bounds checking cost 5% (narrator: it is far less than that)
It doesn’t work like that. If an image processing algorithm takes 2 instructions per pixel, adding a check to every access could 3-4x the cost.
This is why if you dictate bounds checking then the language becomes uncompetitive for certain tasks.
The vast majority of cases it doesn’t matter at all - much less than 5%. I think safe/unsafe or general/performance scopes are a good way to handle this.
It's not that simple either - normally, if you're doing some loops over a large array of pixels, say, to perform some operation to them, there will only be a couple of bounds checks before the loop starts, checking the starting and ending conditions of the loops, not re-doing the bounds check for every pixel.
So very rarely should it be anything like 3-4x the cost, though some complex indexing could cause it to happen, I suppose. I agree scopes are a decent way to handle it!
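A back-of-envelope model of the two scenarios being debated here (the instruction counts are illustrative assumptions, not measurements):

```python
n = 1_000_000   # pixels processed by the loop
c_op = 2        # instructions of real work per pixel, per the comment above
c_chk = 2       # assumed cost of one compare-and-branch bounds check

checked_every_access = n * (c_op + c_chk)   # check repeated on every pixel access
checks_hoisted = n * c_op + 2 * c_chk       # loop bounds validated once, up front

print(checked_every_access / checks_hoisted)  # ~2x here; worse with several accesses per pixel
```

Which cost model applies depends on whether the compiler (or the programmer) can prove the index stays in range across the whole loop; when it can, the check cost amortizes to nearly nothing.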
I agree with the sentiment and analysis that most humans prefer short term gains over long term ones. One correction to your example, though. Dynamic bounds checking does not solve security. And we do not know of a way to solve security. So, the gains are not as crisp as you are making them seem.
Bounds checking solves one tiny subset of security. There are hundreds of other subsets that we know how to solve. However, these days the majority of the bad attacks are social, and no technology is likely to solve them - as more than 10,000 years of history of the same attack has shown. Technology makes the attacks worse because they now scale, but social attacks have been happening for longer than recorded history (well, there is every reason to believe that - there is unlikely to be evidence going back that far).
You don't have to "solve" security in order to improve security hygiene by a factor of X, and thus risk of negative consequences by that same factor of X.
Don't forget the law of large numbers. A 5% performance hit on one system is one thing; 5% across almost all of the current computing landscape is still a pretty huge value.
But it's not free for the taking. The point is that we'd get more than that 5%'s worth in exchange. So sure, we'll get significant value "if software optimization was truly a priority", but we get even more value by making other things a priority.
Saying "if we did X we'd get a lot in return" is similar to the fallacy of inverting logical implication. The question isn't, will doing something have significant value, but rather, to get the most value, what is the thing we should do? The answer may well be not to make optimisation a priority even if optimisation has a lot of value.
depends on whether the fact that software can be finished will ever be accepted. If you're constantly redeveloping the same thing to "optimize and streamline my experience" (please don't) then yes, the advantage is dubious. But if not, then the saved value in operating costs keeps increasing as time goes on. It won't make much difference in my homelab, but at datacenter scale it does
Even the fact that value keeps increasing doesn't mean it's a good idea. It's a good idea if it keeps increasing more than other value. If a piece of software is more robust against attacks then the value in that also keeps increasing over time, possibly more than the cost in hardware. If a piece of software is easier to add features to, then that value also keeps increasing over time.
If what we're asking is whether value => X, i.e. to get the most value we should do X, you cannot answer that in the positive by proving X => value. If optimising something is worth a gazillion dollars, you still should not do it if doing something else is worth two gazillion dollars.
The first reply is essentially right. This isn't what happened at all, just because C is still prevalent. All the inefficiency is everything down the stack, not in C.
I don't trust that shady-looking narrator. 5% of what exactly? Do you mean that testing for x >= start and < end is only 5% as expensive as assigning an int to array[x]?
Or would bounds checking in fact more than double the time to insert a bunch of ints separately into the array, testing where each one is being put? Or ... is there some gimmick to avoid all those individual checks, I don't know.
>Personally I think the 1000Xers kinda ruined things for the rest of us.
Reminds me of when NodeJS came out and bridged client- and server-side coding. And apparently their repos can be a bit of a security nightmare nowadays, so the minimalist languages with limited codebases do have their pros.
I think it'd be pretty funny if to book travel in 2035 you need to use a travel agent that's objectively dumber than a human. We'd be stuck in the eighties again, but this time without each other to rely on.
Of course, that would be suicide for the industry. But I'm not sure investors see that.
I don't think we are gonna go there. Talking is cumbersome. There's a reason, besides social anxiety, that people prefer to use self-checkout and electronically order fast food. There are easier ways to do a lot of things than with words.
I'd bet on maybe ad hoc AI-designed UIs you click, with a voice search for when you are confused about something.
If you know what you want, then not talking to a human is faster. However, if you are not sure, a human can figure it out. I'm not sure I'd trust a voice assistant - the value in the human is an informed opinion which is hard to program, but it is easy to program a recommendation for whatever makes the most profit. Of course humans often don't have an informed opinion either, but at least sometimes they do, and they will also sometimes admit it when they don't.
> the value in the human is an informed opinion which is hard to program
I don't think I ever used a human for that. They are usually very uninformed about everything that's not their standard operational procedure or some current promotional materials.
20 years ago when I was at McDonald's there would be several customers per shift (so maybe 1 in 500?) who didn't know what they wanted and asked for a recommendation. Since I worked there, I ate there often enough to know whether the special was something I liked or not.
Bless your souls. I'm not saying it doesn't happen. I just personally had only bad experiences so I actively avoid human interactive input in my commercial activity.
You can always install DOS as your daily driver and run 1980's software on any hardware from the past decade, and then tell me how that's slow.
The 1000x referred to hardware capability, and that capability is not a rarity - it is already here.
The trouble is how software has since wasted a majority of that performance improvement.
Some of it has been quality of life improvements, leading nobody to want to use 1980s software or OS when newer versions are available.
But the lion's share of the performance benefit got chucked into the bin with poor design decisions, layers of abstractions, too many resources managed by too many different teams that never communicate making any software task have to knit together a zillion incompatible APIs, etc.
The sad thing is that even running DOS software in DOSBox (or in QEMU+FreeDOS), or Amiga software in UAE, is much faster than any native software I have run in many years on any modern systems. They also use more reasonable amounts of storage/RAM.
Animations is part of it of course. A lot of old software just updates the screen immediately, like in a single frame, instead of adding frustrating artificial delays to every interaction. Disabling animations in Android (an accessibility setting) makes it feel a lot faster for instance, but it does not magically fix all apps unfortunately.
IPC could be 80x higher when taking into account SIMD and then you have to multiply by each core. Mainstream CPUs are more like 1 to 2 million times faster than what was there in the 80s.
You can get full refurbished office computers that are still in the million times faster range for a few hundred dollars.
The things you are describing don't have much to do with computers being slow and feeling slow, but they are happening anyway.
Scripting languages that constantly allocate memory for every small operation and chase pointers for every variable (because the types are dynamic) are part of the problem; then you have people writing extremely inefficient programs in an already terrible environment.
Most programs are written now in however way the person writing them wants to work, not how someone using it wishes they were written.
Most people have actually no concept of optimization or what runs faster than something else. The vast majority of programs are written by someone who gets it to work and thinks "this is how fast this program runs".
The idea that the same software can run faster is a niche thought process, not even everyone on hacker news thinks about software this way.
The title made me think Carmack was criticizing poorly optimized software and advocating for improving performance on old hardware.
When in fact, the tweet is absolutely not about either of the two. He's talking about a thought experiment where hardware stopped advancing and concludes with "Innovative new products would get much rarer without super cheap and scalable compute, of course".
> "Innovative new products would get much rarer without super cheap and scalable compute, of course".
Interesting conclusion—I'd argue we haven't seen much innovation since the smartphone (18 years ago now), and it's entirely because capital is relying on the advances of hardware to sell what is to consumers essentially the same product that they already have.
Of course, I can't read anything past the first tweet.
And I'd argue that we've seen tons of innovation in the past 18 years aside from just "the smartphone" but it's all too easy to take for granted and forget from our current perspective.
First up, the smartphone itself had to evolve a hell of a lot over 18 years or so. Go try to use an iPhone 1 and you'll quickly see all of the roadblocks and what we now consider poor design choices littered everywhere, vs improvements we've all taken for granted since then.
18 years ago was 2007? Then we didn't have (for better or for worse on all points):
* Video streaming services
* Decent video game market places or app stores. Maybe "Battle.net" with like 5 games, lol!
* VSCode-style IDEs (you really would not have appreciated Visual Studio or Eclipse of the time..)
* Mapping applications on a phone (there were some stand-alone solutions like Garmin and TomTom just getting off the ground)
* QR Codes (the standard did already exist, but mass adoption would get nowhere without being carried by the smartphone)
* Rideshare, food, or grocery delivery services (aside from taxis and whatever pizza or chinese places offered their own delivery)
* Voice-activated assistants (including Alexa and other standalone devices)
* EV Cars (that anyone wanted to buy) or partial autopilot features aside from 1970's cruise control
* Decent teleconferencing (Skype's featureset was damn limited at the time, and any expensive enterprise solutions were dead on the launchpad due to lack of network effects)
* Decent video displays (flatscreens were still busy trying to mature enough to push CRTs out of the market at this point)
* Color printers were far worse during this period than today, though that tech will never run out of room for improvement.
* Average US Internet speeds to the home were still ~1Mbps, with speeds to cellphone of 100kbps being quite luxurious. Average PCs had 2GB RAM and 50GB hard drive space.
* Naturally: the tech everyone loves to hate such as AI, Cryptocurrencies, social network platforms, "The cloud" and SaaS, JS Frameworks, Python (at least 3.0 and even realistically heavy adoption of 2.x), node.js, etc. Again "Is this a net benefit to humanity" and/or "does this get poorly or maliciously used a lot" doesn't speak to whether or not a given phenomena is innovative, and all of these objectively are.
Steam was selling games, even third party ones, for years by 2007.
I'm not sure what a "VS-Code style IDE" is, but I absolutely did appreciate Visual Studio ( and VB6! ) prior to 2007.
2007 was in fact the peak of TomTom's profit, although GPS navigation isn't really the same as general purpose mapping application.
Grocery delivery was well established, Tesco were doing that in 1996. And the idea of takeaways not doing delivery is laughable, every establishment had their own delivery people.
Yes, there are some things on that list that didn't exist, but the top half of your list is dominated by things that were well established by 2007.
There has been a lot of innovation - but it is focused on some niche, and so if you are not in that niche you don't see it and wouldn't care if you did. Most of the major things you need have already been invented - I recall word processors as a kid, so they for sure date back to the 1970s - we still need word processors and there is a lot of polish that can be added, but all innovation is in niche things that the majority of us wouldn't have a use for if we knew about it.
Of course innovation is always in bits and spurts.
I think it's a bad argument though. If we had to stop adding features for a little while and create some breathing room, the features would come roaring back. There'd be a downturn, sure, but not a continuous one.
"The world" runs on _features_ not elegant, fast, or bug free software. To the end user, there is no difference between a lack of a feature, and a bug. Nor is there any meaningful difference between software taking 5 minutes to complete something because of poor performance, compared to the feature not being there and the user having to spend 5 minutes completing the same task manually. It's "slow".
If you keep maximizing value for the end user, then you invariably create slow and buggy software. But also, if you ask the user whether they would want faster and less buggy software in exchange for fewer features, they - surprise - say no. And even more importantly: if you ask the buyer of software, which in the business world is rarely the end user, then they want features even more, and performance and elegance even less.
Given the same feature set, a user/buyer would opt for the fastest/least buggy/most elegant software. But if it lacks any features - it loses. The reason to keep software fast and elegant is because it's the most likely path to be able to _keep_ adding features to it as to not be the less feature rich offering. People will describe the fast and elegant solution with great reviews, praising how good it feels to use. Which might lead people to think that it's an important aspect. But in the end - they wouldn't buy it at all if it didn't do what they wanted. They'd go for the slow frustrating buggy mess if it has the critical feature they need.
Agree WRT the tradeoff between features and elegance.
Although, I do wonder if there’s an additional tradeoff here. Existing users, can apparently do what they need to do with the software, because they are already doing it. Adding a new feature might… allow them to get rid of some other software, or do something new (but, that something new must not be so earth shattering, because they didn’t seek out other software to do it, and they were getting by without it). Therefore, I speculate that existing users, if they really were introspective, would ask for those performance improvements first. And maybe a couple little enhancements.
Potential new users on the other hand, either haven’t heard of your software yet, or they need it to do something else before they find it useful. They are the ones that reasonably should be looking for new features.
So, the “features vs performance” decision is also a signal about where the developers’ priorities lie: adding new users or keeping old ones happy. So it is basically unsurprising that:
* techies tend to prefer the latter—we’ve played this game before, and know we want to be the priority for the bulk of the time using the thing, not just while we’re being acquired.
* buggy slow featureful software dominates the field—this is produced by companies that are prioritizing growth first.
* history is littered with beautiful, elegant software that users miss dearly, but which didn’t catch on broadly enough to sustain the company.
However, the tradeoff is real in both directions; most people spend most of their time as users instead of potential users. I think this is probably a big force behind the general perception that software and computers are incredibly shit nowadays.
Almost all of my nontechnical friends and family members have at some point complained about bloated and overly complicated software that they are required to use.
Also remember that Microsoft at this point has to drag their users kicking and screaming to use the next Windows version. If users were let to decide for themselves, many would have never upgraded past Windows XP. All that despite all the pretty new features in the later versions.
I'm fully with you that businesses and investors want "features" for their own sake, but users definitely do not.
Perfectly put. People who try to argue that more time should be spent on making software perform better probably aren't thinking about who's going to pay for that.
For the home/office computer, the money spent on more RAM and a better CPU enables all software it runs to be shipped more cheaply and with more features.
I heartily agree. It would be nice if we could extend the lifetime of hardware 5 or 10 years past its "planned obsolescence." This would divert a lot of e-waste, leave a lot of rare earth minerals in the ground, and might even significantly lower GHG emissions.
The market forces for producing software however... are not paying for such externalities. It's much cheaper to ship it sooner, test, and iterate than it is to plan and design for performance. Some organizations in the games industry have figured out a formula for having good performance and moving units. It's not spread evenly though.
In enterprise and consumer software there's not a lot of motivation to consider performance criteria in requirements: we tend to design for what users will tolerate and give ourselves as much wiggle room as possible... because these systems tend to be complex and we want to ship changes/features continually. Every change is a liability that can affect performance and user satisfaction. So we make sure we have enough room in our budget for an error rate.
Much different from designing and developing software behind closed doors until it's "ready."
We've been able to run order matching engines for entire exchanges on a single thread for over a decade by this point.
I think this specific class of computational power - strictly serialized transaction processing - has not grown at the same rate as other metrics would suggest. Adding 31 additional cores doesn't make the order matching engine go any faster (it could only go slower).
If your product is handling fewer than several million transactions per second and you are finding yourself reaching for a cluster of machines, you need to back up like 15 steps and start over.
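To make the single-thread claim concrete, here is a minimal sketch (hypothetical code, not any real exchange's engine) of a price-time priority matching loop. The hot path is an ordered-map lookup plus some integer arithmetic, which is why throughput is bounded by memory latency rather than by how many cores you throw at it.

```cpp
// Minimal sketch of a single-threaded, price-time priority matching engine.
// Illustration only: real engines add order IDs, cancels, persistence and
// market-data publishing, but the hot path stays roughly this simple.
#include <algorithm>
#include <cstdint>
#include <deque>
#include <functional>
#include <iostream>
#include <map>

struct Order {
    bool    is_buy;
    int64_t price;   // integer ticks, never floating point
    int64_t qty;
};

class Book {
    // Best bid = highest price, best ask = lowest price.
    std::map<int64_t, std::deque<Order>, std::greater<int64_t>> bids_;
    std::map<int64_t, std::deque<Order>>                        asks_;

    template <typename Side, typename Crosses>
    void match(Order& incoming, Side& resting, Crosses crosses) {
        while (incoming.qty > 0 && !resting.empty() &&
               crosses(resting.begin()->first, incoming.price)) {
            auto&   queue = resting.begin()->second;   // FIFO at the best price level
            Order&  top   = queue.front();
            int64_t fill  = std::min(incoming.qty, top.qty);
            incoming.qty -= fill;
            top.qty      -= fill;
            std::cout << "fill " << fill << " @ " << resting.begin()->first << "\n";
            if (top.qty == 0) queue.pop_front();
            if (queue.empty()) resting.erase(resting.begin());
        }
    }

public:
    void submit(Order o) {
        if (o.is_buy)
            match(o, asks_, [](int64_t ask, int64_t limit) { return ask <= limit; });
        else
            match(o, bids_, [](int64_t bid, int64_t limit) { return bid >= limit; });
        if (o.qty > 0) {           // whatever didn't fill rests on the book
            if (o.is_buy) bids_[o.price].push_back(o);
            else          asks_[o.price].push_back(o);
        }
    }
};

int main() {
    Book book;
    book.submit({false, 101, 50});  // sell 50 @ 101
    book.submit({true,  102, 30});  // buy 30 @ 102 -> fills 30 @ 101
}
```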
> We've been able to run order matching engines for entire exchanges on a single thread for over a decade by this point.
This is the bit that really gets me fired up. People (read: system “architects”) were so desperate to “prove their worth” and leave a mark that many of these systems have been over complicated, unleashing a litany of new issues. The original design would still satisfy 99% of use cases and these days, given local compute capacity, you could run an entire market on a single device.
You are only able to do that because you are doing simple processing on each transaction. If you had to do more complex processing on each transaction it wouldn't be possible to do that many. Though it is hard for me to imagine what more complex processing would be (I'm not in your domain)
HFT would love to do more complex calculations for some of their trades. They often make the compromise of using a faster algorithm that is known to be right only 60% of the time vs the better but slower algorithm that is right 90% of the time.
That is a different problem from yours though and so it has different considerations. In some areas I/O dominates, in some it does not.
In a perfect world, maximizing (EV/op) x (ops/sec) should be done even for user software. How many person-years of productivity are lost each year to people waiting for Windows or Office to start up, finish updating, etc?
I work in card payments transaction processing and IO dominates. You need to have big models and lots of data to authorize a transaction. And you need that data as fresh as possible and as close to your compute as possible... but you're always dominated by IO. Computing the authorization is super cheap.
Tends to scale vertically rather than horizontally. Give me massive caches and wide registers and I can keep them full. For now though a lot of stuff is run on commodity cloud hardware so... eh.
Why can you not match orders in parallel using logarithmic reduction, the same way you would sort in parallel? Is it that there is not enough other computation being done other than sorting by time and price?
Well, yes. It's an economic problem (which is to say, it's a resource allocation problem). Do you have someone spend extra time optimising your software, or do you have them produce more functionality? If the latter generates more cash then that's what you'll get them to do. If the former becomes important to your cashflow then you'll get them to do that.
It's the kind of economics that shifts the financial cost into accumulating waste and technical debt, which is paid for by someone else. It's basically stealing. There are, of course, many cases in which thorough optimizing doesn't make much sense, but the idea of just adding servers instead of rewriting is a sad state of affairs.
It doesn't seem like stealing to me? Highly optimised software generally takes more effort to create and maintain.
The tradeoff is that we get more software in general, and more features in that software, i.e. software developers are more productive.
I guess on some level we can feel that it's morally bad that adding more servers or using more memory on the client is cheaper than spending developer time but I'm not sure how you could shift that equilibrium without taking away people's freedom to choose how to build software?
I feel like the argument is similar to that of all corporate externality pushes.
For example "polluting the air/water, requiring end-users to fill landfills with packaging and planned obscolescence" allows a company to more cheaply offer more products to you as a consumer.. but now everyone collectively has to live in a more polluted world with climate change and wasted source material converted to expensive and/or dangerous landfills and environmental damage from fracking and strip mining.
But that's still not different from theft. A company that sells you things that "Fell off the back of a truck" is in a position to offer you lower costs and greater variety, as well. Aren't they?
Our shared resources need to be properly managed: neither siphoned wastefully nor ruined via pollution. That proper management is a cost, and it either has to be borne by those using the resources and creating the waste, or it is theft of a shared resource and tragedy of the commons.
This feels like hyperbole to me. Who is being stolen from here? Not the end user, they're getting the tradeoff of more features for a low price in exchange for less optimized software.
From what I’m seeing, what people do on their computers has barely changed from what they were doing on their Pentium 4 machines. But now, with Electron-based software and the general state of Windows, you can’t recommend something older than 4 years. It’s hard not to see it as stealing when you have to buy a 1000+ laptop, when a 400 one could easily do the job if the software were a bit better.
It’s only a tradeoff for the user if the user finds the added features useful.
Increasingly, this is not the case. My favorite example here is the Adobe Creative Suite, where for many users useful new features became few and far between some time ~15 years ago. For those users, all they got was a rather absurd degree of added bloat and slowness for essentially the same thing they were using in 2010. These users would’ve almost certainly been happier had 80-90% of the feature work done in that time instead been bug fixes and optimization.
This is exactly right. Why should the company pay an extra $250k in salary to "optimize" when they can just offload that salary to their customers' devices instead? The extra couple of seconds, extra megabytes of bandwidth, and shittery of the whole ecosystem has been externalized to customers in search of ill-gotten profits.
It's like ignoring backwards compatibility. That is really cheap since all the cost is pushed to end-users (that have to relearn the UI) or second/third-party developers (that have to rewrite their client code to work with a new API). But it's OK since everyone is doing it and also without all those pointless rewrites many of us would not have a job.
> without all those pointless rewrites many of us would not have a job.
I hear arguments like this fairly often. I don't believe it's true.
Instead of having a job writing a pointless rewrite, you might have a job optimizing software. You might have a different career altogether. Having a job won't go away: what you do for your job will simply change.
Not really stealing. You could of course build software that is more optimized and with the same features, but at a higher cost. Would most buyers pay twice the price for a web app that loads in 1 sec instead of 2? Probably not.
I have been thinking about this a lot ever since I played a game called "Balatro". In this game nothing extraordinary happens in terms of computing - some computations get done, some images are shuffled around on the screen, the effects are sparse. The hardware requirements aren't much by modern standards, but still, this game could be ported 1:1 to a machine with a Pentium II and a 3dfx graphics card. And yet it demands so much more - not a lot by today's standards, but still. I am tempted to try to run it on a 2010 netbook to see if it even boots up.
It is made in Lua using love2d. That helped the developers, but comes at a cost in minimum requirements (even if they aren't much for a game released in 2024).
My daily drivers at home are an i3-540 and an Athlon II X4. Every time something breaks down, I find it much cheaper to just buy a new part than to buy a whole new kit with motherboard/CPU/RAM.
I'm a sysadmin, so I only really need to log into other computers, but I can watch videos, browse the web, and do some programming on them just fine. Best ROI ever.
Can you watch H.265 videos? That's the one limitation I regularly hit on my computer (that I got for free from some company, is pretty old, but is otherwise good enough that I don't think I'll replace it until it breaks). I don't think I can play videos recorded on modern iPhones.
IBM PowerPC 750X apparently, which was the CPU the Power Mac G3 used back in the day. Since it's going into space it'll be one of the fancy radiation-hardened versions which probably still costs more than your car though, and they run four of them in lockstep to guard against errors.
Ha! What's special about rad-hard chips is that they're old designs. You need big geometries to survive cosmic rays, and new chips all have tiny geometries.
So there are two solutions:
1. Find a warehouse full of 20-year old chips.
2. Build a fab to produce 20-year old designs.
Both approaches are used, and both approaches are expensive. (Approach 1 is expensive because as you eventually run out of chips they become very, very valuable and you end up having to build a fab anyway.)
There's more to it than just big geometries but that's a major part of the solution.
I'm not sure what artemis or orion are, but you can blame defense contractors for this. Nobody ever got fired for hiring IBM or Lockheed, even if they deliver unimpressive results at massive cost.
I don't disagree that the engineering can be justified. But you don't need custom hardware to achieve radiation hardening, much less hiring fucking IBM.
And to be clear, I love power chips. I remain very bullish about the architecture. But as a taxpayer reading this shit just pisses me off. Pork-fat designed to look pro-humanity.
If we're talking numbers, there are many, many more embedded systems than general purpose computers. And these are mostly built on ancient process nodes compared to the cutting edge we have today; the shiny octa-cores on our phones are supported by a myriad of ancillary chips that are definitely not cutting edge.
We aren't talking numbers, though. Who cares about embedded? I mean that literally. This is computation invisible by design. If that were sufficient we wouldn't have smartphones.
What cost? The hardware is dirt cheap. Programmers aren't cheap. The value of being able to use cheap software on cheap hardware is basically not having to spend a lot of time optimizing things. Time is the one thing that isn't cheap here. So there's a value in shipping something slightly sub optimal sooner rather than something better later.
> Except your browser taking 180% of available ram maybe.
For most business users, running the browser is pretty much the only job of the laptop. And using virtual memory for tabs that aren't currently in view is actually not that bad. There's no need to fit all your gazillion tabs into memory; only the ones you are looking at. Browsers are pretty good at that these days. The problem isn't that browsers aren't efficient but that we simply push them to the breaking point with content. Content creators simply expand their resource usage whenever browsers get optimized. The point of optimization is not saving cost on hardware but getting more out of the hardware.
The optimization topic triggers the OCD of a lot of people and sometimes those people do nice things. John Carmack built his career when Moore's law was still on display. Everything he did to get the most out of CPUs was super relevant and cool but it also dated in a matter of a few years. One moment we were running doom on simple 386 computers and the next we were running Quake and Unreal with shiny new Voodoo GPUs on a Pentium II pro. I actually had the Riva 128 as my first GPU, which was one of the first products that Nvidia shipped running Unreal and other cool stuff. And while CPUs have increased enormously in performance, GPUs have increased even more by some ridiculous factor. Nvidia has come a long way since then.
I'm not saying optimization is not important but I'm just saying that compute is a cheap commodity. I actually spend quite a bit of time optimizing stuff so I can appreciate what that feels like and how nice it is when you make something faster. And sometimes that can really make a big difference. But sometimes my time is better spent elsewhere as well.
Right, and that's true of end users as well. It's just not taken into account by most businesses.
I think your take is pretty reasonable, but I think most software is too far towards slow and bloated these days.
Browsers are pretty good, but developers create horribly slow and wasteful web apps. That's where the optimization should be done. And I don't mean they should make things as fast as possible, just test on an older machine that a big chunk of the population might still be using, and make it feel somewhat snappy.
The frustrating part is that most web apps aren't really doing anything that complicated, they're just built on layers of libraries that the developers don't understand very well. I don't really have a solution to any of this, I just wish developers cared a little bit more than they do.
I was working as a janitor, moonlighting as an IT director, in 2010. Back then I told the business that laptops for the past five years (roughly since Nehalem) have plenty of horsepower to run spreadsheets (which is basically all they do) with two cores, 16 GB of RAM, and a 500GB SATA SSD. A couple of users in marketing did need something a little (not much) beefier. Saved a bunch of money by not buying the latest-and-greatest laptops.
I don't work there any more. I'm convinced that's still true today: those computers should still be great for spreadsheets. Their workflow hasn't seriously changed. It's the software that has. If they've continued with updates (can it even "run" MS Windows 10 or 11 today? No idea, I've since moved on to Linux) then there's a solid chance that the amount of bloat, and especially the move to online-only spreadsheets, would tank their productivity.
Further, the internet at that place was terrible. The only offerings were ~16Mbit asymmetric DSL (for $300/mo just because it's a "business", when I could get the same speed for $80/mo at home), or Comcast cable 120Mbit for $500/mo. 120Mbit is barely enough to get by with an online-only spreadsheet, and 16Mbit definitely not. But worse: if the internet goes down, the business ceases to function.
This is the real theft that another commenter [0] mentioned that I wholeheartedly agree with. There's no reason whatsoever that a laptop running spreadsheets in an office environment should require internet to edit and update spreadsheets, or crazy amounts of compute/storage, or even huge amounts of bandwidth.
Computers today have zero excuse for terrible performance except only to offload costs onto customers - private persons and businesses alike.
He mentions that the rate of innovation would slow down, which I agree with. But I think even a 5% slower innovation rate, compounded over centuries of computer usage, would delay the optimizations we can do (or even our ability to figure out what needs optimizing), and in the end we'd be less efficient because we'd be slower at finding efficiencies. A low adoption rate of new efficiencies is worse than a high adoption rate of old efficiencies, I guess, is how to phrase it.
If Cadence, for example, releases every feature 5 years later because they spend more time optimizing (it's software, after all), how much will that delay semiconductor innovation?
Obviously, the world ran before computers. The more interesting part of this is what would we lose if we knew there were no new computers, and while I'd like to believe the world would put its resources towards critical infrastructure and global logistics, we'd probably see the financial sector trying to buy out whatever they could, followed by any data center / cloud computing company trying to lock all of the best compute power in their own buildings.
The idea of a hand-me-down computer made of brass and mahogany still sounds ridiculous because it is, but we're nearly there in terms of Moore's law. We have true 2nm within reach and then the 1nm process is basically the end of the journey. I expect 'audiophile grade' PCs in the 2030s and then PCs become works of art, furniture, investments, etc. because they have nowhere to go.
The increasing longevity of computers has been impressing me for about 10 years.
My current machine is 4 years old. It's absolutely fine for what I do. I only ever catch it "working" when I futz with 4k 360 degree video (about which: fine). It's an M1 MacBook Pro.
I traded its predecessor in to buy it, so I don't have that one anymore; it was a 2019 model. But the one before that, a 2015 13" Intel Macbook Pro, is still in use in the house as my wife's computer. Keyboard is mushy now, but it's fine. It'd probably run faster if my wife didn't keep fifty billion tabs open in Chrome, but that's none of my business. ;)
The one behind that one, purchased in 2012, is also still in use as a "media server" / ersatz SAN. It's a little creaky and is I'm sure technically a security risk given its age and lack of updates, but it RUNS just fine.
The priority should be safety, not speed. I prefer an e.g. slower browser or OS that isn't ridden with exploits and attack vectors.
Of course that doesn't mean everything should be done in JS and Electron as there's a lot of drawbacks to that. There exists a reasonable middle ground where you get e.g. memory safety but don't operate on layers upon layers of heavy abstraction and overhead.
Is there or could we make an iPhone-like that runs 100x slower than conventional phones but uses much less energy, so it powers itself on solar? It would be good for the environment and useful in survival situations.
Or could we make a phone that runs 100x slower but is much cheaper? If it also runs on solar it would be useful in third-world countries.
Processors are more than fast enough for most tasks nowadays; more speed is still useful, but I think improving price and power consumption is more important. Also cheaper E-ink displays, which are much better for your eyes, more visible outside, and use less power than LEDs.
> Or could we make a phone that runs 100x slower but is much cheaper?
Probably not - a large part of the cost is equipment and R&D. It doesn't cost much more to build the most complex CPU vs a 6502 - there is only a tiny bit more silicon and chemicals. What is costly is the R&D behind the chip, and the R&D behind the machines that make the chips. If Intel fired all their R&D engineers who were not focused on reducing manufacturing costs, they could greatly reduce the price of their CPUs - until AMD released a next generation that is much better. (This is more or less what Henry Ford did with the Model T - he reduced costs every year until his competition's added features were enough better that he couldn't sell his cars.)
We have much hardware on the secondary market (resale) that's only 2-3x slower than pristine new primary market devices. It is cheap, it is reuse, and it helps people save in a hyper-consumerist society. The common complaint is that it doesn't run bloated software anymore. And I don't think we can make non-bloated software for a variety of reasons.
As a video game developer, I can add some perspective (N=1 if you will). Most top-20 game franchises spawned years ago on much weaker hardware, but their current installments demand hardware not even a few years old (as recommended/intended way to play the game). This is due to hyper-bloating of software, and severe downskilling of game programmers in the industry to cut costs. The players don't often see all this, and they think the latest game is truly the greatest, and "makes use" of the hardware. But the truth is that aside from current-generation graphics, most games haven't evolved much in the last 10 years, and current-gen graphics arrived on PS4/Xbox One.
Ultimately, I don't know who or what is the culprit of all this. The market demands cheap software. Games used to cost up to $120 in the 90s, which is $250 today. A common price point for good quality games was $80, which is $170 today. But the gamers absolutely decry any game price increases beyond $60. So the industry has no option but to look at every cost saving, including passing the cost onto the buyer through hardware upgrades.
Ironically, upgrading a graphics card one generation (RTX 3070 -> 4070) costs about $300 if the old card is sold and $500 if it isn't. So gamers end up paying ~$400 for the latest games every few years and then rebel against paying $30 extra per game instead, which could very well be cheaper than the GPU upgrade (let alone other PC upgrades), and would allow companies to spend much more time on optimization. Well, assuming it wouldn't just go into the pockets of publishers (but that is a separate topic).
It's an example of Scott Alexander's Moloch where it's unclear who could end this race to the bottom. Maybe a culture shift could, we should perhaps become less consumerist and value older hardware more. But the issue of bad software has very deep roots. I think this is why Carmack, who has a practically perfect understanding of software in games, doesn't prescribe a solution.
One only needs to look at Horizon: Zero Dawn to note that the truth of this is deeply uneven across the games industry. World streaming architectures are incredible technical achievements. So are moddable engines. There are plenty of technical limits being pushed by devs, it's just not done at all levels.
You are right, but you picked a game by a studio known for its technical expertise, with plenty of points to prove about quality game development. I'd like them to be the future of this industry.
But right now, 8-9/10 game developers and publishers are deeply concerned with cash and rather unconcerned by technical excellence or games as a form of interactive art (where, once again, Guerrilla and many other Sony studios are).
Minimalism is excellent. As others have mentioned, using languages that are more memory safe (assuming the language is written in such a way) may be worth the additional complexity cost.
But surely, with burgeoning AI use, efficiency savings are being gobbled up by the brute-force nature of it.
Maybe shared model training and the likes of Hugging Face can keep different groups from reinventing the same AI wheel, spending more resources than a cursory search of an existing resource would have.
Tell me about it. Web development has only become fun again at my place since upgrading from Intel Mac to M4 Mac.
Just throw in Slack chat, a vscode editor in Electron, a Next.js stack, 1-2 docker containers, and one browser, and you need top-notch hardware to run it fluidly (Apple Silicon is amazing though). I'm doing no fancy stuff.
Chat, editor in a browser and docker don't seem the most efficient thing if put all together.
I'm already moving in this direction in my personal life. It's partly nostalgia but it's partly practical. It's just that work requires working with people who only use what HR and IT foist on them, so I need a separate machine for that.
100% agree with Carmack. There was a craft in writing software that I feel has been lost with access to inexpensive memory and compute. Programmers can be inefficient because they have all that extra headroom to do so which just contributes to the cycle of needing better hardware.
Software development has been commoditized and is directed by MBAs and others who don't see it as a craft. The need for fast project execution is above the craft of programming, hence the code is bug-riddled and slow.
There are some niche areas (vintage, PICO-8, Arduino...) where people can still practise the craft, but that's just a hobby now. When this topic comes up I always think about Tarkovsky's Andrei Rublev movie, the artist's struggle.
Really no notes on this. Carmack hit both sides of the coin:
- the way we do industry-scale computing right now tends to leave a lot of opportunity on the table because we decouple, interpret, and de-integrate where things would be faster and take less space if we coupled, compiled, and made monoliths
- we do things that way because it's easier to innovate, tweak, test, and pivot on decoupled systems that isolate the impact of change and give us ample signal about their internal state to debug and understand them
It could also run on much less current hardware if efficiency was a priority. Then comes the AI bandwagon and everyone is buying loads of new equipment to keep up with the Joneses.
This is a double-edged sword, but I think what people are glossing over with the compute power topic is power efficiency. One thing I struggle with when home-labbing old gaming equipment is the power efficiency of new hardware. Hardly a valid comparison, but I can choose to recycle my Ryzen 1700x with a 2080ti as a media server that will probably consume a few hundred watts, or I can get an M1 that sips power. The double-edged part is that the Ryzen system becomes considerably more power efficient running proxmox or ubuntu server vs a windows client. We as a society choose the niche we want to leverage, and it swings like economics: strapped for cash, we build more efficient code; no limits, we buy the horsepower to meet the needs.
Yeah, having browsers the size and complexities of OSs is just one of many symptoms. I intimate at this concept in a grumbling, helpless manner somewhat chronically.
There's a lot today that wasn't possible yesterday, but it also sucks in ways that weren't possible then.
I foresee hostility for saying the following, but it really seems most people are unwilling to admit that most software (and even hardware) isn't necessarily made for the user or its express purpose anymore. To be perhaps a bit silly, I get the impression of many services as bait for telemetry and background fun.
While not an overly earnest example, looking at Android's Settings/System/Developer Options is pretty quick evidence that the user is involved but clearly not the main component in any respect. Even an objective look at Linux finds manifold layers of hacks and compensation for a world of hostile hardware and soft conflict. It often works exceedingly well, though as impractical as it may be to fantasize, imagine how badass it would be if everything was clean, open and honest. There's immense power, with lots of infirmities.
I've said that today is the golden age of the LLM in all its puerility. It'll get way better, yeah, but it'll get way worse too, in the ways that matter.[1]
I mean, if you put Win 95 on a period-appropriate machine, you can do office work easily. All that is really driving computing power is the web and gaming. If we weren't doing either of those things as much, I bet we could all quite happily use machines from the 2000s era.
Let's keep the CPU efficiency golf to Zachtronics games, please.
I/O is almost always the main bottleneck. I swear to god 99% of developers out there only know how to measure cpu cycles of their code so that's the only thing they optimize for. Call me after you've seen your jobs on your k8s clusters get slow because all of your jobs are inefficiently using local disk and wasting cycles waiting in queue for reads/writes. Or your DB replication slows down to the point that you have to choose between breaking the mirror and stop making money.
And older hardware consumes more power. That's the main driving factor behind server hardware upgrades, because you can fit more compute into your datacenter.
I agree with Carmack's assessment here, but most people reading are taking the wrong message away with them.
There's servers and there's all of the rest of consumer hardware.
I need to buy a new phone every few years simply because the manufacturer refuses to update it. Or they add progressively more computationally expensive effects that makes my old hardware crawl. Or the software I use only supports 2 old version of macOS. Or Microsoft decides that your brand new cpu is no good for win 11 because it's lacking a TPM. Or god help you if you try to open our poorly optimized electron app on your 5 year old computer.
People say this all the time, and usually it's just an excuse not to optimize anything.
First, I/O can be optimized. It's very likely that most servers are either wasteful in the number of requests they make, or are shuffling more data around than necessary.
Beyond that though, adding slow logic on top of I/O latency only makes things worse.
Also, what does I/O being a bottleneck have to do with my browser consuming all of my RAM and using 120% of my CPU? Most people who say "I/O is the bottleneck" as a reason to not optimize only care about servers, and ignore the end users.
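As a toy illustration of the "fewer, larger requests" point (my example, not the commenter's): the same 10 MB written as a million tiny unbuffered writes versus a handful of 1 MB writes moves identical data but issues vastly fewer system calls. Exact timings depend on OS, filesystem and hardware, so treat the sketch as a shape, not a benchmark.

```cpp
// Toy comparison of "many tiny I/O operations" vs "few large ones".
// The data written is identical; only the number of write calls changes.
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <iostream>
#include <string>

static double timed_write(const char* path, size_t chunk, const std::string& payload) {
    auto t0 = std::chrono::steady_clock::now();
    std::FILE* f = std::fopen(path, "wb");
    if (!f) return -1.0;
    std::setvbuf(f, nullptr, _IONBF, 0);  // disable stdio buffering: roughly one write() per fwrite
    for (size_t off = 0; off < payload.size(); off += chunk)
        std::fwrite(payload.data() + off, 1, std::min(chunk, payload.size() - off), f);
    std::fclose(f);
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
}

int main() {
    std::string payload(10 * 1024 * 1024, 'x');  // 10 MB of dummy data
    std::cout << "10-byte writes: " << timed_write("tiny.bin", 10,      payload) << " s\n";
    std::cout << "1 MB writes:    " << timed_write("big.bin",  1 << 20, payload) << " s\n";
}
```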
I/O _can_ be optimized. I know someone who had this as their fulltime job at Meta. Outside of that nobody is investing in it though.
I'm a platform engineer for a company with thousands of microservices. I'm not thinking on your desktop scale. Our jobs are all memory hogs and I/O bound messes. Across all of the hardware we're buying we're using maybe 10% CPU. Peers I talk to at other companies are almost universally in the same situation.
I'm not saying don't care about CPU efficiency, but I encounter dumb shit all the time like engineers asking us to run exotic new databases with bad licensing and no enterprise features just because it's 10% faster when we're nowhere near experiencing those kinds of efficiency problems. I almost never encounter engineers who truly understand or care about things like resource contention/utilization. Everything is still treated like an infinite pool with perfect 100% uptime, despite (at least) 20 years of the industry knowing better.
[0] https://www.lg.com/uk/lg-experience/inspiration/lg-ai-wash-e...
Like any other feature, whether or not performance is important depends on the user and context. Chrome being faster than IE8 at general browsing (rendering pages, opening tabs) was very noticeable. uv/ruff being faster than pip/poetry is important because of how the tools integrate into performance-sensitive development workflows. Does Slack taking 5-10 seconds to load on startup matter? -- to me not really, because I have it come up on boot and forget about it until my next system update forced reboot. Do I use LibreOffice or Word and Excel, even though LibreOffice is faster? -- I use Word/Excel because I've run into annoying compatibility issues enough times with LO to not bother. LibreOffice could reduce their startup and file load times to 10 picoseconds and I would still use MS Office, because I just want my damn documents to keep the same formatting my colleagues using MS Office set on their Windows computers.
Now of course I would love the best of all worlds; programs to be fast and have all the functionality I want! In reality, though, companies can't afford to build every feature, performance included, and need to pick and choose what's important.
That's true. I meant it in a broader sense. Quality = {speed, function, lack of bugs, ergonomics, ... }.
The dumbest and most obvious of realizations finally dawned on me after trying to build a software startup that was based on quality differentiation. We were sure that a better product would win people over and lead to viral success. It didn’t. Things grew, but so slowly that we ran out of money after a few years before reaching break even.
What I realized is that lower costs, and therefore lower quality, are a competitive advantage in a competitive market. Duh. I’m sure I knew and said that in college and for years before my own startup attempt, but this time I really felt it in my bones. It suddenly made me realize exactly why everything in the market is mediocre, and why high quality things always get worse when they get more popular. Pressure to reduce costs grows with the scale of a product. Duh. People want cheap, so if you sell something people want, someone will make it for less by cutting “costs” (quality). Duh. What companies do is pay the minimum they need in order to stay alive & profitable. I don’t mean it never happens, sometimes people get excited and spend for short bursts, young companies often try to make high quality stuff, but eventually there will be an inevitable slide toward minimal spending.
There’s probably another name for this, it’s not quite the Market for Lemons idea. I don’t think this leads to market collapse, I think it just leads to stable mediocrity everywhere, and that’s what we have.
I don't think it's necessarily a market for lemons. That involves information asymmetry.
Sometimes that happens with buggy software, but I think in general, people just want to pay less and don't mind a few bugs in the process. Compare and contrast what you'd have to charge to do a very thorough process with multiple engineers checking every line of code and many hours of rigorous QA.
I once did some software for a small book shop where I lived in Padova, and created it pretty quickly and didn't charge the guy - a friend - much. It wasn't perfect, but I fixed any problems (and there weren't many) as they came up and he was happy with the arrangement. He was patient because he knew he was getting a good deal.
I do think there is an information problem in many cases.
It is easy to get information on features. It is hard to get information on reliability or security.
The result is worsened because vendors compete on features, therefore they all make the same trade off of more features for lower quality.
I have worked for large corporations that have foisted awful HR, expense reporting, time tracking and insurance "portals" that were so awful I had to wonder if anyone writing the checks had ever seen the product. I brought up the point several times that if my team tried to tell a customer that we had their project all done but it was full of as many bugs and UI nightmares as these back office platforms, I would be chastised, demoted and/or fired.
If they think it is unimportant talk as if it is. It could be more polished. Do we want to impress them or just satisfy their needs?
> I had to wonder if anyone writing the checks had ever seen the product
Probably not, and that's like 90% of the issue with enterprise software. Sadly enterprise software products are often sold based mainly on how many boxes they check in the list of features sent to management, not based on the actual quality and usability of the product itself.
The job it’s paid to do is satisfy regulation requirements.
What you're describing is Enterprise(tm) software. Some consultancy made tens of millions of dollars building, integrating, and deploying those things. This of course was after they made tens of millions of dollars producing reports exploring how they would build, integrate, and deploy these things and all the various "phases" involved. Then they farmed all the work out to cheap coders overseas and everyone went for golf.
Meanwhile I'm a founder of startup that has gotten from zero to where it is on probably what that consultancy spends every year on catering for meetings.
User tolerance has changed as well, because of the web 2.0 "perpetual beta" and SaaS replacing other distribution models.
Also Microsoft has educated now several generations to accept that software fails and crashes.
Because "all software is the same", customers may not appreciate good software when they're used to live with bad software.
Is this really tolerance and not just monopolistic companies abusing their market position? I mean workers can't even choose what software they're allowed to use, those choices are made by the executive/management class.
I have that washing machine btw. I saw the AI branding and had a chuckle. I bought it anyway because it was reasonably priced (the washer was $750 at Costco).
In my case I bought it because LG makes appliances that fit under the counter if you don't have much space.
The AI BS bothered me, but the price was good and the machine works fine.
Even if end-users had the data to reasonably tie-break on software quality and performance, as I scroll my list of open applications not a single one of them can be swapped out with another just because it were more performant.
For example: Docker, iterm2, WhatsApp, Notes.app, Postico, Cursor, Calibre.
I'm using all of these for specific reasons, not for reasons so trivial that I can just use the best-performing solution in each niche.
So it seems obviously true that it's more important that software exists to fill my needs in the first place than it pass some performance bar.
I’m surprised by your list because it contains 3 apps that I’ve replaced specifically due to performance issues (docker, iterm and notes). I don’t consider myself particularly performance sensitive (at home) either. So it might be true that the world is even _less_ likely to pay for resource efficiency than we think.
Except you’ve already swapped terminal for iterm, and orbstack already exists in part because docker left so much room for improvement, especially on the perf front.
Therefore, brands as guardians of quality.
> The buyer cannot differentiate between high and low-quality goods before buying, so the demand for high and low-quality goods is artificially even. The cause is asymmetric information.
That's where FOSS or even proprietary "shared source" wins. You know if the software you depend on is generally badly or generally well programmed. You may not be able to find the bugs, but you can see how long the functions are, the comments, and how things are named. YMMV, but conscientiousness is a pretty great signal of quality; you're at least confident that their code is clean enough that they can find the bugs.
Basically the opposite of the feeling I get when I look at the db schemas of proprietary stuff that we've paid an enormous amount for.
Bad software is not cheaper to make (or maintain) in the long-term.
There are many exceptions.
1. Sometimes speed = money. Being the first to market, meeting VC-set milestones for additional funding, and not running out of runway are all things cheaper than the alternatives. Software maintenance costs later don't come close to opportunity costs if a company/project fails.
2. Most of the software is disposable. It's made to be sold, and the code repo will be chucked into a .zip on some corporate drive. There is no post-launch support, and the software's performance after launch is irrelevant for the business. They'll never touch the codebase again. There is no "long-term" for maintenance. They may harm their reputation, but that depends on whether their clients can talk with each other. If they have business or govt clients, they don't care.
3. The average tenure in tech companies is under 3 years. Most people involved in software can consider maintenance "someone else's problem." It's like the housing stock is in bad shape in some countries (like the UK) because the average tenure is less than 10 years. There isn't a person in the property's owner history to whom an investment in long-term property maintenance would have yielded any return. So now the property is dilapidated. And this is becoming a real nationwide problem.
4. Capable SWEs cost a lot more money. And if you hire an incapable IC who will attempt to future-proof the software, maintenance costs (and even onboarding costs) can balloon much more than some inefficient KISS code.
5. It only takes 1 bad engineering manager in the whole history of a particular piece of commercial software to ruin its quality, wiping out all previous efforts to maintain it well. If someone buys a second-hand car and smashes it into a tree hours later, was keeping the car pristinely maintained for that moment (by all the previous owners) worth it?
And so forth. What you say is true in some cases (esp where a company and its employees act in good faith) but not in many others.
That’s true - but finding good engineers who know how to do it is more expensive, at least in upfront expenditure.
Maybe not, but that still leaves the question of who ends up bearing the actual costs of the bad software.
What does "make in the long-term" even mean? How do you make a sandwich in the long-term?
Bad things are cheaper and easier to make. If they weren't, people would always make good things. You might say "work smarter," but smarter people cost more money. If smarter people didn't cost more money, everyone would always have the smartest people.
The thing is, to continue your food analogy, countries have set down legal rules preventing the sale of food that actively harms the consumer (expired, known poisonous, etc).
In software, the regulations could be boiled down to "lol lmao" in the pre-GDPR era. And even now I see GDPR violations daily.
I like to point out that since ~1980, computing power has increased about 1000X.
If dynamic array bounds checking cost 5% (narrator: it is far less than that), and we turned it on everywhere, we could have computers that are just a mere 950X faster.
If you went back in time to 1980 and offered the following choice:
I'll give you a computer that runs 950X faster and doesn't have a huge class of memory safety vulnerabilities, and you can debug your programs orders of magnitude more easily, or you can have a computer that runs 1000X faster and software will be just as buggy, or worse, and debugging will be even more of a nightmare.
People would have their minds blown at 950X. You wouldn't even have to offer 1000X. But guess what we chose...
Personally I think the 1000Xers kinda ruined things for the rest of us.
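For readers who want to see what "dynamic array bounds checking" means in practice, a small hedged sketch in C++ (illustrative, not a benchmark of any particular compiler): the checked accessor adds a compare-and-branch per access, which is the few-percent cost being discussed, while the unchecked one silently turns a bad index into undefined behavior.

```cpp
// Unchecked vs. bounds-checked array access in C++. The checked form pays a
// compare-and-branch per access that the branch predictor almost always gets
// right; the unchecked form silently reads out of bounds on bad input.
#include <cstdint>
#include <iostream>
#include <numeric>
#include <stdexcept>
#include <vector>

int64_t sum_unchecked(const std::vector<int>& v, const std::vector<size_t>& idx) {
    int64_t s = 0;
    for (size_t i : idx) s += v[i];      // no check: out-of-range i is undefined behaviour
    return s;
}

int64_t sum_checked(const std::vector<int>& v, const std::vector<size_t>& idx) {
    int64_t s = 0;
    for (size_t i : idx) s += v.at(i);   // checked: out-of-range i throws std::out_of_range
    return s;
}

int main() {
    std::vector<int> data(1000);
    std::iota(data.begin(), data.end(), 0);
    std::vector<size_t> idx = {1, 5, 999, 1000};   // the last index is out of range

    std::cout << sum_unchecked(data, {1, 5, 999}) << "\n";
    try {
        sum_checked(data, idx);                     // caught instead of silently corrupting
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << "\n";
    }
}
```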
Except we've squandered that 1000x not on bounds checking but on countless layers of abstractions and inefficiency.
Am I taking crazy pills or are programs not nearly as slow as HN comments make them out to be? Almost everything loads instantly on my 2021 MacBook and 2020 iPhone. Every program is incredibly responsive. 5 year old mobile CPUs load modern SPA web apps with no problems.
The only thing I can think of that’s slow is Autodesk Fusion starting up. Not really sure how they made that so bad but everything else seems super snappy.
Slack, teams, vs code, miro, excel, rider/intellij, outlook, photoshop/affinity are all applications I use every day that take 20+ seconds to launch. My corporate VPN app takes 30 seconds to go from a blank screen to deciding if it’s going to prompt me for credentials or remember my login, every morning. This is on an i9 with 64GB RAM and 1Gb fiber.
On the website front - Facebook, twitter, Airbnb, Reddit, most news sites - all take 10+ seconds to load or be functional, and their core functionality has regressed significantly in the last decade. I’m not talking about features that I prefer, but as an example: if you load two links in Reddit in two different tabs, my experience has been that it’s 50/50 whether they’ll actually both load or whether one gets stuck on loading skeletons.
> are all applications I use every day that take 20+ seconds to launch.
I suddenly remembered some old Corel Draw version circa 2005, which had a loading screen enumerating random things it was loading and computing, ending with a final message "Less than a minute now...". It did indeed most often take less than a minute to show the interface :).
I'm on a four year old mid-tier laptop and opening VS Code takes maybe five seconds. Opening IDEA takes five seconds. Opening twitter on an empty cache takes perhaps four seconds and I believe I am a long way from their servers.
On my work machine slack takes five seconds, IDEA is pretty close to instant, the corporate VPN starts nearly instantly (although the Okta process seems unnecessarily slow I'll admit), and most of the sites I use day-to-day (after Okta) are essentially instant to load.
I would say that your experiences are not universal, although snappiness was the reason I moved to apple silicon macs in the first place. Perhaps Intel is to blame.
VS Code defers a lot of tasks to the background at least. This is a bit more visible in intellij; you seem to measure how long it takes to show its window, but how long does it take for it to warm up and finish indexing / loading everything, or before it actually becomes responsive?
Anyway, five seconds is long for a text editor; 10, 15 years ago, sublime text loaded and opened up a file in <1 second, and it still does today. Vim and co are instant.
Also keep in mind that desktop computers haven't gotten significantly faster for tasks like opening applications in the past years; they're more efficient (especially the M line CPUs) and have more hardware for specialist workloads like what they call AI nowadays, but not much innovation in application loading.
You use a lot of words like "pretty close to", "nearly", "essentially", but 10, 20 years ago they WERE instant; applications from 10, 20 years ago should be so much faster today than they were on hardware from back then.
I wish the big desktop app builders would invest in native applications. I understand why they go for web technology (it's the crossplatform GUI technology that Java and co promised and offers the most advanced styling of anything anywhere ever), but I wish they invested in it to bring it up to date.
> Anyway, five seconds is long for a text editor; 10, 15 years ago, sublime text loaded and opened up a file in <1 second, and it still does today. Vim and co are instant.
Do any of those do the indexing that causes the slowness? If not, it's comparing apples to oranges.
What timescale are we talking about? Many DOS stock and accounting applications were basically instantaneous. There are some animations on iPhone that you can't disable that take longer than a series of keyboard actions of a skilled operator in the 90s. Windows 2k with a stripped shell was way more responsive that today's systems as long as you didn't need to hit the harddrives.
The "instant" today is really laggy compared to what we had. Opening Slack takes 5s on a flagship phone and opening a channel which I just had open and should be fully cached takes another 2s. When you type in JIRA the text entry lags and all the text on the page blinks just a tiny bit (full redraw). When pages load on non-flagship phones (i.e. most of the world), they lag a lot, which I can see on monitoring dashboards.
I guess you don't need to wrestle with Xcode?
Somehow the Xcode team managed to make startup and some features in newer Xcode versions slower than older Xcode versions running on old Intel Macs.
E.g. the ARM Macs are a perfect illustration that software gets slower faster than hardware gets faster.
After a very short 'free lunch' right after the Intel => ARM transition we're now back to the same old software performance regression spiral (e.g. new software will only be optimized until it feels 'fast enough', and that 'fast enough' duration is the same no matter how fast the hardware is).
Another excellent example is the recent release of the Oblivion Remaster on Steam (which uses the brand new UE5 engine):
On my somewhat medium-level PC I have to reduce the graphics quality in the Oblivion Remaster so much that the result looks worse than 14-year old Skyrim (especially outdoor environments), and that doesn't even result in a stable 60Hz frame rate, while Skyrim runs at a rock-solid 60Hz and looks objectively better in the outdoors.
E.g. even though the old Skyrim engine isn't nearly as technologically advanced as UE5 and had plenty of performance issues at launch on a ca. 2010 PC, the Oblivion Remaster (which uses a "state of the art" engine) looks and performs worse than its own 14-year-old predecessor.
I'm sure the UE5-based Oblivion remaster can be properly optimized to beat Skyrim both in looks and performance, but apparently nobody cared about that during development.
You're comparing the art(!) of two different games, that targeted two different sets of hardware while using the ideal hardware for one and not the other. Kind of a terrible example.
> You're comparing the art(!)
The art direction, modelling and animation work is mostly fine, the worse look results from the lack of dynamic lighting and ambient occlusion in the Oblivion Remaster when switching Lumen (UE5's realtime global illumination feature) to the lowest setting, this results in completely flat lighting for the vegetation but is needed to get an acceptable base frame rate (it doesn't solve the random stuttering though).
Basically, the best art will always look bad without good lighting (and even baked or faked ambient lighting like in Skyrim looks better than no ambient lighting at all).
Digital Foundry has an excellent video about the issues:
https://www.youtube.com/watch?v=p0rCA1vpgSw
TL;DR: the 'ideal hardware' for the Oblivion Remaster doesn't exist, even if you get the best gaming rig money can buy.
> …when switching Lumen (UE5's realtime global illumination feature) to the lowest setting, this results in completely flat lighting for the vegetation but is needed to get an acceptable base frame rate (it doesn't solve the random stuttering though).
This also happens to many other UE5 games like S.T.A.L.K.E.R. 2 where they try to push the graphics envelope with expensive techniques and most people without expensive hardware have to turn the settings way down (even use things like upscaling and framegen which further makes the experience a bit worse, at least when the starting point is very bad and you have to use them as a crutch), often making these modern games look worse than something a decade old.
Whatever UE5 is doing (or rather, how so many developers choose to use it) is a mistake now and might be less of a mistake in 5-10 years when the hardware advances further and becomes more accessible. Right now it feels like a ploy by the Big GPU to force people to upgrade to overpriced hardware if they want to enjoy any of these games; or rather, sillyness aside, is an attempt by studios to save resources by making the artists spend less time on faking and optimizing effects and detail that can just be brute forced by the engine.
In contrast, most big CryEngine and idTech games run great even on mid range hardware and still look great.
I just clicked on the network icon next to the clock on a Windows 11 laptop. A gray box appeared immediately, about one second later all the buttons for wifi, bluetooth, etc appeared. Windows is full of situations like this, that require no network calls, but still take over one second to render.
It's strange: visibly loading the buttons suggests they use async techniques that could exploit multithreaded CPUs effectively... but the result is slower than the old synchronous UI stuff.
I'm sure it's significantly more expensive to render than Windows 3.11 - XP were - rounded corners and scalable vector graphics instead of bitmaps or whatever - but surely not that much? And the resulting graphics can be cached.
I'd wager that a 2021 MacBook, like the one I have, is stronger than the laptop used by majority of people in the world.
Life on an entry or even mid level windows laptop is a very different world.
Yep. Developers make programs run well enough on the hardware sitting on our desks. So long as we’re well paid (and have decent computers ourselves), we have no idea what the average computing experience is for people still running 10yo computers which were slow even for the day. And that keeps the treadmill going. We make everyone need to upgrade every few years.
A few years ago I accidentally left my laptop at work on a Friday afternoon. Instead of going into the office, I pulled out a first generation raspberry pi and got everything set up on that. Needless to say, our nodejs app started pretty slowly. Not for any good reason - there were a couple modules which pulled in huge amounts of code which we didn’t use anyway. A couple hours work made the whole app start 5x faster and use half the ram. I would never have noticed that was a problem with my snappy desktop.
I've found so many performance issues at work by booting up a really old laptop or working remotely from another continent. It's pretty straightforward to simulate either poor network conditions or generally low performance hardware, but we just don't generally bother to chase down those issues.
Oh yeah, I didn't even touch on devs being used to working on super fast internet.
If you're on Mac, go install Network Link Conditioner and crank the download and upload speeds way down. (Xcode > Open Developer Tools > More Developer Tools... > "Additional Tools for Xcode {Version}").
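If you're not on a Mac (or want something scriptable), a tiny latency-injecting proxy gets you most of the way there. A minimal Python sketch, purely illustrative; the ports, the upstream address, and the delay are all placeholder assumptions:

```python
import asyncio

LATENCY_S = 0.3                  # artificial one-way delay per chunk (assumed value)
LISTEN_PORT = 8080               # hypothetical local port your app connects to
UPSTREAM = ("127.0.0.1", 3000)   # hypothetical real server behind the proxy

async def pipe(reader, writer):
    # Copy bytes one chunk at a time, sleeping to simulate a slow, high-latency link.
    while data := await reader.read(65536):
        await asyncio.sleep(LATENCY_S)
        writer.write(data)
        await writer.drain()
    writer.close()

async def handle(client_reader, client_writer):
    upstream_reader, upstream_writer = await asyncio.open_connection(*UPSTREAM)
    # Shuffle data in both directions until either side hangs up.
    await asyncio.gather(
        pipe(client_reader, upstream_writer),
        pipe(upstream_reader, client_writer),
    )

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", LISTEN_PORT)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```

Point the app at the proxy port instead of the real server and every request suddenly feels like hotel Wi-Fi.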
A mix of both. There are a large number of websites that are inefficiently written, using up unnecessary amounts of resources. Semi-modern devices make up for that by just having a massive amount of computing power.
However, you also need to consider 2 additional factors. Macbooks and iPhones, even 4 year old ones, have usually been at the upper end of the scale for processing power. (When compared to the general mass-market of private end-consumer devices)
Try doing the same on a 4 year old 400 Euro laptop and it might look a bit different. Also consider your connection speed and latency. I usually have no loading issue either. But I have a 1G fiber connection. My parents don't.
I think it's a very theoretical argument: we could of course theoretically make everything even faster. It's nowhere near the most optimal use of the available hardware. All we'd have to give up is squishy hard-to-measure things like "feature sets" and "engineering velocity."
> we could of course theoretically make everything even faster. It's nowhere near the most optimal use of the available hardware. All we'd have to give up is squishy hard-to-measure things like "feature sets" and "engineering velocity."
Says who? Who are these experienced people that know how to write fast software that think it is such a huge sacrifice?
The reality is that people who say things like this don't actually know much about writing fast software because it really isn't that difficult. You just can't grab Electron and the latest JavaScript React framework craze.
These kinds of myths get perpetuated by people who repeat them without ever having experienced just writing native software. I think it's mostly people rationalizing not learning C++ and sticking to JavaScript or Python because that's what they learned first.
> All we'd have to give up is squishy hard-to-measure things like "feature sets" and "engineering velocity."
Would we? Really? I don't think giving up performance needs to be a compromise for the number of features or speed of delivering them.
People make higher-order abstractions for funzies?
One example is Office. Microsoft is going back to preloading Office during Windows boot so that you don't notice it loading. With the average system spec 25 years ago, it made sense to preload Office. But today, what is Office doing that it needs to offload its startup to running at boot?
I think it’s a little more nuanced than the broad takes make it seem.
One of the biggest performance issues I witness is that everyone assumes a super fast, always on WiFi/5G connection. Very little is cached locally on device so even if I want to do a very simple search through my email inbox I have to wait on network latency. Sometimes that’s great, often it really isn’t.
Same goes for many SPA web apps. It’s not that my phone can’t process the JS (even though there’s way too much of it), it’s poor caching strategies that mean I’m downloading and processing >1MB of JS way more often than I should be. Even on a super fast connection that delay is noticeable.
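For what it's worth, a lot of that repeat download is fixable with plain HTTP caching rather than anything exotic. A minimal sketch, assuming content-hashed bundle filenames (the port and file layout are made up for illustration):

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

class CachingHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Content-hashed bundles (e.g. app.3f9c2b.js) can be cached "forever":
        # a new deploy changes the filename, so stale copies are never served.
        if self.path.endswith((".js", ".css")):
            self.send_header("Cache-Control", "public, max-age=31536000, immutable")
        else:
            # HTML stays revalidated so users pick up new bundle names quickly.
            self.send_header("Cache-Control", "no-cache")
        super().end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), CachingHandler).serve_forever()
```

The same two headers exist as settings on any CDN or reverse proxy; the point is just that returning visitors shouldn't pay the >1MB tax twice.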
Spotify takes 7 seconds from clicking on its icon to playing a song on a 2024 top-of-the-range MacBook Pro. Navigating through albums saved on your computer can take several seconds. Double clicking on a song creates a 1/4sec pause.
This is absolutely remarkable inefficiency considering the application's core functionality (media players) was perfected a quarter century ago.
It depends. Can Windows 3.11 be faster than Windows 11? Sure, maybe even in most cases: https://jmmv.dev/2023/06/fast-machines-slow-machines.html
To note, people will have wildly different tolerance to delays and lag.
At the extreme, my retired parents don't feel the difference between 5s and 1s when loading a window or clicking somewhere. I offered them a switch to a new laptop, cloning their data, and they didn't give a damn and just opened whichever laptop was closest to them.
Most people aren't that desensitized, but for some a 600ms delay is instantaneous, while for others it's 500ms too slow.
In Carmack's Lex Fridman interview he says he knows C++ devs who still insist on using some ancient version of MSVC because it's *so fast* compared to the latest, on the latest hardware.
People conflate the insanity of running a network cable through every application with the poor performance of their computers.
Correction: devs have made the mistake of turning everything into remote calls, without having any understanding as to the performance implications of doing so.
Sonos’ app is a perfect example of this. The old app controlled everything locally, since the speakers set up their own wireless mesh network. This worked fantastically well. Someone at Sonos got the bright idea to completely rewrite the app such that it wasn’t even backwards-compatible with older hardware, and everything is now a remote call. Changing volume? Phone —> Router —> WAN —> Cloud —> Router —> Speakers. Just… WHY. This failed so spectacularly that the CEO responsible stepped down / was forced out, and the new one claims that fixing the app is his top priority. We’ll see.
Presumably they wanted the telemetry. It's not clear that this was a dev-initiated switch.
Perhaps we can blame the 'statistical monetization' policies of adtech and then AI for all this -- i'm not entirely sold on developers.
What, after all, is the difference between an `/etc/hosts` set of loop'd records vs. an ISP's dns -- as far as the software goes?
You’re right, and I shouldn’t necessarily blame devs for the idea, though I do blame their CTO for not standing up to it if nothing else.
Though it’s also unclear to me in this particular case why they couldn’t collect commands being issued, and then batch-send them hourly, daily, etc. instead of having each one route through the cloud.
2021 MacBook and 2020 iPhone are not "old". Still using 2018 iPhone. Used a 2021 Macbook until a month ago.
A lot of nostalgia is at work here. Modern tech is amazing. If the old tools were actually better, people would actually use them. It's not like you can't get them to work.
You are using a relatively high end computer and mobile device. Go and find a cheap x86 laptop and try doing the same. It will be extremely painful. Most of this is due to a combination of Windows 11 being absolute trash and JavaScript being used extensively in applications/websites. JavaScript is a memory hog and can be extremely slow depending on how it is written (how you deal with loops massively affects the performance).
What is frustrating, though, is that until relatively recently these devices would work fine with JS-heavy apps and really well with anything using a native toolkit.
I have a 2019 Intel MacBook and Outlook takes about five seconds to load and constantly sputters
I’m sure you know this, but a reminder that modern devices cache a hell of a lot, even when you “quit” such that subsequent launches are faster. Such is the benefit of more RAM.
I could compare Slack to, say, HexChat (or any other IRC client). And yeah, it’s an unfair comparison in many ways – Slack has far more capabilities. But from another perspective, how many of them do you immediately need at launch? Surely the video calling code could be delayed until after the main client is up, etc. (and maybe it is, in which case, oh dear).
A better example is Visual Studio [0], since it’s apples to apples.
[0]: https://youtu.be/MR4i3Ho9zZY
Lightroom non-user detected
It vastly depends on what software you're forced to use.
Here's some software I use all the time, which feels horribly slow, even on a new laptop:
Slack
Switching channels on slack, even when you've just switched so it's all cached, is painfully slow. I don't know if they build in a 200ms or so delay deliberately to mask when it's not cached, or whether it's some background rendering, or what it is, but it just feels sluggish.
Outlook
Opening an email gives a spinner before it's opened. Emails are about as lightweight as it gets, yet you get a spinner. It's "only" about 200ms, but that's still 200ms of waiting for an email to open. Plain text emails were faster 25 years ago. Adding a subset of HTML shouldn't have caused such a massive regression.
Teams
Switching tabs on Teams has the same delayed feeling as Slack. Every interaction feels like it's waiting 50-100ms before actioning. Clicking an empty calendar slot to book a new event gives 30-50ms of what I've mentally internalised as "Electron blank-screen", though there's probably a real name for waiting for a new dialog/screen to even have a chrome, let alone content. Creating a new calendar event should be instant; it should not take 300-500ms or so of waiting for the options to render.
These are basic "productivity" tools in which every single interaction feels like it's gated behind at least a 50ms debounce waiting period, with often extra waiting for content on top.
Is the root cause network hops or telemetry? Is it some corporate antivirus stealing the computer's soul?
Ultimately the root cause doesn't actually matter, because no matter the cause, it still feels like I'm wading through treacle trying to interact with my computer.
I’d take 50ms but in my experience it’s more like 250.
You're probably right; I'm likely massively underestimating the time. It's long enough to be noticeable, but not so long that it feels instantly frustrating the first time; it just contributes to an overall sluggishness.
They're comparing these applications to older applications that loaded instantly on much slower computers.
Both sides are right.
There is a ton of waste and bloat and inefficiency. But there's also a ton of stuff that genuinely does demand more memory and CPU. An incomplete list:
- Higher DPI displays use intrinsically more memory and CPU to paint and rasterize. My monitor's pixel array uses 4-6X more memory than my late 90s PC had in the entire machine.
- Better font rendering is the same.
- Today's UIs support Unicode, right to left text, accessibility features, different themes (dark/light at a minimum), dynamic scaling, animations, etc. A modern GUI engine is similar in difficulty to a modern game engine.
- Encryption everywhere means that protocols are no longer just opening a TCP connection but require negotiation of state and running ciphers.
- The Web is an incredibly rich presentation platform that comes with the overhead of an incredibly rich presentation platform. It's like PostScript meets a GUI library meets a small OS meets a document markup layer meets...
- The data sets we deal with today are often a lot larger.
- Some of what we've had to do to get 1000X performance itself demands more overhead: multiple cores, multiple threads, 64 bit addressing, sophisticated MMUs, multiple levels of cache, and memory layouts optimized for performance over compactness. Those older machines were single threaded machines with much more minimal OSes, memory managers, etc.
- More memory means more data structure overhead to manage that memory.
- Larger disks also demand larger structures to manage them, and modern filesystems have all kinds of useful features like journaling and snapshots that also add overhead.
... and so on.
Yup, people run software on shitty computers and blame all the software.
The only slow (local) software I know of is LLVM and C++ compilers.
Others are pretty fast.
You have stories of people running 2021 MacBooks and complaining about performance. Those are not shitty computers.
This is something I've wished to eliminate too. Maybe we just cast the past 20 years as the "prototyping phase" of modern infrastructure.
It would be interesting to collect a roadmap for optimizing software at scale -- where is there low hanging fruit? What are the prime "offenders"?
Call it a power saving initiative and get environmentally-minded folks involved.
IMO, the prime offender is simply not understanding fundamentals. From simple things like “a network call is orders of magnitude slower than a local disk, which is orders of magnitude slower than RAM…” (and moreover, not understanding that EBS et al. are networked disks, albeit highly specialized and optimized), or doing insertions to a DB by looping over a list and writing each row individually.
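To make that last point concrete, here is a minimal sketch of the difference, using sqlite3 only because it's in the standard library; against a real networked database the gap is far larger, since every statement is a round trip:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, kind TEXT)")
rows = [(i, "click") for i in range(10_000)]

# Anti-pattern: one statement (and often one transaction) per row.
for row in rows:
    conn.execute("INSERT INTO events VALUES (?, ?)", row)

# Batched: one prepared statement executed over the whole list, one commit.
conn.executemany("INSERT INTO events VALUES (?, ?)", rows)
conn.commit()
```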
I have struggled against this long enough that I don’t think there is an easy fix. My current company is the first I’ve been at that is taking it seriously, and that’s only because we had a spate of SEV0s. It’s still not easy, because (a) I and the other technically-minded people have to find the problems, then figure out how to explain them, and (b) at its heart, it’s a culture war. Properly normalizing your data model is harder than chucking everything into JSON, even if the former will save you headaches months down the road. Learning how to profile code (and fix the problems) may not be exactly hard, but it’s certainly harder than just adding more pods to your deployment.
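And on the profiling point, the bar really is low; the standard library is enough to find the first few offenders. A minimal sketch (handle_request is just a stand-in for whatever hot path you suspect):

```python
import cProfile
import pstats

def handle_request():
    # Stand-in for the code path you actually care about.
    return sum(i * i for i in range(1_000_000))

cProfile.run("handle_request()", "profile.out")
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(10)
```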
Most of it was exchanged for abstractions which traded runtime speed for the ability to create apps quickly and cheaply.
The market mostly didn't want 50% faster code as much as it wanted an app that didn't exist before.
If I look at the apps I use on a day-to-day basis that are dog slow and should have been optimized (e.g. Slack, Jira), the core problem isn't really a lack of engineering capability in the industry to speed things up; it's just an instance of the principal-agent problem - i.e. I'm not the one buying, I don't get to choose not to use it, and dog-slow is just one of the many dimensions in which they're terrible.
I don’t think abundance vs speed is the right lens.
No user actually wants abundance. They use few programs and would benefit if those programs were optimized.
Established apps could be optimized to the hilt.
But they seldom are.
> They use few programs
Yes but it's a different 'few programs' than 99% of all other users, so we're back to square one.
> No user actually wants abundance.
No, all users just want the few programs which they themselves need. The market is not one user, though. It's all of them.
But each vendor only develops a few pieces of software and generally supports only three platforms, plus or minus one. It’s so damning when I see projects reaching for Electron when they only support macOS and Windows. And software like Slack has no excuse for being this slow on anything other than a latest-gen CPU and a 1Gb internet connection.
Slack is shit because you're not the customer, as I mentioned above.
Users only want 5% of the features of the few programs they use. However everyone has a different list of features and a different list of programs. And so to get a market you need all the features on all the programs.
> Most of it was exchanged for abstractions which traded runtime speed for the ability to create apps quickly and cheaply.
Really? While abstractions like that exist (i.e. web server frameworks, reactivity, SQL and ORMs, etc.), I would argue that these aren't the abstractions that cause the most maintenance and performance issues. Those usually live in the domain/business layer, and often they didn't make anything quicker to develop; they were created by a developer who just couldn't help themselves.
I think they’re referring to Electron.
Edit: and probably writing backends in Python or Ruby or JavaScript.
The backend programming language usually isn't a significant bottleneck; running dozens of database queries in sequence is the usual bottleneck, often compounded by inefficient queries, inappropriate indexing, and the like.
Yep. I’m a DBRE, and can confirm, it’s almost always the DB, with the explicit caveat that it’s also rarely the fault of the DB itself, but rather the fault of poor schema and query design.
Queries I can sometimes rewrite, and there’s nothing more satisfying than handing a team a 99% speed-up with a couple of lines of SQL. Sometimes I can’t, and it’s both painful and frustrating to explain that the reason the dead-simple single-table SELECT is slow is because they have accumulated billions of rows that are all bloated with JSON and low-cardinality strings, and short of at a minimum table partitioning (with concomitant query rewrites to include the partition key), there is nothing anyone can do. This has happened on giant instances, where I know the entire working set they’re dealing with is in memory. Computers are fast, but there is a limit.
The other way the DB gets blamed is row lock contention. That’s almost always due to someone opening a transaction (e.g. SELECT… FOR UPDATE) and then holding it needlessly while doing other stuff, but sometimes it’s due to the dev not being aware of the DB’s locking quirks, like MySQL’s use of gap locks if you don’t include a UNIQUE column as a search predicate. Read docs, people!
It seems to me most developers don't want to learn much about the database and would prefer to hide it behind the abstractions used by their language of choice. I can relate to a degree; I was particularly put off by SQL's syntax (and still dislike it), but eventually came to see the value of leaning into the database's capabilities.
> ORMs
Certain ORMs such as Rails's ActiveRecord are part of the problem because they create the illusion that local memory access and DB access are the same thing. This can lead to N+1 queries and similar issues. The same goes for frameworks that pretend that remote network calls are just a regular method access (thankfully, such frameworks seem to have become largely obsolete).
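For anyone who hasn't been bitten by it yet, the N+1 pattern looks roughly like this; plain sqlite3 is used for illustration, and an ORM's lazy relations just hide the loop:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
""")

# N+1: one query for the authors, then one more query per author.
authors = conn.execute("SELECT id, name FROM authors").fetchall()
for author_id, name in authors:
    conn.execute("SELECT title FROM posts WHERE author_id = ?", (author_id,)).fetchall()

# One query: let the database do the join and return everything in a single round trip.
conn.execute("""
    SELECT authors.name, posts.title
    FROM authors JOIN posts ON posts.author_id = authors.id
""").fetchall()
```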
> If dynamic array bounds checking cost 5% (narrator: it is far less than that)
It doesn’t work like that. If an image processing algorithm takes 2 instructions per pixel, adding a check to every access could 3-4x the cost.
This is why if you dictate bounds checking then the language becomes uncompetitive for certain tasks.
The vast majority of cases it doesn’t matter at all - much less than 5%. I think safe/unsafe or general/performance scopes are a good way to handle this.
It's not that simple either - normally, if you're looping over a large array of pixels, say, to perform some operation on them, there will only be a couple of bounds checks before the loop starts, checking the starting and ending conditions of the loop, not re-doing the bounds check for every pixel.
So very rarely should it be anything like 3-4x the cost, though some complex indexing could cause it to happen, I suppose. I agree scopes are a decent way to handle it!
I agree with the sentiment and analysis that most humans prefer short term gains over long term ones. One correction to your example, though. Dynamic bounds checking does not solve security. And we do not know of a way to solve security. So, the gains are not as crisp as you are making them seem.
Bounds checking solves one tiny subset of security. There are hundreds of other subsets that we know how to solve. However, these days the majority of the bad attacks are social, and no technology is likely to solve them - as more than 10,000 years of history of the same attack has shown. Technology makes the attacks worse because they now scale, but social attacks have been happening for longer than recorded history (well, there is every reason to believe that - there is unlikely to be evidence going back that far).
You don't have to "solve" security in order to improve security hygiene by a factor of X, and thus reduce the risk of negative consequences by that same factor of X.
Most programming languages have array bounds checking now.
Most programming languages are written in C, which doesn't.
Fairly sure that was OP's point.
Don't forget the law of large numbers. A 5% performance hit on one system is one thing; 5% across almost all of the current computing landscape is a pretty huge value.
It's about 5%.
Cost of cyberattacks globally[1]: O($trillions)
Cost of average data breach[2][3]: ~$4 million
Cost of lost developer productivity: unknown
We're really bad at measuring the secondary effects of our short-sightedness.
[1] https://iotsecurityfoundation.org/time-to-fix-our-digital-fo...
[2] https://www.internetsociety.org/resources/doc/2023/how-to-ta...
[3] https://www.ibm.com/reports/data-breach
But it's not free for the taking. The point is that we'd get more than that 5%'s worth in exchange. So sure, we'll get significant value "if software optimization was truly a priority", but we get even more value by making other things a priority.
Saying "if we did X we'd get a lot in return" is similar to the fallacy of inverting logical implication. The question isn't, will doing something have significant value, but rather, to get the most value, what is the thing we should do? The answer may well be not to make optimisation a priority even if optimisation has a lot of value.
depends on whether the fact that software can be finished will ever be accepted. If you're constantly redeveloping the same thing to "optimize and streamline my experience" (please don't) then yes, the advantage is dubious. But if not, then the saved value in operating costs keeps increasing as time goes on. It won't make much difference in my homelab, but at datacenter scale it does
Even the fact that value keeps increasing doesn't mean it's a good idea. It's a good idea if it keeps increasing more than other value. If a piece of software is more robust against attacks then the value in that also keeps increasing over time, possibly more than the cost in hardware. If a piece of software is easier to add features to, then that value also keeps increasing over time.
If what we're asking is whether value => X, i.e. to get the most value we should do X, you cannot answer that in the positive by proving X => value. If optimising something is worth a gazillion dollars, you still should not do it if doing something else is worth two gazillion dollars.
The first reply is essentially right. This isn't what happened at all, just because C is still prevalent. All the inefficiency is everything down the stack, not in C.
I don't trust that shady-looking narrator. 5% of what exactly? Do you mean that testing for x >= start and < end is only 5% as expensive as assigning an int to array[x]?
Or would bounds checking in fact more than double the time to insert a bunch of ints separately into the array, testing where each one is being put? Or ... is there some gimmick to avoid all those individual checks, I don't know.
> Personally I think the 1000Xers kinda ruined things for the rest of us.
Reminds me of when Node.js came out and bridged client- and server-side coding. And apparently its package repos can be a bit of a security nightmare nowadays, so minimalist languages with a limited codebase do have their pros.
I don't think it's that deep. We are just stuck with browsers now, for better and worse. Everything else trails.
We're stuck with browsers now until the primary touch with the internet is assistants / agent UIs / chat consoles.
That could end up being Electron (VS Code), though that would be a bit sad.
I think it'd be pretty funny if to book travel in 2035 you need to use a travel agent that's objectively dumber than a human. We'd be stuck in the eighties again, but this time without each other to rely on.
Of course, that would be suicide for the industry. But I'm not sure investors see that.
I don't think we are gonna go there. Talking is cumbersome. There's a reason, besides social anxiety, that people prefer to use self-checkout and order fast food electronically. There are easier ways to do a lot of things than with words.
I'd bet on maybe ad hoc AI-designed UIs you click, with a voice search for when you're confused about something.
If you know what you want, then not talking to a human is faster. However, if you are not sure, a human can figure it out. I'm not sure I'd trust a voice assistant: the value in the human is an informed opinion, which is hard to program, while it is easy to program a recommendation for whatever makes the most profit. Of course humans often don't have an informed opinion either, but at least sometimes they do, and they will sometimes admit it when they don't.
> the value in the human is an informed opinion which is hard to program
I don't think I ever used a human for that. They are usually very uninformed about everything that's not their standard operational procedure or some current promotional materials.
20 years ago when I was at McDonalds there would be several customers per shift (so maybe 1 in 500?) who didn't know what they wanted and asked for a recommendation. Since I worked there I ate there often enough to know if the special was something I liked or not.
Bless your souls. I'm not saying it doesn't happen. I just personally had only bad experiences so I actively avoid human interactive input in my commercial activity.
And this is JavaScript. And you. are. going. to. LOVE IT!
The problem is 1000xers are a rarity.
The software desktop users have to put up with is slow.
You can always install DOS as your daily driver and run 1980's software on any hardware from the past decade, and then tell me how that's slow.
1000x referred to the hardware capability, and that hardware is not a rarity - it's already here.
The trouble is how software has since wasted a majority of that performance improvement.
Some of it has been quality of life improvements, leading nobody to want to use 1980s software or OS when newer versions are available.
But the lion's share of the performance benefit got chucked into the bin with poor design decisions, layers of abstractions, too many resources managed by too many different teams that never communicate making any software task have to knit together a zillion incompatible APIs, etc.
The sad thing is that even running DOS software in DOSBox (or in QEMU+FreeDOS), or Amiga software in UAE, is much faster than any native software I have run in many years on any modern systems. They also use more reasonable amounts of storage/RAM.
Animations are part of it, of course. A lot of old software just updates the screen immediately, in a single frame, instead of adding frustrating artificial delays to every interaction. Disabling animations in Android (an accessibility setting) makes it feel a lot faster, for instance, but unfortunately it does not magically fix all apps.
Since 1980, maybe. But since 2005 it has increased maybe 5x, and even that's generous. And that's half of the elapsed time, a full two decades.
https://youtu.be/m7PVZixO35c?si=px2QKP9-80hDV8Ui
Clock speeds are 2000x higher than the 80s.
IPC could be 80x higher when taking into account SIMD and then you have to multiply by each core. Mainstream CPUs are more like 1 to 2 million times faster than what was there in the 80s.
You can get full refurbished office computers that are still in the million times faster range for a few hundred dollars.
The things you are describing don't have much to do with computers being slow and feeling slow, but they are happening anyway.
Scripting languages that constantly allocate memory for every small operation and pointer-chase every variable because the types are dynamic are part of the problem; then you have people writing extremely inefficient programs in an already terrible environment.
Most programs are written now in however way the person writing them wants to work, not how someone using it wishes they were written.
Most people have actually no concept of optimization or what runs faster than something else. The vast majority of programs are written by someone who gets it to work and thinks "this is how fast this program runs".
The idea that the same software can run faster is a niche thought process, not even everyone on hacker news thinks about software this way.
It's more like 100,000X.
Just the clockspeed increased 1000X, from 4 MHz to 4 GHz.
But then you have 10x more cores, 10x more powerful instructions (AVX), 10x more execution units per core.
The title made me think Carmack was criticizing poorly optimized software and advocating for improving performance on old hardware.
When in fact, the tweet is absolutely not about either of the two. He's talking about a thought experiment where hardware stopped advancing and concludes with "Innovative new products would get much rarer without super cheap and scalable compute, of course".
It's related to a thread from yesterday, I'm guessing you haven't seen it:
https://news.ycombinator.com/item?id=43967208 https://threadreaderapp.com/thread/1922015999118680495.html
> "Innovative new products would get much rarer without super cheap and scalable compute, of course".
Interesting conclusion—I'd argue we haven't seen much innovation since the smartphone (18 years ago now), and it's entirely because capital is relying on the advances of hardware to sell what is to consumers essentially the same product that they already have.
Of course, I can't read anything past the first tweet.
And I'd argue that we've seen tons of innovation in the past 18 years aside from just "the smartphone" but it's all too easy to take for granted and forget from our current perspective.
First up, the smartphone itself had to evolve a hell of a lot over 18 years or so. Go try to use an iPhone 1 and you'll quickly see all of the roadblocks and what we now consider poor design choices littered everywhere, vs improvements we've all taken for granted since then.
18 years ago was 2007? Then we didn't have (for better or for worse on all points):
* Video streaming services
* Decent video game market places or app stores. Maybe "Battle.net" with like 5 games, lol!
* VSCode-style IDEs (you really would not have appreciated Visual Studio or Eclipse of the time..)
* Mapping applications on a phone (there were some stand-alone solutions like Garmin and TomTom just getting off the ground)
* QR Codes (the standard did already exist, but mass adoption would get nowhere without being carried by the smartphone)
* Rideshare, food, or grocery delivery services (aside from taxis and whatever pizza or chinese places offered their own delivery)
* Voice-activated assistants (including Alexa and other standalone devices)
* EV Cars (that anyone wanted to buy) or partial autopilot features aside from 1970's cruise control
* Decent teleconferencing (Skype's featureset was damn limited at the time, and any expensive enterprise solutions were dead on the launchpad due to lack of network effects)
* Decent video displays (flatscreens were still busy trying to mature enough to push CRTs out of the market at this point)
* Color printers were far worse during this period than today, though that tech will never run out of room for improvement.
* Average US Internet speeds to the home were still ~1Mbps, with speeds to cellphone of 100kbps being quite luxurious. Average PCs had 2GB RAM and 50GB hard drive space.
* Naturally: the tech everyone loves to hate such as AI, Cryptocurrencies, social network platforms, "The cloud" and SaaS, JS Frameworks, Python (at least 3.0 and even realistically heavy adoption of 2.x), node.js, etc. Again "Is this a net benefit to humanity" and/or "does this get poorly or maliciously used a lot" doesn't speak to whether or not a given phenomenon is innovative, and all of these objectively are.
Most of that list is iteration, not innovation. Like going from "crappy colour printer" to "not-so-crappy colour printer".
Your post seems entirely anachronistic.
2007 is the year we did get video streaming services: https://en.wikipedia.org/wiki/BBC_iPlayer
Steam was selling games, even third party ones, for years by 2007.
I'm not sure what a "VS-Code style IDE" is, but I absolutely did appreciate Visual Studio ( and VB6! ) prior to 2007.
2007 was in fact the peak of TomTom's profit, although GPS navigation isn't really the same as general purpose mapping application.
Grocery delivery was well established, Tesco were doing that in 1996. And the idea of takeaways not doing delivery is laughable, every establishment had their own delivery people.
Yes, there are some things on that list that didn't exist, but the top half of your list is dominated by things that were well established by 2007.
There has been a lot of innovation, but it is focused on some niche, so if you are not in that niche you don't see it and wouldn't care if you did. Most of the major things you need have already been invented: I recall word processors as a kid, so they date back at least to the 1970s. We still need word processors, and there is a lot of polish that can be added, but most new innovation is in niche things that the majority of us wouldn't have a use for even if we knew about them.
Of course, innovation always comes in fits and starts.
I think it's a bad argument though. If we stopped with the features for a little while and created some breathing room, the features would come roaring back. There'd be a downturn, sure, but not a continuous one.
This is exactly the point. People ignore that "bloat" is not (just) "waste", it is developer productivity increase motivated by economics.
The ability to hire and have people be productive in a less complicated language expands the market for workers and lowers cost.
"The world" runs on _features_ not elegant, fast, or bug free software. To the end user, there is no difference between a lack of a feature, and a bug. Nor is there any meaningful difference between software taking 5 minutes to complete something because of poor performance, compared to the feature not being there and the user having to spend 5 minutes completing the same task manually. It's "slow".
If you keep maximizing value for the end user, then you invariably create slow and buggy software. But also, if you ask the user whether they would want faster and less buggy software in exchange for fewer features, they - surprise - say no. And even more importantly: if you ask the buyer of software, which in the business world is rarely the end user, then they want features even more, and performance and elegance even less.

Given the same feature set, a user/buyer would opt for the fastest/least buggy/most elegant software. But if it lacks any features - it loses. The reason to keep software fast and elegant is that it's the most likely path to being able to _keep_ adding features to it, so as not to be the less feature-rich offering.

People will describe the fast and elegant solution with great reviews, praising how good it feels to use, which might lead people to think that it's an important aspect. But in the end - they wouldn't buy it at all if it didn't do what they wanted. They'd go for the slow, frustrating, buggy mess if it has the critical feature they need.
Agree WRT the tradeoff between features and elegance.
Although, I do wonder if there’s an additional tradeoff here. Existing users, can apparently do what they need to do with the software, because they are already doing it. Adding a new feature might… allow them to get rid of some other software, or do something new (but, that something new must not be so earth shattering, because they didn’t seek out other software to do it, and they were getting by without it). Therefore, I speculate that existing users, if they really were introspective, would ask for those performance improvements first. And maybe a couple little enhancements.
Potential new users on the other hand, either haven’t heard of your software yet, or they need it to do something else before they find it useful. They are the ones that reasonably should be looking for new features.
So the “features vs performance” decision is also a signal about where the developers’ priorities lie: adding new users or keeping old ones happy. It is therefore basically unsurprising that:
* techies tend to prefer the latter—we’ve played this game before, and know we want to be the priority for the bulk of the time using the thing, not just while we’re being acquired.
* buggy slow featureful software dominates the field—this is produced by companies that are prioritizing growth first.
* history is littered with beautiful, elegant software that users miss dearly, but which didn’t catch on broadly enough to sustain the company.
However, the tradeoff is real in both directions; most people spend most of their time as users instead of potential users. I think this is probably a big force behind the general perception that software and computers are incredibly shit nowadays.
Almost all of my nontechnical friends and family members have at some point complained about bloated and overly complicated software that they are required to use.
Also remember that Microsoft at this point has to drag their users kicking and screaming to use the next Windows version. If users were let to decide for themselves, many would have never upgraded past Windows XP. All that despite all the pretty new features in the later versions.
I'm fully with you that businesses and investors want "features" for its own sake, but definitely not users.
Perfectly put. People who try to argue that more time should be spent on making software perform better probably aren't thinking about who's going to pay for that.
For the home/office computer, the money spent on more RAM and a better CPU enables all software it runs to be shipped more cheaply and with more features.
I heartily agree. It would be nice if we could extend the lifetime of hardware 5, 10 years past its, "planned obsolescence." This would divert a lot of e-waste, leave a lot of rare earth minerals in the ground, and might even significantly lower GHG emissions.
The market forces for producing software however... are not paying for such externalities. It's much cheaper to ship it sooner, test, and iterate than it is to plan and design for performance. Some organizations in the games industry have figured out a formula for having good performance and moving units. It's not spread evenly though.
In enterprise and consumer software there's not a lot of motivation to consider performance criteria in requirements: we tend to design for what users will tolerate and give ourselves as much wiggle room as possible... because these systems tend to be complex and we want to ship changes/features continually. Every change is a liability that can affect performance and user satisfaction. So we make sure we have enough room in our budget for an error rate.
Much different compared to designing and developing software behind closed doors until it's, "ready."
We've been able to run order matching engines for entire exchanges on a single thread for over a decade by this point.
I think this specific class of computational power - strictly serialized transaction processing - has not grown at the same rate as other metrics would suggest. Adding 31 additional cores doesn't make the order matching engine go any faster (it could only go slower).
If your product is handling fewer than several million transactions per second and you are finding yourself reaching for a cluster of machines, you need to back up like 15 steps and start over.
> We've been able to run order matching engines for entire exchanges on a single thread for over a decade by this point.
This is the bit that really gets me fired up. People (read: system “architects”) were so desperate to “prove their worth” and leave a mark that many of these systems have been over complicated, unleashing a litany of new issues. The original design would still satisfy 99% of use cases and these days, given local compute capacity, you could run an entire market on a single device.
You are only able to do that because you are doing simple processing on each transaction. If you had to do more complex processing on each transaction it wouldn't be possible to do that many. Though it is hard for me to imagine what more complex processing would be (I'm not in your domain)
The order matching engine is mostly about updating an in-memory order book representation.
It is rarely the case that high volume transaction processing facilities also need to deal with deeply complex transactions.
I can't think of many domains of business wherein each transaction is so compute intensive that waiting for I/O doesn't typically dominate.
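To illustrate how small that kernel can be, here's a toy single-threaded matching loop; integer prices, no time priority or order IDs, purely a sketch and nothing like a production engine:

```python
import heapq

class OrderBook:
    """Toy limit order book: integer prices, quantities only, FIFO priority omitted."""
    def __init__(self):
        self.bids = []   # max-heap of resting buys, stored as (-price, qty)
        self.asks = []   # min-heap of resting sells, stored as (price, qty)

    def submit(self, side, price, qty):
        book, other = (self.bids, self.asks) if side == "buy" else (self.asks, self.bids)
        # Cross against the opposite side while prices overlap.
        while qty and other:
            best_key, best_qty = other[0]
            best_price = -best_key if side == "sell" else best_key
            crosses = price >= best_price if side == "buy" else price <= best_price
            if not crosses:
                break
            traded = min(qty, best_qty)
            qty -= traded
            if traded == best_qty:
                heapq.heappop(other)
            else:
                other[0] = (best_key, best_qty - traded)  # shrink the resting order
        if qty:  # rest the remainder on the book
            heapq.heappush(book, (-price if side == "buy" else price, qty))

book = OrderBook()
book.submit("sell", 101, 5)
book.submit("buy", 102, 3)   # trades 3 @ 101, leaving 2 resting on the ask side
```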
HFT would love to do more complex calculations for some of their trades. They often make the compromise of using a faster algorithm that is known to be right only 60% of the time vs the better but slower algorithm that is right 90% of the time.
That is a different problem from yours though and so it has different considerations. In some areas I/O dominates, in some it does not.
In a perfect world, maximizing (EV/op) x (ops/sec) should be done for even user software. How many person-years of productivity are lost each year to people waiting for Windows or Office to start up, finish updating, etc?
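Nobody seems to publish that number, but a back-of-the-envelope with openly made-up inputs at least pins down the order of magnitude:

```python
# Every input below is an assumption for illustration, not a measurement.
users = 1_000_000_000             # people regularly using an OS + office suite
seconds_wasted_per_day = 30       # startups, updates, spinners
work_year_seconds = 2000 * 3600   # one 2000-hour working year

person_years = users * seconds_wasted_per_day * 365 / work_year_seconds
print(f"{person_years:,.0f} person-years lost per year")  # roughly 1.5 million
```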
I work in card payments transaction processing and IO dominates. You need to have big models and lots of data to authorize a transaction. And you need that data as fresh as possible and as close to your compute as possible... but you're always dominated by IO. Computing the authorization is super cheap.
Tends to scale vertically rather than horizontally. Give me massive caches and wide registers and I can keep them full. For now though a lot of stuff is run on commodity cloud hardware so... eh.
Why can you not match orders in parallel using logarithmic reduction, the same way you would sort in parallel? Is it that there is not enough other computation being done other than sorting by time and price?
Well, yes. It's an economic problem (which is to say, it's a resource allocation problem). Do you have someone spend extra time optimising your software or do you have them produce more functionality. If the latter generates more cash then that's what you'll get them to do. If the former becomes important to your cashflow then you'll get them to do that.
It's the kind of economics that shifts the cost into accumulating waste and technical debt, which is paid for by someone else. It's basically stealing. There are, of course, many cases in which thorough optimizing doesn't make much sense, but the idea of just adding servers instead of rewriting is a sad state of affairs.
It doesn't seem like stealing to me? Highly optimised software generally takes more effort to create and maintain.
The tradeoff is that we get more software in general, and more features in that software, i.e. software developers are more productive.
I guess on some level we can feel that it's morally bad that adding more servers or using more memory on the client is cheaper than spending developer time but I'm not sure how you could shift that equilibrium without taking away people's freedom to choose how to build software?
I feel like the argument is similar to that of all corporate externality pushes.
For example "polluting the air/water, requiring end-users to fill landfills with packaging and planned obscolescence" allows a company to more cheaply offer more products to you as a consumer.. but now everyone collectively has to live in a more polluted world with climate change and wasted source material converted to expensive and/or dangerous landfills and environmental damage from fracking and strip mining.
But that's still not different from theft. A company that sells you things that "Fell off the back of a truck" is in a position to offer you lower costs and greater variety, as well. Aren't they?
Our shared resources need to be properly managed: neither siphoned wastefully nor ruined via pollution. That proper management is a cost, and it either has to be borne by those using the resources and creating the waste, or it is theft of a shared resource and a tragedy of the commons.
> It's basically stealing
This feels like hyperbole to me. Who is being stolen from here? Not the end user, they're getting the tradeoff of more features for a low price in exchange for less optimized software.
From what I’m seeing people do on their computers, it has barely changed from what they were doing on their Pentium 4 machines. But now, with Electron-based software and the general state of Windows, you can’t recommend something older than 4 years. It’s hard not to see it as stealing when you have to buy a 1000+ laptop, when a 400 one could easily do the job if the software were a bit better.
It’s only a tradeoff for the user if the user finds the added features useful.
Increasingly, this is not the case. My favorite example here is the Adobe Creative Suite, where for many users useful new features became few and far between some time ~15 years ago. For those users, all they got was a rather absurd degree of added bloat and slowness for essentially the same thing they were using in 2010. These users would’ve almost certainly been happier had 80-90% of the feature work done in that time instead been bug fixes and optimization.
> It's basically stealing.
This is exactly right. Why should the company pay an extra $250k in salary to "optimize" when they can just offload that salary to their customers' devices instead? The extra couple of seconds, extra megabytes of bandwidth, and shittery of the whole ecosystem has been externalized to customers in search of ill-gotten profits.
It's like ignoring backwards compatibility. That is really cheap since all the cost is pushed to end-users (that have to relearn the UI) or second/third-party developers (that have to rewrite their client code to work with a new API). But it's OK since everyone is doing it and also without all those pointless rewrites many of us would not have a job.
> without all those pointless rewrites many of us would not have a job.
I hear arguments like this fairly often. I don't believe it's true.
Instead of having a job writing a pointless rewrite, you might have a job optimizing software. You might have a different career altogether. Having a job won't go away: what you do for your job will simply change.
Also offloaded to the miserable devs maintaining the system.
Not really stealing. You could of course build software that is more optimized with the same features, but at a higher cost. Would most buyers pay twice the price for a web app that loads in 1 second instead of 2? Probably not.
Try loading slack and youtube on a 4 year old laptop. It’s more in the 10s, and good luck if you only have 8GB of ram.
I have been thinking about this a lot ever since I played a game called "Balatro". In this game nothing extraordinary happens in terms of computing - some computations get done, some images are shuffled around on the screen, the effects are sparse. The hardware requirements aren't much by modern standards, but still, this game could be ported 1:1 to a machine with a Pentium II and a 3dfx graphics card. And yet it demands so much more - not a lot by today's standards, but still. I am tempted to try to run it on a 2010 netbook to see if it even boots up.
It is made in Lua using love2d. That helped the developers, but it comes with a cost in minimum requirements (even if they aren't much for a game released in 2024).
The world DOES run on older hardware.
How new do you think the CPU in your bank ATM or car's ECU is?
Some of it does.
The chips in everyones pockets do a lot of compute and are relatively new though.
Well, I know the CPU in my laptop is already over 10 years old and still works well enough for everything I do.
My daily drivers at home are an i3-540 and an Athlon II X4. Every time something breaks down, I find it much cheaper to just buy a new part than to buy a whole new kit with motherboard/CPU/RAM.
I'm a sysadmin, so I only really need to log into other computers, but I can watch videos, browse the web, and do some programming on them just fine. Best ROI ever.
> I can watch videos
Can you watch H.265 videos? That's the one limitation I regularly hit on my computer (that I got for free from some company, is pretty old, but is otherwise good enough that I don't think I'll replace it until it breaks). I don't think I can play videos recorded on modern iPhones.
Yes, they play just fine with Gnome Videos or VLC. Both machines have a GeForce GT 710 on them.
Doom can run on Apple's Lightning to HDMI adapter.
Powerplants and planes still run on 80s hardware.
Related: I wonder what cpu Artemis/Orion is using
IBM PowerPC 750X apparently, which was the CPU the Power Mac G3 used back in the day. Since it's going into space it'll be one of the fancy radiation-hardened versions which probably still costs more than your car though, and they run four of them in lockstep to guard against errors.
https://www.eetimes.com/comparing-tech-used-for-apollo-artem...
> fancy radiation-hardened versions
Ha! What's special about rad-hard chips is that they're old designs. You need big geometries to survive cosmic rays, and new chips all have tiny geometries.
So there are two solutions:
1. Find a warehouse full of 20-year old chips.
2. Build a fab to produce 20-year old designs.
Both approaches are used, and both approaches are expensive. (Approach 1 is expensive because as you eventually run out of chips they become very, very valuable and you end up having to build a fab anyway.)
There's more to it than just big geometries but that's a major part of the solution.
I'm not sure what artemis or orion are, but you can blame defense contractors for this. Nobody ever got fired for hiring IBM or Lockheed, even if they deliver unimpressive results at massive cost.
Put a 4 nm CPU into something that goes to space and see how long it would take to fail.
One of the tradeoffs of radiation hardening is increased transistor size.
Cost-wise it also makes sense - it’s a specialized, certified and low-volume part.
I don't disagree that the engineering can be justified. But you don't need custom hardware to achieve radiation hardening, much less hiring fucking IBM.
And to be clear, I love power chips. I remain very bullish about the architecture. But as a taxpayer reading this shit just pisses me off. Pork-fat designed to look pro-humanity.
Sure, if you think the world consists of cash transactions and whatever a car needs to think about.
If we're talking numbers, there are many, many more embedded systems than general purpose computers. And these are mostly built on ancient process nodes compared to the cutting edge we have today; the shiny octa-cores on our phones are supported by a myriad of ancillary chips that are definitely not cutting edge.
We aren't talking numbers, though. Who cares about embedded? I mean that literally. This is computation invisible by design. If that were sufficient we wouldn't have smartphones.
Unfortunately, bloated software passes the costs to the customer and it's hard to evaluate the loss.
Except your browser taking 180% of available ram maybe.
By the way, the world could also have some bug free software, if anyone could afford to pay for it.
What cost? The hardware is dirt cheap. Programmers aren't cheap. The value of being able to use cheap software on cheap hardware is basically not having to spend a lot of time optimizing things. Time is the one thing that isn't cheap here. So there's a value in shipping something slightly sub optimal sooner rather than something better later.
> Except your browser taking 180% of available ram maybe.
For most business users, running the browser is pretty much the only job of the laptop. And using virtual memory for tabs that aren't currently in the foreground is actually not that bad. There's no need to fit all your gazillion tabs into memory; only the ones you are looking at. Browsers are pretty good at that these days. The problem isn't that browsers aren't efficient but that we simply push them to the breaking point with content. Content creators simply expand their resource usage whenever browsers get optimized. The point of optimization is not saving cost on hardware but getting more out of the hardware.
The optimization topic triggers the OCD of a lot of people and sometimes those people do nice things. John Carmack built his career when Moore's law was still on display. Everything he did to get the most out of CPUs was super relevant and cool but it also dated in a matter of a few years. One moment we were running doom on simple 386 computers and the next we were running Quake and Unreal with shiny new Voodoo GPUs on a Pentium II pro. I actually had the Riva 128 as my first GPU, which was one of the first products that Nvidia shipped running Unreal and other cool stuff. And while CPUs have increased enormously in performance, GPUs have increased even more by some ridiculous factor. Nvidia has come a long way since then.
I'm not saying optimization is not important but I'm just saying that compute is a cheap commodity. I actually spend quite a bit of time optimizing stuff so I can appreciate what that feels like and how nice it is when you make something faster. And sometimes that can really make a big difference. But sometimes my time is better spent elsewhere as well.
> Time is the one thing that isn't cheap here.
Right, and that's true of end users as well. It's just not taken into account by most businesses.
I think your take is pretty reasonable, but I think most software is too far towards slow and bloated these days.
Browsers are pretty good, but developers create horribly slow and wasteful web apps. That's where the optimization should be done. And I don't mean they should make things as fast as possible, just test on an older machine that a big chunk of the population might still be using, and make it feel somewhat snappy.
The frustrating part is that most web apps aren't really doing anything that complicated, they're just built on layers of libraries that the developers don't understand very well. I don't really have a solution to any of this, I just wish developers cared a little bit more than they do.
> The hardware is dirt cheap.
Maybe to you.
Meanwhile plenty of people are living paycheck-to-paycheck and literally cannot afford a phone, let alone a new phone and computer every few years.
Your whole reply is focused at business level but not everybody can afford 32GB of RAM just to have a smooth experience on a web browser.
> The hardware is dirt cheap.
It's not, because you multiply that 100% extra CPU time by all of an application's users and only then you come to the real extra cost.
And if you want to pick on "application", think of the widely used libraries and how much any non optimization costs when they get into everything...
I was working as a janitor, moonlighting as an IT director, in 2010. Back then I told the business that laptops for the past five years (roughly since Nehalem) have plenty of horsepower to run spreadsheets (which is basically all they do) with two cores, 16 GB of RAM, and a 500GB SATA SSD. A couple of users in marketing did need something a little (not much) beefier. Saved a bunch of money by not buying the latest-and-greatest laptops.
I don't work there any more. But I'm convinced that's still true today: those computers should still be great for spreadsheets. Their workflow hasn't seriously changed; it's the software that has. If they've continued with updates (can that hardware even "run" MS Windows 10 or 11 today? No idea, I've since moved on to Linux), then there's a solid chance that the amount of bloat, and especially the move to online-only spreadsheets, would tank their productivity.
Further, the internet at that place was terrible. The only offerings were ~16Mbit asymmetric DSL (for $300/mo just because it's a "business", when I could get the same speed for $80/mo at home), or Comcast cable 120Mbit for $500/mo. 120Mbit is barely enough to get by with an online-only spreadsheet, and 16Mbit definitely not. But worse: if the internet goes down, then the business ceases to function.
This is the real theft that another commenter [0] mentioned that I wholeheartedly agree with. There's no reason whatsoever that a laptop running spreadsheets in an office environment should require internet to edit and update spreadsheets, or crazy amounts of compute/storage, or even huge amounts of bandwidth.
Computers today have zero excuse for terrible performance except only to offload costs onto customers - private persons and businesses alike.
[0]: https://news.ycombinator.com/item?id=43971960
My phone isn't getting slower, but rather the OS running on it becomes less efficient with every update. Shameful.
I'm not much into retro computing. But it amazes me what people are pulling out of dated hardware.
Doom on the Amiga, for example (many consider it the main factor in the Amiga's demise). Thirty years of optimization later, and it finally arrived.
Related: https://duskos.org/
Oh man, that's lovely. Awesome project!
Meanwhile on every programmer's 101 forum: "Space is cheap! Premature optimization is the root of all evil! Dev time > runtime!"
This always saddens me. We could have things instant and simple, and compute & storage would be 100x more abundant in practical terms than they are today.
It's not even a trade-off a lot of the time: simpler architectures perform better and are also vastly easier and cheaper to maintain.
We just lack expertise I think, and pass on cargo cult "best practices" much of the time.
I wonder if anyone has calculated the additional planet-heating generated by, e.g., crappy JS apps or useless animations.
He mentions that the rate of innovation would slow down, which I agree with. But I think even a 5% slower innovation rate, compounded over centuries of computer usage, would delay the optimizations we can make, or even our ability to figure out what needs optimizing, and in the end we'd be less efficient because we'd be slower at finding efficiencies. Low adoption of new efficiencies is worse than high adoption of old efficiencies, is I guess how to phrase it.
If Cadence, for example, releases every feature 5 years later because they spend more time optimizing it (it's software, after all), how much will that delay semiconductor innovation?
Where lack of performance costs money, optimization gets plenty of investment. See PyTorch (Inductor, CUDA graphs), Triton, FlashAttention, JAX, etc.
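To make that concrete, here's a minimal sketch of the kind of entry point those projects expose: torch.compile in PyTorch 2.x, whose "reduce-overhead" mode captures the steady-state step with CUDA graphs via the Inductor backend (the toy model, sizes, and batch shape below are placeholders, not anything from this thread):

    # Minimal sketch: compile a toy model with PyTorch 2.x.
    import torch

    model = torch.nn.Sequential(
        torch.nn.Linear(1024, 4096),
        torch.nn.ReLU(),
        torch.nn.Linear(4096, 1024),
    )

    # "reduce-overhead" asks Inductor to capture the hot loop with CUDA graphs
    # (when running on a GPU), cutting per-call Python and kernel-launch overhead.
    compiled = torch.compile(model, mode="reduce-overhead")

    x = torch.randn(64, 1024)
    with torch.no_grad():
        y = compiled(x)   # first call compiles; later calls reuse the optimized graph
    print(y.shape)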
Obviously, the world ran before computers. The more interesting part of this is what would we lose if we knew there were no new computers, and while I'd like to believe the world would put its resources towards critical infrastructure and global logistics, we'd probably see the financial sector trying to buy out whatever they could, followed by any data center / cloud computing company trying to lock all of the best compute power in their own buildings.
The idea of a hand-me-down computer made of brass and mahogany still sounds ridiculous because it is, but we're nearly there in terms of Moore's law. We have true 2nm within reach, and then the 1nm process is basically the end of the journey. I expect 'audiophile-grade' PCs in the 2030s, and then PCs become works of art, furniture, investments, etc., because they have nowhere left to go.
https://en.wikipedia.org/wiki/2_nm_process
https://en.wikipedia.org/wiki/International_Roadmap_for_Devi...
The increasing longevity of computers has been impressing me for about 10 years.
My current machine is 4 years old. It's absolutely fine for what I do. I only ever catch it "working" when I futz with 4K 360-degree video (about which: fine). It's an M1 MacBook Pro.
I traded its predecessor in to buy it, so I don't have that one anymore; it was a 2019 model. But the one before that, a 2015 13" Intel Macbook Pro, is still in use in the house as my wife's computer. Keyboard is mushy now, but it's fine. It'd probably run faster if my wife didn't keep fifty billion tabs open in Chrome, but that's none of my business. ;)
The one behind that one, purchased in 2012, is also still in use as a "media server" / ersatz SAN. It's a little creaky and is I'm sure technically a security risk given its age and lack of updates, but it RUNS just fine.
My professor back in the day told me that "software is eating hardware". No matter how much hardware advances, software will consume that advancement.
The priority should be safety, not speed. I prefer a slower browser or OS, for example, that isn't riddled with exploits and attack vectors.
Of course that doesn't mean everything should be done in JS and Electron as there's a lot of drawbacks to that. There exists a reasonable middle ground where you get e.g. memory safety but don't operate on layers upon layers of heavy abstraction and overhead.
Unfortunately currently the priority is neither.
Is there, or could we make, an iPhone-like device that runs 100x slower than conventional phones but uses much less energy, so that it powers itself on solar? It would be good for the environment and useful in survival situations.
Or could we make a phone that runs 100x slower but is much cheaper? If it also runs on solar it would be useful in third-world countries.
Processors are more than fast enough for most tasks nowadays; more speed is still useful, but I think improving price and power consumption is more important. Also cheaper E-ink displays, which are much better for your eyes, more visible outside, and use less power than LEDs.
> Or could we make a phone that runs 100x slower but is much cheaper?
Probably not - a large part of the cost is equipment and R&D. It doesn't cost much more to build the most complex CPU than a 6502 - there is only a tiny bit more silicon and chemicals. What is costly is the R&D behind the chip, and the R&D behind the machines that make the chips. If Intel fired all their R&D engineers who weren't focused on reducing manufacturing costs, they could greatly reduce the price of their CPUs - until AMD released a next generation that was much better. (This is more or less what Henry Ford did with the Model T: he reduced costs every year until his competition's added features were enough better that he couldn't sell his cars.)
There's plenty of hardware on the secondary (resale) market that's only 2-3x slower than pristine new primary-market devices. It's cheap, it's reuse, and it helps people save in a hyper-consumerist society. The common complaint is that it doesn't run bloated software anymore. And I don't think we can make non-bloated software, for a variety of reasons.
As a video game developer, I can add some perspective (N=1 if you will). Most top-20 game franchises spawned years ago on much weaker hardware, but their current installments demand hardware not even a few years old (as the recommended/intended way to play). This is due to hyper-bloating of software and severe downskilling of game programmers in the industry to cut costs. The players often don't see all this; they think the latest game is truly the greatest and "makes use" of the hardware. But the truth is that aside from current-generation graphics, most games haven't evolved much in the last 10 years, and current-gen graphics arrived with the PS4/Xbox One.
Ultimately, I don't know who or what is the culprit of all this. The market demands cheap software. Games used to cost up to $120 in the 90s, which is $250 today. A common price point for good-quality games was $80, which is $170 today. But gamers absolutely decry any price increase beyond $60. So the industry has no option but to look at every cost saving, including passing the cost onto the buyer through hardware upgrades.
Ironically, upgrading a graphics card one generation (RTX 3070 -> 4070) costs about $300 if the old card is sold and $500 if it isn't. So gamers end up paying ~$400 for the latest games every few years, and then rebel against paying $30 extra per game instead, which could very well be cheaper than the GPU upgrade (let alone other PC upgrades) and would allow companies to spend much more time on optimization. Well, assuming it wouldn't just go into the pockets of publishers (but that is a separate topic).
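Putting rough numbers on that (the GPU figures are from the paragraph above; the purchase rate is a made-up assumption just to do the division):

    # Back-of-the-envelope arithmetic, not data: the $400 net upgrade cost and
    # "every few years" cadence come from the comment; games_per_year is assumed.
    gpu_upgrade_net = 400        # ~$ per GPU generation, after selling the old card
    years_per_upgrade = 3        # "every few years"
    games_per_year = 5           # hypothetical purchase rate

    games_per_cycle = years_per_upgrade * games_per_year
    hidden_cost_per_game = gpu_upgrade_net / games_per_cycle
    print(f"hidden hardware cost per game: ~${hidden_cost_per_game:.0f}")
    # ~$27 per game, i.e. in the same ballpark as the $30 price bump gamers reject.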
It's an example of Scott Alexander's Moloch, where it's unclear who could end this race to the bottom. Maybe a culture shift could; perhaps we should become less consumerist and value older hardware more. But the issue of bad software has very deep roots. I think this is why Carmack, who has a practically perfect understanding of software in games, doesn't prescribe a solution.
One only needs to look at Horizon: Zero Dawn to note that the truth of this is deeply uneven across the games industry. World streaming architectures are incredible technical achievements. So are moddable engines. There are plenty of technical limits being pushed by devs, it's just not done at all levels.
You are right, but you picked a game by a studio known for its technical expertise, with plenty of points to prove about quality game development. I'd like them to be the future of this industry.
But right now, 8-9/10 game developers and publishers are deeply concerned with cash and rather unconcerned by technical excellence or games as a form of interactive art (where, once again, Guerrilla and many other Sony studios are).
Minimalism is excellent. As others have mentioned, using languages that are more memory-safe (assuming the language is written in such a way) may be worth the additional complexity cost.
But surely, with burgeoning AI use, efficiency savings are being gobbled up by the brute-force nature of it.
Maybe shared model training and the likes of Hugging Face can keep different groups from reinventing the same AI wheel, spending more resources than a cursory search of an existing repository would take.
Well obviously. And there would be no wars if everybody made peace a priority.
It's obvious for both cases where the real priorities of humanity lie.
Tell me about it. Web development has only become fun again at my place since upgrading from Intel Mac to M4 Mac.
Just throw in Slack, a VS Code editor in Electron, a Next.js stack, 1-2 Docker containers, and a browser, and you need top-notch hardware to run it all fluidly (Apple Silicon is amazing, though). I'm doing no fancy stuff.
A chat app, an editor in a browser, and Docker don't seem like the most efficient things when put all together.
I'm already moving in this direction in my personal life. It's partly nostalgia, but it's partly practical. It's just that work requires working with people who only use whatever HR and IT foist on them, so I need a separate machine for that.
100% agree with Carmack. There was a craft in writing software that I feel has been lost with access to inexpensive memory and compute. Programmers can be inefficient because they have all that extra headroom, which just feeds the cycle of needing better hardware.
Software development has been commoditized and is directed by MBAs and others who don't see it as a craft. The need for fast project execution trumps the craft of programming; hence, the code is bug-riddled and slow. There are some niche areas (vintage computing, PICO-8, Arduino...) where people can still practise the craft, but that's just a hobby now. When this topic comes up I always think about Tarkovsky's Andrei Rublev movie: the artist's struggle.
Really no notes on this. Carmack hit both sides of the coin:
- the way we do industry-scale computing right now tends to leave a lot of opportunity on the table because we decouple, interpret, and de-integrate where things would be faster and take less space if we coupled, compiled, and made monoliths
- we do things that way because it's easier to innovate, tweak, test, and pivot on decoupled systems that isolate the impact of change and give us ample signal about their internal state to debug and understand them
I've installed macOS Sequoia on 2015 iMacs with 8 GB of RAM and it runs great. More than great, actually.
Linux on 10-15 year old laptops runs well too. If you beef up the RAM and swap in an SSD, it's actually really good.
So for everyday stuff we can and do run on older hardware.
We are squandering bandwidth similarly, and that hasn't increased as much as processing power has.
The world could run on older hardware if rapid development did not also make money.
Rapid development is creating a race towards faster hardware.
It could also run on much less current hardware if efficiency were a priority. Then comes the AI bandwagon, and everyone is buying loads of new equipment to keep up with the Joneses.
This is a double-edged-sword problem, but I think what people are glossing over in the compute-power topic is power efficiency. One thing I struggle with when home-labbing old gaming equipment is the power efficiency of new hardware. It's hardly a fair comparison, but I can choose to recycle my Ryzen 1700X with a 2080 Ti as a media server that will probably consume a few hundred watts, or I can get an M1 that sips power. The double-edged-sword part is that the Ryzen system becomes considerably more power-efficient running Proxmox or Ubuntu Server vs. a Windows client. We as a society choose the niche we want to leverage, and it swings like economics: strapped for cash, we build more efficient code; with no limits, we buy the horsepower to meet the need.
I'd much prefer Carmack to think about optimizing for energy consumption.
These two metrics often scale linearly.
Yeah, having browsers the size and complexities of OSs is just one of many symptoms. I intimate at this concept in a grumbling, helpless manner somewhat chronically.
There's a lot today that wasn't possible yesterday, but it also sucks in ways that weren't possible then.
I foresee hostility for saying the following, but it really seems most people are unwilling to admit that most software (and even hardware) isn't necessarily made for the user or its express purpose anymore. To be perhaps a bit silly, I get the impression of many services as bait for telemetry and background fun.
While not an overly earnest example, looking at Android's Settings/System/Developer Options is pretty quick evidence that the user is involved but clearly not the main component in any respect. Even an objective look at Linux finds manifold layers of hacks and compensation for a world of hostile hardware and soft conflict. It often works exceedingly well, though as impractical as it may be to fantasize, imagine how badass it would be if everything was clean, open and honest. There's immense power, with lots of infirmities.
I've said that today is the golden age of the LLM in all its puerility. It'll get way better, yeah, but it'll get way worse too, in the ways that matter.[1]
Edit: 1. Assuming open source doesn't persevere
I mean, if you put Win 95 on a period-appropriate machine, you can do office work easily. All that is really driving computing power is the web and gaming. If we weren't doing either of those things as much, I bet we could all quite happily use machines from the 2000s era.
Probably, but we'd be in a pretty terrible security place without modern hardware based cryptographic operations.
based
Let's keep the CPU efficiency golf to Zachtronics games, please.
I/O is almost always the main bottleneck. I swear to god 99% of developers out there only know how to measure the CPU cycles of their code, so that's the only thing they optimize for. Call me after you've seen your jobs on your k8s clusters get slow because they're all inefficiently using local disk and wasting cycles waiting in queue for reads/writes. Or after your DB replication slows down to the point that you have to choose between breaking the mirror and ceasing to make money.
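To illustrate the kind of local-disk waste I mean (a synthetic toy, not anyone's actual jobs): the same records written with a forced flush per record versus one buffered write do essentially identical CPU work, and nearly the whole runtime difference is time spent waiting on the disk.

    # Toy comparison: many tiny flushed writes vs. one buffered write.
    import os, time, tempfile

    records = [f"record-{i}\n".encode() for i in range(10_000)]
    path = os.path.join(tempfile.gettempdir(), "io_demo.bin")

    def write_unbatched(path):
        with open(path, "wb", buffering=0) as f:
            for rec in records:          # one tiny write per record...
                f.write(rec)
                os.fsync(f.fileno())     # ...and a forced trip to the disk each time

    def write_batched(path):
        with open(path, "wb") as f:
            f.write(b"".join(records))   # one large buffered write
            os.fsync(f.fileno())

    for fn in (write_unbatched, write_batched):
        start = time.perf_counter()
        fn(path)
        print(fn.__name__, f"{time.perf_counter() - start:.2f}s")
    os.remove(path)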
And older hardware consumes more power. That's the main driving factor behind server hardware upgrades: you can fit more compute into your datacenter.
I agree with Carmack's assessment here, but most people reading are taking the wrong message away with them.
There's servers and there's all of the rest of consumer hardware.
I need to buy a new phone every few years simply because the manufacturer refuses to update it. Or they add progressively more computationally expensive effects that make my old hardware crawl. Or the software I use only supports the two most recent versions of macOS. Or Microsoft decides that your brand-new CPU is no good for Win 11 because it's lacking a TPM. Or god help you if you try to open our poorly optimized Electron app on your 5-year-old computer.
But Carmack is clearly talking about servers here. That is my problem -- the main audience is going to read this and think about personal compute.
All those situations you describe are also a choice made so that companies can make sales.
It shows up in different ways, and I agree that some of my examples are planned obsolescence.
I'm not so sure they're that different though. I do think that in the end most boil down to the same problem: no emphasis or care about performance.
Picking a programming paradigm that all but incentivizes N+1 selects is stupid. An N+1 select is not an I/O problem, it's a design problem.
I'm looking at our Datadog stats right now. It's 64% CPU, 36% I/O.
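To make the N+1 point concrete, here's a minimal sketch (in-memory SQLite, tables and numbers invented for illustration) of the same report written as one query per row versus a single joined query; the fix is a design change, not faster disks:

    # Minimal sketch of N+1 vs. a single batched query.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    """)
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [(i, f"user{i}") for i in range(1000)])
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                     [(i, i % 1000, 9.99) for i in range(5000)])

    def totals_n_plus_one():
        # 1 query for the users, then 1 more query per user: 1001 round trips.
        totals = {}
        for (uid,) in conn.execute("SELECT id FROM users"):
            row = conn.execute("SELECT SUM(total) FROM orders WHERE user_id = ?",
                               (uid,)).fetchone()
            totals[uid] = row[0] or 0.0
        return totals

    def totals_single_query():
        # Same result in one round trip: the work moves into a JOIN + GROUP BY.
        return dict(conn.execute(
            "SELECT u.id, COALESCE(SUM(o.total), 0) FROM users u "
            "LEFT JOIN orders o ON o.user_id = u.id GROUP BY u.id"))

    assert totals_n_plus_one() == totals_single_query()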
> I/O is almost always the main bottleneck.
People say this all the time, and usually it's just an excuse not to optimize anything.
First, I/O can be optimized. It's very likely that most servers are either wasteful in the number of requests they make, or are shuffling more data around than necessary.
Beyond that though, adding slow logic on top of I/O latency only makes things worse.
Also, what does I/O being a bottleneck have to do with my browser consuming all of my RAM and using 120% of my CPU? Most people who say "I/O is the bottleneck" as a reason to not optimize only care about servers, and ignore the end users.
I/O _can_ be optimized. I know someone who had this as their full-time job at Meta. Outside of that, though, nobody is investing in it.
I'm a platform engineer for a company with thousands of microservices. I'm not thinking at desktop scale. Our jobs are all memory hogs and I/O-bound messes. Across all of the hardware we're buying, we're using maybe 10% CPU. Peers I talk to at other companies are almost universally in the same situation.
I'm not saying don't care about CPU efficiency, but I encounter dumb shit all the time, like engineers asking us to run exotic new databases with bad licensing and no enterprise features just because they're 10% faster, when we're nowhere near experiencing those kinds of efficiency problems. I almost never encounter engineers who truly understand or care about things like resource contention and utilization. Everything is still treated like an infinite pool with perfect 100% uptime, despite (at least) 20 years of the industry knowing better.