As someone who used Franz LISP on Sun workstations while someone else nearby used a Symbolics 3600 refrigerator-sized machine, I was never all that impressed with the LISP machine. The performance wasn't all that great. Initially garbage collection took 45 minutes, as it tried to garbage-collect paged-out code. Eventually that was fixed.
The hardware was not very good. Too much wire wrap and slow, arrogant maintenance.
I once had a discussion with the developers of Franz LISP. The way it worked was that it compiled LISP source files and produced .obj files. But instead of linking them into an executable, you had to load them into a run-time environment. So I asked, "could you put the run time environment in another .obj file, so you just link the entire program and get a standalone executable"? "Why would you want to do that?" "So we could ship a product." This was an alien concept to them.
So was managing LISP files with source control, like everything else. LISP gurus were supposed to hack.
And, in the end, 1980s "AI" technology didn't do enough to justify that hardware.
Lisp Machines had versioning file systems IIRC. Kinda like on VMS. Was SCCS really that far ahead?
> "So we could ship a product." This was an alien concept to them.
This mentality seems to have carried over to (most) modern FP stacks
Nah, it carried over to scripting languages.
Most of them still require a very specific, very special, very fragile environment to run, and need multiple tools and carefully run steps just to do the same thing you can do with a compiled executable linked against the OS.
They weren't made for having libraries, or for being packaged to run on multiple machines, or for being distributed to customers to run on their own computers. Perhaps JS was the exception, but only for the last part.
Sure, it mostly works today, but a lot of people put in a lot of effort so we can keep shoving square pegs into round holes.
Don't get me started. I tried to use a very simple Python program the other day, to talk to a Bluetooth module in a device I'm building. In the end I gave up and wrote the whole thing in another language, but not before fighting the Python package system for a couple of hours, thinking the solution was right around the corner if only I could get rid of one more little conflict. Python is funny that way: it infantilized programming but then required you to become an expert at resolving package manager conflicts.
For a while Conda seemed to have cracked this, but there too I now get unresolvable conflicts. It really boggles the mind how you could get this so incredibly wrong and still have the kind of adoption that Python has.
Or, you know, it might just be that you're not very good at computers.
Instead of jamming in thing after thing after thing blindly hoping it's going to work, try reading the error messages and making sense of why it's doing what it's doing.
This is such Gen Z behaviour - it doesn't work first time so throw a strop and fling stuff.
This is such a hilarious comment.
Thank you for making my day.
Hey Gen Z, as long as I have you on the line, could you please explain 67 to me?
I've heard of "68 and I'll owe you one", so is 67 about owing you two?
I'm having a hard time coping with my social media addiction while doing some fairly hardcore development on an STM32 based platform so sorry :)
Incidentally, when will you (multiple) come and visit?
It's been too long.
I owe you at least one or two! Maybe we can test your drones out on that Russian guy with the GoFundMe campaign, then I'll owe you three! ;)
Oh yeah? Well the jerk store called, and they’re running out of you!
Yeah, anytime I see a useful tool, and then find out it's written in Python, I want to kms — ofc, unless it happens to work with UV, but they don't always
You are correct unfortunately
Not the ones I've used. Haskell compiles to executables, F# compiles to the same bytecode that C# does and can be shipped the same way (including compiling to executables if you need to deploy to environments where you don't expect the .NET runtime to be already set up), Clojure compiles to .jar files and deploys just like other Java code, and so on.
I'll grant that there are plenty of languages that seemed designed for research and playing around with cool concepts rather than for shipping code, but the FP languages that I see getting the most buzz are all ones that can ship working code to users, so the end users can just run a standard .exe without needing to know how to set up a runtime.
True, but some still want me to understand what a monofunctor is, or something else that sounds like a disease, just to do things like print to screen or get a random number.
I feel that is the biggest barrier to their adoption nowadays (and also silly things like requiring ;; at the end of the line)
Pure functions are a good theoretical exercise but they can't exist in practice.
> Pure functions are a good theoretical exercise but they can't exist in practice.
Well, they can, just not all the way up to the top level of your program. The longer you can hold off on your functions having side effects, the more predictable and stable your codebase will be, with fewer bugs and less chance of runtime issues as an added benefit.
Yes, but they're "Hello world!" hostile, so traditional programming language pedagogy doesn't work well.
Q: How many Prolog programmers does it take to change a lightbulb?
A: Yes.
I imagine LLMs have already thrown traditional programming language pedagogy out the window.
In most FP languages it is simple to print to screen and get a random number.
Pure functions often exist in practice and are useful for preventing many bugs. Sure, they may not be suitable for some situations but they can prevent a lot of foot guns.
Here's a Haskell example with all of the above:
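    import System.Random (randomRIO)

    main :: IO ()
    main = do
      num <- randomRIO (1, 100)   -- effect: random number, lives in IO
      print $ pureFunction num    -- effect: print to screen

    pureFunction :: Int -> Int    -- pure: no side effects, easy to test
    pureFunction x = x * x + 2 * x + 1

Only main touches IO; pureFunction stays side-effect free.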
Wouldn't the whole system be the product then? There's tradeoffs, but that's just integration.
Time to dig up a classic story about Tom Knight, who designed the first prototype of the Lisp Machine at MIT in the mid-70's. It's in the form of a classic Zen koan. This copy comes from https://jargondb.org/some_ai_koans but I've seen plenty of variations floating around.
A novice was trying to fix a broken Lisp machine by turning the power off and on.
Knight, seeing what the student was doing, spoke sternly: “You cannot fix a machine by just power-cycling it with no understanding of what is going wrong.”
Knight turned the machine off and on.
The machine worked.
Everybody knows, you have to wait at least 5 tau.
That's one of the funniest and most enlightening classic AI Koans, originally from the ITS file "AI:HUMOR;AI KOANS".
Here's another Moon story from the humor directory:
https://github.com/PDP-10/its/blob/master/doc/humor/moon's.g...
Moon's I.T.S. CRASH PROCEDURE document from his home directory, which goes into much more detail than just turning it off and on:
https://github.com/PDP-10/its/blob/master/doc/moon/klproc.11
And some cool Emacs lore:
https://github.com/PDP-10/its/blob/master/doc/eak/emacs.lore
Reposting this from the 2014 HN discussion of "Ergonomics of the Symbolics Lisp Machine":
https://news.ycombinator.com/item?id=7878679
http://lispm.de/symbolics-lisp-machine-ergonomics
https://news.ycombinator.com/item?id=7879364
eudox on June 11, 2014
Related: A huge collection of images showing Symbolics UI and the software written for it:
http://lispm.de/symbolics-ui-examples/symbolics-ui-examples
agumonkey on June 11, 2014
Nice, but I wouldn't confuse static images with the underlying semantic graph of live objects that's not visible in pictures.
DonHopkins on June 14, 2014
Precisely! When Lisp Machine programmers look at a screen dump, they see a lot more going on behind the scenes than meets the eye.
I'll attempt to explain the deep implications of what the article said about "Everything on the screen is an object, mouse-sensitive and reusable":
There's a legendary story about Gyro hacking away on a Lisp Machine, when he accidentally trashed the function cell of an important primitive like AREF (or something like that -- I can't remember the details -- do you, Scott? Or does Devon just make this stuff up? ;), and that totally crashed the operating system.
It dumped him into a "cold load stream" where he could poke around at the memory image, so he clambered around the display list, a graph of live objects (currently in suspended animation) behind the windows on the screen, and found an instance where the original value of the function pointer had been printed out in hex (which of course was a numeric object that let you click up a menu to change its presentation, etc).
He grabbed the value of the function pointer out of that numeric object, poked it back into the function cell where it belonged, pressed the "Please proceed, Governor" button, and was immediately back up and running where he left off before the crash, like nothing had ever happened!
Here's another example of someone pulling themselves back up by their bootstraps without actually cold rebooting, thanks to the real time help of the networked Lisp Machine user community:
ftp://ftp.ai.sri.com/pub/mailing-lists/slug/900531/msg00339.html
Also eudox posted this link:
Related: A huge collection of images showing Symbolics UI and the software written for it:
http://lispm.de/symbolics-ui-examples/symbolics-ui-examples....
I'm a lisp machine romantic, but only for the software side. The hardware was neat, but nowadays I just want a more stable, graphically capable emacs that extends down through and out across more of userspace.
> emacs that extends down through and out across more of userspace
Making something like that has turned into a lifetime project for me. Implemented a freestanding lisp on top of Linux's stable system call interface. It's gotten to the point it has delimited continuations.
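For anyone who hasn't met them: delimited continuations let you capture "the rest of the computation up to a marker" as an ordinary function you can call zero, one, or several times. This isn't the parent's Lisp, just a minimal sketch of the idea in Haskell, using shiftT/resetT from the transformers package:

    import Control.Monad.Trans.Class (lift)
    import Control.Monad.Trans.Cont (evalContT, resetT, shiftT)

    -- Scheme-style: reset (1 + shift (\k -> k (k 10)))  ==>  1 + (1 + 10) = 12
    example :: IO Int
    example = evalContT $ resetT $ do
      -- k is the captured continuation "add 1 and return", delimited by resetT
      x <- shiftT $ \k -> lift (k 10 >>= k)
      return (1 + x)

    main :: IO ()
    main = example >>= print   -- prints 12

The point is that k is a first-class value - call it twice, stash it, or drop it - and the captured context only extends to the nearest resetT, which is what makes the technique usable for things like generators and schedulers.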
I liked the article, but I found the random remark about RISC vs CISC to be very similar to what the author is complaining about. The difference between the Apple M series and AMD's Zen series is NOT a RISC vs CISC issue. In fact, many would argue it's fair to say that ARM is not RISC and x86-64 is not CISC. These terms were used to refer to machines vastly different from what we have today, and the RISC vs CISC debate, like the LISP machine debate, really only lasted about 5 years. The fact is, we are all using out-of-order superscalar hardware where the decoders are not even close to the main thing consuming power and area on these chips. Under the hood they are all doing pretty much the same thing. But because it has a name, a marketable "war", and one difference people can easily understand (fixed-width vs variable-width encodings), people overestimate the significance of the one part they understand compared to the internal engineering choices and process-node choices that actually matter but that people don't know about or understand. Unfortunately a lot of people hear the RISC vs CISC bedtime story and think there's no microcode on their M series chips.
You can go read about the real differences on sites like Chips and Cheese, but those aren't pop-sciencey and fun! It's mostly boring engineering details like the size of reorder buffers and the TSMC process node, and it takes more than 5 minutes to learn. You can't just pick it up one day like a children's story with a clear conclusion and moral of the story. Just stop. If I can acquire all of your CPU microarchitecture knowledge from a Linus Tech Tips video, you shouldn't have an opinion on it.
If you look at the finished product and you prefer the M series, that's great. But that doesn't mean you understand why it's different from the Zen series.
> In fact, many would argue it's fair to say that ARM is not RISC
It isn't now... ;-)
It's interesting to look at how close old ARM2/ARM3 code was to 6502 machine code. It's not totally unfair to think of the original ARM chip as a 32-bit 6502 with scads of registers.
And, for fairly obvious reasons!
Symbolics’ big fumble was thinking their CPU was their special sauce for way too long.
They showed signs that some people there understood that their development environment was it, but it obviously never fully got through to decision-makers: They had CLOE, a 386 PC deployment story in partnership with Gold Hill, but they’d have been far better served by acquiring Gold Hill and porting Genera to the 386 PC architecture.
To be fair to Symbolics: a lot of companies back then thought their CPU was the secret sauce. Some still do...
For those unaware, Symbolics eventually "pivoted" to DEC Alpha, a supposedly "open" architecture, which is how Genera became Open Genera, like OpenVMS. (And still, like OpenVMS, heavily proprietary.)
Wasn’t the “open” at the time meaning “open system” as a system that is open for external connections (aka networking) and not so much open as in “open source”?
> Wasn’t the “open” at the time meaning “open system” as a system that is open for external connections (aka networking) and not so much open as in “open source”?
Networking was the initial impetus, but the phrase came to include programming interfaces, which is why POSIX was considered such a big deal. The idea was to promote interoperability and portability, as opposed to manufacturer-specific islands like those from IBM and DEC.
No, it meant industry standards instead of proprietary ones; that is why POSIX, Motif, and others are under The Open Group.
It was both: Alpha being quasi-open itself, like OpenPOWER today and like the earlier PDP minis had been, whereas the VAX had been pretty locked down, and OpenVMS getting POSIX compatibility (admittedly probably more the latter than the former, but DEC was big on branding things "open" at the time, partly because they were losing ground):
https://www.digiater.nl/openvms/decus/vmslt05a/vu/alpha_hist...
> Although Alpha was declared an "open architecture" right from the start, there was no consortium to develop it. All R&D actions were handled by DEC itself, and sometimes in cooperation with Mitsubishi. In fact, though the architecture was free de jure, most important hardware designs of it were pretty much closed de facto, and had to be paid-licensed (if possible at all). So, it wasn't that thing helping to promote the architecture. To mention, soon after introduction of EV4, DEC's high management offered to license manufacturing rights to Intel, Motorola, NEC, and Texas Instruments. But all these companies were involved in different projects and were of very little to no interest in EV4, so they refused. Perhaps, the conditions could be also unacceptable, or something else. Mistake #5.
Yes, but also: Open Genera was ported to x86 some time ago.
I believe it's even been ported to the M1 a few years ago: https://x.com/gmpalter/status/1361855786603929601
Kinda sad seeing those follow-up tweets about licensing issues years later.
Lisp is as alive as ever in Emacs and Common Lisp, and in Clojure and Racket.
Okay, they're dead, but I think the interesting thing here is the relationship between hardware and the way mathematicians (potentially) think about problem solving. The established practices massively constrain the solutions we find, but I do wonder what a Turing Machine would look like if FPGAs had been around in 1930. FPGAs keep getting used to implement processors, but using one to make a C interpreter and then using it to run a vision library is probably not the best way to use FPGAs to recognise tanks with a drone. Which is, presumably, what a Zala Lancet is doing with its FPGA.
Some things have been tried; some things continue to be tried.
- Naylor and Runciman (2007) “The Reduceron: Widening the von Neumann Bottleneck for Graph Reduction using an FPGA”: https://mn416.github.io/reduceron-project/reduceron.pdf
- Burrows (2009) “A combinator processor”: https://q4.github.io/dissertations/eb379.pdf
- Ramsay and Stewart (2023) “Heron: Modern Hardware Graph Reduction”: https://dl.acm.org/doi/10.1145/3652561.3652564
- Nicklisch-Franken and Feizerakhmanov (2024) “Massimult: A Novel Parallel CPU Architecture Based on Combinator Reduction”: https://arxiv.org/abs/2412.02765v1
- Xie, Ramsay, Stewart, and Loidl (2025) “From Haskell to a New Structured Combinator Processor” (KappaMutor): https://link.springer.com/chapter/10.1007/978-3-031-99751-8_...
More: https://haflang.github.io/history.html
A lot of this could be said about specialized machines in general. I remember visiting the local university last century where a guy was demonstrating a US-made Word Processor machine they had bought, and around the same time a local company was developing something similar. And they looked very cool indeed. But in both cases I thought.. "eh, won't that be total overkill now when we can see standard word processing software on standard computers already arriving? Even if a normal PC doesn't look that cool?" And, as predicted (and I most certainly couldn't be the only one predicting that), the US company as well as the local one folded. At least the company I worked for got to hire some good people from there when the inevitable happened.
It's hard to find where to draw the line when it comes to specialized hardware, and the line moves back and forth all the time. From personal experience it went from something like "multiple input boards, but handle the real-time Very Fast interrupts on the minicomputer", and spending six months shaving off half a millisecond so that it worked (we're in the eighties here). Next step - shift those boards into a dedicated box, let it handle the interrupts and DMA and all that, and just do the data demuxing on the computer. Next step (and I wasn't involved in that): do all the demuxing in the box, let the computer sit back and just shove all of that to disk. And that's the step which went too far, the box got slow. Next step: make the box simpler again, do all of the heavy demuxing and assembling on the computer, computers are fast after all..
And so on and so forth.
The Lisp environments are definitely around, in LispWorks and Allegro Common Lisp.
And Emacs. Sure, Elisp isn't the best Lisp around (personally I would give that title to Common Lisp), but Emacs is a good Lisp environment.
Also Maxima.
And portacle.
Although Portacle isn't being maintained any more (at least as far as the main developer was concerned last time I looked a few months ago).
More information here:
https://github.com/portacle/portacle/issues/182
In 2020 they went full-time on developing the game Kandria.
They're still active:
https://shinmera.com/projects.html
https://shinmera.com/bio.html
You may try CADR (precursor to Genera) on-line: https://lispcafe.org/cadr/usim.html
“ I am just really bored by Lisp Machine romantics at this point: they should go away. I expect they never will.”
What? They’re awesome. They present a vision of the future that never happened. And I don’t think anyone serious expects lisp machines to come back btw.
>They present a vision of the future that never happened
Hauntology strikes again
See also:
Amiga romantics.
8-bit romantics.
PDP-10 romantics.
Let them stay. Let them romantizice. <glasses tint="rose">
Without those people lots of history (and hence knowledge) would be lost. I’m happy they are around.
As an Amiga romantic, I’d say we have no illusions about a late-80s Amiga being a good idea if it existed today. But it captured my imagination (and at just the right age) like nothing else.
You're using an 8-bit machine right now.
I'm honestly surprised nobody tried to capitalize on the early 2000s Java hype by making some kind of Java box (there were a few things labeled as a Java OS or a Java workstation but none of these were really a "Java Machine")
Coincidentally on the front page, https://news.ycombinator.com/item?id=45989650
ARM also used to have opcodes for Java: https://en.wikipedia.org/wiki/Jazelle
Sun JavaStation: https://en.wikipedia.org/wiki/JavaStation
I was aware of these; it's kinda what I meant by "none of these were really Java Machines". They were just shitty SPARC machines that had Java OS in flash. They didn't have any kind of Java co-processor and still relied on a JVM. Java OS was pretty neat, but I wouldn't really consider it a "Java OS" since it was basically just a microkernel that bootstrapped a JVM, from what I've read. An actual Java machine IMO would have to at least have some kind of Java co-processor and not rely on a software-based JVM.
Sun also tried, and failed, to bring to market a microprocessor architecture for running Java on metal - https://en.wikipedia.org/wiki/MAJC
In theory you could say that SIM cards were (are?) tiny Java-on-a-chip machines.
Azul Systems was making Java machines a while ago.
"Old man yells at Lisp Machines (And their enthusiasts)"
i.e., at other old men.
I'm not so sure it's down to the hardware. With something like a 180-bit-wide microcode store - a very, very horizontal microarchitecture - the hardware sure was specialised, but I think it's fundamentally down to Lisp itself.
I don't know a lot of Lisp. I did some at school as a teenager, on BBC Micros, and it was interesting, but I never did anything really serious with it. I do know about Forth though, so perhaps people with a sense of how both work can correct me here.
Sadly, Forth, much as I love it and have done since I got my hands on a Jupiter Ace when I was about 9 or 10 years old, has not been a success, and probably for the same reasons as Lisp.
It just looks plain weird.
It does. I mean, I love how elegant Forth is: you can implement a basic inner interpreter and a few primitives in a couple of hundred lines of assembler, and then the rest is just written in Forth in terms of those primitives (okay, pages and pages of dw ADDRESS_OF_PRIMITIVE instructions rather than Forth proper). I'm told that you can do the same trick with Lisp, and maybe I'll look into that soon.
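Just to make that shape concrete, here's a toy sketch (in Haskell, since that's the other example in this thread, and with strings where real Forth threads addresses, so treat it purely as an illustration): a handful of primitives written in the host language, one inner-interpreter loop, and everything else defined as lists of existing words.

    import qualified Data.Map as M
    import Text.Read (readMaybe)

    type Stack = [Int]
    data Entry = Prim (Stack -> Stack)   -- "assembler-level" primitive
               | Colon [String]          -- colon definition: just other words
    type Dict  = M.Map String Entry

    -- The inner interpreter: run each word against the data stack.
    run :: Dict -> [String] -> Stack -> Stack
    run dict = flip (foldl step)
      where
        step st w = case (M.lookup w dict, readMaybe w) of
          (Just (Prim f), _)   -> f st
          (Just (Colon ws), _) -> run dict ws st
          (Nothing, Just n)    -> n : st            -- numeric literal
          _                    -> error ("unknown word: " ++ w)

    -- A few primitives...
    prims :: Dict
    prims = M.fromList
      [ ("+",   Prim (\(a:b:st) -> b + a : st))
      , ("*",   Prim (\(a:b:st) -> b * a : st))
      , ("dup", Prim (\(a:st)   -> a : a : st))
      ]

    -- ...and "the rest is just written in Forth": no new primitives needed.
    dict :: Dict
    dict = prims `M.union` M.fromList
      [ ("square", Colon ["dup", "*"])
      , ("cube",   Colon ["dup", "square", "*"])
      ]

    main :: IO ()
    main = print (run dict (words "5 cube") [])   -- [125]

Real Forth compiles colon definitions into lists of code-field addresses and NEXT jumps between them, but the structure - tiny kernel, everything else bootstrapped in the language itself - is the same, and presumably that's the trick you can also pull with a small Lisp kernel plus eval/apply.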
But the code itself looks weird.
Every language that's currently successful looks like ALGOL.
At uni, I learned Turbo Pascal. That gave way to Modula-2 in "real" programming, but by then I'd gotten my hands on an account on the Sun boxes and was writing stuff in C. C looked kind of like Pascal once you got round the idea that curly brackets weren't comments any more, so it wasn't a hard transition. I wrote lots of C, masses and masses, and eventually shifted to writing stuff in Python for doing webby stuff and C for DSP. Python... looks kind of like ALGOL, actually, you don't use "begin" and "end", you just indent properly, which you should be doing. Then Go, much later, which looks kind of like Pascal to me, which in turn looks kind of like ALGOL.
And so on.
You write line after line after line of "this thing does this to that", and it works. It's like writing out a recipe, even more so if you declare your ingredients^W variables at the top.
I love Forth, I really want to love Lisp but I don't know enough about it, but everyone uses languages that look like ALGOL.
In the late 1960s Citroën developed a car where the steering and speed were controlled by a single joystick mounted roughly where the steering wheel would be. No throttle, no clutch, no gears, just a joystick with force feedback to increase the amount of force needed to steer as the car sped up. Very comfortable, very natural, even more so when the joystick was mounted in the centre console like in some aircraft. Buuuuut, everyone uses steering wheels and pedals. It was too weird for people.
I'm sure the Lisp machines were very impressive compared to a DOS or Unix prompt, but today I can run like ten Amber or Newspeak environments on a constantly networked many-core system I carry around in my pocket. I'm not sure whether the CL folks have created similar web interfaces to the running image but I wouldn't be surprised if they have.
I feel it would be cool to sometime run code on a radiation hardened Forth chip, or some obscure Lisp hardware, but would it be life changing? I doubt it.
> I’d be saying that in a few years there are going to be a lot of huge farms of GPUs going very cheap if you can afford the power. People could be looking at whether those can be used for anything more interesting than the huge neural networks they were designed for.
Author falls into the same trap he talks about in the article. AI is not going away, we are not going back to the pre-AI world.
AI will not go away, I agree. But many of the companies now betting the farm on AI are going to lose, and there will be server farms going for sale cheap. I'm hearing more and more people outside the tech world talk about the AI bubble, and predicting it's going to pop. When that happens and investors lose confidence, suddenly companies who need the next round of financing to pay off their current debts won't get it, and will go under.
I can't predict when the shakeout will be, but I can predict that not every AI company is going to survive when it happens. The ones that do survive will be the ones that found a viable niche people are willing to pay for, just as the dot-com bubble bursting didn't kill Paypal, eBay, and so on. But there are definitely going to be some companies going bankrupt, that's pretty clear even at this point.
> I'm hearing more and more people outside the tech world talk about the AI bubble, and predicting it's going to pop
I'm juuust about old enough to remember the end of the Lisp Machine bubble (we had one or two at uni in the early 90s, and they were archaic by then). But obviously Lisp machines were the wrong way to go, even if they were a necessary step - obviously, hardware-mediated permanent object storage is the way forwards! POP! Ah, maybe not. Okay but can't you see we need to run all this on a massive transputer plane? POP! Oh. Okay how about this, we actually treat the microcode as the machine language, so the user-facing opcodes are like 256 bits long, and then we translate other instruction sets into that on the fly, like this - the Transmeta Crusoe! It's going to revolutionise everything! POP! Ah, what? Okay well how about...
And we're only up to the early 2000s.
It's bubbles, all the way back. Many of these things were indeed necessary steps - if only so We Learned Not To Do That Again - but ultimately are a footnote in history.
In 30 years' time people will have blog posts about how in the mid-2020s people had this thing where they used huge sheds full of graphics cards to run not-working-properly Boolean algebra to generate page after page after page of pictures of wonky-looking dogs and Santa Clauses, and we'll look at that with the same bemused nostalgia as we do with the line printer Snoopy calendars today.
Lisp machines, Transputers, Transmeta, even RISC were all academic-driven bubbles. They were spun out of university research projects. (Transmeta went indirectly via Bell Labs and Sun, but it was still based on academic ideas.)
The culture was nerdy, and the product promises were too abstract to make sense outside of Nerdania.
They were fundamentally different to the dot com bubble, which was hype-driven, back when "You can shop online!" was a novelty.
The current AI bubble is an interesting hybrid. The tech is wobbly research-grade, but it's been hyped by a cut-throat marketing engine aimed at very specific pain points - addictive social contact for younger proles, "auto-marketing team" for marketers, and "cut staffing and make more money" promises for management.
Most will fail, but I don't say this because I'm a pessimist: it's just that for every AI business idea, there's always at least 10 different competitors.
> I'm hearing more and more people outside the tech world talk about the AI bubble, and predicting it's going to pop
You know what they say about when the taxi driver is giving you strong financial opinions.