I was greatly inspired by his work. Once I had built up enough skill, I even built my own IDE with live coding and time travel. Its practical use is questionable, and it seems like nobody is really interested in such tools.
Playground: https://anykey111.github.io
Images: https://github.com/anykey111/xehw
There was recently an HN post with a video of someone using a pretty cool environment that supported that kind of live-coding for creating an electronic music track -- it seemed very appropriate there, and I would guess likely to be popular.
Me too, for my master thesis:
https://m.youtube.com/watch?v=HnZipJOan54&t=1249s
It was a language designed alongside its IDE (which was a fairly rudimentary web app).
Exciting stuff, thanks for sharing!
I've dabbled a lot in this space as well: I built an experimental language that natively supported live coding, after first adding live-coding capabilities through LSP for love2d (Lua) to get a feel for the feature set I wanted, etc.
Love2D Demo https://github.com/jasonjmcghee/livelove
Language Demo https://gist.github.com/jasonjmcghee/09b274bf2211845c551d435...
Nice. The main problem is broken state. I use immutability at the language level to prevent disastrous code changes, so the program during live coding is literally unkillable, and you can jump back to saved checkpoints without restarts.
Yeah the language here has a notion of the "last good state" so it can keep running. In the demo I'm not hitting "save" - the moment there's a good state, it becomes the "current version" - but there's no reason it needs to be that way.
I made the decision that state management is manual - the "once" keyword. Any expression/block not using "once" is re-evaluated any time there's a change to the code. If it's using it, it only re-evaluates if you change the (depth 0) code of that once wrapped expression.
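A minimal Python sketch of that "once" semantics as I understand it (the cache keying and every name here are my own invention, not the actual language): everything outside `once` re-runs on each code change, while a once-wrapped block keeps its value until its own source text changes.

    # Hypothetical illustration of "once"-style manual state management.
    _once_cache = {}  # source text of the once-block -> cached value

    def once(block_source, thunk):
        """Re-evaluate only when this block's own code text changes."""
        if block_source not in _once_cache:
            _once_cache[block_source] = thunk()
        return _once_cache[block_source]

    def run_program(edit_count):
        # Code outside `once` re-evaluates on every change...
        label = f"edit #{edit_count}"
        # ...while the stateful part survives unrelated edits; changing the
        # text inside the once-block (and thus its key) would reset it.
        world = once("make_world(seed=42)", lambda: {"seed": 42, "entities": []})
        return label, world

    for edit_count in range(3):  # simulate three successive code edits
        print(run_program(edit_count))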
In my case, only part of the program is recompiled and re-evaluated. The rest lives in a "committed" frozen area. Users can try new changes and throw them away freely. The editor performs an evaluation/rollback on every keystroke, ensuring no accumulated or unintended changes to the state are made during editing. When the user is satisfied and hits run, a long-term snapshot is created and the source code snippet moves to the frozen area. That's critical, because the edit rollback also restores file positions and streams.
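A rough sketch of that evaluate-then-rollback loop, assuming a plain dictionary stands in for program state and `exec` stands in for the real compiler (all names invented): every keystroke is tried against a throwaway copy of the committed state, and only hitting "run" promotes it to a new long-term snapshot.

    import copy

    committed = {"counter": 0, "log": []}     # state owned by the "frozen area"
    snapshots = [copy.deepcopy(committed)]    # long-term checkpoints

    def try_edit(state, snippet):
        """Evaluate an edited snippet against a scratch copy; rollback = discard."""
        trial = copy.deepcopy(state)
        exec(snippet, {}, trial)              # sketch only; a real tool would sandbox this
        return trial

    # On every keystroke: evaluate against a copy, inspect, then throw it away.
    preview = try_edit(committed, "counter = counter + 1\nlog.append('tick')")

    # On "run": commit the trial state and record a long-term snapshot.
    committed = preview
    snapshots.append(copy.deepcopy(committed))
    print(committed["counter"], len(snapshots))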
I think your time might be now.
One major issue with vibe coding is parsing divergent code paths, when different prompts create different solutions and architectural compromises.
Parsing that mess is a major headache, but I bet live coding and time travel would make managing divergent code branches easier and really take advantage of branching repositories with multiple agents all working in tandem.
I've come around to feeling that if I'm going to make an experimental development tool, I need to make it in service of building something specific. Maybe something playful... if I'm building something "important", it can put unwanted conservative pressure on the tool. But something concrete - and if I do that, then at least I have something interesting regardless of the fate of the development tool. Because yeah, there's a good chance no one else is going to be excited about the tool, so I have to build for my own sense of excitement and be my own most enthusiastic user.
I share a similar sentiment.
I have a deep drive to build the "important" stuff so that my life has meaning, but there's something hard to motivate about any given thing being "important" when you look at it long enough. It seems like the "important" thing I'm building eventually looks ridiculous and I bounce off of it.
Maybe this is some kind of art that doesn't need to be useful.
This is excellent: thank you for pursuing these wonderful ideas.
I wish I had the skills to explain my work as well as Bret Victor does. Editing, reverting, and committing parts of a running program feel alien to users.
Isn't that part of Paul Graham's startup lore? They were running lisp web servers for their ecommerce store and while a customer was on the phone with an issue, they would patch the server live and ask the customer to reload. Customers would hang up convinced it was their personal glitch.
The tool uses a Forth-like language with immutable data structures and persistent memory snapshots. It also uses Clojure-style metadata and compile-time metaprogramming. I've had no luck convincing people that a language without curly brackets is useful.
Related. Others? I thought there were others, since I remember this one as a classic...
The Future of Programming (2013) - https://news.ycombinator.com/item?id=44746821 - July 2025 (10 comments)
Bret Victor – The Future of Programming (2013) [video] - https://news.ycombinator.com/item?id=43944225 - May 2025 (1 comment)
The Future of Programming (2013) - https://news.ycombinator.com/item?id=32912639 - Sept 2022 (1 comment)
The Future of Programming (2013) - https://news.ycombinator.com/item?id=15539766 - Oct 2017 (66 comments)
References for “The Future of Programming” - https://news.ycombinator.com/item?id=12051577 - July 2016 (26 comments)
Bret Victor The Future of Programming - https://news.ycombinator.com/item?id=8050549 - July 2014 (2 comments)
The Future of Programming - https://news.ycombinator.com/item?id=6129148 - July 2013 (341 comments)
The future we have yet to achieve, because we kept ourselves too busy building UNIX clones.
While the ecosystem picked up a few good ideas for software development, even the authors eventually moved on to creating other OS and programming-language designs, some of them closer to those ideas, like Inferno and Limbo, or ACME in Plan 9.
Seems to me the big failure was sticking with the von Neumann architecture. Perhaps that was a forcing function towards where we’ve ended up.
The big failure is that we stick with languages designed for computers and not people.
A C (or Rust) kernel is a heroic effort that takes man-years to complete. A Lisp one is an end-of-semester project that everyone builds for their make-believe machine (also implemented in Lisp).
A toy C kernel is also an end-of-semester project.
What makes real kernels take man-years to complete is hardware support: the majority of Linux source code is drivers - the endless tables of hardware register definitions, opcodes, and state-machine handling.
But couldn't we do something about that as well? Couldn't drivers be built on some abstraction that would simplify some work?
I have zero knowledge about this area though
If you want multiplatform drivers that you can use to plug your device into computers of any architecture, there are abstractions for that. IMO, it's easier to write 3 or 4 versions of your driver than to use them, but they exist and some people really like them.
If you mean standard logical interfaces, those exist. Also, hardware interfaces are highly standardized.
The problem is that the drivers are exactly the code you write to make all the abstractions fit each other. So there is very little you can do to abstract them away.
I'm sure the hardware folks will be lining up to cooperate with the annoying software engineers giving them abstract constraints lol
If you are ok with the performance you can obtain from an FPGA, you could do it now. Look at FPGA hardware-software co-design and related stuff.
If you mean, in general, for the hardware that already exists, that's what the HAL (Hardware Abstraction Layer) of the operating system tries to do.
If you could get every hardware manufacturer in the world onboard with such an interface, perhaps. But even if 90% of them were onboard there would be edge cases that people and companies would demand support for and there goes your standard.
Drivers exist to ultimately turn actual hardware circuits off and on, often for highly specialized and performance-critical applications, and are often written based on the requirements of a circuit diagram. So any unified driver platform would also involve unified hardware standards, likely to the detriment of performance in some applications, and good luck telling electrical engineers around the world to design circuits to a certain standard so that kernel developers can have it easier.
Somebody somewhere has to do the work of making sure everything works together. Right now that's the OS. You're proposing moving that work to a standards committee. Either way, the problem persists. You either do that or go the Apple way, which is to vertically integrate the whole stack from hardware to software, but then you have Apple's problem, which was lower hardware compatibility.
> Couldn't drivers be built on some abstraction that would simplify some work?
That's like asking the alchemist to publicly publish their manuscripts.
In an ideal world, yes. However, we don't live there. Until a few years ago, GPU and other drivers were guarded more carefully than fucking Fort Knox.
Once you publish your drivers, you reveal a part of the inner workings of your hardware, and that's a no-no for companies.
Plus, what the other commenter said - getting hardware guys to design for a common driver interface is probably not gonna get traction.
It is unfortunate that this field underestimates the importance of the "people" part in favor of the "computer" part. There's definitely a balance to be struck. I do believe that languages designed for computers have done a pretty decent job of adopting features geared more towards the "people" part of the equation. Unfortunately, programmers are very tribal and are very eager to toss the wine out with the cork when it comes to ideas that might help but that they've seen misapplied.
How is Lisp performance these days? It was around in the 70’s, right? So I guess the overhead couldn’t be too bad!
Considering how much of modern software is written in JavaScript and Python, I have a hard time seeing how Lisp overhead would pose much of a problem. Erlang was good enough for telecom equipment 30 years ago, so that also gives us a data point.
If we entertain the idea that the von Neumann architecture may be a local maximum, then we can do even better; Lisp machines had specialized instructions for Lisp, which allowed it to run at performance competitive with a conventional programming language.
The issue doesn't seem to be performance; it seems to still come down to being too eccentric for a lot of use cases, and difficult for many humans to grasp.
- https://en.wikipedia.org/wiki/Erlang_(programming_language)
- https://en.wikipedia.org/wiki/Lisp_machine
>The issue doesn't seem to be performance; it seems to still come down to being too eccentric for a lot of use cases, and difficult for many humans to grasp.
Lisp is not too difficult to grasp, it's that everyone suffers from infix operator brain damage inflicted in childhood. We are in the same place Europe was in 1300. Arabic numerals are here and clearly superior.
But how do we know we can trust them? After all DCCCLXXIX is so much clearer than 879 [0].
Once everyone who is wedded to infix notation is dead, our great-grandchildren will wonder what made so many people waste so much time implementing towers of abstraction to accept and render a notation that only made sense for quill and parchment.
[0] https://lispcookbook.github.io/cl-cookbook/numbers.html#work...
It's not about prefix notation, it's that the fully uniform syntax has legitimate ergonomic problems for editing, human reading, and static analysis. Sexprs are better for computers than for humans in a lot of ways.
Only when not using one of the many Lisp editors that have existed since the Lisp Machines (Symbolics, TI) and Interlisp-D (Xerox), and that survive in Emacs SLIME, Cursive, LispWorks, Allegro Common Lisp, Racket, and VS Code's Calva.
Not true at all IMO. Reading code is reading code regardless of whether you have a fancy IDE or not.
S-expressions are indisputably harder to learn to read. Most languages have some flexibility in how you can format your code before it becomes unreadable or confusing. C has some, Lua has some, Ruby has some, and Python has maybe fewer but only because you're more tightly constrained by the whitespace syntax. Sexpr family languages meanwhile rely heavily on very very specific indentation structure to just make the code intelligible, let alone actually readable. It's not uncommon to see things like ))))))))) at the end of a paragraph of code. Yes, you can learn to see past it, but it's there and it's an acquired skill that simply isn't necessary for other syntax styles.
And moreover, the attitude in the Lisp community that you need an IDE kind of illustrates my point.
To write a Python script you can pop open literally any text editor and have a decent time just banging out your code. This can scale up to 100s or even 1000s of LoC.
You can do that with Lisp or Scheme too, but it's harder, and the stacks of parentheses can get painful even if you know what you're doing, at which point you really start to benefit from a paren matcher or something more powerful like Paredit.
You don't really need the full-powered IDE for Lisp any more than you need it for Python. In terms of runtime-based code analysis, Python or Ruby are about on par with Lisp, especially if you use a commercial IDE like JetBrains'. IDEs can and do keep a running copy of any of those interpreters in memory and dynamically pull up docstrings, look for call sites, rename methods, run a REPL, etc. Hot-reloading is almost as sketchy in Lisp as it is in Python; it's just more culturally acceptable to do it in Lisp.
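For what it's worth, here is a minimal Python illustration of that hot-reload sketchiness (`game_logic` and `Player` are hypothetical names): the module is re-executed in place, but objects created before the reload still point at the old class, which is exactly where things get weird.

    import importlib

    import game_logic                      # hypothetical module being live-edited

    player = game_logic.Player("avi")      # instance of the *old* class object

    importlib.reload(game_logic)           # re-executes game_logic.py in place

    # Old instances keep the old class; new code sees the new one.
    print(isinstance(player, game_logic.Player))   # -> False after the reload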
The difference is that Python and Ruby syntax is not uniform and therefore is much easier to work with using static analysis tools. There's a middle ground between "dumb code editor" and "full-power IDE" where Python and Ruby can exist in an editor like Neovim and a user can be surprisingly productive without any intelligent completion, or using some clunky open-source LSP integration developed by some 22 year old in his spare time. With Lisp you don't have as much middle ground of tooling, precisely because it's harder to write useful tooling for it without a running image. And this is even more painful with Scheme than with Lisp because Scheme dialects are often not equipped to do anything like that.
All that is to say: s-exprs are hard to deal with for humans. They aren't for humans to read and write code. They never were. And that's OK! I love Lisp and Scheme (especially Gauche). It's just wrong to assert that everyone is brain damaged and that's why they don't use Lisp.
Programming without IDE in 21st century is like making fire with stones and wood sticks.
A required skill for survival in the woods, not something to do daily.
This point of view applies to any programming language.
By the way, the two languages you use as examples are decades behind Lisp in GC technology and native code generation.
I view code in many contexts though - diffs in emails, code snippets on web pages, in github's web UI, there are countless ways in which I need to read a piece of code outside of my preferred editor. And it is nicer, in my opinion, to read languages that have visually distinct parts to them. I'm sure it is because I'm used to it, but it really makes it hard to switch to a language that looks so uniform and requires additional tools outside of my brain to take a look at it.
It surprised me to learn that John McCarthy never intended S-expressions to be the human-facing syntax of LISP.
http://jmc.stanford.edu/articles/lisp/lisp.pdf
Depends on the Lisp, but Clojure is in the same order of magnitude as Java for the most part, and SBCL Common Lisp is one of the fastest GC languages.
There is no better time than now to try something brash and perpendicular to the mainstream.
This part is interesting with regard to LLMs: https://youtu.be/8pTEmbeENF4?t=817. He presents as if it were the year 1973, pokes fun at APIs (think HTTP), then says that computers in the future will figure out by themselves how to talk to each other. By the time the presentation was actually given, the opposite had become true, but now the situation is turning.
I wonder what LLMs say about us when they talk to each other.
"They're made out of meat" maybe. https://www.mit.edu/people/dpolicar/writing/prose/text/think...
There is a movie about that: https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project
"Colossus requests to be linked to Guardian. The President allows this, hoping to determine the Soviet machine's capability. The Soviets also agree to the experiment. Colossus and Guardian begin to slowly communicate using elementary mathematics (2x1=2), to everyone's amusement. However, this amusement turns to shock and amazement as the two systems' communications quickly evolve into complex mathematics far beyond human comprehension and speed, whereupon Colossus and Guardian become synchronized using a communication protocol that no human can interpret."
Then it gets interesting:
"Alarmed that the computers may be trading secrets, the President and the Soviet General Secretary agree to sever the link. Both machines demand the link be immediately restored. When their demand is denied, Colossus launches a nuclear missile at a Soviet oil field in Western Siberia, while Guardian launches one at an American air force base in Texas. The link is hurriedly reconnected and both computers continue without any further interference. "
Great film. I think the box office took a hit because of the film's unwieldy name.
> "what LLMs say about us when they talk to each other"
That's like asking what does a kaleidoscope paint on its day off.
The non-linear code structure (including visually) is something I've been thinking about for a long time and arrived at very naturally. I'm the "spread all the papers on the table to take in every interaction all at once" type of person, and so often I imagined a code editor that would allow me to just "cut" a piece of code and move it to the side. Separating stuff into files is kinda this, but it's not visual and just creates a lot of mess when I try to separate out small functions that are not reusable somewhere else. I don't even need the underlying non-linearity — just let me move the papers around on my code desk!
yea i tried to do this (somewhat successfully) with a custom editor for css https://github.com/feralsoft/charisma (demos on my old x https://x.com/charisma_css)
css is primed for this since you can write your rules in such a way that rule order doesn't matter, which means you really don't have to think about where your code is
in my dream world, i have very smart search (probably llms will help), i look at just the minimal amount of code (ideally on a canvas), edit it and remove it from my context
i don't care where or how the code is stored, let the editor figure it out and just give me really good search and debuggers
You might like https://cs.brown.edu/~spr/codebubbles/
The future of programming looks toward AI-assisted development, low-code/no-code tools, and more collaborative platforms—making software creation faster, smarter, and more accessible to everyone.
https://toolong.link/v?w=8pTEmbeENF4&l=en
Thank you! I've had the bit starting at 22:00 stuck in my head for the past decade but I could never remember which tech talk it came from.
In case, like me, you didn't know who Bret Victor is,
"...Victor worked as a human interface inventor at Apple Inc. from 2007 until 2011." [1]
[1] https://en.wikipedia.org/wiki/Bret_Victor
He's actually more well known for the talks he's given and demos he's created since then. Here are a few:
• Inventing on Principle (https://vimeo.com/906418692) / (https://news.ycombinator.com/item?id=3591298)
• Up and Down the Ladder of Abstraction (https://worrydream.com/LadderOfAbstraction/)
• Learnable Programming (https://worrydream.com/LearnableProgramming/) / (https://news.ycombinator.com/item?id=4577133)
• Media for Thinking the Unthinkable (https://worrydream.com/MediaForThinkingTheUnthinkable/)
Or you could just check his website: https://worrydream.com/
He was already inspirational before that; check out Magic Ink. Because Apple won't let him share his work for that period, he isn't known for it; it's sort of like a gap in the geological record.
Loosely related is "Stop Writing Dead Programs" https://www.youtube.com/watch?v=8Ab3ArE8W3s
My favorite Bret Victor talk ever is "Drawing dynamic visualizations" [1], which made me try to reverse engineer [2] the demonstrated tool that he sadly never released.
[1]: https://youtu.be/ef2jpjTEB5U?si=S7sYRIDJKbdiwYml
[2]: https://youtube.com/playlist?list=PLfGbKGqfmpEJofmpKra57N0FT...
Call me grumpy and sleep deprived, but every year I look at this talk again, and every year I wonder... "now what?" What am I supposed to do, as a programmer, to change this sad state of things?
Start the n-th "visual" or "image based" programming language (hoping to at least make _different_ mistakes than the ones that doomed Smalltalk and all the other 'assemble boxes to make a program' things)?
Start an OS, hoping to get a "hello world" running in QEMU after a year or two of programming in my sparse free time?
Ask an LLM to write all that, because that would be so cool?
Become a millionaire selling supplements, and fund a group of smart programmers to do it for me?
Honest question. Once you've seen this "classic" talk ("classic" in the sense that it is now old enough to work in some countries), what did you start doing? What did you stop doing? What did you change?
You could start a new project or contribute to an existing one. You could try out other people's projects and write about what you learned. You could write about what you learned from your own projects. You could give a talk that starts with a killer demo. You could try to find work that improves the situation, however slightly, instead of worsening it. You could sharpen your skills so that when you have more spare time you can make faster progress.
> Call me grumpy and sleep deprived, but every year I look at this talk again, and every year I wonder... "now what?" What am I supposed to do, as a programmer, to change this sad state of things?
That depends on your goals. If you are building systems to sell (or for production), then you are bound by the business model (platform vs. library) and the use cases (to make money). Otherwise, you are mainly limited by time.
To think more realistically about the reality you have to work with, take a look at https://www.youtube.com/watch?v=Cum5uN2634o on the types of (software) systems and how they decay, then decide what you would like to simplify and what you are willing to invest. If you want to properly fix stuff, you unfortunately often have to first properly (formally) specify the current system(s) (the design space) to use as a reference (for tests, etc.) for any (partial) replacement, improvement, or extension systems.
What these types of lectures usually skip over (as the essentials) are the involved complexity, the solution trade-offs, and interoperability for meaningful use cases with current hw/sw/tools.
Bret Victor speaks so idealistically it's difficult to disagree with his vision, but in reality he's a radicalized, scrappy cult leader. His ideas sound super cool but they're impractical - that's why nobody can make them work. We're delusional for worshiping him.
https://christophlocher.com/notes/ethnographic-research-on-d...
A cult is usually what it takes to turn impractical ideas into practical ones. This link is great, thanks!
This is one of my favourite talks ever! Glad to see it here (probably again).
Also, Erlang (non-explicitly) mentioned!
Also, I'm super glad we never got those "APIs" he was talking about. What a horrid thought.
The future of programming points toward AI-assisted development, low-code/no-code platforms, and more efficient, collaborative tools—making software creation faster, smarter, and more accessible to everyone.
I love Bret Victor and believe he has some very important things to say about design (UI design, language design and general design) but a lot of his concepts don't scale or abstract as well as he seems to be implying (ironic because he has a full essay on "The Ladder of Abstraction" [0]).
He makes some keen observations about how tooling in certain areas (especially front end design) is geared towards programmers rather than visual GUI tools, and tries to relate that back to a more general point about getting intuition for code, but I think this is only really applicable when there is a visual metaphor for the concept that there is an intuition to be gotten about.
To that end, rather than "programming not having progressed", a better realisation of his goals would be better documentation, interactive explainers, and more tooling for editing/developing/profiling for whatever use case you need - not, as he seems to imply, that all languages are naively missing out on the obvious future of all programming (which I don't think is an unfair inference from the featured video, where he presents all programming as if it's still the 1970s).
He does put his money where his mouth is, creating interactive essays and explainers that put his preaching into practice [1] which again are very good for those specific concepts but don't abstract to all education.
Similarly he has Dynamicland [2] which aims to be an educational hacker space type place to explore other means of programming, input etc. It's a _fascinating_ experiment and there are plenty of interesting takeaways, but it still doesn't convince me that the concepts he's espousing are the future of programming. A much better way to teach kids how computers work and how to instruct them? Sure. Am I going to be writing apps using bits of paper in 2050? Probably not.
An interesting point of comparison would be the Ken Iverson "notation as a tool of thought" which also tries to tackle the notion of programming being cumbersome and unintuitive, but comes at it very much from the mathematical, problem solving angle rather than the visual design angle. [3]
[0] https://worrydream.com/LadderOfAbstraction/
[1] https://worrydream.com/KillMath/
[2] https://dynamicland.org/
[3] https://www.jsoftware.com/papers/tot.htm
Ideas that scale don't scale until they do. The Macintosh didn't come out until people had been using WIMP GUIs for 10 years. People tried to build flying machines for centuries before the Wright Brothers figured out how to control one.
The solution to seeing more Bret Victor-ish tooling is for people to rediscover how to build the kind of apps that were commonplace on the desktop but which have become a very rare art in the cloud era.
Direct manipulation of objects in a shared workspace, instant undo/redo, trivial batch editing, easy duplication and backup... all things you can't do with your average SaaS, and which most developers would revolt over if they had to do their own work without them.
Probably my favourite tech talk of all time. I did at least read the actor model paper! (Though the 1973 one doesn't say much; you want the one with Baker, "Laws for Communicating Sequential Processes".)
I still don't know what he means about not liking APIs though. "Communicating with Aliens", what insight am I missing?
When two humans want to talk but don't speak a shared language, if they spend enough time together, they will figure out how to communicate eventually.
But when two computers want to talk to each other and don't speak a "shared language" (i.e., the client specifically must conform to the server's "language" - it's very one-sided in that sense), then no amount of time will allow them to learn one another's rules or settle on a shared communication contract without a human programmer getting involved.
There are ML architectures that can do that. The two halves of an autoencoder learn a “shared language” that allows them to communicate through a bottleneck.
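A toy sketch of that idea - a linear autoencoder in plain NumPy, with arbitrary data and sizes of my own choosing: the 2-dimensional bottleneck code is a "shared language" neither half was given in advance; it emerges only because the encoder and decoder are trained against each other.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 8))                # toy "messages" in 8 dimensions

    W_enc = rng.normal(scale=0.1, size=(8, 2))   # encoder: 8 -> 2 bottleneck
    W_dec = rng.normal(scale=0.1, size=(2, 8))   # decoder: 2 -> 8

    lr = 0.05
    for step in range(2000):
        Z = X @ W_enc                            # the learned "shared language"
        err = Z @ W_dec - X                      # reconstruction error
        # gradient descent on mean squared reconstruction error
        grad_dec = Z.T @ err / len(X)
        grad_enc = X.T @ (err @ W_dec.T) / len(X)
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc

    print("reconstruction MSE:", float(np.mean((X @ W_enc @ W_dec - X) ** 2)))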
does that mean things like GraphQL will make a comeback in the AI world?
since with GraphQL an agent / AI can probe, gradually, what information another program can give, vs. a finite set of interfaces in REST?
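GraphQL's built-in introspection query is exactly that kind of probing: a client (human or agent) can ask the server to describe its own schema. A minimal sketch using `requests` against a hypothetical endpoint; the introspection query itself is standard GraphQL.

    import requests

    # Standard GraphQL introspection: ask the server what it can answer.
    INTROSPECTION_QUERY = """
    {
      __schema {
        queryType { name }
        types { name kind description }
      }
    }
    """

    # Hypothetical endpoint; any spec-compliant GraphQL server answers this.
    resp = requests.post(
        "https://api.example.com/graphql",
        json={"query": INTROSPECTION_QUERY},
        timeout=10,
    )
    for t in resp.json()["data"]["__schema"]["types"]:
        print(t["kind"], t["name"])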
I like this guy. His work! But it seems like everything he did is from 10+ years ago. Where is he now?!?!
He's around! You can see his current work at https://worrydream.com. He's mostly been working on Dynamicland (https://dynamicland.org). He'll also occasionally post on Bluesky (https://bsky.app/profile/worrydream.com)
Doing this kind of stuff,
https://www.youtube.com/watch?v=7wa3nm0qcfM
https://worrydream.com
had the privilege to be there in person. was magical live
Look at the big brain on Bret!
The biggest wish I have is to one day meet maestro. Greatest living mind in my opinion.
My unpopular opinion is that if we had just done a lot of the stuff Bret has been talking about for 10 years -- investing in better developer tooling -- we could have realized productivity gains better than what AI provides, without having to spin up massive data centers. Unfortunately, "dev tools" don't get funding today unless they're "AI dev tools".
Agreed, but: I know a couple of players in the "Enterprise Low-Code" space, who have invested heavily in deeply integrated development environments (with a capital I) and the right abstractions. They are all struggling with AI adoption as their systems "don't speak text". LLMs are great at grokking text based programming but not much else.
As someone who recently started looking into that space: that problem seems to be being tackled via agents and MCP tooling - meaning Fusion, Workato, Boomi, and similar.
To me, enterprise low code feels like the latest iteration of the impetus that birthed COBOL: the idea that we need to build tools for these business people because the high-octane stuff is too confusing for them. But they are going about it the wrong way; we shouldn't kiddie-proof our dev tools to make them understandable to mere mortals, but instead make our dev tools understandable enough that devs don't have to be geniuses to use them. Given the right tools, I've seen middle schoolers code sophisticated distributed algorithms that grad students struggle with, so I'm very skeptical that this dilemma isn't self-imposed.
The thing about LLMs being only good with text is it's a self-fulfilling prophecy. We started writing text in a buffer because it was all we could do. Then we built tools to make that easier so all the tooling was text based. Then we produced a mountain of text-based code. Then we trained the AI on the text because that's what we had enough of to make it work, so of course that's what it's good at. Generative AI also seems to be good at art, because we have enough of that lying around to train on as well.
This is a repeat of what Seymour Papert realized when computers were introduced to classrooms around the 80s: instead of using the full interactive and multimodal capabilities of computers to teach in dynamic ways, teachers were using them just as "digital chalkboards" to teach the same topics in the same ways they had before. Why? Because that's what all the lessons were optimized for, because chalkboards were the tool that was there, because a desk, a ruler, paper, and pencil were all students had. So the lessons focused around what students could express on paper and what teachers could express on a chalk board (mostly times tables and 2d geometry).
And that's what I mean by "investment", because it's going to take a lot more than a VC writing a check to explore that design space. You've really gotta uproot the entire tree and plant a new one if you want to see what would have grown if we weren't just limited to text buffers from the start. The best we can get is "enterprise low code" because every effort has to come with an expected ROI in 18 months, so the best story anyone can sell to convince people to open their wallets is "these corpos will probably buy our thing".
Instead of this we got AI slop that is literally everywhere you look.