I think I agree (though I think about this maybe one level higher). I wrote about this a while ago in https://yoyo-code.com/programming-breakthroughs-we-need/#edi... .
One interesting thing I got in replies is the Unison language (content-addressed functions; a function is defined by its AST). Also, I recommend checking out the Dion language demo (an experimental project that stores the program as an AST).
In general I think there's a missing piece between text and storage. Structural editing is likely a dead end and writing text seems superior, but text as a storage format is just fundamentally problematic.
I think we need a good bridge that allows editing via text but stores code like a structured database (I'd go as far as to say a relational database, maybe). This would unlock a lot of IDE-like features for simple programmatic usage, or allow manipulating language semantics in some interesting ways, but the challenge is of course how to keep the mapping to the textual input in shape.
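As a toy sketch of what "edit as text, store as a database" could look like: the snippet below parses Python source with the stdlib ast module and loads every node into SQLite, after which an IDE-style question becomes a plain SQL query. The single-table schema is an invented illustration, not a real design.

```python
# Toy sketch: parse source text, store the AST as rows in a relational DB.
# The schema (id, parent, kind) is an assumption made up for this example.
import ast
import sqlite3

def store_ast(source: str) -> sqlite3.Connection:
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE nodes (id INTEGER PRIMARY KEY, parent INTEGER, kind TEXT)")
    tree = ast.parse(source)
    counter = iter(range(1, 1_000_000))

    def walk(node, parent_id):
        node_id = next(counter)
        conn.execute("INSERT INTO nodes VALUES (?, ?, ?)",
                     (node_id, parent_id, type(node).__name__))
        for child in ast.iter_child_nodes(node):
            walk(child, node_id)

    walk(tree, None)
    return conn

conn = store_ast("def f(x):\n    return x + 1\n")
# An "IDE feature" becomes a SQL query: how many function definitions exist?
count = conn.execute("SELECT COUNT(*) FROM nodes WHERE kind = 'FunctionDef'").fetchone()[0]
print(count)  # 1
```

A real bridge would also have to store node text and positions and keep them in sync with edits, which is exactly the hard mapping problem described above.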
Why would structural editing be a dead end? It has nothing to do with the storage format. At least the meaning of the term I'm familiar with is about how you navigate and manipulate semantic units of code instead of manipulating its characters: for example, pressing shortcut keys to invert the nesting of AST nodes, wrap an expression inside another, or change the order of expressions, all at the press of a button or key combo. I think you might be referring to something else, or a different definition of the term.
I'm referring to UIs that only allow structural editing and usually store only the structural shape of the program (e.g. no whitespace or indentation). I think at this point nobody uses them for programming; they're pretty frustrating to use because they don't allow edits that break the semantic text structure too much.
I guess the most used one is the styles editor in Chrome DevTools, and that one is only really useful for small tweaks; even just adding new properties is already a pretty frustrating experience.
[edit] Otherwise I agree that structural editing a la IDE shortcuts is useful; I use that a lot.
Structural diff tools like difftastic[1] are a good middle ground and still underexplored IMO.
[1] https://github.com/Wilfred/difftastic
IntelliJ diffs are also really good; they're somewhat semi-structural, I'd say. Not going as far as difftastic, it seems (but I haven't used that one).
Come to the BABLR side. We have cookies!
In all seriousness this is being done. By me.
I would say structural editing is not a dead end, because, as you mention, projects like Unison and Smalltalk show us that storing structures is compatible with having syntax.
The real problem is that we need a common way of storing parse tree structures so that we can build a semantic editor that works on the syntax of many programming languages.
I think neither Unison nor Smalltalk uses structural editing, though.
[edit] At the level of code within a function, at least.
>I definitely reject the "git compatible" approach
If your version control system is not compatible with GitHub, it will be dead on arrival. The value of allowing people to gradually adopt a new solution cannot be overstated. There is also value in being compatible with existing git integrations and with scripts in projects' build systems.
Based on reading this, I don't see anything that would prevent also tracking a repo managed by this database with Git (and therefore GitHub). I think the "compatible" bit being rejected is more the requirement to think in terms of Git concepts everywhere.
Curious what the author thinks though, looks like it's posted by them.
Technically, exporting changes either way is not a challenge. It only becomes difficult if we have multiple gateways for some reason.
One way to do it is to use the new system for the messy part and git/GitHub for "publication".
A system as described could be forwards compatible with git without being backwards compatible with it. In other words, let people migrate easily, but don't force the new system to have all the same flaws as the old.
What issues do you see in git's data model that make you abandon it as a wire format for syncing?
I wouldn't say I want to abandon anything git is doing so much as evolve it. Objects need to be able to contain syntax tree nodes, and patches need to be able to target changes at particular locations in a syntax tree instead of just by line/column.
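To make "patches that target syntax-tree locations" concrete, here is a toy sketch; the dict-based tree encoding and index-path addressing are invented for illustration only, not a proposed format.

```python
# Hypothetical sketch of a patch addressed by a syntax-tree path rather than
# by line/column. The tree and patch representations are made up for this demo.
def apply_patch(tree, path, new_node):
    """Replace the child at `path` (a list of child indices) with new_node."""
    node = tree
    for index in path[:-1]:
        node = node["children"][index]
    node["children"][path[-1]] = new_node
    return tree

tree = {"kind": "call", "children": [
    {"kind": "name", "children": [], "text": "print"},
    {"kind": "string", "children": [], "text": "'hi'"},
]}
# Renaming the callee survives any reformatting of the file, because the
# patch targets the node itself, not a byte offset or line number.
apply_patch(tree, [0], {"kind": "name", "children": [], "text": "log"})
print(tree["children"][0]["text"])  # log
```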
An AST is a tree, just as much as the directory structure currently encoded in git.
It shouldn't be hard to build a bijective mapping between a file system and an AST.
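A minimal round-trip sketch of that mapping, on an invented toy tree: each node becomes a directory holding a "kind" file, and children become numbered subdirectories. A real mapping would also need to handle node text, ordering, and name escaping.

```python
# Toy filesystem <-> AST bijection: node -> directory with a "kind" file,
# children -> numbered subdirectories. Names and layout are assumptions.
import os
import tempfile

def dump(node, path):
    os.makedirs(path, exist_ok=True)
    with open(os.path.join(path, "kind"), "w") as f:
        f.write(node["kind"])
    for i, child in enumerate(node["children"]):
        dump(child, os.path.join(path, str(i)))

def load(path):
    with open(os.path.join(path, "kind")) as f:
        node = {"kind": f.read(), "children": []}
    i = 0
    while os.path.isdir(os.path.join(path, str(i))):
        node["children"].append(load(os.path.join(path, str(i))))
        i += 1
    return node

tree = {"kind": "module", "children": [{"kind": "func", "children": []}]}
with tempfile.TemporaryDirectory() as root:
    dump(tree, root)
    print(load(root) == tree)  # True
```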
No we don't.
And you can build nearly any VCS of your dreams while still using Git as the storage backend, as it is a database of linked snapshots plus metadata. Bonus benefit: it will work with existing tooling.
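The "git as a database" point is easy to demonstrate: a blob's object ID is just the SHA-1 of a short header plus the content, which a few lines of stdlib Python can reproduce for the blob case.

```python
# Reimplementation of `git hash-object` for blobs: git's content-addressed
# store keys a blob by sha1(b"blob <size>\0" + content).
import hashlib

def git_blob_id(content: bytes) -> str:
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# Matches `echo hello | git hash-object --stdin`
print(git_blob_id(b"hello\n"))
```

Trees and commits are stored the same way, which is why arbitrary new structures (including syntax-tree encodings) can in principle ride on top of the existing object store.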
The whole article is "I don't know how git works, let's make something from scratch"
A syntax tree node does not fit into a git object. Too many children. This doesn't mean we shouldn't keep everything that's great about git in a next-gen solution, but it does mean that we'll have to build some new tools to experiment with features like semantic patching and merging.
Also I checked the author out and can confirm that they know how git works in detail.
Why do you think it has too many children? If we are talking direct descendants, I have seen way larger directories in file systems (git-managed) than I've ever seen in an AST.
I don't think there's a limit in git. The structure might be a bit deep for git and thus some things might be unoptimized, but the shape is the same.
Tree.
I've had this idea too, and I think about it every time I'm on a PR with lots of whitespace / non-functional noise: how nice it would be if source code weren't just text and I could be looking at a cleaner, higher-level diff instead. I think you have to go higher than the AST, though; it should at least be language-aware.
(Author) In my current codebase, I preserve the whitespace nodes, though whitespace changes do not affect the other nodes. My first attempt to recover whitespace algorithmically didn't exactly fail; rather, I was unable to verify it was OK enough. We clang-format or gofmt the entire thing anyway, and whitespace changes are mostly noise, but I have not found a 100%-sure approach yet.
You can build a mergetool (https://git-scm.com/docs/git-mergetool)
Some languages are unfortunately whitespace-sensitive, so a generic VCS cannot discard whitespace at all. But maybe the diffing tools themselves could be made language-aware and hide non-meaningful changes.
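A sketch of the second idea: once a language-aware tool has a parser, "hide non-meaningful changes" can be as crude as comparing parse trees instead of text. Python is itself whitespace-sensitive, but its parser already accounts for that, so AST equality still separates cosmetic edits from real ones.

```python
# Crude language-aware change detector: two sources "mean the same" if they
# parse to equal ASTs. Comments and insignificant whitespace vanish in the AST.
import ast

def same_meaning(a: str, b: str) -> bool:
    return ast.dump(ast.parse(a)) == ast.dump(ast.parse(b))

before = "x=1+2"
after = "x = 1 + 2   # reformatted"
print(same_meaning(before, after))        # True: formatting/comment noise only
print(same_meaning(before, "x = 1 + 3"))  # False: a real change
```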
One fundamental, deal-breaking problem with structure-aware version control is that your VCS now needs to know all the versions of all the languages you're writing in. It gets non-trivial fast.
Trustfall seems really promising for querying files as if they were a db
https://github.com/obi1kenobi/trustfall
A big challenge will be the unfamiliarity of such a new system. Many people have found that coding agents work really well with terminals, Unix tooling, and file systems. It's proven tech.
Whereas doing DB queries to navigate code would be quite unfamiliar.
> The monorepo problem: git has difficulty dividing the codebase into modules and joining them back
Can anyone explain this one? I use monorepos everyday and although tools like precommit can get a bit messy, I've never found git itself to be the issue?
Based on my personal experience, big-corp monorepos have all the features of a black hole: they try to suck in all existing code (vendor it), and once some code starts living in a monorepo, there is no way to separate it, as it becomes entangled with the entire thing. There is no way to break it back out, so to speak.
This subject deserves a study of its own, but big-big-tech tends to use things other than git.
> there is no way to separate it
There is
git subtree --help
May get complicated at the edges, as files move across boundaries. It's a hard problem. And for some subset of those problems, subtree does give a way to split and join.
That black hole behavior is a result of corporate processes though.
Not a result of git.
Business continuity (no uncontrolled external dependencies) and corporate security teams wanting to be able to scan everything. Also wanting to update everyone's dependencies when they backport something.
Once you have those requirements, most of the benefits of multi-repo / round-tripping over releases just don't hold anymore.
The entanglement can be stronger, but if teams build clean APIs, it's no harder than removing it from a cluster of individual repositories. That might be a pretty load-bearing "if", though.
Let's say I want to fork one of your monorepo modules and maintain a version for myself. It's hard: I might have to fork 20 modules, and 19 will be unwanted. They'll be deleted, go stale, or I'll have to do pointless work to keep them up to date. Either way, the fork-and-merge model that drives OSS value creation is damaged when what should be small, lightweight, focused repos are permanently chained to the weight of arbitrary other code, which, from the perspective of the one thing I want to work on, is dead weight.
You can also tell that monorepos don't scale because, if you keep consolidating over many generations, eventually all the code in the world would be in just one or two repos. These repos would then be so massive that breaking off a little independent piece to work on would be crucial to making any progress.
That's why the alternative to monorepos is multirepos. Git handles multirepos with its submodules feature. Submodules are a great idea in theory, offering git repos the same level of composability in your dependencies that a modern package manager offers. But submodules are so awful in practice that people cram all their code into one repo just to avoid having to use the submodule feature for the exact thing it was meant for...
Hm, I never had many issues with submodules. They seem to be something that, once understood, one can use just fine; they might seem scary at first, though, and one needs to know that a repo uses submodules at all.
The author doesn't know how to use git or how git works.
If he knew how to use it, he'd be annoyed at some edge cases.
If he knew how it works, he'd know the storage subsystem is flexible enough to implement any kind of new VCS on top of it. The storage format doesn't need to change to improve or replace the user-facing part.
Joe Armstrong had a beautiful LEGO-vs-Meccano metaphor. Both are cool and somewhat similar in their basic idea, but you cannot do with LEGO what you can do with Meccano and vice versa. Also, you cannot mix them.
But you could create some parts that enable you to combine them easily. Which is what you can do with software: write an adapter of sorts.
Sure you can. Hot glue, E6000, duct tape. This is to say, git's pack format has its shortcomings.
For example, there is no easy way to create a "localized" branch: a branch that is only allowed to change files in "/backend", such that the agent doesn't randomly modify files elsewhere. This way you could let an agent loose on some subpart of the monorepo and not worry that it will break something unrelated.
You can do all kinds of workarounds and sandboxes, but it would be nice for git to support more modularity.
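One lightweight workaround, sketched below under the assumption that it runs from a pre-commit hook: a pure policy check over the list of changed paths. The paths and prefix here are hard-coded examples; a real hook would feed it the output of `git diff --cached --name-only`.

```python
# Sketch of a "localized branch" policy for a pre-commit hook: reject any
# staged change outside an allowed subtree. Example inputs are made up.
def out_of_scope(changed_paths, allowed_prefix):
    """Return the changed paths that fall outside the allowed subtree."""
    return [p for p in changed_paths if not p.startswith(allowed_prefix)]

violations = out_of_scope(
    ["backend/api.py", "frontend/app.ts", "backend/db.py"],
    "backend/",
)
print(violations)  # ['frontend/app.ts']
```

This is exactly the kind of policy that has to live outside git today, which is the point of the comment above: git itself has no notion of a branch scoped to a subtree.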
Sounds like a submodule with restrictions for push access managed on the repo level of the submodule.
Somebody call the VisualAge and ClearCase folks and let them know their time has come again!
https://wiki.c2.com/?VisualAge
> clear case
Please no.
To me, git works. And LLMs understand that, unlike some yet-to-come-tool.
If you create a new tool for version control, go for it. Then see how it fares in benchmarks (for end-to-end tools) or by vox populi: whether people actually use your new tool/skill/workflow.
I’ve recently been thinking about this too, here is my idea: https://clintonboys.com/projects/lit/
Using the prompt as the source of truth is not reliable, because outputs for the same prompt may vary.
Of course. But if this is the direction we're going in, we need to try to do something, no? Did you read the post I linked?
Fascinating! https://github.com/clintonboys/lit-demo-crud isn't deep enough to really get a feel for how well this would work in practice.
> Like everyone else with a keyboard, a brain and a vested interest in their future as a programmer, I’ve been using AI agents to write a lot of code recently.
alrighty then..
Talk is cheap. Show me the code.
No, we don’t. The genius of git is in its simplicity (while also being pretty damn complicated too).
Missing 4 out of 5 parts. The attempt to wrest control away from the open source community doesn't even have the effort behind it to keep going.
The gist was created 1 hour before your comment.
Yes, parts will keep coming, this weekend and the next week.
Definitely agree that Git is a mediocre-at-best VCS tool. This has always been the case, but LLMs are finally forcing the issue. It's a shame a whole generation of programmers has only used Git/GitHub and thinks it's good.
Monorepo and large binary file support is TheWay. A good cross-platform virtual file system (VFS) is necessary; a good open source one doesn’t exist today.
Ideally it comes with a copy-on-write system for cross-repo blob caching. But I suppose that’s optional. It lets you commit toolchains for open source projects which is a dream of mine.
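The blob-caching idea can be sketched in a few lines: store each blob once, keyed by content hash, and let every repo hold only references. Class and variable names here are invented; real systems would add filesystem-level copy-on-write (reflinks) on top of the dedup shown.

```python
# Minimal sketch of cross-repo blob caching: content-addressed storage means
# two repos committing the same multi-GB toolchain share a single copy.
import hashlib

class BlobCache:
    def __init__(self):
        self.store = {}  # hash -> bytes, shared across all repos

    def put(self, content: bytes) -> str:
        digest = hashlib.sha256(content).hexdigest()
        self.store.setdefault(digest, content)  # dedup: stored at most once
        return digest

cache = BlobCache()
toolchain = b"<multi-GB compiler binary>"
ref_a = cache.put(toolchain)  # repo A commits the toolchain
ref_b = cache.put(toolchain)  # repo B commits the same toolchain
print(ref_a == ref_b, len(cache.store))  # True 1
```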
Not sure I agree that LSP-like features need to be built in. That feels wrong; that's just a layer on top.
I do think that agent prompts/plans/summaries need to be a first-class part of commits/merges. Not sure of the full set of features required here.
> It’s a shame a whole generation programmers has only used Git/GitHub and think it’s good.
Well... I used SVN before that and it was way worse.
Clearly the poster has never encountered cvs or cvs-next.
And clearly OP hasn't heard of VSS...
RCS, anyone?
Continuing the theme: a new starter at my place (with about a decade of varied experience, including at an international financial-services information player whose name is well known) had never used git, or indeed any distributed, modern source control system.
HN is a tiny bubble. The majority of the world's software engineers are barely using source control, don't do code reviews, don't have continuous build systems, don't have configuration controlled release versions, don't do almost anything that most of HN's visitors think are the basic table stakes just to conduct software engineering.
> It lets you commit toolchains
Wasn't this done by IBM in the past? Rational Rose or something?
Clearcase