> The structural correspondence is the point.
> The choice of @form(vec) here is itself a real design decision, not an arbitrary one.
> The point of the surface isn’t completeness — it’s that every distinct kind of structural commitment a unit can make has a syntactic home. ... Each commitment is declared, not inferred from code.
> type is pure shape. A record. No lifecycle, no flow, no state machine, no bus participation.
And so on and so forth. Every paragraph, every sentence was transparently written by an LLM (sounds like Claude to me). It's difficult to get interested when the humans involved couldn't even be bothered to write down their own thoughts and make them coherent (and much of this text isn't coherent, though it appears so at a glance).
As for the locus concept (https://aperio-lang.github.io/aperio/concepts/the-locus.html), the entire page reads like one of those LLM fever dreams in which it can't stop praising an idea you've pasted into the chat window. It's a kitchen sink primitive that codifies a specific architectural pattern. It's a program structure that probably fits the kind of problem the author has been seeing a lot lately.
Read through the intro and didn't understand a thing. Either I am dumb or this is dumb or both.
Can we not post LLM generated prose on topics as subtle as programming language design? Am I alone when I see this type of stuff and immediately react in anger?
These things are not good technical writers - so why do people keep doing this? It is not possible to take a proposal seriously from a scientific perspective if the arguments are written by LLMs. I'm sorry, but it's just terrible writing and terrible argumentation.
> Every language designed before 2023 was optimized for a single tradeoff: minimize friction between human cognitive capacity and machine execution. Assembly to C to managed runtimes to DSLs were different points on the same line. In an LLM-driven workflow, those languages don’t get cheaper to use — they get more expensive.
What does this mean? Why do they get more expensive? The claim is that "the cost just hides in the LLM's token count, its retry rate, and the latency it eats per turn" -- but what is the cost? Am I supposed to infer what the fuck you are talking about?
Why don't you send the prompt for your programming language instead?
Also, the concept of "locus" has already been invented, it goes by the name of "entity" in the syndicated actor model: https://syndicate-lang.org/
I don't want to be seen as a hater of LLM-driven language design -- totally go for it! I'm not sure whether this language is by OP (if not, ignore this), but my advice is to take some time to sharpen up the writing and argumentation, or else you risk not being taken seriously.
I feel dirty using the em-dash as the discriminator between human effort and non-effort. But this text sure has quite a few.
I've been using the em-dash for years, having started well before the dawn of LLMs -- needless to say, the fact that it is now used as a telltale sign of LLM writing doesn't gladden me one bit. Also because I value concise writing achieved by picking the correct grammatical elements -- like the em-dash.
Seems interesting, but the GitHub repos seem to be private. The GitHub links from the docs don't work.
I personally wish we could make a language LLMs would stay away from, rather than making things easier for them...
This seems quite novel? I haven't encountered the concept of a "locus" like this in code before.