> Mutability
> By default, variables are mutable. You can enable Immutable by Default mode using a directive.
> //> immutable-by-default
> var x = 10;
> // x = 20; // Error: x is immutable
> var mut y = 10;
> y = 20; // OK
Wait, but this means that if I'm reading somebody's code, I won't know whether variables are mutable unless I read the whole file looking for such a directive. Imagine if someone even defined custom directives; that wouldn't make the code any more readable.
Given an option that is configurable, why would the default setting be the one that increases the probability of errors?
For some niches the answer is "because the convenience is worth it" (e.g. game jams). But I personally think the error-prone option should be opt-in for such cases.
Or to be blunt: correctness should not be opt-in. It should be opt-out.
I have considered such a flag for my future language, which I named #explode-randomly-at-runtime ;)
> Or to be blunt: correctness should not be opt-in. It should be opt-out.
One can perfectly well write correct programs using mutable variables. It's not a security feature; it's a design decision.
That being said, I agree with you that the author should decide whether Zen-C should be mutable or immutable by default, with special syntax for the other case. As it is now, it's confusing when reading code.
Yeah, immutability should probably use a `let` keyword and compiler analysis should enforce value semantics on those declarations.
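Something like this, say (hypothetical syntax, not what Zen-C currently has):

    let x = 10;      // immutable binding, hypothetical `let` syntax
    var y = 10;      // mutable, as Zen-C has today
    // x = 20;       // would be a compile-time error
    y = 20;          // OK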
It's odd that the async/await syntax _exclusively_ uses threads under the hood. I guess it makes for a straightforward implementation, but in every language I've seen the point of async/await is to use an event loop/cooperative multitasking.
That's a very nice project.
List of remarks:
> var ints: int[5] = {1, 2, 3, 4, 5};
> var zeros: [int; 5]; // Zero-initialized
The zero-initialized array is not intuitive, IMO.
> // Bitfields
If it's deterministically packed.
> Tagged unions
Same, is the memory layout deterministic (and optimized)?
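For reference, the conventional C lowering is an explicit discriminant plus a union, along these lines (a sketch, not Zen-C's actual output):

    #include <stdio.h>

    /* Sketch of a conventional tagged-union lowering (not Zen-C's
       actual generated code): an explicit discriminant plus a union
       sized to the largest variant. */
    typedef enum { SHAPE_CIRCLE, SHAPE_RECT } shape_tag;

    typedef struct {
        shape_tag tag;               /* discriminant */
        union {
            struct { double r; } circle;
            struct { double w, h; } rect;
        } payload;                   /* size = max of the variants */
    } shape;

    int main(void) {
        shape s = { .tag = SHAPE_CIRCLE, .payload = { .circle = { 1.5 } } };
        if (s.tag == SHAPE_CIRCLE)
            printf("circle r=%.1f, sizeof(shape)=%zu\n",
                   s.payload.circle.r, sizeof(shape));
        return 0;
    }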
> 2 | 3 => print("Two or Three")
Any reason not to use "2 || 3"?
> Traits
What if I want to remove or override the "trait Drawing for Circle" because the original implementation doesn't fit my constraints?
As long as traits are not required to be in a totally different module than the struct, I will likely never welcome them in a programming language.
An interesting bit to me is that it compiles to (apparently) readable C; I'm not sure how one would use that to their advantage.
I am not too familiar with C - is the idea that it's easier to incrementally have some parts of your codebase in this language, with other parts being in regular C?
One benefit is that a lot of tooling, e.g. for verification, is built around C.
Another is that it only has a C runtime requirement, so there's no weird runtime stuff to implement if, say, you want to run on bare metal. You could output the C code and compile it for your target.
I think so. The biggest hurdle with new languages is that you are cut off from a third-party library ecosystem. Being compatible with C third-party libraries is a big win.
Makes it easy to "try before you buy", too. If you decide it's not for you, you can "step out" and keep the generated C code and go from there.
This isn't a very sane plan. The ~300 LOC example mini_grep (https://github.com/z-libs/Zen-C/blob/main/examples/tools/min...) compiles to a ~3.3k LOC monstrosity (https://pastebin.com/raw/6FBSpt1z). It's easier to rewrite the whole thing than to work from the generated code.
At least for now, generated code shouldn't be considered something you're ever supposed to interact with.
Very good point that I never considered! Thanks.
Syntax aside, how does this compare to Nim? Nim does something similar, and I think Crystal does as well? Not entirely sure about Crystal, tbh. I guess Nim and Vala, since I believe both transpile to C, so you really get "like C" output from both.
I was also going to mention this reminds me of Vala, which I haven't seen or heard from in 10+ years.
Man, I haven't heard anything about Vala in ages. Is it still actively developed/used? How is it?
Yes, it is actively being developed.
It's quite easy to make apps with it, and GNOME Builder makes it really easy to package them for distribution (it creates a proper Flatpak environment, no need to write all the boilerplate). It's quite nice to work with and get stuff done. The GTK docs and the awful deprecation culture (deprecating functions without any real alternative) are still a PITA, though.
So, the point of this language is to be able to write code with high productivity, but with the benefit of compiling to a low-level language? Overall it seems like the language repeats what Zig does, including the C ABI support, manual memory management with additional ergonomics, and a comptime feature. The biggest difference that comes to mind is that the creator of Zen-C states that it can allow for the productivity of a high-level language.
It has stringly typed macros. It's not comparable to Zig's comptime, even if it calls it comptime:

    fn main() {
        comptime {
            var N = 20;
            var fib: long[20];
            fib[0] = (long)0;
            fib[1] = (long)1;
            for var i=2; i<N; i+=1 {
                fib[i] = fib[i-1] + fib[i-2];
            }
            printf("// Generated Fibonacci Sequence\n");
            printf("var fibs: int[%d] = [", N);
            for var i=0; i<N; i+=1 {
                printf("%ld", fib[i]);
                if (i < N-1) printf(", ");
            }
            printf("];\n");
        }
        print "Compile-time generated Fibonacci sequence:\n";
        for i in 0..20 {
            print f"fib[{i}] = {fibs[i]}\n";
        }
    }
It just literally outputs characters, not even tokens like Rust's macros, into the compiler's view of the current source file. It has no access to type information, as Zig's does, and can't really be used for any sort of reflection as far as I can tell.
The Zig equivalent of the above comptime block would just be:
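Something along these lines (an untested sketch):

    const std = @import("std");

    // Fibonacci computed entirely at compile time; by the time main()
    // runs, `fibs` is just an ordinary array constant.
    const fibs = blk: {
        var fib: [20]i64 = undefined;
        fib[0] = 0;
        fib[1] = 1;
        var i: usize = 2;
        while (i < fib.len) : (i += 1) {
            fib[i] = fib[i - 1] + fib[i - 2];
        }
        break :blk fib;
    };

    pub fn main() void {
        std.debug.print("Compile-time generated Fibonacci sequence:\n", .{});
        for (fibs, 0..) |f, i| {
            std.debug.print("fib[{d}] = {d}\n", .{ i, f });
        }
    }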
Notice that there's no code generation step; the value is passed seamlessly from compile time to runtime code.
I wonder, how can a programming language have the productivity of a high-level language ("write like a high-level language") if it has manual memory management? This just doesn't add up in my view.
I'm writing my own programming language that tries "Write like a high-level language, run like C.", but it does not have manual memory management. It has reference counting with lightweight borrowing for performance sensitive parts: https://github.com/thomasmueller/bau-lang
Nim is a high-level language as well and compiles to C.
Odin and Jai are others.
Does Odin compile to C? I thought it only uses LLVM as a backend
I am working on mine as well. I think it is very sane to have some activity in this field. I hope we will have high-level, easy-to-write code that is fully optimized with very little effort.
There are going to be lots of languages competing with Rust and Zig. It's a popular, underserved market. They'll all have their unique angle.
There are certainly going to be lots of languages, because with LLMs it's now easier (trivial?) to make one plus a library (case in point: just within the last month ~20 new langs with 20k~100k LOC codebases have been posted here), but I don't really see them competing. Rust and Zig brought actual improvements and are basically replacing use cases that C++/C had, limiting the space available to others.
It has been served for several decades; however, since the late '90s many decided that reducing everything to only C and C++ was the way forward. Now the world is rediscovering that it doesn't have to be like that.
Initial commit was 24h ago, 363 stars, 20 forks already. Man, this goes fast.
The man had been posting a lot about his library before the initial commit. I've been following the guy on LinkedIn.
Could be bots.
It’s not; it’s just how Hacker News works. You’ll see new projects hit 1k-10k stars in a matter of a day. You can have what is, to you, the best project or the best article, but if everyone else doesn’t think so, it’ll always be at the bottom. Some luck is involved too. And I doubt a post upvoted by bots rather than organically is going to live long on the front page.
The stars are on GitHub; they can come from somewhere else, e.g. the author himself buying stars.
This is hella common. Companies have too much money to spend.
Definitely could be, but the dev has been posting updates on Twitter for a while now. It could be just some amount of hype they have built.
Basically C2/C3 but Rust influenced. Missed chance to call it C4.
Is this the Typescript of C ?
What's the performance hit?
18 commits! I hope you keep up with the project, it’s really cool, great work.
On the whole, the language examples seem pretty rational, and I'm especially pleased / shocked by the `loop / repeat 5` examples. I love the idea of having syntax support for "maximum number of iterations", e.g.:
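Something like this, where `try_connect()` and `sleep_ms()` are made-up helpers:

    repeat 5 {
        if (try_connect()) break;
        sleep_ms(100);  // made-up helper; exponential backoff left as an exercise
    }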
...obviously not trying to start any holy wars around exceptions (which don't seem supported) or exponential backoff (or whatever), but I guess I'm kind of shocked that I haven't seen any other languages support what seems like an obvious syntax feature.
I guess you could easily emulate it with `for x in range(3): ...break`, but `repeat 3: ...break` feels a bit more like that `print("-"*80)` feature, but for loops.
Ruby has a similarly intuitive `3.times do ... end` syntax
Why not compile to rust or assembly? C seems like an odd choice.
In fact why not simply write rust to begin with?
We don't all wear programming socks and fantasise about our anime profile pics. C is the patrician's choice.
> C is the patrician's choice.
Is this supposed to be a positive thing? I thought we all wanted to violently murder the patricians.
Regardless, C might be a valid IR. I apologize for being bigoted.
Try it again and see how it goes.
Assembly requires way more work than compiling to, say, C. Clang and GCC do a lot of the heavy lifting regarding optimisation, spilling values to the stack, etc.
Then you're stuck with the C stack, though, and no way to collect garbage.
Really? You can't track and count your pointers in C? Why not?
I have a couple of interpreters I've been poking at: one uses 'musttail' while the other uses a trampoline to get around blowing up the C stack when dispatching the operators. As for GC, the trampoline VM has a full-blown one (with bells and whistles like an arena for short-lived intermediate results, which get pushed/popped by the compiled instructions), while the other (a PEG parser VM) just uses an arena, since the 'evaluation' is short-lived and the output is the only thing that really needs tracking, so it uses reference counting to be (easily) compatible with the Python C-API. No worries about the C stack at all.
I mean, I could have used the C stack as the VM's stack, but then you have to worry about blowing up the stack, not having access (without a bunch of hackery; looking at you, Scheme people) to the values on the stack for GC, and, I imagine, all the other things you'd have issues with. But it's not needed at all: just make your own (or, you know, tail call) and pretend the C one doesn't exist.
And I've started on another VM which does the traditional stack thing, but it's constrained (by the spec) to have a maximum stack depth, so it isn't too much trouble.
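For anyone unfamiliar with the trampoline trick: instead of each opcode handler calling the next handler (growing the C stack), handlers return their successor and a flat loop invokes it. A minimal sketch (not my actual VM code):

    #include <stdio.h>

    typedef struct vm { int pc, acc; } vm;

    /* Handlers return the next handler (wrapped in a struct so a C
       function type can "return itself") instead of calling it, so
       dispatch never deepens the C stack. */
    typedef struct step { struct step (*fn)(vm *); } step;

    static step op_halt(vm *v) { (void)v; return (step){ NULL }; }

    static step op_inc(vm *v) {
        v->acc++;
        v->pc++;
        return (step){ v->pc < 3 ? op_inc : op_halt };
    }

    int main(void) {
        vm v = { 0, 0 };
        step s = { op_inc };
        while (s.fn)                  /* the trampoline: a flat dispatch loop */
            s = s.fn(&v);
        printf("acc = %d\n", v.acc); /* prints: acc = 3 */
        return 0;
    }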
At times people think C is better. See recent discussion about https://sqlite.org/whyc.html
C is best
If I understand the history correctly, it started as a set of C preprocessor macros.
Am I the only one who saw this syntax and immediately thought, "Man, this looks almost identical to Rust with a few slight variations"?
It seems to just be Rust for people who are allergic to using Rust.
It looks like a fun project, but I'm not sure what this adds that would make people actually use it over C, or instead of just going to Rust.
> what this adds
I guess the point is what it subtracts instead, the answer being the borrow-checker.
> answer being the borrow-checker
There is an entire world in Rust where you never have to touch the borrow-checker or lifetimes at all. You can just clone or move everything, or put everything in an Arc (which is what most other languages are doing anyway). It's very easy to not fight the compiler if you don't want to.
Maybe the real fix for Rust (for people who don't want to care) is just a compiler mode where everything is Arc-by-default?
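To illustrate the clone/Arc style (a toy sketch, not from any real codebase):

    use std::sync::Arc;

    // Borrow-checker-free style: share data behind an Arc and clone
    // handles freely instead of juggling references and lifetimes.
    struct Config {
        name: String,
    }

    fn worker(cfg: Arc<Config>) {
        println!("worker sees {}", cfg.name);
    }

    fn main() {
        let cfg = Arc::new(Config { name: "demo".to_string() });
        // Cloning an Arc only bumps a reference count; no lifetimes appear.
        worker(Arc::clone(&cfg));
        worker(Arc::clone(&cfg));
        println!("still usable here: {}", cfg.name);
    }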
So it re-adds manual lifetime checking. Got it.
Maybe it takes the parts of Rust the author likes, but still encourages pointers in high-level operations?
I thought the same and felt it looked really out of place to have I8 and F32 instead of i8 and f32 when so much else looks just like Rust. Especially when the rest of the types are all lower case.
Agreed, that really stood out as a ... questionable design decision, and felt extremely un-ergonomic, which seems to go against the stated goals of the language.
Every language is apparently required to make one specific version of these totally arbitrary choices, like whether to call the keyword function, func, fun, fn, or def. Once they do, it’s a foolish inconsistency with everything else. What if the language supported every syntax?
My immediate thought was it looked a lot like Swift