This depends heavily on what problem domain you're talking about. For example, a DBMS is necessarily going to shuffle a lot of data into and out of memory.
Most databases do almost no memory management at runtime, at least not in any conventional sense. They mostly just DMA disk into and out of a fixed set of buffers. Objects don't have a conventional lifetime.
Along with the sibling comment, microbenchmarks should not be used as authoritative data when the use case is full applications. For that matter, highly optimized Java or Go may be "1 to 4 times as slow as plain C". Fil-C has its merits, but they should be described carefully, just with any technology.
What language do people considering c as an option for a new project consider? Rust is the obvious one we aren't going to discuss because then we won't be able to talk about anything else, Zig is probably almost as well loved and defended, but it isn't actually memory safe, just much easier to be memory safe. As you say, c# and go, also maybe f# and ocaml if we are just writing simple c style stuff none of those would look all that different. Go jhs some ub related to concurrency that people run into, but most of these simple utilities are either single threaded or fine grained parallel which is pretty easy to get right. Julia too maybe?
WASM is a sandbox. It doesn't obviate memory safety measures elsewhere. A program with a buffer overflow running in WASM can still be exploited to do anything that program can do within in WASM sandbox, e.g. disclose information it shouldn't. WASM ensures such a program can't escape its container, but memory safety bugs within a container can still be plenty harmful.
For those, like me, that didn’t know what Fil-C is:
> Fil-C is a fanatically compatible memory-safe implementation of C and C++. Lots of software compiles and runs with Fil-C with zero or minimal changes. All memory safety errors are caught as Fil-C panics. Fil-C achieves this using a combination of concurrent garbage collection and invisible capabilities (InvisiCaps). Every possibly-unsafe C and C++ operation is checked. Fil-C has no unsafe statement and only limited FFI to unsafe code.
Note that it is a garbage collector designed and implemented by one of the most experienced GC experts on earth. He previously designed and implemented WebKit's state of the art concurrent GC, for example. So—yes, but don't dismiss it too quickly.
If that's all you need, the state of the art is very available already through the JVM and the .NET CLR, as well as a handful others depending on your use case. Most of those also come with decent languages, and great facilities to leverage the GC to its maximum.
But GCs aren't magic and you will never get rid of all the overhead. Even if the CPU time is not noticeable in your use case, the memory usage fundamentally needs to be at least 2-4x the actual working set of your program for GCs to be efficient. That's fine for a lot of use cases, especially when RAM isn't scarce.
Most people who use C or C++ or Rust have already made this calculation and deemed the cost to be something they don't want to take on.
That's not to say Fil-C isn't impressive, but it fills a very particular niche. In short, if you're bothering with a GC anyway, why wouldn't you also choose a better language than C or C++?
I don't understand the need to hammer in the point that Fil-C is only valuable for this tiny, teeny, irrelevant microscopic niche, while not even talking about what the niche is? To be clear, the niche is rebuilding your entire GNU/Linux userland with full memory safety and completely acceptable performance, tomorrow, without rewriting anything, right? Is this such a silly little idiosyncratic hobby?
So I don’t want to come off as dismissive of the effort - it’s certainly impressive!
The reason I’m not super excited is based on the widely publicized findings from Google and Microsoft (IIRC) about memory safety issues in their code: The vast majority is in new code.
As such, the returns on running the entire userspace with Fil-C may be quite diminished from the get-go. Those who need to guard against UB bugs in seriously battle-hardened C software in production are definitely a small niche.
But that doesn’t mean it isn’t also very useful as a tool during development.
Hmm, so if they're writing new memory unsafe code in C/C++, presumably to remain within their already established and entrenched C/C++ ecosystems, why isn't Fil-C interesting as a way to thwart memory safety issues in that new code?
It seems like there are constant updates for 20 year old packages on my Ubuntu systems. Ubuntu 20.04 Focal Fossa (first released April 2020) glibc had an update on 2025-05-28. Current stable updated glibc 2025-09-22. To say nothing about the rest of the packages in that operating system.
Oh, look at the time, a few more CVEs in C code, posted 3 hours ago to Hacker News: "X.Org Security Advisory: multiple security issues X.Org X server and Xwayland"
There's a contingent of rust fans that show up on every story about C – their premise is that C code is unsafe and most safety-critical C code should be rewritten in rust.
Fil-C is new and is a viable competitor to rust, that's why you're hearing all asides about tiny niches, unacceptable performance degradation, etc.
Hacker News is not a place where any one group brigrades a thread. There are people who prefer C who don't want a GC, people who prefer Rust who don't want C, people who prefer Rust who agree with Fil-C for legacy C, people who don't prefer C or Rust and may use languages with GC.... We all have interests and face people who denigrate them in bad faith. If you have specific objections to inaccurate statements in this thread, then state them. I'll do the same for any technology if I'm qualified to make statements on it.
There’s no Rust fans here, only GC skeptics. GC skeptics existed long before anyone dreamed of Rust and will survive Rust as well.
It’s a pretty reasonable objection too (though I personally don’t agree). C has always been chosen when performance is paramount. For people who prioritise performance it must feel a bit weird to leave performance on the table in this way.
And Jesus Christ, give it a rest with this “Rust fans must be thinking” stuff. It sounds deranged.
No, back in the day C was used for everything. Vim was not written in C because it needed to wring every last bit of performance out of text editing.
Rewriting everything in rust "for memory-safety" is a false tradeoff given the millions of lines of C code out there and the fact that rewrites always introduce new bugs.
Please, I’m begging you, stop talking about Rust. You’re shoehorning Rust into a discussion where it hasn’t been mentioned, just to hate on some imaginary people you think are pushing Rust here. No one is talking about that. You sound deranged and obsessed.
The vast majority of the conversation here is about GC and the performance implications of that. Please stick to the rest of the thread.
I think Fil-C is for people who are using software that has already been written, not for people who are trying to pick what language to write new software in. A substantial amount of software has, after all, already been written.
It's super fun to write C and C++ code in Fil-C because it's like this otherworldly crossover between Java and C/C++:
- Unlike Java, you get fantastic startup times.
- Unlike Java, you get access to actual syscall APIs.
- Unlike Java, you can leverage the ecosystem of C/C++ libraries without having to write JNI wrappers (though you do have to be able to compile those libraries with Fil-C).
- Like Java, you can just `new` or `malloc` without `delete`ing or `free`ing.
I like C, have a probably unhealthy relationship with C++ where I am amazed by what it can do and then get unrealistic expectations it keeps failing to fulfill, and don't really like Java.
You know Julia Ecklar's song where she says that programming in assembler is like construction work with a toothpick for a tool? I feel like C, C++, or Java are like having a teaspoon instead. Maybe Java is a tablespoon. I'd rather use something like OCaml or a sane version of Python without the Mean Girls community infighting. I just haven't found it.
On the other hand, the supposedly more powerful languages don't have a great record of shipping highly usable production software. There's no Lisp or Ruby or Lua alternative to Firefox, Linux, or LLVM.
I do not think this is niche in the slightest. I would very happily take a 2-4x slowdown for almost all of the web facing C software I run if I get guaranteed memory safety. I will be using at the very least fil-c openssh (and likely much more) on every machine I run.
Sure, that makes sense. The point I’m making is just that from an engineering perspective, that also implies that there is no longer any reason for that software you’re running to be written in C at all.
Sure there is. Making tough choices between alternatives based on where to allocate a limited amount of manpower is an engineering choice. Choosing to use Fil-C to recompile existing (established, stabilized, functional...) software rather than rewrite it is an engineering choice.
Even if you can't use something like Fil-C in your release/production builds, being able to e.g. compile unit tests with it to catch memory safety bugs is a huge win. My team use gcc for its mips codegen, but I'm working on adopting the clang bounds-safety annotations for test builds for exactly this reason.
Yeah, I haven't yet taken a serious look into it from that perspective yet, but similar came to mind; while, outside of bootstrapping the JDK from GCJ, Boehm GC hasn't been super relevant to me for "release" builds of anything, it's been useful in leak detection mode on occasion.
I figure even if you cannot use, or do not want to use, something like Fil-C in production, there's solid potential for it to augment whatever existing suite of sanitizers and other tools that one may already build against.
If you write your software in a language that needs GC, everybody using your software needs GC, but they're guaranteed to get memory safety.
If you write your software in an unsafe, non-GC language, nobody needs GC, but nobody gets memory safety either.
This is why many software developers chose the latter option. If there were some use cases in which GC wasn't acceptable for their software, nobody would get GC, even the people who could afford it, and would prefer the increased memory safety.
Fil-C lets the user make this tradeoff. If you can accept the GC, you compile with Fil-C, otherwise you use a traditional C compiler.
I would say that Rust would be a better choice rarher than patching memory safety on top of C. But I think the reason for this is that most, if not all, cryptographic reference implementations are in C. So they want to use existing reference implementations without having to port them to Rust.
IMO cryptographers should start using Rust for their reference implementations, but I also get that they'd rather spend their time working on their next paper rather than learning a new language.
I'm not a practioner of cryptography, but I would be wary about timing attacks that might become possible if such a dynamic runtime is introduced. At least relevant pieces of code would need to be re-evaluated in the Fil-C environment.
But maybe you could use C as the "glue language" and then the build better performing libraries in Rust for C to use. Like in Python!
Good call! Fil-C does in fact have a way to let you build and run OpenSSL with its constant time crypto. I don't know how this works exactly but I guess it's relatively easy to guarantee it's safe.
> IMO cryptographers should start using Rust for their reference implementations
IMO they should not, because if I look at a typical Rust code, I have no clue what is going on even with a basic understanding of Rust. C, however, is as simple as it gets and much closer to pseudocode.
Good cryptographic code should match its algorithmic description. Rust enables abstractions that allow this. C does not. That you have some familiarity with C and not Rust should not be a contributing factor.
I say this as someone who has written cryptographic code that’s been downloaded millions of times.
The original poster got pretty much all of Debian running in Fil-C, in a fairly brief amount of time.
Re-writing even a single significant library or package in Rust would take exponentially longer, so in this case Rust would not be "a better choice", but rather a non-starter.
It's amazing how much technical discourse revolves around impressions.
"Oh, it has a GC! GC bad!"
"No, this GC by smart guy, so good!"
"No, GC always bad!"
People aren't engaging with the technical substance. GC based systems and can be plenty good and fast. How do people think JavaScript works? And Go? It's like people just absorbed from the discursive background radiation the idea GC is slow without understanding why that might be or whether it's even true. Of course it's not.
How many of the "GC is always slow" people would recognize those systems? Besides: V8 and JSC have pretty decent JITs nowadays. IME, performance of JIT systems has more to do with the structure of programs written in JS than with VM performance itself.
Maybe I don't know what I'm doing, but I rarely get performance within an order of magnitude of single-threaded C from V8. In those other systems I usually do, unless you count Java's startup time.
You can wrack some people's brains by stating that for some problems, a GC is a great way to alleviate the performance problems caused by manual memory management.
Yeah, but if you actually need to retain a live subgraph of the allocated heap, the arena can't help you. So you make an arena allocator that only frees its slab after moving out the reachable set to a new compacted arena. Congratulations, you've implemented a Cheney-style compacting GC!
The author of Fil-C does have some ideas to avoid a garbage collector [1], in summary: Use-after-free at worst means you might see an object of the same size, but you can not corrupt data structures (no pointer / integer confusion). This would be more secure than standard C, but less secure than Fil-C with GC.
It is definitely not fine. The argument seems to be that since you need to trust somebody, curl | bash is fine because you just trust whoever controls the webserver. I think this is missing the point.
HTTPS is there, so you go down to that level only if you want to distrust any element of the public key infrastructure. Which, to be fair, there are plenty of reasons if you are paranoid -- they do tell you who's doing what in a shady way as they revoke, so there's a huge list of transgressions.
It is not only that directly; the domain name might be reassigned to someone else, resulting in a valid certificate which is different than the one you wanted. (If you have the hash of the file which you have verified independently then it is more secure (if the hash algorithm is secure enough), although HTTPS is not needed in that case, it can still be used if you wish to avoid spies knowing which file you accessed. You can also use the server's public key if you know what it should be, although this has different issues, such as someone compromising the server (or the key) and modifying the script.) (There is also knowing if the script is what you intended or not anyways (or if there is something unexpected due to the configuration on your computer); if that is your issue, you can read it (and perhaps verifying the character encoding) before executing it, whether or not you trust the server operator and the author of that script.)
> the domain name might be reassigned to someone else
If that happens its game over. As the article I linked noted, the attackers can change the installation instructions to anything they want - even for packages that are available in Linux distros.
That you should be very careful about what you install. Cut&pasting some line from a website is the exact opposite of it. This is mostly about psychology and not technology. But there are also other issues with this, e.g. many independent failure points at different levels, no transparency, no audit chain, etc.
The counter model we tried to teach people in the past is that people select a linux distribution, independently verify fingerprints of the installation media, and then only install packages from the curated a list of packages. A lot of effort went into making this safe and close the remaining issues.
Be careful who you trust when installing software is a fine thing to teach. But that doesn't mean the only people you can trust are Linux distro packagers.
I think it has a lot to do with "curl|bash". Cut&paste a curl|bash command-line disables all inherent mechanisms and stumbling blocks that would ensure properly ensuring trust. It was basically invented to make it easy to install software by circumventing all protection a Linux distribution would traditionally provide. It also eliminates all possibility for independent verification about what was installed or done on the machine.
Downloading a deb via a package manager is more secure. Downloading a deb, comparing the hash (or at least noting down the hash) would also already be more secure.
But yes, that the run arbitrary scripts is also a known issue, but this is not the main point as most code you download will be run at some point (and ideally this needs sandboxing of applications to fix).
> Downloading a deb via a package manager is more secure.
Not what I meant. Getting software into 5 different distros and waiting years for it to be available to users is not really viable for most software authors.
Well, distros haven't really put any effort into making it viable as far as I know. They really should! Why isn't there a standard Linux package format that all distros support? Flatpak is fine for user GUI apps but I don't think it would be feasible to e.g. distribute Rust via a Flatpak.
(And when I say fine, I haven't actually used it successfully yet.)
I think distros don't want this though. They all want everyone to use their format, and spend time uploading software into their repo. Which just means that people don't.
Great to see some 3letter guy into this. This might be one of those rando things which gets posted on HN (and which doesn't involve me in the slightest), but a decade later is taking over the world. Rust and Go were like that.
Previously there was that Rust in APT discussion. A lot of this middle-aged linux infrastructure stuff is considered feature-complete and "done". Not many young people are coming in, so you either attract them with "heyy rewrite in rust" or maybe the best thing is to bottle it up and run in a VM.
Despite the cool shit the guy has done, keep in mind that "venerate" is not the word to use here. djb is very much not a shorthand used in any positive messaging pretty much ever by any cryptographer. He did it to himself, sadly.
... if he thinks some WG is making a mistake and he's not welcome there (everyone else seems to be okay with what's happening, based on the quoted email in the first link), then - CoC or not - he should leave and publicly distance himself from the outcome.
(Obviously he was never the one to back down from a just fight, but it's important to find the right hill to die on. And allies! And him not following RFC 2026 [from 1996, hardly the peak of Internet bureaucracy] is not a CoC thing anyway.)
The IETF is a global standards-setting organization, intentionally created without a membership structure so that anyone with the technical competency can participate in an individual capacity. This lack of membership ensures its position as the primary neutral standards body because participants cannot exert influence as they could in a pay-to-play organization where members, companies, or governments pay fees to set the direction. IETF standards are reached by rough consensus, allowing the ideas with the strongest technical merit to rise to the surface.
Further, these standards that advance technology, increase security, and further connect individuals on a global scale are freely available, ensuring small-to-midsize companies and entrepreneurs anywhere in the world are on equal footing with the large technology companies.
With a community from around the world, and an increased focus on diversity in all its forms, IETF seeks to ensure that the global Internet has input from the global community, and represents the realities of all who use it.
There is only one IETF, and telling dissenters to leave is like telling a dissenting citizen to go to another country. I don't think that people (apart from real spammers) were banned in 1996. The CoC discussion and power grab has reached the IETF around 2020 and it continues.
"Posting too many messages" has been deemed a CoC violation by for example the PSF and its henchmen, and functionally the IETF is using the same selective enforcement no matter what the official rationale is. They won't go after the "director" Wouters, even though his message was threatening and rude.
Not to trivialise but being a 3 letter guy means being old. So, it's at best a celebration of achieving longevity and at worst a celebration of creaky joints and a short temper.
Mate, we're not talking about the future, but about 3 letter guys now. I'm one, I've carried it with me for 40+ years as have the ten or twenty peers of mine I know by their tla. I got it at pobox.com when the door opened, the guy at the desk next door got a one letter name. I set up campus email for the entire uni in 1989 and gave myself the tla with my superuser rights before that. I'd done the same at ucl-cs in 85, and before that in Leeds and York.
My point here is we're not famous we're just old enough to have a tla from the time before HR demanded everyone get given.surname.
Every Unix system used to ship with a dmr account. It doesn't mean we all knew Dennis Ritchie, it means the account was in the release tape.
There are 17,000-odd of us. Ekr, Kre and Djb are famous, but the other 17,573 of us exist.
I'm not sure what your point is here. OP was clearly using "three letter guy" in the sense "so famous people know them by their initials". This is hardly unheard of, e.g. https://wiki.c2.com/?ThreeLetterPerson
It was the "Great to see _some_ 3letter guy into this" underlined some that.
It felt bit like s/some/random/g perhaps would apply when reading it. Intentional or not by writer. It made me long and write my comment. There are many 3letter user accounts, which some are more famous than others. To my generation not because they were early users, but great things what they have done. I'm early user too and done things then still quite widely being used with many distributions, but wouldn't compare my achievements to those who became famous and known widely by their account, short or long.
Anyhow I thought that "djb" ring bell anyone having been around for while. Not just those who have been around early 90 or so when he was held renegade opinions he expressed programming style (qmail, dj dns, etc.), dragged to court of ITAR issues etc.
But because of his latter work with cryptography and running cr.yp.to site for quite long time.
Is this because they're that famous though or simply because there weren't as many people in the scene back then? We just don't do the initials thing anymore.
Yes: the fame is the subtext. It's akin to mononyms; they'd be referring to famous people like Shakira, Madonna, or Beyoncé. A lot of us have first names, but the point isn't that one's family calls them "Dave" without ambiguity.
There were many unix instances, and likely multiple djb logins around the world, but there's only one considered to be the djb, and that's due to fame.
I can't wait for all the delicious four-way flamewars. Choose your fighter!
1) Rewrite X in Rust
2) Recompile X using Fil-C
3) Recompile X for WASM
4) Safety is for babies
There are a lot of half-baked Rust rewrites whose existence was justified on safety grounds, and whose rationale is threatened now that HN has heard of Fil-C.
It's strange how ideas seem to explode at random into the discourse despite being known for a long time. It's as if some critical mass stumbles on a thing and it becomes "the current thing" everyone talks about until the next current thing.
I agree. The main advantage of Fil-C is compatibility with C, in a secure way. The disadvantages are speed and garbage collection. (Though I have read that garbage collection might not be needed in some cases; I would be very interested in knowing more details.)
For new code, I would not use Fil-C. For the kernel and low-level tools, other languages seem better. Right now, Rust is the only popular language in this space that doesn't have these disadvantages. But in my view, Rust also has issues, especially the borrow checker and code verbosity. Maybe in the future there will be a language that resolves these issues as well (as a hobby, I'm trying to build such a language). But right now, Rust seems to be the best choice for the kernel (for code that needs to be fast and secure).
How does that compare with Rust? Do you happen to have an example of a binary moving to Rust in Ubuntu-land as well? Curious to see, as I honestly don't know whether Rust is as nimble as C or not.
My impression is - rust fares a bit better on RAM footprint, and about as badly on disk binary size. It's darn hard to compare apples-to-apples, though - given it's a different language, so everything is a rewrite. One example:
Ubuntu 25.10's rust "coreutils" multicall binary: 10828088 bytes on disk, 7396 KB in RAM while doing "sleep".
Alpine 3.22's GNU "coreutils" multicall binary: 1057280 bytes on disk, 2320 KB in RAM while doing "sleep".
I don't have numbers, but Rust is also terrible for binary size. Large Rust binaries can be improved with various efforts, but it's not friendly by default. Rust focuses on runtime performance, high-level programming, and compile-time guarantees, but compile times and binary sizes are the drawback. Notably, Rust prefers static linking.
How slow? In some contexts, the trade-off might be acceptable. From what I've seen in pizlonator's tweets, in some cases the difference in speed didn't seem drastic to me.
Yeah, I would happily run a bunch of my network services in this. I have loads of services that are public-facing doing a lot of complex parsing and rule evaluation and are mostly idle. For example my whole mailserver stack could probably benefit from this. My few messages an hour can run 2x slower. Maybe I would leave dovecot native since the attack surface before authentication is much lower and the performance difference would be more noticeable (mostly for things like searches).
Apt has been painfully slow since I started using Debian last millennium, but I suspect it's not because it uses a lot of CPU, or it would be snappy by now.
Fil-C works because you recompile the whole C userspace. Unsafe Rust doesn't do that... and for many practical purposes you probably want to touch the non-safe version of the C userspace.
Still, it's all LLVM, so perhaps unsafe Rust for Fil-space can be a thing, a useful one for catching (what would be) UBs even [Fil-C defines everything, so no UBs, but I'm assuming you want to eventually run it outside of Fil-space].
Now I actually wonder if Fil-C has an escape hatch somewhere for syscalls that it does not understand etc. Well it doesn't do inline assembly, so I shouldn't expect much... I wonder how far one needs to extend the asm clobber syntax for it to remotely come close to working.
Unsafe Rust actually has a great runtime analyzer: Miri. It's very easy to just run `cargo +nightly miri test` in your project to get some confidence in the more questionable choices along the way.
Look at how fanatic the compatibility actually is. Building Postgres or MySQL is conceivable but probably will require some changes. (SQLite compiles and runs with zero changes right now.)
If you run Nix (whether on NixOS or elsewhere) you can do `cachix use filc` and `nix run github:mbrock/filnix#sqlite` and it should drop you into a Fil-C SQLite after downloading the runtime dependencies from my binary cache (no warranty)!
To summarize, he's sufficiently impressed with it that he's embarking on an attempt to rebuild an entire Debian system with it, and he's written some software (a GC shim library and build scripts) that are likely to be of interest to others who are attempting the same thing.
> I had originally configured the server phoenix with only 12GB swap. I then had to restart ./build_all_fast_glibc.sh a few times because the Fil-C compilation ran out of memory. Switching to 36GB swap made everything work with no restarts; monitoring showed that almost 19GB swap (plus 12GB RAM) was used at one point. A larger server, 128 cores with 512GB RAM, took 8 minutes for Fil-C plus 6 minutes for musl, with no restarts needed.
Yikes, that’s a lot of memory! Fil-C is doing a lot of static analysis, apparently.
I think that's the build of LLVM+Clang itself.
Yes, linking LLVM takes up a lot of memory. The documented guidance is to allow one link job per 15 GB of RAM [1].
[1] https://llvm.org/docs/CMake.html#frequently-used-llvm-relate...
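For reference, the knob that page describes is the `LLVM_PARALLEL_LINK_JOBS` CMake cache variable (used with the Ninja generator); capping it bounds peak link-time memory. For example:

    # Cap parallel link jobs so peak RAM stays near (jobs x ~15 GB):
    cmake -G Ninja ../llvm -DCMAKE_BUILD_TYPE=Release -DLLVM_PARALLEL_LINK_JOBS=2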
For those who might miss it, the notes cite a new 64-bit version of cdb that supports exabyte databases
https://cdb.cr.yp.to
Also maybe of interest is that the new cdb subdomain is using pqconnect instead of dnscurve
RFC 1034 Domain Concepts and Facilities November 1987 [Page 8]
"A domain is identified by a domain name, and consists of that part of the domain name space that is at or below the domain name which specifies the domain. A domain is a subdomain of another domain if it is contained within that domain. This relationship can be tested by seeing if the subdomain's name ends with the containing domain's name. For example, A.B.C.D is a subdomain of B.C.D, C.D, D, and " "."
In the terminology of RFC 1034, cdb.cr.yp.to, a CNAME, can be described as a subdomain of cr.yp.to and yp.to. (NB: the pq1 portion is not a public key; it is a hash of a server's long-term public key.)
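The RFC's suffix test quoted above translates directly into code. A minimal C sketch (mine, with a label-boundary check added so that e.g. AB.C.D does not match B.C.D):

    #include <stdbool.h>
    #include <string.h>
    #include <strings.h>

    /* RFC 1034: a name is a subdomain of a domain if it ends with that
       domain's name. The root "" contains everything. */
    static bool is_subdomain(const char *name, const char *domain) {
        size_t nl = strlen(name), dl = strlen(domain);
        if (dl == 0) return true;                 /* everything is under the root */
        if (dl > nl) return false;
        if (strcasecmp(name + nl - dl, domain) != 0) return false;
        /* label boundary: exact match, or a '.' just before the suffix */
        return nl == dl || name[nl - dl - 1] == '.';
    }

So is_subdomain("A.B.C.D", "C.D") is true, matching the RFC's example.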
> Also maybe of interest is that the new cdb subdomain is using pqconnect instead of dnscurve
This is not correct. There isn't a cdb subdomain because cdb.cr.yp.to doesn't have NS records, which is where DNSCurve fits in. If you have a DNSCurve resolver, then your queries for cdb.cr.yp.to will use DNSCurve and will be sent to the yp.to nameservers.
From there, if you have pqconnect, your http(s) connection to cdb.cr.yp.to will happen over pqconnect.
Maybe the confusion is because both DNSCurve and pqconnect encode pubkeys in DNS, but they do different things.
Here is DNSCurve:
Here is pqconnect: Like CurveCP, pqconnect puts the pubkey into a CNAME. Use of pqconnect at yp.to is probably old news, but the cdb.cr.yp.to CNAME does appear to be new as of around 21 Oct.
https://news.ycombinator.com/item?id=45663435 (discussed 11d ago)
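To give a sense of the shape being described (illustrative only, not actual output; the placeholder label is made up), the announcement is just an ordinary CNAME whose target name carries the key hash:

    $ dig +short cdb.cr.yp.to CNAME
    pq1<base32 hash of the server's long-term public key>.yp.to.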
I’m glad Phil’s work is finally getting the recognition it deserves.
There may be useful takeaways here for Rust’s “unsafe” mode - particularly for applications willing to accept the extra burden of statically linking Fil-C-compiled dependencies. Best of both worlds!
> particularly for applications willing to accept the extra burden of statically linking Fil-C-compiled dependencies. Best of both worlds!
As near as I can tell Fil-C doesn't support this, or any other sort of FFI, at all. Nor am I sure FFI would even make sense, it seems like an approach that has to take over the entire program so that it can track pointer provenance.
Related:
Fil-C: A memory-safe C implementation - https://news.ycombinator.com/item?id=45735877 - Oct 2025 (130 comments)
Safepoints and Fil-C - https://news.ycombinator.com/item?id=45258029 - Sept 2025 (44 comments)
Fil's Unbelievable Garbage Collector - https://news.ycombinator.com/item?id=45133938 - Sept 2025 (281 comments)
InvisiCaps: The Fil-C capability model - https://news.ycombinator.com/item?id=45123672 - Sept 2025 (2 comments)
Just some of the memory safety errors caught by Fil-C - https://news.ycombinator.com/item?id=43215935 - March 2025 (5 comments)
The Fil-C Manifesto: Garbage In, Memory Safety Out - https://news.ycombinator.com/item?id=42226587 - Nov 2024 (1 comment)
Rust haters, unite Fil-C aims to Make C Great Again - https://news.ycombinator.com/item?id=42219923 - Nov 2024 (6 comments)
Fil-C a memory-safe version of C and C++ - https://news.ycombinator.com/item?id=42158112 - Nov 2024 (1 comment)
Fil-C: Memory-Safe and Compatible C/C++ with No Unsafe Escape Hatches - https://news.ycombinator.com/item?id=41936980 - Oct 2024 (4 comments)
The Fil-C Manifesto: Garbage In, Memory Safety Out - https://news.ycombinator.com/item?id=39449500 - Feb 2024 (17 comments)
In addition, here are the major related subthreads from other submissions:
https://news.ycombinator.com/item?id=45568231 (Oct 2025)
https://news.ycombinator.com/item?id=45444224 (Oct 2025)
https://news.ycombinator.com/item?id=45235615 (Sept 2025)
https://news.ycombinator.com/item?id=45087632 (Aug 2025)
https://news.ycombinator.com/item?id=44874034 (Aug 2025)
https://news.ycombinator.com/item?id=43979112 (May 2025)
https://news.ycombinator.com/item?id=43948014 (May 2025)
https://news.ycombinator.com/item?id=43353602 (March 2025)
https://news.ycombinator.com/item?id=43195623 (Feb 2025)
https://news.ycombinator.com/item?id=43188375 (Feb 2025)
https://news.ycombinator.com/item?id=41899627 (Oct 2024)
https://news.ycombinator.com/item?id=41382026 (Aug 2024)
https://news.ycombinator.com/item?id=40556083 (June 2024)
https://news.ycombinator.com/item?id=39681774 (March 2024)
https://news.ycombinator.com/item?id=39542944 (Feb 2024)
Cool project! I take it the goal is that, overhead being acceptable, most C / C++ programs don't actually "have to be" rewritten in something like Rust?
I wonder how / where Epic Games comes in?
Note that Fil-C is a garbage-collected language that is significantly slower than C.
It's not a target for writing new code (you'd be better off with C# or golang), but something like sandboxing with WASM, except that Fil-C crashes more precisely.
From the topic starter: "I've posted a graph showing nearly 9000 microbenchmarks of Fil-C vs. clang on cryptographic software (each run pinned to 1 core on the same Zen 4). Typically code compiled with Fil-C takes between 1x and 4x as many cycles as the same code compiled with clang"
Thus, Fil-C compiled code is 1 to 4 times as slow as plain C. This is not in the "significantly slower" ballpark, like where most interpreters are. The ROOT C/C++ interpreter is 20+ times slower than binary code, for example.
Cryptographic software is probably close to a best case scenario since there is very little memory management involved and runtime is dominated by computation in tight loops. As long as Fil-C is able to avoid doing anything expensive in the inner loops you get good performance.
> best case scenario since there is very little memory management involved and runtime is dominated by computation in tight loops.
This describes most C programs and many, if not most, C++ programs. Avoiding memory management, especially in tight loops, is basically how C/C++ code gets written.
This depends heavily on what problem domain you're talking about. For example, a DBMS is necessarily going to shuffle a lot of data into and out of memory.
Most databases do almost no memory management at runtime, at least not in any conventional sense. They mostly just DMA disk into and out of a fixed set of buffers. Objects don't have a conventional lifetime.
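To illustrate the pattern (a generic sketch of the classic fixed buffer pool, not any particular database's code):

    #include <stdint.h>
    #include <unistd.h>

    /* A fixed set of page buffers allocated once for the life of the
       process. "Allocation" is picking a slot; disk pages are read into
       and written out of the same memory, with no malloc/free per page. */
    #define PAGE_SIZE 8192
    #define POOL_PAGES 1024

    static char pool[POOL_PAGES][PAGE_SIZE];
    static uint64_t page_no[POOL_PAGES];     /* which disk page each slot holds */

    static int read_page(int fd, uint64_t page, int slot) {
        page_no[slot] = page;
        ssize_t n = pread(fd, pool[slot], PAGE_SIZE, (off_t)(page * PAGE_SIZE));
        return n == PAGE_SIZE ? 0 : -1;      /* reuse the buffer, never free it */
    }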
Along with the sibling comment, microbenchmarks should not be used as authoritative data when the use case is full applications. For that matter, highly optimized Java or Go may be "1 to 4 times as slow as plain C". Fil-C has its merits, but they should be described carefully, just as with any technology.
I replied to an unwarranted (to my eye) claim that Fil-C is significantly slower than plain C.
Fil-C has its drawbacks, but they should be described carefully, just as with any technology.
What language do people considering C as an option for a new project actually consider? Rust is the obvious one we aren't going to discuss, because then we won't be able to talk about anything else. Zig is probably almost as well loved and defended, but it isn't actually memory safe, just much easier to be memory safe in. As you say, C# and Go; maybe also F# and OCaml - if we are just writing simple C-style stuff, none of those would look all that different. Go has some UB related to concurrency that people run into, but most of these simple utilities are either single-threaded or fine-grained parallel, which is pretty easy to get right. Julia too, maybe?
In terms of GC quality, Nim comes to mind.
WASM is a sandbox. It doesn't obviate memory safety measures elsewhere. A program with a buffer overflow running in WASM can still be exploited to do anything that program can do within its WASM sandbox, e.g. disclose information it shouldn't. WASM ensures such a program can't escape its container, but memory safety bugs within a container can still be plenty harmful.
I am a bit surprised that the build_all_fast_glibc.sh script requires 31 GB of memory to run. Can somebody explain? I would like to try out Fil-C.
Building and linking llvm sucks.
For those, like me, that didn’t know what Fil-C is:
> Fil-C is a fanatically compatible memory-safe implementation of C and C++. Lots of software compiles and runs with Fil-C with zero or minimal changes. All memory safety errors are caught as Fil-C panics. Fil-C achieves this using a combination of concurrent garbage collection and invisible capabilities (InvisiCaps). Every possibly-unsafe C and C++ operation is checked. Fil-C has no unsafe statement and only limited FFI to unsafe code.
https://fil-c.org/
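As a concrete illustration (my example, not one from the Fil-C docs), code like this compiles and runs quietly with an ordinary compiler, silently corrupting memory, while a Fil-C build traps the bad store with a panic:

    #include <stdlib.h>

    int main(void) {
        int *a = malloc(4 * sizeof(int));
        a[4] = 42;   /* one element past the end: UB normally, a panic under Fil-C */
        return 0;
    }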
The posted article has a detailed explanation of djb successfully compiling a bunch of C and C++ codebases.
I guess to get on board with this, it is my understanding you have to accept the premise of a Garbage Collector in the runtime?
Note that it is a garbage collector designed and implemented by one of the most experienced GC experts on earth. He previously designed and implemented WebKit's state of the art concurrent GC, for example. So—yes, but don't dismiss it too quickly.
If that's all you need, the state of the art is very available already through the JVM and the .NET CLR, as well as a handful others depending on your use case. Most of those also come with decent languages, and great facilities to leverage the GC to its maximum.
But GCs aren't magic and you will never get rid of all the overhead. Even if the CPU time is not noticeable in your use case, the memory usage fundamentally needs to be at least 2-4x the actual working set of your program for GCs to be efficient. That's fine for a lot of use cases, especially when RAM isn't scarce.
Most people who use C or C++ or Rust have already made this calculation and deemed the cost to be something they don't want to take on.
That's not to say Fil-C isn't impressive, but it fills a very particular niche. In short, if you're bothering with a GC anyway, why wouldn't you also choose a better language than C or C++?
I don't understand the need to hammer in the point that Fil-C is only valuable for this tiny, teeny, irrelevant microscopic niche, while not even talking about what the niche is? To be clear, the niche is rebuilding your entire GNU/Linux userland with full memory safety and completely acceptable performance, tomorrow, without rewriting anything, right? Is this such a silly little idiosyncratic hobby?
So I don’t want to come off as dismissive of the effort - it’s certainly impressive!
The reason I’m not super excited is based on the widely publicized findings from Google and Microsoft (IIRC) about memory safety issues in their code: The vast majority is in new code.
As such, the returns on running the entire userspace with Fil-C may be quite diminished from the get-go. Those who need to guard against UB bugs in seriously battle-hardened C software in production are definitely a small niche.
But that doesn’t mean it isn’t also very useful as a tool during development.
Hmm, so if they're writing new memory unsafe code in C/C++, presumably to remain within their already established and entrenched C/C++ ecosystems, why isn't Fil-C interesting as a way to thwart memory safety issues in that new code?
It seems like there are constant updates for 20-year-old packages on my Ubuntu systems. Ubuntu 20.04 Focal Fossa (first released April 2020) had a glibc update on 2025-05-28. Current stable updated glibc on 2025-09-22. To say nothing of the rest of the packages in that operating system.
Oh, look at the time, a few more CVEs in C code, posted 3 hours ago to Hacker News: "X.Org Security Advisory: multiple security issues X.Org X server and Xwayland"
https://news.ycombinator.com/item?id=45790015
https://lists.x.org/archives/xorg-announce/2025-October/0036...
To torture the analogy: perhaps the "returns" are diminishing, but their absolute value is still a few million bucks; I'm happy to take those returns.
There's a contingent of rust fans that show up on every story about C – their premise is that C code is unsafe and most safety-critical C code should be rewritten in rust.
Fil-C is new and is a viable competitor to rust, that's why you're hearing all asides about tiny niches, unacceptable performance degradation, etc.
Hacker News is not a place where any one group brigades a thread. There are people who prefer C who don't want a GC, people who prefer Rust who don't want C, people who prefer Rust who agree with Fil-C for legacy C, people who don't prefer C or Rust and may use languages with GC.... We all have interests and face people who denigrate them in bad faith. If you have specific objections to inaccurate statements in this thread, then state them. I'll do the same for any technology if I'm qualified to make statements on it.
> Fil-C is new and is a viable competitor to rust
I’ve no horse in the race here, but the Fil-C page talks about a 4x overhead from using it, which feels like it would make it less competitive
That's the currently measured worst case for some types of code.
There’s no Rust fans here, only GC skeptics. GC skeptics existed long before anyone dreamed of Rust and will survive Rust as well.
It’s a pretty reasonable objection too (though I personally don’t agree). C has always been chosen when performance is paramount. For people who prioritise performance it must feel a bit weird to leave performance on the table in this way.
And Jesus Christ, give it a rest with this “Rust fans must be thinking” stuff. It sounds deranged.
No, back in the day C was used for everything. Vim was not written in C because it needed to wring every last bit of performance out of text editing.
Rewriting everything in rust "for memory-safety" is a false tradeoff given the millions of lines of C code out there and the fact that rewrites always introduce new bugs.
Please, I’m begging you, stop talking about Rust. You’re shoehorning Rust into a discussion where it hasn’t been mentioned, just to hate on some imaginary people you think are pushing Rust here. No one is talking about that. You sound deranged and obsessed.
The vast majority of the conversation here is about GC and the performance implications of that. Please stick to the rest of the thread.
I am a member of this niche – thank you for the flake!! https://discourse.nixos.org/t/radically-improving-nix-nixos-...
I think Fil-C is for people who are using software that has already been written, not for people who are trying to pick what language to write new software in. A substantial amount of software has, after all, already been written.
It's super fun to write C and C++ code in Fil-C because it's like this otherworldly crossover between Java and C/C++:
- Unlike Java, you get fantastic startup times.
- Unlike Java, you get access to actual syscall APIs.
- Unlike Java, you can leverage the ecosystem of C/C++ libraries without having to write JNI wrappers (though you do have to be able to compile those libraries with Fil-C).
- Like Java, you can just `new` or `malloc` without `delete`ing or `free`ing.
It's so fun!
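The last point is easy to demonstrate (a toy of mine, not from the Fil-C docs): under ASan or valgrind this is a leak; under Fil-C it is a well-defined program whose unreachable blocks simply get collected:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *line = strdup("hello");
        for (int i = 0; i < 1000; i++) {
            char *bigger = malloc(strlen(line) + 2);
            sprintf(bigger, "%s!", line);
            line = bigger;              /* the old `line` becomes garbage */
        }
        printf("%zu\n", strlen(line)); /* no free() anywhere */
        return 0;
    }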
I like C, have a probably unhealthy relationship with C++ where I am amazed by what it can do and then get unrealistic expectations it keeps failing to fulfill, and don't really like Java.
You know Julia Ecklar's song where she says that programming in assembler is like construction work with a toothpick for a tool? I feel like C, C++, or Java are like having a teaspoon instead. Maybe Java is a tablespoon. I'd rather use something like OCaml or a sane version of Python without the Mean Girls community infighting. I just haven't found it.
On the other hand, the supposedly more powerful languages don't have a great record of shipping highly usable production software. There's no Lisp or Ruby or Lua alternative to Firefox, Linux, or LLVM.
I do not think this is niche in the slightest. I would very happily take a 2-4x slowdown for almost all of the web facing C software I run if I get guaranteed memory safety. I will be using at the very least fil-c openssh (and likely much more) on every machine I run.
Sure, that makes sense. The point I’m making is just that from an engineering perspective, that also implies that there is no longer any reason for that software you’re running to be written in C at all.
From an engineering perspective, the software is already written in C, and you're weighing the tradeoffs between rewriting it and recompiling it.
Sure there is. Making tough choices between alternatives based on where to allocate a limited amount of manpower is an engineering choice. Choosing to use Fil-C to recompile existing (established, stabilized, functional...) software rather than rewrite it is an engineering choice.
Even if you can't use something like Fil-C in your release/production builds, being able to e.g. compile unit tests with it to catch memory safety bugs is a huge win. My team use gcc for its mips codegen, but I'm working on adopting the clang bounds-safety annotations for test builds for exactly this reason.
Yeah, I haven't taken a serious look into it from that perspective yet, but something similar came to mind; while, outside of bootstrapping the JDK from GCJ, Boehm GC hasn't been super relevant to me for "release" builds of anything, it's been useful in leak-detection mode on occasion.
I figure even if you cannot use, or do not want to use, something like Fil-C in production, there's solid potential for it to augment whatever existing suite of sanitizers and other tools that one may already build against.
If you write your software in a language that needs GC, everybody using your software needs GC, but they're guaranteed to get memory safety.
If you write your software in an unsafe, non-GC language, nobody needs GC, but nobody gets memory safety either.
This is why many software developers chose the latter option: if GC wasn't acceptable in even some use cases of their software, then nobody would get GC - not even the people who could afford it and would have preferred the increased memory safety.
Fil-C lets the user make this tradeoff. If you can accept the GC, you compile with Fil-C, otherwise you use a traditional C compiler.
The point is that it can compile most existing C and C++ code as-is, and do it while providing complete memory safety.
That's the claim, anyway. Doesn't sound all that niche to me.
The user of the code may plausibly want to make a different tradeoff than the author, without wanting to rewrite the project from scratch.
The value prop here is for existing projects in C or C++, as is made abundantly clear in the linked article
I would say that Rust would be a better choice than patching memory safety on top of C. But I think the reason for this is that most, if not all, cryptographic reference implementations are in C. So they want to use existing reference implementations without having to port them to Rust.
IMO cryptographers should start using Rust for their reference implementations, but I also get that they'd rather spend their time working on their next paper rather than learning a new language.
I'm not a practitioner of cryptography, but I would be wary about timing attacks that might become possible if such a dynamic runtime is introduced. At least the relevant pieces of code would need to be re-evaluated in the Fil-C environment.
But maybe you could use C as the "glue language" and then build better-performing libraries in Rust for C to use. Like in Python!
Good call! Fil-C does in fact have a way to let you build and run OpenSSL with its constant time crypto. I don't know how this works exactly but I guess it's relatively easy to guarantee it's safe.
How easy is it to link Rust code with C compiled with Fil-C's ABI?
> IMO cryptographers should start using Rust for their reference implementations
IMO they should not, because if I look at typical Rust code, I have no clue what is going on even with a basic understanding of Rust. C, however, is as simple as it gets and much closer to pseudocode.
Good cryptographic code should match its algorithmic description. Rust enables abstractions that allow this. C does not. That you have some familiarity with C and not Rust should not be a contributing factor.
I say this as someone who has written cryptographic code that’s been downloaded millions of times.
The original poster got pretty much all of Debian running in Fil-C, in a fairly brief amount of time.
Re-writing even a single significant library or package in Rust would take exponentially longer, so in this case Rust would not be "a better choice", but rather a non-starter.
I expect Fil-C is not really aimed at green field projects. But rather at making existing projects safe.
It's amazing how much technical discourse revolves around impressions.
"Oh, it has a GC! GC bad!"
"No, this GC by smart guy, so good!"
"No, GC always bad!"
People aren't engaging with the technical substance. GC-based systems can be plenty good and fast. How do people think JavaScript works? And Go? It's like people just absorbed from the discursive background radiation the idea that GC is slow, without understanding why that might be or whether it's even true. Of course it's not.
> How do people think JavaScript works?
Very slowly. Java, OCaml, or LuaJIT would be better examples here!
How many of the "GC is always slow" people would recognize those systems? Besides: V8 and JSC have pretty decent JITs nowadays. IME, performance of JIT systems has more to do with the structure of programs written in JS than with VM performance itself.
Maybe I don't know what I'm doing, but I rarely get performance within an order of magnitude of single-threaded C from V8. In those other systems I usually do, unless you count Java's startup time.
You can break some people's brains by stating that for some problems, a GC is a great way to alleviate the performance problems caused by manual memory management.
For those problems arena allocators tend to perform even better.
Yeah, but if you actually need to retain a live subgraph of the allocated heap, the arena can't help you. So you make an arena allocator that only frees its slab after moving out the reachable set to a new compacted arena. Congratulations, you've implemented a Cheney-style compacting GC!
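For anyone who hasn't seen it spelled out, that collector is only a few lines (a minimal sketch of Cheney's algorithm with fixed-size objects; a real one also handles varied sizes, root sets, and interior pointers):

    #include <stddef.h>
    #include <string.h>

    typedef struct Obj {
        struct Obj *forward;   /* forwarding pointer, set once evacuated */
        size_t nfields;
        struct Obj *field[4];  /* fixed fan-out keeps the sketch short */
    } Obj;

    static char *to_space, *alloc_ptr;

    static Obj *evacuate(Obj *o) {
        if (!o) return NULL;
        if (o->forward) return o->forward;   /* already moved this cycle */
        Obj *copy = (Obj *)alloc_ptr;
        alloc_ptr += sizeof(Obj);
        memcpy(copy, o, sizeof(Obj));
        copy->forward = NULL;
        o->forward = copy;                   /* leave a forwarding pointer */
        return copy;
    }

    /* Copy everything reachable from roots into new_space; the to-space
       itself doubles as the scan queue (that is Cheney's trick). */
    static void collect(Obj **roots, size_t nroots, char *new_space) {
        to_space = alloc_ptr = new_space;
        for (size_t i = 0; i < nroots; i++)
            roots[i] = evacuate(roots[i]);
        for (char *scan = to_space; scan < alloc_ptr; scan += sizeof(Obj)) {
            Obj *o = (Obj *)scan;
            for (size_t i = 0; i < o->nfields; i++)
                o->field[i] = evacuate(o->field[i]);
        }
        /* the old arena's slab can now be freed wholesale */
    }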
Not for all allocation patterns. It's hard to beat bump pointer allocation and escape analysis in general.
Hi, I noticed you made a typo in "JS bad, Go bad", it's not too late to edit your comment! /s
The author of Fil-C does have some ideas for avoiding a garbage collector [1]; in summary: a use-after-free at worst means you might see an object of the same size, but you cannot corrupt data structures (no pointer / integer confusion). This would be more secure than standard C, but less secure than Fil-C with GC.
[1] https://x.com/filpizlo/status/1917410045320650839
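To make that idea concrete, here is a minimal sketch (my illustration of the general approach, not the author's design) of an allocator that recycles memory only within a size class, so a dangling pointer can only ever alias another object of the same size:

    #include <stddef.h>
    #include <stdlib.h>

    /* Segregated free lists: a freed block is only ever reused for an
       allocation in the same size class, so a stale pointer may alias a
       same-sized object but never straddle a differently-sized one. */
    #define NCLASSES 8
    #define MAXSZ ((size_t)16 << (NCLASSES - 1))   /* 2048 bytes */
    static void *freelist[NCLASSES];

    static size_t class_of(size_t n) {
        size_t c = 0, sz = 16;
        while (sz < n) { sz <<= 1; c++; }
        return c;
    }

    void *sc_alloc(size_t n) {
        if (n > MAXSZ) return malloc(n);   /* large blocks: out of scope here */
        size_t c = class_of(n);
        if (freelist[c]) {
            void *p = freelist[c];
            freelist[c] = *(void **)p;     /* pop; the first word links the list */
            return p;
        }
        return malloc((size_t)16 << c);    /* fresh block, rounded up to the class */
    }

    void sc_free(void *p, size_t n) {
        if (n > MAXSZ) { free(p); return; }
        size_t c = class_of(n);
        *(void **)p = freelist[c];         /* push onto this class's list */
        freelist[c] = p;
    }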
So far we haven't found a viable alternative; CHERI has holes in its temporal integrity guarantees.
Interesting to see curl | bash being used by a renowned cryptologist...
Almost like it's actually fine.
https://medium.com/@ewindisch/curl-bash-a-victimless-crime-d...
It is definitely not fine. The argument seems to be that since you need to trust somebody, curl | bash is fine because you just trust whoever controls the webserver. I think this is missing the point.
s/webserver/DNS/
HTTPS is there, so you go down to that level only if you want to distrust any element of the public key infrastructure. Which, to be fair, there are plenty of reasons if you are paranoid -- they do tell you who's doing what in a shady way as they revoke, so there's a huge list of transgressions.
It is not only that. The domain name might be reassigned to someone else, resulting in a valid certificate that is different from the one you wanted. If you have the hash of the file and have verified it independently, then it is more secure (if the hash algorithm is strong enough); HTTPS is not needed in that case, though it can still be used if you wish to keep spies from knowing which file you accessed. You can also pin the server's public key if you know what it should be, although that has different issues, such as someone compromising the server (or the key) and modifying the script. And there is still the question of whether the script does what you intended in the first place (or whether something unexpected happens due to the configuration of your computer); if that is your concern, you can read the script (and perhaps verify the character encoding) before executing it, whether or not you trust the server operator and the script's author.
> the domain name might be reassigned to someone else
If that happens, it's game over. As the article I linked noted, the attackers can change the installation instructions to anything they want, even for packages that are available in Linux distros.
It's missing which point?
That you should be very careful about what you install. Cut-and-pasting some line from a website is the exact opposite of that. This is mostly about psychology, not technology. But there are also other issues with this approach: many independent failure points at different levels, no transparency, no audit chain, etc. The counter-model we tried to teach people in the past was that people select a Linux distribution, independently verify the fingerprints of the installation media, and then only install packages from a curated list. A lot of effort went into making this safe and into closing the remaining issues.
None of that has anything to do with curl|bash.
Be careful who you trust when installing software is a fine thing to teach. But that doesn't mean the only people you can trust are Linux distro packagers.
I think it has a lot to do with curl|bash. Cutting and pasting a curl|bash command line disables all the inherent mechanisms and stumbling blocks that would properly establish trust. It was basically invented to make software easy to install by circumventing all the protection a Linux distribution would traditionally provide. It also eliminates any possibility of independently verifying what was installed or done on the machine.
Downloading and installing a `.deb` or `.rpm` is going to be no more secure. They can run arbitrary scripts too.
Downloading a deb via a package manager is more secure. Downloading a deb and comparing its hash (or at least noting the hash down) would also already be more secure.
But yes, that they run arbitrary scripts is also a known issue; it's just not the main point, since most code you download will be run at some point anyway (and fixing that properly needs sandboxing of applications).
> Downloading a deb via a package manager is more secure.
Not what I meant. Getting software into 5 different distros and waiting years for it to be available to users is not really viable for most software authors.
I think it would be quite viable if there were any willingness to work with the distributions in the interest of security.
Well, distros haven't really put any effort into making it viable as far as I know. They really should! Why isn't there a standard Linux package format that all distros support? Flatpak is fine for user GUI apps but I don't think it would be feasible to e.g. distribute Rust via a Flatpak.
(And when I say fine, I haven't actually used it successfully yet.)
I think distros don't want this though. They all want everyone to use their format, and spend time uploading software into their repo. Which just means that people don't.
Great to see some 3letter guy into this. This might be one of those rando things which gets posted on HN (and which doesn't involve me in the slightest) but which, a decade later, is taking over the world. Rust and Go were like that.
Previously there was that Rust-in-APT discussion. A lot of this middle-aged Linux infrastructure stuff is considered feature-complete and "done". Not many young people are coming in, so you either attract them with "heyy rewrite in rust" or maybe the best thing is to bottle it up and run it in a VM.
>Great to see some 3letter guy into this
AFAIK, djb hasn't been just "some 3letter guy" for about thirty years now, but perhaps that's an age-related thing for those who have been around less.
https://en.wikipedia.org/wiki/Daniel_J._Bernstein
Just to be clear, I mean to venerate Bernstein for earning his 3letters, not to trivialize him.
Despite the cool shit the guy has done, keep in mind that "venerate" is not the word to use here. "djb" is very much not a shorthand that cryptographers use in positive messaging, pretty much ever. He did it to himself, sadly.
Sorry, can you explain what he did to himself?
I would like to know as well. All that is public is that a couple of IETF apparatchiks want to ban him for criticizing corporate and NSA influence:
https://web.archive.org/web/20250513185456/https://mailarchi...
The IETF has now accepted the required new moderation guidelines, which will essentially be a CoC troika that can ban selectively:
https://mailarchive.ietf.org/arch/msg/mod-discuss/s4y2j3Dv6D...
It is very sad that all open source and Internet institutions are being subverted by bureaucrats.
... if he thinks some WG is making a mistake and he's not welcome there (everyone else seems to be okay with what's happening, based on the quoted email in the first link), then - CoC or not - he should leave, and publicly distance himself from the outcome.
(Obviously he was never one to back down from a just fight, but it's important to find the right hill to die on. And allies! And his not following RFC 2026 [from 1996, hardly the peak of Internet bureaucracy] is not a CoC thing anyway.)
Why should he leave? The IETF claims on its sponsor page (https://www.ietf.org/support-us/endowment/):
The IETF is a global standards-setting organization, intentionally created without a membership structure so that anyone with the technical competency can participate in an individual capacity. This lack of membership ensures its position as the primary neutral standards body because participants cannot exert influence as they could in a pay-to-play organization where members, companies, or governments pay fees to set the direction. IETF standards are reached by rough consensus, allowing the ideas with the strongest technical merit to rise to the surface.
Further, these standards that advance technology, increase security, and further connect individuals on a global scale are freely available, ensuring small-to-midsize companies and entrepreneurs anywhere in the world are on equal footing with the large technology companies.
With a community from around the world, and an increased focus on diversity in all its forms, IETF seeks to ensure that the global Internet has input from the global community, and represents the realities of all who use it.
There is only one IETF, and telling dissenters to leave is like telling a dissenting citizen to go to another country. I don't think that people (apart from real spammers) were banned in 1996. The CoC discussion and power grab reached the IETF around 2020, and it continues.
"Posting too many messages" has been deemed a CoC violation by for example the PSF and its henchmen, and functionally the IETF is using the same selective enforcement no matter what the official rationale is. They won't go after the "director" Wouters, even though his message was threatening and rude.
Not to trivialise but being a 3 letter guy means being old. So, it's at best a celebration of achieving longevity and at worst a celebration of creaky joints and a short temper.
Most of us will have a problematic joint or two by a certain age. Almost none of us will be recognised by any name by that time.
Mate, we're not talking about the future, but about 3 letter guys now. I'm one, I've carried it with me for 40+ years as have the ten or twenty peers of mine I know by their tla. I got it at pobox.com when the door opened, the guy at the desk next door got a one letter name. I set up campus email for the entire uni in 1989 and gave myself the tla with my superuser rights before that. I'd done the same at ucl-cs in 85, and before that in Leeds and York.
My point here is we're not famous we're just old enough to have a tla from the time before HR demanded everyone get given.surname.
Every Unix system used to ship with a dmr account. It doesn't mean we all knew Dennis Ritchie, it means the account was in the release tape.
There are 17,000 odd of us. Ekr, Kre and Djb are famous but the other 17,573 of us exist.
I'm not sure what your point is here. OP was clearly using "three letter guy" in the sense of "so famous people know them by their initials". This is hardly unheard of, e.g. https://wiki.c2.com/?ThreeLetterPerson
It was the underlined "some" in "Great to see _some_ 3letter guy into this" that did it.
Reading it, it felt a bit as though s/some/random/g might apply, whether the writer intended that or not, and that is what prompted my long comment. There are many 3letter user accounts, some more famous than others; to my generation, famous not because they were early users, but because of the great things they have done. I'm an early user too, and I have done things that are still shipped quite widely with many distributions, but I wouldn't compare my achievements to those of people who became famous and widely known by their account name, short or long.
Anyhow, I thought "djb" would ring a bell for anyone who has been around for a while: not just those who were around in the early 90s, when he held renegade opinions, expressed them in his programming style (qmail, djbdns, etc.), and was dragged to court over ITAR issues, but also anyone who knows his later work in cryptography and the cr.yp.to site he has run for quite a long time.
https://cr.yp.to/
I was just wondering; I did not intend to start an argument.
Is this because they're that famous though or simply because there weren't as many people in the scene back then? We just don't do the initials thing anymore.
Yes: the fame is the subtext. It's akin to mononyms; they'd be referring to famous people like Shakira, Madonna, or Beyoncé. A lot of us have first names, but the point isn't that one's family calls them "Dave" without ambiguity.
There were many Unix instances, and likely multiple djb logins around the world, but there's only one considered to be the djb, and that's due to fame.
It's wild how much he looks like ryg, another 3 letter genius
I would really like to see Omarchy go in this direction. A fully memory-safe userland for Omarchy is possible with existing technology.
Can you elaborate why Omarchy? I'm asking, in context of recompiling with Fil-C, because that seems to be just Arch + configurations.
I can't wait for all the delicious four-way flamewars. Choose your fighter!
1) Rewrite X in Rust
2) Recompile X using Fil-C
3) Recompile X for WASM
4) Safety is for babies
There are a lot of half baked Rust rewrites whose existence was justified on safety grounds and whose rationale is threatened now that HN has heard of Fil-C
Fil-C has come up on HN plenty of times before. If it was going to make much of a dent in the discussions, it would have by now.
It's strange how ideas seem to explode at random into the discourse despite being known for a long time. It's as if some critical mass stumbles on a thing and it becomes "the current thing" everyone talks about until the next current thing.
Obviously someone needs to rewrite Rust in Fil-C
Yeah, since Fil-C is just an LLVM transform, we could make Rust memory-safe with it.
We have a saying that jam is made of fruit that gave up the fight to become brandy.
I'm in camp 2.
Wish we were talking about making Fil-C required for apt, not Rust...
Those seem to be independent issues. Fil-C is about the best way to compile/run existing C code.
Rust would be about what language to use for new code.
Now that I have been programming in Rust for a couple of years, I don't want to go back to C (except for some hobby projects).
I agree. The main advantage of Fil-C is compatibility with C, in a secure way. The disadvantages are speed and garbage collection. (That said, I have read that garbage collection might not be needed in some cases; I would be very interested in the details.)
For new code, I would not use Fil-C. For kernels and low-level tools, other languages seem better. Right now, Rust is the only popular language in this space that doesn't have these disadvantages. But in my view, Rust also has issues, especially the borrow checker and code verbosity. Maybe in the future there will be a language that resolves these as well (as a hobby, I'm trying to build such a language). But right now, Rust seems to be the best choice for the kernel (for code that needs to be fast and secure).
> disadvantages are speed, and garbage collection.
And size: about a 10x increase both on disk and in memory.
How does that compare with Rust? You don't happen to have an example of a binary that is moving to Rust in Ubuntu-land as well? Curious to see, as I honestly don't know whether Rust is nimble like C or not.
My impression is that Rust fares a bit better on RAM footprint, and about as badly on on-disk binary size. It's darn hard to compare apples to apples, though, given it's a different language, so everything is a rewrite. One example:
Ubuntu 25.10's rust "coreutils" multicall binary: 10828088 bytes on disk, 7396 KB in RAM while doing "sleep".
Alpine 3.22's GNU "coreutils" multicall binary: 1057280 bytes on disk, 2320 KB in RAM while doing "sleep".
I don't have numbers, but Rust is also terrible for binary size. Large Rust binaries can be improved with various efforts, but it's not friendly by default. Rust focuses on runtime performance, high-level programming, and compile-time guarantees, but compile times and binary sizes are the drawback. Notably, Rust prefers static linking.
Fil-C is slow.
There is no memory-safe C or C++ compiler with acceptable performance for kernels, rendering, games, etc. For that you need Rust.
The future includes Fil-C for legacy code that isn’t performance sensitive and Rust for new code that is.
I imagine Apt is usually IO constrained?
That's my guess, yeah
Also, Fil-C's overheads are the lowest for programs that are pushing primitive bits around.
Fil-C's overheads are the highest for programs that chase pointers.
I'm guessing the CPU-bound bits of apt (if there are any) are more of the former.
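To make the distinction concrete (an illustrative C sketch only, not a claim about apt's actual code): roughly speaking, every pointer dereference carries a check under Fil-C, so a list walk pays on every step, while a loop over a flat buffer amortizes the cost of its single pointer.

    #include <stddef.h>

    struct node { int v; struct node *next; };

    /* Pointer chasing: each `n->next` load is a separate checked access. */
    int sum_list(const struct node *n) {
        int s = 0;
        for (; n; n = n->next)
            s += n->v;
        return s;
    }

    /* Primitive bits: one pointer, then a tight arithmetic loop over it. */
    int sum_array(const int *a, size_t n) {
        int s = 0;
        for (size_t i = 0; i < n; i++)
            s += a[i];
        return s;
    }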
How slow? In some contexts, the trade-off might be acceptable. From what I've seen in pizlonator's tweets, in some cases the difference in speed didn't seem drastic to me.
Yeah, I would happily run a bunch of my network services in this. I have loads of services that are public-facing doing a lot of complex parsing and rule evaluation and are mostly idle. For example my whole mailserver stack could probably benefit from this. My few messages an hour can run 2x slower. Maybe I would leave dovecot native since the attack surface before authentication is much lower and the performance difference would be more noticeable (mostly for things like searches).
You may be aware that one of the things Bernstein is famous for is revolutionizing mailserver security.
What does that have to do with apt?
Enough of it is performance sensitive that Fil-C is not an option.
Fil-C is useful for the long tail of C/C++ that no one will bother to rewrite and is still usable if slow.
How is apt performance sensitive?
Apt has been painfully slow since I started using Debian last millennium, but I suspect it's not because it uses a lot of CPU, or it would be snappy by now.
It parses formats and does TLS; I'm assuming it'd be quite bad. I don't think you can mix and match.
stuff that talks to "the internet" and runs as "root" seems like a good thing to build with filc.
It probably uses OS sandboxing primitives already.
Doesn't it only work on x86_64?
I wish we had something like Fil-C as an option for unsafe Rust.
Fil-C works because you recompile the whole C userspace. Unsafe Rust doesn't do that... and for many practical purposes you probably want to touch the non-safe version of the C userspace.
Still, it's all LLVM, so perhaps unsafe Rust for Fil-space can be a thing, a useful one for catching (what would be) UBs even [Fil-C defines everything, so no UBs, but I'm assuming you want to eventually run it outside of Fil-space].
Now I actually wonder if Fil-C has an escape hatch somewhere for syscalls that it does not understand etc. Well it doesn't do inline assembly, so I shouldn't expect much... I wonder how far one needs to extend the asm clobber syntax for it to remotely come close to working.
Unsafe Rust actually has a great runtime analyzer: Miri. It's very easy to just run `cargo +nightly miri test` in your project to get some confidence in the more questionable choices along the way.
Building tools is one thing; building a system like Postgres or another database is going to be quite another.
Has anyone really tried building PG or MySQL, or a similarly complex system that relies heavily on IO operations and multithreading?
Look at how fanatical the compatibility actually is. Building Postgres or MySQL is conceivable, but it will probably require some changes. (SQLite compiles and runs with zero changes right now.)
Thanks for checking! I was wondering.
If you run Nix (whether on NixOS or elsewhere) you can do `cachix use filc` and `nix run github:mbrock/filnix#sqlite` and it should drop you into a Fil-C SQLite after downloading the runtime dependencies from my binary cache (no warranty)!
Thanks!