This OpenPGP and GnuPG criticism is brought up regularly here, but the proposed alternatives come with their own downsides: some of those are proprietary, some are centralized systems or depend on such. In addition to all the inconvenience, when such centralized systems are blocked, casual users switch to explicitly backdoored options. The advertised IMs are tied to phone numbers, introducing both privacy and availability issues. Almost nothing of that is available from Linux distributions' system repositories. Integration with other software and infrastructures is lacking. Dealing with multiple specialized tools is more of a headache even for expert users, especially when their added benefits do not make much sense given one's threat model. OpenPGP/GnuPG is more resilient and versatile than those, still usable where those are not.
I think such an article would seem more convincing, at least to me, if more sensible alternatives were proposed. Ideally without the advice to not encrypt email, without assumptions of continued availability of all the online services, of trust to certain third parties, and so on. Or it could be just a plain criticism without suggestions, which would still be somewhat informative.
Edit: there is another list of alternatives in a sibling comment, advising against (well, actually being quite hostile towards, and generally impolite) usage of what I had in mind as one of the possible more sensible alternatives: XMPP with OMEMO. Though upon skimming the criticism of that, I have not found it particularly convincing, either, and it just looks like some authors try to be particularly provocative/edgy.
I have no issues with it, and am actually happy to see alternative implementations. Possibly because I did not use it much, but it does look fine to me. Not as a complete GPG replacement yet, since some software still depends on GPG, but a viable one, and a suitable one for most manual CLI usage (ignoring that its version on slightly older systems has a different interface, adding a bit of confusion; hopefully it is stable now). It was not listed among the suggested alternatives in the linked article though, and from what I gather, the author would not be happy with it, either.
Since you mentioned me: what's the point? It would be one thing if you could (1) use Sequoia, (2) be assured of modern cryptography, and (3) maintain compatibility with the majority of the installed base of PGP users. But you can't. That being the case, why put up with all the PGP problems that Sequoia can't address? You're losing compatibility either way, so use an actually-good cryptosystem.
One of the premises of modern cryptographic engineering is security under a hostile setting: it shouldn’t matter to a chat protocol that a server is proprietary or a network is centralized if the design itself is provably end-to-end encrypted. The server could be run by Satan and it wouldn’t matter.
(Centralization itself is a red herring. One may as well claim that PGP is centralized, given that there’s only one prominent keyserver still limping around the Internet.)
But even this jumps ahead, given that the alternatives are not in fact proprietary. The list of open source tool alternatives has been the same for close to a decade now:
* For messaging/secure communication, use Signal. It’s open source.
* For file encryption, use age. It’s open source and has multiple mature implementations by well-regarded cryptographic engineers.
* For signing, use minisign, or Sigstore, or even ssh signing. All are open source.
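For concreteness, here is roughly what day-to-day usage of these tools looks like, as a sketch assuming the age v1 CLI is installed; the minisign and ssh-signing one-liners at the end are shown only as comments and use illustrative filenames:

```shell
# Generate an age identity; age-keygen -y derives the public
# recipient string from the identity file
age-keygen -o key.txt
recipient=$(age-keygen -y key.txt)

# Encrypt to that recipient, then decrypt with the identity file
echo 'hello' > notes.txt
age -r "$recipient" -o notes.txt.age notes.txt
age -d -i key.txt notes.txt.age    # prints: hello

# Signing with minisign or ssh is similarly terse:
#   minisign -Sm release.tar.gz                            # writes release.tar.gz.minisig
#   ssh-keygen -Y sign -f ~/.ssh/id_ed25519 -n file release.tar.gz
```

No keyservers, no trust model to configure: you hand the recipient string or public key to your peer out of band.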
Yes, but security usually includes availability, and I mentioned a setting with service blocking above. Like that by a government.
> (Centralization itself is a red herring. One may as well claim that PGP is centralized, given that there’s only one prominent keyserver still limping around the Internet.)
How is it a red herring?
> For messaging/secure communication, use Signal. It’s open source.
From my point of view, it is complicated by Signal being blocked here (and it being centralized helped to establish such blocking easily), likely the phone number verification won't work here, it is not available without a phone, and it is not available from F-Droid repositories on top of that. Currently money transfers are also complicated, so finding some foreign service that would help to circumvent phone number verification is also complicated, and not something I would normally do even without that. All this Internet blocking is a new development here, but such availability issues due to centralization were anticipated for a long time, and are a major motivation behind federated or distributed systems. Some mail servers are also being blocked, but generally mail still works, and is less of a pain to use.
> For file encryption, use age. It’s open source and has multiple mature implementations by well-regarded cryptographic engineers.
> For signing, use minisign, or Sigstore, or even ssh signing. All are open source.
These I find to be okay. They have to be installed in addition to GnuPG, which is usually already available, but that is to be expected; they are available at least from Debian repositories, so not something to complain about when considering alternatives. Likewise with key sharing: one does not get to reuse OpenPGP's PKI and will have to replace it somehow, but it is not like it is used widely and consistently anyway, so perhaps not much of a loss in practice. Likewise with user familiarity: I would expect a little more friction with such tools, compared to GnuPG, but not much more. And I don't see actual usage downsides apart from those. The benefits also seem a bit uncertain, but generally that sounds like a switch worth considering.
It’s a red herring because systems that achieve end-to-end security do so regardless of whether the underlying hosts are centralized or not. A typical network adversary wants you to downgrade the security properties of your protocol in the presence of an unreliable network, so they can pull more metadata out of you.
Sorry, that was an unduly inflammatory framing from me. You’re right that keys.openpgp.org runs smoothly, particularly in contrast to the previous generation of SKS hosts. I don’t think it comes close to meeting the definition of a decentralized identity distribution system, however.
I tried to find something in the article that bothered me, but I don’t find it very convincing. Points like "someone can forward your email unencrypted after they decrypt it" are just... well, yeah - that can happen no matter what method you choose. It feels like GPG gets hate for reasons other than what’s actually mentioned, and I'm completely oblivious to what those reasons might be.
It's not that someone can forward your mail unencrypted. It's that in the normal operation of the system, someone taking the natural next step in a conversation (replying) can --- and, in the experience of everyone I've talked to who has used PGP in anger for any extended period of time, inevitably does --- destroy the security of the entire conversation by accidentally replying in plaintext.
That can't happen in any modern encrypted messenger. It does happen routinely with encrypted email.
pgp as a tool could integrate with that, but in practice fails for... many reasons, the above included. All the other key exchange / etc issues as well.
well that's fair, but sounds more like a email client issue than an actual issue with gpg/pgp. My client shows pretty clearly when it gets encrypted. But maybe I am oblivious.
I agree that it's an email problem, which is why I wrote a whole article about why email can't be made secure with any reasonable client. But email is overwhelmingly the messaging channel PGP users use; in fact, it's a commonly-cited reason why people continue to use PGP (because it allows them to encrypt email).
A protocol that doesn’t enforce security and relies on clients to choose to implement it is a broken protocol, from a security standpoint.
Even if secure email clients exist that always make right choices, because you can’t know what client all your recipients are using, all it takes is one person with a “bad” client (which, keep in mind, is a client that accurately implements the protocol but doesn’t enforce additional security rules on top) to ruin things.
Some Ukrainians may regret that they followed the Signal marketing. I have never heard of a real world exploit that has actually been used like that against gpg.
Those people shouldn't be, and thankfully aren't, using PGP. Nobody is suppressing this report on phishing attacks against Signal users; it's just not as big a deal as what's wrong with PGP.
Accidentally replying in plaintext is a user error, scanning a QR code is a user error.
Yet one system is declared secure (Signal), the other is declared insecure. Despite the fact that the QR code issue happened in a war zone, whereas I have not heard of a similar PGP fail in the real world.
First of all, accidentally replying in plaintext is hardly the only problem with PGP, just the most obvious one. Secondly, it's not user error: modern messaging cryptography is designed not to allow it to happen.
Modern cryptography should also not allow users to activate a sketchy linked device feature by scanning a QR code:
"Because linking an additional device typically requires scanning a quick-response (QR) code, threat actors have resorted to crafting malicious QR codes that, when scanned, will link a victim's account to an actor-controlled Signal instance."
This is a complete failure of the cryptosystem, worse than the issue of responding in plaintext. You can at least design an email client that simply refuses to send plaintext messages because PGP is modular.
As mentioned a few days ago, this post mainly covers a gpg problem, not a PGP problem.
I recommend people spend some time and try out sequoia (sq) [0][1], which is a sane, clean-room re-implementation of OpenPGP in Rust. For crypto, it uses the backend you prefer (including openssl; no more libgcrypt!), and it isn't just a CLI application but also a library you can invoke from many other languages.
It does signing and/or encryption, with modern crypto including AEAD, Argon2, and PQC.
Sure, it still implements OpenPGP/RFC 9580 (which is not the ideal format most people would define from scratch today) but it throws away the dirty water (SHA1, old cruft) while keeping the baby (interoperability, the fine bits).
But if you use the modern crypto stuff you lose interoperability, right? What is the point of keeping the cruft of the format if you still won't have compatibility when you use the modern crypto? The article mentions this:
> Take AEAD ciphers: the Rust-language Sequoia PGP defaulted to the AES-EAX AEAD mode, which is great, and nobody can read those messages because most PGP installs don’t know what EAX mode is, which is not great.
Other implementations also don't support stuff like Argon2.
So it feels like the article is on point when it says
> You can have backwards compatibility with the 1990s or you can have sound cryptography; you can’t have both.
When you encrypt something, you are the one deciding which level of interoperability you want, and you can select the crypto primitives matching the capabilities you know your recipient reasonably has. I don't see anything special about this: when you run a web service, you also decide whether you want to talk to TLS 1.0 clients (hopefully not).
sequoia's defaults are reasonable as far as I remember. It's also a bit strange that the post found it defaulted to using AEAD in 2019, when AEAD was standardized only in 2024 with RFC 9580.
But the elephant in the room is that gpg famously decided to NOT adopt RFC 9580 (which Sequoia and Proton do support) and stick to a variant of the older RFC (LibrePGP), officially because the changes to the crypto were seen as too "ground-breaking".
I think GP’s point isn’t that you don’t have the freedom to decide your own interoperability (you clearly do), but that the primary remaining benefit of PGP as an ecosystem is that interoperability. If you’re throwing that away, then there’s very little reason to shackle yourself to a larger design that the cryptographic community (more or less) unanimously agrees is dangerous and antiquated.
It is not a coincidence that most of the various proposed alternatives to PGP (signal, wormhole, age, minisign, etc) are led by a single golden implementation and neither support nor promote community-driven specifications (e.g., at the IETF).
Over the decades, PGP has already transitioned out of old key formats or old crypto. None of us is expecting to receive messages encrypted with BassOmatic (the original encryption algorithm by Zimmermann) I assume? The process has been slow, arguably way slower than it should have after the advancements in attacks in the past 15 years (and that is exactly the crux behind the schism librepgp/opengpgp). Nonetheless, here we are, pointing at the current gpg as "the" interoperable (yet flawed) standard.
In this age, when implementations are expected (sometimes by law) to be ready to update more quickly, the introduction of new crypto can take into account adoption rates and the specific context one operates in. And still, that happens within the boundaries of a reasonably interoperable protocol.
TLS 1.3 is a case in point - from certain points of view, it has been a total revolution and break with the past. But from many others, it is still remarkably similar to the previous TLS as before, lots of concepts are reused, and it can be deemed as an iteration of the same standard. Nobody is questioning its level of interoperability, and nobody is shocked by the fact that older clients can't connect to a TLS 1.3-only server.
You're right, it's not a coincidence. The track record of standards-body-driven cryptography is wretched. It's why we all use WireGuard and not IPSEC. TLS 1.3 is an actually good protocol, but it took for-ev-er to get there, and part of that process involved basically letting the cryptographers seize the microphones and make decisions by fiat in the 1.2->1.3 shift (TLS 1.3 also follows a professionalization at CFRG). It's the exception that proves the rule. Its contemporaneous sibling is WPA3 and Dragonfly, and look how that went.
I wrote the post and object to the argument that it primarily covers GnuPG issues.
But stipulate that it does, and riddle me this: what's the point? You can use Sequoia set up for "modern crypto including AEAD", yes, but now you're not compatible with the rest of the installed base of PGP.
If you're going to surrender compatibility, why on Earth would you continue to use OpenPGP, a design mired in 1990s decisions that no cryptography engineer on the planet endorses?
You're missing my point. I agree that you can use Sequoia to communicate between peers also using Sequoia. But you're no longer compatible with the overwhelming majority of PGP deployments. So what's the point? Why not just use a modern tool with that same group of peers?
Even though I read so many posts criticizing PGP, it's still difficult for me to find an alternative. He states in the article that being a "Swiss Army Knife" is bad. I understand the argument, but this is precisely what makes GPG so powerful. The scheme of public keys, private keys, revoke, embedded WOT, files, texts, everything. They urgently need to make a "modern version" of GPG. He needs a replacement, otherwise he'll just be whining.
I was also frustrated with this criticism in the past, but there are definitely some concrete alternatives provided for many use cases there. (But not just with one tool.)
I’m still frustrated by the criticism because I internalized it a couple of years ago and tried to move to age+minisig, because those are the only 2 scenarios I personally care about. The overall experience was annoying, given that the problems with pgp/gpg are esoteric and abstract enough that, unless I’m personally worried about a targeted attack against me, they are fine-ish.
If someone scotch-tapes age+minisig together and convinces git/GitHub/GitLab/Codeberg to support it, I’ll be so game it’ll hurt. My biggest usage of pgp is asking people doing bug reports to send me logs, and giving them my pgp keys if they are worried and don’t want to publicly post their log file. 99.9% of people don’t care, but I understand the 0.1% who do. The other uses are to sign my commits and to encrypt my backups.
Ps: the fact that this post is recommending Tarsnap and magicwormhole shows how badly it has aged in 6 years IMO.
Is this about commit signing? Git and all of the mentioned forges (by uploading the public key in the settings) support SSH keys for that afaik.
git configuration:

    gpg.format = ssh
    user.signingkey = /path/to/key.pub

If you need local verification of commit signatures, you also need gpg.ssh.allowedSignersFile to list the known keys (including yours). ssh-add can remember credentials. Security keys are supported too.
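Pulling those scattered settings together, a minimal end-to-end setup might look like this (a sketch assuming git 2.34+, an existing ed25519 key, and illustrative paths):

```shell
# Sign commits with an SSH key instead of GPG (git 2.34+)
git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519.pub
git config --global commit.gpgsign true   # sign every commit by default

# Local verification reads an allowed-signers file, one trusted key per line:
#   you@example.com ssh-ed25519 AAAAC3...
git config --global gpg.ssh.allowedSignersFile ~/.config/git/allowed_signers

# After this, `git log --show-signature` verifies locally, and the forges
# verify against the public key uploaded in your account settings.
```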
Has Tarsnap become inadequate, security-wise? The service may be expensive for a standard backup. It had a serious bug in 2011, but hasn't it been adequate since then?
I don’t know anything that makes me think it’s inadequate per se, but it’s also been more than 10 years since I thought about it. Restic, gocryptfs, and/or age are far more flexible, generic and flat out better in managing encrypted files/backups depending on how you want to orchestrate it. Restic can do everything, gocryptfs+rclone can do more, etc.
It’s just not the same thing. There is significant overlap, but it’s not enough to be a reasonable suggestion. You can’t suggest a service as a replacement for a local offline tool. It’s like saying “Why do you need VLC when you can just run peertube?”. Also since then, age is the real replacement for pgp in terms of sending encrypted files. Wormhole is a different use case.
There are two parts of "sending encrypted files": the encryption and the sending. An offline tool (e.g. PGP or age) seems only necessary when you want to decouple the two. After all, you can't do the sending with an offline tool (except insofar as you can queue up a message while offline, such as with traditional mail clients).
The question thereby becomes "Why decouple the sending from encryption?"
As far as I can see, the main (only?) reason is if the communication channel used for sending doesn't align with your threat model. For instance, maybe there are multiple parties at the other end of the channel, but you only trust one of them. Then you'd need to do something like encrypt the message with that person's key.
But in the use-case you mentioned (not wanting to publicly post a log file), I don't see why that reason would hold; surely the people who would send you logs can trust Signal every bit as easily as PGP. Share your Signal username over your existing channel (the mailing list), thereby allowing these people to effectively "upgrade" their channel with you.
Sticking to the use case of serving that 0.1% of users, why can’t a service or other encrypted transport be a solution? Why doesn’t Signal fit the bill for instance?
> The so-called web of trust is meaningless security theatre.
Ignoring your comment’s lack of constructive criticism, I’m going to post this meaningful implementation that an excellent cryptographer, Soatok Dreamseeker, is working on: [1].
You may also search for his posts in this HN thread, his nickname is “some_furry”.
Keyservers already “solved” this problem without needing federation because we only needed one keyserver anyway. Federating them isn’t going to do anything. Web of trust is a broken system that sounds super cool until you try to really use it. It has so many flaws that there’s really no way to revive it. Keybase tried to do something about it and also failed.
The biggest issue with PGP/gpg is the difficulty of getting rid of it. If you work on big distros, or know someone who works on big distros, please (start asking them to) add https://github.com/jedisct1/minisign to pre-installed packages to facilitate the transition. It's almost a chicken-and-egg problem, but the sad thing is that no project wants to swap the signing tool for a better one until everyone can verify the new signatures.
All software has bugs. But having a small purpose-built program do one thing well is much smaller attack surface. The Unix philosophy also makes a pretty good security argument.
I wasn’t aware of the efail disclosure timeline. Apparently Koch responds to the report by noting that GPG prints an error when MDC is stripped, which has eerie parallels to the justification behind the recent gpg.fail WONTFIX response (see https://news.ycombinator.com/item?id=46403200)
I think the two cases are different. The EFAIL researchers were suggesting that the PGP code (whatever implementation) should throw an error on an MDC integrity error and then stop. The idea was that this would be a fix for EFAIL in that the modified message would not be passed on to the rest of the system and thus was failsafe. The rest of the system could not pass the modified message along to the HTML interpreter.
In the gpg.fail case the researchers suggested that GPG should, instead of returning the actual message structure error (a compression error in their case), return an MDC integrity error instead. I am not entirely clear why they thought this would help. I am also not sure if they intended all message structure errors to be remapped in this way or just the single error. A message structure error means that all bets are off so they are in a sense more serious than a MDC integrity error. So the suggestion here seems to be to downgrade the seriousness of the error. Again, not sure how that would help.
In both cases the researchers entirely ignored regular PGP authentication. You know, the thing that specifically is intended to address these sorts of things. The MDC was added as an afterthought to support anonymous messages. I have come to suspect that people are actually thinking of things in terms of how more popular systems like TLS work. So I recently wrote an article based on that idea:
It's occurred to me that it is possible that the GnuPG people are being unfairly criticized because of their greater understanding of how PGP actually works. They have been doing this stuff forever. Presumably they are quite aware of the tradeoffs.
I agree that age + minisign comprise a much neater stack that does basically everything I would need to use PGP for.
Neither of them supports hardware keys though, as far as I can see. OTOH ssh and GnuPG do support hardware keys, like smart cards or Yubikey-like devices. I suppose by the same token (not a pun, sadly) they don't support various software keychains provided by OSes either, since they don't support any external PKCS11 providers (the way ssh does).
This may reduce the attack needed to steal a private key to a simple unprivileged infiltration, e.g. via code run during installation of a compromised npm package, or similar.
The minisign bug was much less severe than the (insane) GPG signing bugs, and the age bug wasn't a cryptographic thing at all, just a dumb path sanitization thing. Minisign was not in fact affected by most everything GPG was. The GnuPG team wontfixed one of the most significant bugs!
How does this help people who are not following this issue regularly? gpg protected Snowden, and this article promotes tools by one of the cryptographers who promoted non-hybrid encryption:
"In June 2013, Cryptocat was used by journalist Glenn Greenwald while in Hong Kong to meet NSA whistleblower Edward Snowden for the first time, after other encryption software failed to work."
So it was used when Snowden was already on the run, other software failed and the communication did not have to be confidential for the long term.
It would also be an indictment of messaging services as opposed to gpg. gpg has the advantage that there is no money in it, so there are unlikely to be industry or deep state shills.
Signal was made by people who then used it to push their get-rich-quick cryptocurrency scheme on users and who threw all their promises of being open-source and reproducible overboard for it. The Signal people are absolutely not trustworthy for reasons of money and greed.
> Signal was made by people who then used it to push their get-rich-quick cryptocurrency scheme on users and who threw all their promises of being open-source and reproducible overboard for it.
There's a lot to be said for the utility of reverse engineering tools and skills, but I did not need them, because it was open source. Because Signal's client software still is open source.
Whatever you think about MobileCoin, it doesn't actually intersect with the message encryption features at all. At all.
The only part in Signal that's not entirely open source are the anti-spam features baked into the Signal Server software.
And, frankly, the security of end-to-end encryption messaging apps has so little to do with whatever the server software is doing that it's silly to consider that relevant to these discussions. https://soatok.blog/2025/07/09/jurisdiction-is-nearly-irrele...
> Because Signal's client software still is open source.
Only when you can trust that the published client source code is equivalent to the distributed client binaries. The only way to do this is reproducible builds, since building your own client is frowned upon and sometimes actively prevented by the signal people. Signal has always been a my-way-or-the-highway centralized cathedral, no alternate implementations, no federation, nothing. Which was always a suspicious thing. Also, "the signal client is open source software" only holds if you don't count the proprietary Google blobs that the signal binary does contain: FCM and Maps. Those live in the same process and can do whatever to E2EE...
About the signal client that does the E2EE, reproducible builds are frequently broken for the signal client, e.g.:
https://github.com/signalapp/Signal-Android/issues/11352 and https://github.com/signalapp/Signal-Android/issues/13565 and many more. Just search their issue tracker. The latter one was open for 2 years, so reproducible builds were broken at least during 2024 and most of 2025 for the client. They don't keep their promise and don't prioritize fixing those issues, because they just don't care. People do trust them blindly, and the Signal people rely on that blind trust. Case in point: you yourself reviewed their code and probably didn't notice that it wasn't the code for the binary they were distributing at the time.
Now you might say that reproducible builds in the client you reviewed weren't affected by their Mobilecoin cash grab, and you are right, but it shows a pattern in that they don't care, and even lots of professionals singing their praises don't care.
And their server code does affect your privacy even with E2EE. The server can still maliciously correlate who talks to whom. You have to trust their published source code correctly doing its obfuscation of that, otherwise you get metadata leaks the same as in all other messengers. The server can also easily impersonate you, read all your contacts and send them to evil people. "But Signal protects against this", you say? Well, it does by some SGX magic and the assurance that the code inside the enclave does the right thing. But they clearly don't care about putting their code where their mouth is, they rather put their code where the money was. Behind closed doors, until they could finish their Mobilecoin thingy.
>> The Signal people are absolutely not trustworthy for reasons of money and greed.
> I don't think you've raised sufficient justification for this point.
Trust is hard to earn and easy to squander. They squandered my trust and did nothing to earn it back. Their behavior clearly shows they don't care about trust, because they frequently break their reproducibility and are slow to fix it. They cared more about their coin thing. They are given trust, even by professionals who should know better, because their cryptography is cool. But cryptography isn't everything, and one should not trust them, because they obviously are more interested in Mobilecoin than in trust. What more is there to justify, it's obvious imho.
After reading the PyCon 2016 presentation about wormhole, and assuming my understanding of channels is correct (that is, each session on the same wireless network constitutes a channel): what's stopping a hostile third party, who wishes to stop a file transfer from happening, from spamming every channel with random codes?
One use case I've not seen covered is sending blobs asynchronously with forward secrecy. Wormhole requires synchronously communicating the password somehow, and Signal requires reasonable buy-in by the recipient.
Basically, I'd like to just email sensitive banking and customer data in an encrypted attachment without needing to trust that the recipient will never accidentally leak their encryption key.
One of the projects I alluded to in that post makes a technological solution to what you want easy to build, but the harder problem to solve is societal (i.e., getting it adopted).
My current project aims to bring Key Transparency to the Fediverse for building E2EE on ActivityPub so you can have DMs that are private even against instance moderators.
One of the things I added to this design was the idea of "Auxiliary Data" which would be included in the transparency log. Each AuxData has a type identifier (e.g. "ssh-v2", "age-v1", "minisign-v0", but on the client-side, you can have friendly aliases like just "ssh" or "age"). The type identifier tells the server (and other clients) which "extension" to use to validate that the data is valid. (This is to minimize the risk of abuse.)
As this project matures, it will be increasingly easy to do this:
    // @var pkdClient -- A thin client-side library that queries the Public Key Directory
    // @var age -- An implementation of age
    async function forwardSecureEncrypt(file, identity) {
      const agePKs = await pkdClient.FetchAuxData(identity, "age");
      if (agePKs.length === 0) {
        throw new Error("No age public keys found");
      }
      return age.Encrypt(file, agePKs[0]);
    }
And then you can send the encrypted file in an email without a meaningful subject line and you'll have met your stated requirements.
(The degree of "forward secure" here depends on how often your recipient adds a new age key and revokes their old one. Revocation is also published through the transparency log.)
However, email encryption is a bigger mess than most people quite appreciate, so I'm blogging about that right now. :)
PGP/GPG is a complicated mess designed in the 1990's and only incrementally updated to add more complexity and cover more use-cases, most of which you'll never need. Part of PGP/GPG is supporting a large swath of algorithms (from DSA to RSA to ECDSA to EdDSA to whatever post-quantum abomination they'll cook up next).
Signify/Minisign is Ed25519. Boring, simple, fit-for-purpose.
PGP is horrible and way overly complicated but this article concludes by trading that for a long list of piecemeal solutions, some of which are cloud based and semi or fully proprietary.
PGP has hung on for a long time because it “works” and is a standard. The same can be said for Unix, which is not actually a great OS. A modern green field OS designed by experienced people with an eye to simplicity and consistency would almost certainly be better. But who’s going to use it?
GPG, like OpenSSL, is too huge and complex to use on a daily basis.
OpenBSD has signify, which works fine. But I wouldn't mind something like a cleaned-up age(1) without the mentioned issues.
GNU tends to stack features like crazy. That made sense against the limited Unix tools of the 90's, but nowadays 'ls -F', oksh with completion, and the like make them decent enough while respecting your freedom and not being overfeatured.
If there's one thing we learned from the Snowden leaks, it's that the NSA can't break GPG.
Look at it from the POV of someone who, like me, isn't an expert: on the one hand I have ivory tower researchers telling me that GPG is "bad". On the other hand I have the fact that the most advanced intelligence agency in the world can't break it. My personal conclusion is that GPG is actually fucking awesome.
My impression is that GPG when used correctly is secure. But there are so many problems with it that the chances of shooting yourself with one of the footguns is too high for it to be a reliable solution.
The alternatives support newer encryption methods, and nothing about them is fundamentally less secure, but they have fewer footguns to worry about.
The weakest link in cryptography is always people.
The NSA can't break GPG assuming everything is working properly. This blog post (which to be fair I only skimmed) explains that GPG is a mess which could lead to things not working properly, and also gives real life examples. You may also want to see https://gpg.fail (you can tell they're from the ivory tower by the cat ears). The blog post also mentions bad UX, which you and I can directly appreciate (if anything I might expect ivory tower types to dismiss UX issues).
I am well familiar with that presentation at CCC. Yes, the presentation is by people who live in the low-stakes world of theoreticals, as you can tell by the cat ears.
> If you’d like empirical data of your own to back this up, here’s an experiment you can run: find an immigration lawyer and talk them through the process of getting Signal working on their phone.
> Long term keys are almost never what you want. If you keep using a key, it eventually gets exposed.
Having a sentence praising Signal followed by a sentence explaining the main critique of Signal (requiring a mobile number) makes me question the whole post's credibility.
This OpenPGP and GnuPG criticism is brought up regularly here, but the proposed alternatives come with their own downsides: some of those are proprietary, some are centralized systems or depend on such. In addition to all the inconvenience, when such centralized systems are blocked, casual users switch to explicitly backdoored options. The advertised IMs are tied to phone numbers, introducing both privacy and availability issues. Almost nothing of that is available from Linux distributions' system repositories. Integration with other software and infrastructures is lacking. Dealing with multiple specialized tools is more of a headache even for expert users, especially when their added benefits do not make much sense given one's threat model. OpenPGP/GnuPG is more resilient and versatile than those, still usable where those are not.
I think such an article would seem more convincing, at least to me, if more sensible alternatives were proposed. Ideally without the advice to not encrypt email, without assumptions of continued availability of all the online services, of trust to certain third parties, and so on. Or it could be just a plain criticism without suggestions, which would still be somewhat informative.
Edit: there is another list of alternatives in a sibling comment, advising against (well, actually being quite hostile towards, and generally impolite) usage of what I had in mind as one of the possible more sensible alternatives: XMPP with OMEMO. Though upon skimming the criticism of that, I have not found it particularly convincing, either, and it just looks like some authors try to be particularly provocative/edgy.
What is your issue with Sequoia PGP? It is not proprietary, it is not centralized, and it is much better than GnuPG from what I can tell.
I have no issues with it, and am actually happy to see alternative implementations. Possibly because I have not used it much, but it does look fine to me. It is not a complete GPG replacement yet, since some software still depends on GPG, but it is a viable one, and a suitable one for most manual CLI usage (ignoring that its version on slightly older systems has a different interface, adding a bit of confusion; hopefully it is stable now). It was not listed among the suggested alternatives in the linked article though, and from what I gather, the author would not be happy with it, either.
Since you mentioned me: what's the point? It would be one thing if you could (1) use Sequoia, (2) be assured of modern cryptography, and (3) maintain compatibility with the majority of the installed base of PGP users. But you can't. That being the case, why put up with all the PGP problems that Sequoia can't address? You're losing compatibility either way, so use an actually-good cryptosystem.
One of the premises of modern cryptographic engineering is security under a hostile setting: it shouldn’t matter to a chat protocol that a server is proprietary or a network is centralized if the design itself is provably end-to-end encrypted. The server could be run by Satan and it wouldn’t matter.
(Centralization itself is a red herring. One may as well claim that PGP is centralized, given that there’s only one prominent keyserver still limping around the Internet.)
But even this jumps ahead, given that the alternatives are not in fact proprietary. The list of open source tool alternatives has been the same for close to a decade now:
* For messaging/secure communication, use Signal. It’s open source.
* For file encryption, use age. It’s open source and has multiple mature implementations by well-regarded cryptographic engineers.
* For signing, use minisign, or Sigstore, or even ssh signing. All are open source.
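As a concrete illustration of the "ssh signing" option, here is a minimal sketch using only ssh-keygen (the file names and the demo@example.com identity are examples, not anything from the thread):

```shell
# Generate a throwaway Ed25519 key (no passphrase) just for the demo.
ssh-keygen -t ed25519 -N '' -f ./demo_key -q

# Something to sign.
echo "release artifact" > message.txt

# Sign it; "file" is the conventional namespace for plain files.
# This writes the signature to message.txt.sig.
ssh-keygen -Y sign -f ./demo_key -n file message.txt

# Verification needs an allowed_signers list mapping identities to keys.
awk '{print "demo@example.com " $1 " " $2}' demo_key.pub > allowed_signers

# Verify: prints a "Good ... signature" line and exits 0 on success.
ssh-keygen -Y verify -f allowed_signers -I demo@example.com -n file \
  -s message.txt.sig < message.txt
```

Git's SSH-based commit signing (gpg.format = ssh) uses this same mechanism under the hood.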
> security under a hostile setting
Yes, but security usually includes availability, and I mentioned a setting with service blocking above. Like that by a government.
> (Centralization itself is a red herring. One may as well claim that PGP is centralized, given that there’s only one prominent keyserver still limping around the Internet.)
How is it a red herring?
> For messaging/secure communication, use Signal. It’s open source.
From my point of view, it is complicated by Signal being blocked here (and it being centralized helped to establish such blocking easily); phone number verification likely won't work here either, it is not available without a phone, and it is not available from F-Droid repositories on top of that. Currently money transfers are also complicated, so finding some foreign service to help circumvent phone number verification is also complicated, and not something I would normally do even without that. All this Internet blocking is a new development here, but such availability issues due to centralization were anticipated for a long time, and are a major motivation behind federated and distributed systems. Some mail servers are also being blocked, but generally mail still works and is less of a pain to use.
> For file encryption, use age. It’s open source and has multiple mature implementations by well-regarded cryptographic engineers.
> For signing, use minisign, or Sigstore, or even ssh signing. All are open source.
These I find to be okay. They have to be installed in addition to GnuPG, which is usually already available, but that is to be expected; they are available at least from Debian repositories, so not something to complain about when considering alternatives. Likewise with key sharing: you don't get to reuse OpenPGP's PKI and will have to replace it somehow, but it is not like it is used widely and consistently anyway, so perhaps not much of a loss in practice. Likewise with user familiarity: I would expect a little more friction with such tools, compared to GnuPG, but not much more. And I don't see actual usage downsides apart from those. The benefits also seem a bit uncertain, but generally it sounds like a switch that makes sense to consider.
I have no clue how you reached the conclusion of calling it a red herring.
It matters - because Satan can disconnect the centralized nodes.
It’s a red herring because systems that achieve end-to-end security do so regardless of whether the underlying hosts are centralized or not. A typical network adversary wants you to downgrade the security properties of your protocol in the presence of an unreliable network, so they can pull more metadata out of you.
> given that there’s only one prominent keyserver still limping around the Internet
Hey, I take issue with that. keys.openpgp.org is just about the only thing running smoothly in the openpgp ecosystem :P
Sorry, that was an unduly inflammatory framing from me. You’re right that keys.openpgp.org runs smoothly, particularly in contrast to the previous generation of SKS hosts. I don’t think it comes close to meeting the definition of a decentralized identity distribution system, however.
I tried to find something in the article that bothered me, but I don’t find it very convincing. Points like "someone can forward your email unencrypted after they decrypt it" are just... well, yeah - that can happen no matter what method you choose. It feels like GPG gets hate for reasons other than what’s actually mentioned, and I'm completely oblivious to what those reasons might be.
It's not that someone can forward your mail unencrypted. It's that in the normal operation of the system, someone taking the natural next step in a conversation (replying) can --- and, in the experience of everyone I've talked to who has used PGP in anger for any extended period of time, inevitably does --- destroy the security of the entire conversation by accidentally replying in plaintext.
That can't happen in any modern encrypted messenger. It does happen routinely with encrypted email.
Yes, it's a problem with _email_.
pgp as a tool could integrate with that, but in practice fails for... many reasons, the above included. All the other key exchange / etc issues as well.
well that's fair, but it sounds more like an email client issue than an actual issue with gpg/pgp. My client shows pretty clearly when a message is encrypted. But maybe I am oblivious.
I agree that it's an email problem, which is why I wrote a whole article about why email can't be made secure with any reasonable client. But email is overwhelmingly the messaging channel PGP users use; in fact, it's a commonly cited reason why people continue to use PGP (because it allows them to encrypt email).
out of curiosity, would you like to share why you think it's an email protocol problem? Because I see that more as an email client problem
A protocol that doesn’t enforce security and relies on clients to choose to implement it is a broken protocol, from a security standpoint.
Even if secure email clients exist that always make right choices, because you can’t know what client all your recipients are using, all it takes is one person with a “bad” client (which, keep in mind, is a client that accurately implements the protocol but doesn’t enforce additional security rules on top) to ruin things.
Yes, it is odd that this criticism is only allowed for gpg while worse Signal issues are not publicized here:
https://cloud.google.com/blog/topics/threat-intelligence/rus...
Some Ukrainians may regret that they followed the Signal marketing. I have never heard of a real-world exploit that has actually been used like that against gpg.
Why would anyone care if you brought phishing attacks on Signal users up?
People who do not wish to get killed may care.
Those people shouldn't be, and thankfully aren't, using PGP. Nobody is suppressing this report on phishing attacks against Signal users; it's just not as big a deal as what's wrong with PGP.
Accidentally replying in plaintext is a user error, scanning a QR code is a user error.
Yet one system is declared secure (Signal), the other is declared insecure. Despite the fact that the QR code issue happened in a war zone, whereas I have not heard of a similar PGP fail in the real world.
First of all, accidentally replying in plaintext is hardly the only problem with PGP, just the most obvious one. Secondly, it's not user error: modern messaging cryptography is designed not to allow it to happen.
Modern cryptography should also not allow users to activate a sketchy linked device feature by scanning a QR code:
"Because linking an additional device typically requires scanning a quick-response (QR) code, threat actors have resorted to crafting malicious QR codes that, when scanned, will link a victim's account to an actor-controlled Signal instance."
This is a complete failure of the cryptosystem, worse than the issue of responding in plaintext. You can at least design an email client that simply refuses to send plaintext messages because PGP is modular.
As mentioned a few days ago, this post mainly covers a gpg problem, not a PGP problem.
I recommend people spend some time and try out Sequoia (sq) [0][1], which is a sane, clean-room re-implementation of OpenPGP in Rust. For crypto, it uses the backend you prefer (including openssl, no more libgcrypt!), and it isn't just a CLI application but also a library you can invoke from many other languages.
It does signing and/or encryption with modern crypto, including AEAD, Argon2, and PQC.
Sure, it still implements OpenPGP/RFC 9580 (which is not the ideal format most people would define from scratch today) but it throws away the dirty water (SHA1, old cruft) while keeping the baby (interoperability, the fine bits).
[0] https://sequoia-pgp.org/
[1] https://archive.fosdem.org/2025/events/attachments/fosdem-20...
But if you use the modern crypto stuff you lose interoperability, right? What is the point of keeping the cruft of the format if you still won't have compatibility when you use the modern crypto? The article mentions this:
> Take AEAD ciphers: the Rust-language Sequoia PGP defaulted to the AES-EAX AEAD mode, which is great, and nobody can read those messages because most PGP installs don’t know what EAX mode is, which is not great.
Other implementations also don't support stuff like Argon2.
So it feels like the article is on point when it says
> You can have backwards compatibility with the 1990s or you can have sound cryptography; you can’t have both.
When you encrypt something, you are the one deciding which level of interoperability you want, and you can select the crypto primitives matching the capabilities you know your recipients reasonably have. I don't see anything special about this: when you run a web service, you also decide whether you want to talk to TLS 1.0 clients (hopefully not).
Sequoia's defaults are reasonable as far as I remember. It's also a bit strange that the post found it defaulting to AEAD in 2019, when AEAD was standardized only in 2024 with RFC 9580.
But the elephant in the room is that gpg famously decided to NOT adopt RFC 9580 (which Sequoia and Proton do support) and stick to a variant of the older RFC (LibrePGP), officially because the changes to the crypto were seen as too "ground-breaking".
I think GP’s point isn’t that you don’t have the freedom to decide your own interoperability (you clearly do), but that the primary remaining benefit of PGP as an ecosystem is that interoperability. If you’re throwing that away, then there’s very little reason to shackle yourself to a larger design that the cryptographic community (more or less) unanimously agrees is dangerous and antiquated.
It is not a coincidence that most of the various proposed alternatives to PGP (signal, wormhole, age, minisign, etc) are led by a single golden implementation and neither support nor promote community-driven specifications (e.g., at the IETF).
Over the decades, PGP has already transitioned out of old key formats or old crypto. None of us is expecting to receive messages encrypted with BassOmatic (the original encryption algorithm by Zimmermann) I assume? The process has been slow, arguably way slower than it should have after the advancements in attacks in the past 15 years (and that is exactly the crux behind the schism librepgp/opengpgp). Nonetheless, here we are, pointing at the current gpg as "the" interoperable (yet flawed) standard.
In this age, when implementations are expected (sometimes by law) to be ready to update more quickly, the introduction of new crypto can take into account adoption rates and the specific context one operates in. And still, that happens within the boundaries of a reasonably interoperable protocol.
TLS 1.3 is a case in point - from certain points of view, it has been a total revolution and break with the past. But from many others, it is still remarkably similar to the previous TLS as before, lots of concepts are reused, and it can be deemed as an iteration of the same standard. Nobody is questioning its level of interoperability, and nobody is shocked by the fact that older clients can't connect to a TLS 1.3-only server.
You're right, it's not a coincidence. The track record of standards-body-driven cryptography is wretched. It's why we all use WireGuard and not IPSEC. TLS 1.3 is an actually good protocol, but it took for-ev-er to get there, and part of that process involved basically letting the cryptographers seize the microphones and make decisions by fiat in the 1.2->1.3 shift (TLS 1.3 also follows a professionalization at CFRG). It's the exception that proves the rule. Its contemporaneous sibling is WPA3 and Dragonfly, and look how that went.
I wrote the post and object to the argument that it primarily covers GnuPG issues.
But stipulate that it does, and riddle me this: what's the point? You can use Sequoia set up for "modern crypto including AEAD", yes, but now you're not compatible with the rest of the installed base of PGP.
If you're going to surrender compatibility, why on Earth would you continue to use OpenPGP, a design mired in 1990s decisions that no cryptography engineer on the planet endorses?
If you use AEAD, you clearly expect your recipients to use a recent client. Same as if you want to use PQC or any other recent feature.
If your audience is wider, don't use AEAD, but make sure to sign the data too.
With respect to the 90's design, yes, it is not pretty and it could be simpler. It is also not broken and not too difficult to understand.
You're missing my point. I agree that you can use Sequoia to communicate between peers also using Sequoia. But you're no longer compatible with the overwhelming majority of PGP deployments. So what's the point? Why not just use a modern tool with that same group of peers?
Even though I read so many posts criticizing PGP, it's still difficult for me to find an alternative. He states in the article that being a "Swiss Army Knife" is bad. I understand the argument, but this is precisely what makes GPG so powerful. The scheme of public keys, private keys, revocation, embedded WOT, files, texts, everything. They urgently need to make a "modern version" of GPG. He needs a replacement, otherwise he'll just be whining.
There's a section in this post with proposed replacements:
https://www.latacora.com/blog/2019/07/16/the-pgp-problem/#th...
I was also frustrated with this criticism in the past, but there are definitely some concrete alternatives provided for many use cases there. (But not just with one tool.)
I’m still frustrated by the criticism because I internalized it a couple of years ago and tried to move to age+minisig, since those are the only two scenarios I personally care about. The overall experience was annoying, given that the problems with pgp/gpg are esoteric and abstract enough that, unless I’m personally worried about a targeted attack against me, they are fine-ish.
If someone scotch-tapes age+minisig together and convinces git/GitHub/gitlab/codeberge to support it, I’ll be so game it’ll hurt. My biggest usage of pgp is asking people filing bug reports to send me logs, and giving them my pgp key if they are worried and don’t want to publicly post their log file. 99.9% of people don’t care, but I understand the 0.1% who do. The other uses are signing my commits and encrypting my backups.
Ps: the fact that this post is recommending Tarsnap and magicwormhole shows how badly it has aged in 6 years IMO.
> git/GitHub/gitlab/codeberge
Is this about commit signing? Git and all of the mentioned forges (by uploading the public key in the settings) support SSH keys for that afaik.
git configuration:

    gpg.format = ssh
    user.signingkey = /path/to/key.pub

If you need local verification of commit signatures, you also need gpg.ssh.allowedSignersFile to list the known keys (including yours). ssh-add can remember credentials. Security keys are supported too.
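For reference, the same settings as commands (the key path is an example and assumes an Ed25519 SSH key already exists):

```shell
# Tell git to sign with an SSH key instead of gpg (example key path).
git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519.pub

# Needed only for local verification of signatures.
git config --global gpg.ssh.allowedSignersFile ~/.ssh/allowed_signers

# Optional: sign every commit by default.
git config --global commit.gpgsign true
```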
Has Tarsnap become inadequate, security-wise? The service may be expensive for a standard backup. It had a serious bug in 2011, but hasn't it been adequate since then?
You cannot self-host it, and it's not independently verified and audited as a whole system.
For some people, that's important.
I don’t know anything that makes me think it’s inadequate per se, but it’s also been more than 10 years since I thought about it. Restic, gocryptfs, and/or age are far more flexible, generic and flat out better in managing encrypted files/backups depending on how you want to orchestrate it. Restic can do everything, gocryptfs+rclone can do more, etc.
> the fact that this post is recommending Tarsnap and magicwormhole shows how badly it has aged in 6 years
What's wrong with magic wormhole?
It’s just not the same thing. There is significant overlap, but it’s not enough to be a reasonable suggestion. You can’t suggest a service as a replacement for a local offline tool. It’s like saying “Why do you need VLC when you can just run peertube?”. Also since then, age is the real replacement for pgp in terms of sending encrypted files. Wormhole is a different use case.
Adding to my comment since it was downvoted:
There are two parts of "sending encrypted files": the encryption and the sending. An offline tool (e.g. PGP or age) seems only necessary when you want to decouple the two. After all, you can't do the sending with an offline tool (except insofar as you can queue up a message while offline, such as with traditional mail clients).
The question thereby becomes "Why decouple the sending from encryption?"
As far as I can see, the main (only?) reason is if the communication channel used for sending doesn't align with your threat model. For instance, maybe there are multiple parties at the other end of the channel, but you only trust one of them. Then you'd need to do something like encrypt the message with that person's key.
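The decoupling described above can be sketched with plain openssl standing in for PGP or age (a crude RSA sketch, not a recommendation; all file names are examples):

```shell
# Recipient generates a keypair once and publishes the public half.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 \
  -out recipient_key.pem 2>/dev/null
openssl pkey -in recipient_key.pem -pubout -out recipient_pub.pem

# Sender encrypts locally, needing only the public key.
# (Real tools do hybrid encryption; raw RSA caps the message size.)
echo "secret log line" > logs.txt
openssl pkeyutl -encrypt -pubin -inkey recipient_pub.pem \
  -in logs.txt -out logs.txt.enc

# The ciphertext can now travel over any channel: email, HTTP, a pastebin.

# Recipient decrypts offline, independent of how the file arrived.
openssl pkeyutl -decrypt -inkey recipient_key.pem \
  -in logs.txt.enc -out logs.out
```

The encryption step never touches the network, which is exactly what distinguishes this model from a service like Signal or wormhole.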
But in the use-case you mentioned (not wanting to publicly post a log file), I don't see why that reason would hold; surely the people who would send you logs can trust Signal every bit as easily as PGP. Share your Signal username over your existing channel (the mailing list), thereby allowing these people to effectively "upgrade" their channel with you.
Sticking to the use case of serving that 0.1% of users, why can’t a service or other encrypted transport be a solution? Why doesn’t Signal fit the bill for instance?
The so-called web of trust is meaningless security theatre.
>They urgently need to make a "modern version" of GPG.
Absolutely not.
> The so-called web of trust is meaningless security theatre.
Ignoring your comment’s lack of constructive criticism, I’m going to post this meaningful implementation that an excellent cryptographer, Soatok Dreamseeker, is working on: [1].
You may also search for his posts in this HN thread, his nickname is “some_furry”.
[1]: https://github.com/fedi-e2ee/public-key-directory-specificat...
Keyservers already “solved” this problem without needing federation because we only needed one keyserver anyway. Federating them isn’t going to do anything. Web of trust is a broken system that sounds super cool until you try to really use it. It has so many flaws that there’s really no way to revive it. Keybase tried to do something about it and also failed.
Keybase was doing great until it got acquired by Zoom and people felt uneasy about the implications, IIRC
To be clear, this is not Web of Trust. It's using Key Transparency as a means to distribute public keys more securely than TOFU.
If people want to build WoT on top of my design, I won't stop them, but it's not a goal of mine.
The biggest issue with PGP/gpg is the difficulty of getting rid of it. If you work on big distros, or know someone who works on big distros, please (start asking them to) add https://github.com/jedisct1/minisign to pre-installed packages to facilitate the transition. It's almost a chicken-and-egg problem, but the sad thing is that no project wants to swap the signing tool for a better one until everyone can verify the new signatures.
For starters I'd like to see ssh-agent not being replaced with gpg-agent. Those who need it should install it themselves.
Note that minisign was also vulnerable in the gpg.fail exposures
Yes, but not nearly to the same extent. The GPG vulns are staggering in comparison.
All software has bugs. But having a small purpose-built program do one thing well is much smaller attack surface. The Unix philosophy also makes a pretty good security argument.
I wasn’t aware of the efail disclosure timeline. Apparently Koch responds to the report by noting that GPG prints an error when MDC is stripped, which has eerie parallels to the justification behind the recent gpg.fail WONTFIX response (see https://news.ycombinator.com/item?id=46403200)
I think the two cases are different. The EFAIL researchers were suggesting that the PGP code (whatever implementation) should throw an error on an MDC integrity error and then stop. The idea was that this would be a fix for EFAIL in that the modified message would not be passed on to the rest of the system and thus was failsafe. The rest of the system could not pass the modified message along to the HTML interpreter.
In the gpg.fail case the researchers suggested that GPG should, instead of returning the actual message structure error (a compression error in their case), return an MDC integrity error instead. I am not entirely clear why they thought this would help. I am also not sure if they intended all message structure errors to be remapped in this way or just the single error. A message structure error means that all bets are off so they are in a sense more serious than a MDC integrity error. So the suggestion here seems to be to downgrade the seriousness of the error. Again, not sure how that would help.
In both cases the researchers entirely ignored regular PGP authentication. You know, the thing that specifically is intended to address these sorts of things. The MDC was added as an afterthought to support anonymous messages. I have come to suspect that people are actually thinking of things in terms of how more popular systems like TLS work. So I recently wrote an article based on that idea:
* https://articles.59.ca/doku.php?id=pgpfan:pgpauth
It's occurred to me that it is possible that the GnuPG people are being unfairly criticized because of their greater understanding of how PGP actually works. They have been doing this stuff forever. Presumably they are quite aware of the tradeoffs.
The PGP code obviously should throw an error on any MDC integrity failure!
I agree that age + minisign comprise a much neater stack that does basically everything I would need to use PGP for.
Neither of them supports hardware keys though, as far as I could see. OTOH ssh and GnuPG do support hardware keys, like smart cards or Yubikey-like devices. I suppose by the same token (not a pun, sadly) they don't support the various software keychains provided by OSes either, since they don't support external PKCS11 providers (the way ssh does).
This may reduce the attack needed to steal a private key to a simple unprivileged infiltration, e.g. via code run during installation of a compromised npm package, or similar.
> Neither of them supports hardware keys though, as much as I could see.
https://github.com/str4d/age-plugin-yubikey
BTW, apparently age has plugins that allow using FIDO2 and TPM keys for cryptography.
Probably resurfacing, because we have some new attacks thanks to CCC. [0]
[0] https://news.ycombinator.com/item?id=46453461
Worth noting: minisign and age were also affected by a couple things here.
GnuPG has decided a couple things are out of scope, fixed a couple others. Not all is in distro packages yet.
age didn't have the clearest way to report things - discord is apparently the point of contact. Which will probably improve soon.
minisign was affected by most everything GnuPG was, but had a faster turnaround to patching.
The minisign bug was much less severe than the (insane) GPG signing bugs, and the age bug wasn't a cryptographic thing at all, just a dumb path sanitization thing. Minisign was not in fact affected by most everything GPG was. The GnuPG team wontfixed one of the most significant bugs!
The mark of good security is not "has no bugs". It's how the maintainers respond to security-relevant bugs.
… in which case, ‘on Discord’ is not off to a good start.
Indeed. A mail list plus IRC would be a better start.
Go runs on far more platforms than Discord. And, worse, Discord is proprietary.
Indeed, I saw it linked to in that thread, read it and thought it'd be worth resurfacing.
How does this help people who are not following this issue regularly? gpg protected Snowden, and this article promotes tools by one of the cryptographers who promoted non-hybrid encryption:
https://blog.cr.yp.to/20251004-weakened.html#agreement
So what to do? PGP by the way never claimed to prevent traffic analysis, mixmaster was the layer that somehow got dropped, unlike Tor.
You could also say Cryptocat protected Snowden; he used it to communicate with reporters. So, that's how well that argument holds up.
https://en.wikipedia.org/wiki/Cryptocat#Reception_and_usage
"In June 2013, Cryptocat was used by journalist Glenn Greenwald while in Hong Kong to meet NSA whistleblower Edward Snowden for the first time, after other encryption software failed to work."
So it was used when Snowden was already on the run, other software failed and the communication did not have to be confidential for the long term.
It would also be an indictment of messaging services as opposed to gpg. gpg has the advantage that there is no money in it, so there are unlikely to be industry or deep state shills.
Huh? There's no money in anything we're talking about here.
No money in anything?
Signal was made by people who then used it to push their get-rich-quick cryptocurrency scheme on users, and who threw all their promises of being open-source and reproducible overboard for it. The Signal people are absolutely not trustworthy, for reasons of money and greed.
> Signal was made by people who then used it to push their get-rich-quick cryptocurrency scheme on users and who threw all their promises of being open-source and reproducible overboard for it.
I reviewed Signal's cryptography last year over a long weekend: https://soatok.blog/2025/02/18/reviewing-the-cryptography-us...
There's a lot to be said for the utility of reverse engineering tools and skills, but I did not need them, because it was open source. Because Signal's client software still is open source.
Whatever you think about MobileCoin, it doesn't actually intersect with the message encryption features at all. At all.
The only part in Signal that's not entirely open source are the anti-spam features baked into the Signal Server software.
And, frankly, the security of end-to-end encrypted messaging apps has so little to do with whatever the server software is doing that it's silly to consider it relevant to these discussions. https://soatok.blog/2025/07/09/jurisdiction-is-nearly-irrele...
And, yes, this is only a server-side feature. See spam-filter (a git submodule) in https://github.com/signalapp/Signal-Server but absent from https://github.com/signalapp/Signal-Android or https://github.com/signalapp/Signal-iOS
> The Signal people are absolutely not trustworthy for reasons of money and greed.
I don't think you've raised sufficient justification for this point.
> Because Signal's client software still is open source.
Only if you can trust that the published client source code is equivalent to the distributed client binaries. The only way to establish that is reproducible builds, since building your own client is frowned upon and sometimes actively prevented by the Signal people. Signal has always been a my-way-or-the-highway centralized cathedral: no alternate implementations, no federation, nothing. Which was always a suspicious thing. Also, "the Signal client is open source software" only holds if you don't count the proprietary Google blobs that the Signal binary does contain: FCM and Maps. Those live in the same process and can do whatever they want to the E2EE...
Reproducible builds are frequently broken for the Signal client (the part that actually does the E2EE), e.g.: https://github.com/signalapp/Signal-Android/issues/11352 https://github.com/signalapp/Signal-Android/issues/13565 and many more. Just search their issue tracker. The latter one was open for two years, so reproducible builds were broken for the client at least during 2024 and most of 2025. They don't keep their promise and don't prioritize fixing those issues, because they just don't care. People trust them blindly, and the Signal people rely on that blind trust. Case in point: you yourself reviewed their code and probably didn't notice that it wasn't the code for the binary they were distributing at the time.
Now you might say that reproducible builds in the client you reviewed weren't affected by their Mobilecoin cash grab, and you are right, but it shows a pattern in that they don't care, and even lots of professionals singing their praises don't care.
And their server code does affect your privacy even with E2EE. The server can still maliciously correlate who talks to whom. You have to trust their published source code correctly doing its obfuscation of that, otherwise you get metadata leaks the same as in all other messengers. The server can also easily impersonate you, read all your contacts and send them to evil people. "But Signal protects against this", you say? Well, it does by some SGX magic and the assurance that the code inside the enclave does the right thing. But they clearly don't care about putting their code where their mouth is, they rather put their code where the money was. Behind closed doors, until they could finish their Mobilecoin thingy.
>> The Signal people are absolutely not trustworthy for reasons of money and greed.
> I don't think you've raised sufficient justification for this point.
Trust is hard to earn and easy to squander. They squandered my trust and did nothing to earn it back. Their behavior clearly shows they don't care about trust, because they frequently break their reproducibility and are slow to fix it. They cared more about their coin thing. They are given trust, even by professionals who should know better, because their cryptography is cool. But cryptography isn't everything, and one should not trust them, because they obviously are more interested in Mobilecoin than in trust. What more is there to justify, it's obvious imho.
> They squandered my trust
Yes, fine, they squandered your trust.
You don't speak for all of us.
After reading the PyCon 2016 presentation about wormhole, and assuming my understanding of channels is correct (that is, that each session on the same network occupies a channel): what's stopping a hostile third party, who wishes to stop a file transfer from happening, from spamming every channel with random codes?
Recently, this opinionated list of PGP alternatives went around:
https://soatok.blog/2024/11/15/what-to-use-instead-of-pgp/
One use case I've not seen covered is sending blobs asynchronously with forward secrecy. Wormhole requires synchronously communicating the password somehow, and Signal requires reasonable buy-in by the recipient.
Basically, I'd like to just email sensitive banking and customer data in an encrypted attachment without needing to trust that the recipient will never accidentally leak their encryption key.
One of the projects I alluded to in that post makes a technological solution to what you want easy to build, but the harder problem to solve is societal (i.e., getting it adopted).
https://github.com/fedi-e2ee/public-key-directory-specificat...
My current project aims to bring Key Transparency to the Fediverse for building E2EE on ActivityPub so you can have DMs that are private even against instance moderators.
One of the things I added to this design was the idea of "Auxiliary Data" which would be included in the transparency log. Each AuxData has a type identifier (e.g. "ssh-v2", "age-v1", "minisign-v0", but on the client-side, you can have friendly aliases like just "ssh" or "age"). The type identifier tells the server (and other clients) which "extension" to use to validate that the data is valid. (This is to minimize the risk of abuse.)
As this project matures, it will become increasingly easy to look up a recipient's published age key through the directory and encrypt a file to it.
And then you can send the encrypted file in an email without a meaningful subject line, and you'll have met your stated requirements. (The degree of forward secrecy here depends on how often your recipient adds a new age key and revokes their old one. Revocation is also published through the transparency log.)
However, email encryption is a bigger mess than most people appreciate, so I'm blogging about that right now. :)
Also, Filippo just created a transparency-based keyserver for age, fwiw: https://words.filippo.io/keyserver-tlog/
In case anyone reading this thread is curious about the blog post I was writing six hours ago: https://soatok.blog/2026/01/04/everything-you-need-to-know-a...
Tall order.
Anyone know why GitHub doesn't support signing commits with signify/minisign?
My first guess is, "Not enough people have asked for it."
So let's get the party started: https://github.com/orgs/community/discussions/183391
GitHub is not git and does not control what features get added to git.
It looks like there are some wrapper scripts that make git sign commits with other tools through the GPG CLI interface, but nothing official.
My comments on The PGP Problem:
* https://articles.59.ca/doku.php?id=pgpfan:tpp
I'm curious. What's the advantage of using signify/minisign instead of good old PGP/GPG?
PGP/GPG is a complicated mess designed in the 1990's and only incrementally updated to add more complexity and cover more use-cases, most of which you'll never need. Part of PGP/GPG is supporting a large swath of algorithms (from DSA to RSA to ECDSA to EdDSA to whatever post-quantum abomination they'll cook up next).
Signify/Minisign is Ed25519. Boring, simple, fit-for-purpose.
You can write an implementation of Minisign in most languages with little effort. I did in PHP years ago. https://github.com/soatok/minisign-php
Complexity is the enemy of security.
Is anyone else unable to read the report on mobile? Completely broken styling for me.
Can't confirm, works fine for me (Android, Firefox).
PGP is horrible and way overly complicated but this article concludes by trading that for a long list of piecemeal solutions, some of which are cloud based and semi or fully proprietary.
PGP has hung on for a long time because it “works” and is a standard. The same can be said for Unix, which is not actually a great OS. A modern green field OS designed by experienced people with an eye to simplicity and consistency would almost certainly be better. But who’s going to use it?
GPG, like OpenSSL, is too huge and complex for daily use.
OpenBSD has signify, which works fine. But I wouldn't mind something like a cleaned-up age(1), without the mentioned issues.
GNU tends to stack features like crazy. That made sense over the limited Unix tools in the '90s, but nowadays 'ls -F', oksh with completion, and the like are decent enough while respecting your freedom and not being overfeatured.
LibreSSL did the same over OpenSSL.
Which "mentioned issues"?
I still don't understand what's so difficult about using gpg. Seems straightforward to me.
i like the approach by the bsd people. shut the f* up and code.
as long as there are no (audited and verified) replacements for each niche, we still have to use it.
sadly, even gpg (because of all this fud'ing around) now falls from grace and tries to say "well, not THAT application, only THAT".. sigh.
Can the link be updated to not be to the end of the page?
Yes, that would be nice - when I posted it I forgot to clean the URL. I'm sorry! I've sent an email to moderators to request the change.
Update: URL has been updated
on another note: it's so funny that this says, that email should not be used, when the whole world uses email. it's so far detached from reality...
I feel like I'm taking crazy pills, but hear me out.
If there's one thing we learned from the Snowden leaks is that the NSA can't break GPG.
Look at it from the POV of someone who like me isn't an expert: on the one hand I have ivory tower researchers telling me that GPG is "bad". On the other hand I have fact that the most advanced intelligence in the world can't break it. My personal conclusion is that GPG is actually fucking awesome.
What am I missing?
My impression is that GPG when used correctly is secure. But there are so many problems with it that the chances of shooting yourself with one of the footguns is too high for it to be a reliable solution.
The alternatives support newer encryption methods, but nothing fundamental has changed that makes them more secure; they just have fewer footguns to worry about.
The weakest link in cryptography is always people.
The NSA can't break GPG assuming everything is working properly. This blog post (which to be fair I only skimmed) explains that GPG is a mess which could lead to things not working properly, and also gives real life examples. You may also want to see https://gpg.fail (you can tell they're from the ivory tower by the cat ears). The blog post also mentions bad UX, which you and I can directly appreciate (if anything I might expect ivory tower types to dismiss UX issues).
I am well familiar with that presentation at CCC. Yes, the presentation is by people who live in the low-stakes world of theoreticals, as you can tell by the cat ears.
> If you’d like empirical data of your own to back this up, here’s an experiment you can run: find an immigration lawyer and talk them through the process of getting Signal working on their phone.
> Long term keys are almost never what you want. If you keep using a key, it eventually gets exposed.
Having a sentence praising Signal followed by a sentence explaining the main critique of Signal (requiring a mobile number) makes me question the whole post's credibility.