ProTip: use PNPM, not NPM. PNPM 10.x shuts down a lot of these attack vectors.
1. It does not run post-install scripts by default (you must manually approve each one).
2. It lets you set a minimum age for new releases before `pnpm install` will pull them in - e.g. 4 days - so publishers have time to clean up (see the sketch below).
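Roughly what that looks like, assuming a recent pnpm 10.x and its pnpm-workspace.yaml settings file (the exact key names and units may differ between pnpm versions, so check the pnpm docs rather than trusting this verbatim):

    # pnpm-workspace.yaml (illustrative sketch)
    # Only allow-listed packages may run their install/build scripts at all.
    onlyBuiltDependencies:
      - esbuild                # example entry; list only packages you have vetted
    # Skip versions published less than ~4 days ago (value is in minutes).
    minimumReleaseAge: 5760

Everything not on the allow-list installs with its lifecycle scripts skipped, which is exactly the hook this worm relies on.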
NPM is too insecure for production CLI usage.
And of course make a very limited-scope publisher key, bind it to specific packages (e.g. workflow A can only publish pkg A), and IP-bind it to your self-hosted CI/CD runners. No one should have publish keys on their local machine, and even if they got hold of the publish keys, they couldn't publish from a local machine.
> And of course make a very limited-scope publisher key, bind it to specific packages (e.g. workflow A can only publish pkg A), and IP-bind it to your self-hosted CI/CD runners. No one should have publish keys on their local machine, and even if they got hold of the publish keys, they couldn't publish from a local machine.
I've by now grown to like HashiCorp Vault's/OpenBao's dynamic secret management for this. It's a bit complicated to understand and get working at first, but it's powerful:
You mirror/model the lifetime of a secret's user as a lease. For example, a Nomad allocation/Kubernetes pod gets a lease when it is started, and the lease gets revoked immediately after it is stopped. We're kinda discussing whether we could have this in CI as well - create a lease for a build, destroy the lease once the build is over. This also supports TTLs, TTL refreshes and enforced max-TTLs for leases.
With that in place, you can tie dynamically issued secrets to this lease, and the secrets are revoked as soon as the lease is terminated or expires. This has confused developers with questionable practices a lot: you can print database credentials from your production job and paste them into a local database client, but as soon as you deploy a new version, those credentials are gone. It also gives you automated, forced database credential rotation for free through the max_ttl, including a full audit log of all credential accesses and refreshes.
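Rough sketch of the flow with the stock Vault CLI (the `ci-build` role name is made up, and the database secrets engine has to be configured beforehand):

    # Ask Vault for dynamic DB credentials; they come back tied to a lease.
    vault read database/creds/ci-build

    # Keep the lease alive while the job still needs the credentials...
    vault lease renew <lease_id>

    # ...and revoke it when the build/allocation ends; the credentials are
    # deleted from the database at that moment.
    vault lease revoke <lease_id>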
I know that would be a lot of infrastructure for a FOSS project by Bob from Novi Zagreb. But with some plugin work, for a company, it should be possible to keep long-term access credentials hidden in Vault and supply CI builds with enforced, short-lived tokens only.
As much as I hate running after these attacks, they are spurring interesting security discussions at work, which can create actual security -- not just checkbox-theatre.
Both NPM and Yarn have a way to disable install scripts, which everyone should do if at all possible.
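For npm it's a one-liner in .npmrc (per project or per user); Yarn Berry has an equivalent in .yarnrc.yml, if I remember the key correctly:

    # .npmrc
    ignore-scripts=true

    # .yarnrc.yml (Yarn 2+)
    enableScripts: false

The trade-off is that packages which genuinely need a build step (native addons and the like) then have to be handled explicitly.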
Is there a way to set a minimum release age globally for my pnpm installation? I was only able to find a way to set it for each individual project.
ProTip: use `bun`
> NPM is too insecure for production CLI usage.
NPM was never "too insecure" and remains not "too insecure" today.
This is not an issue with npm, JavaScript, NodeJS, the NodeJS foundation or anything else but the consumers of these libraries pulling in code from 3rd parties and pushing it to production environments without a single review. How this still flies today, and has since the inception of public "easy to publish" repositories, remains a mystery to me.
If you're maintaining a platform like Zapier, which gets hacked because none of your software engineers actually review the code that ends up in your production environment (yes, that includes 3rd party dependencies, no matter where they come from), I'm not sure you even have any business writing software.
The internet has been a hostile place for so long that most of us "web masters" are used to it today. Yet it seems developers of all ages fall into the "what's the worst that can happen?" trap when pulling in either one dependency with 10K LoC without any review, or 1000s of dependencies with 10 lines each.
Until you fix your processes and workflows, this will continue to happen, even if you use pnpm. You NEED to be responsible for the code you ship, regardless of who wrote it.
“Personally, I never wear a seatbelt because all drivers on the road should just follow the road rules instead and drive carefully.”
I don’t control all the drivers on the road, and a company can’t magically turn all employees into perfect developers. Get off your high horse and accept practical solutions.
I never, ever, do development outside of a podman container these days. Basically if I am going to run some code from somewhere and I haven't read it, it goes in a container.
I know it's not foolproof, but I can't believe how often people run code they haven't read where it can make a huge mess, steal secrets, etc. I'll probably get owned someday, I'm sure, but this feels like a bare minimum.
> if I am going to run some code from somewhere and I haven't read it, it goes in a container
How does this work? Every single npm package has tons of dependency tree nodes.
Everything runs in the container and cannot escape it. It's like a sandbox.
You have to make sure you're not putting any secrets in the container environment.
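Something along these lines, with only the project directory mounted and nothing else from the host passed in (flags illustrative):

    # throwaway container: no host env vars, no ~/.ssh, no ~/.aws, no npm token visible
    podman run --rm -it \
      -v "$PWD":/work:Z \
      -w /work \
      docker.io/library/node:20 \
      npm install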
You are just reducing the blast radius by using podman; you will likely need secrets for your app to work, and those will be exposed regardless of the podman approach.
>You have to make sure you're not putting any secrets in the container environment.
How does this work exactly? Containers still need env vars and access to databases and cloud environments. Without these the container is just a useless isolated pod.
Not who you asked, but I have a similar setup. I can run everything I need for local development in that image (db, message queue emulator, cache, other services). So setting things like environment variables or running Postgres works the same as it does outside the container.
The image itself isn't the same image that the app gets deployed in, but is a portable dev environment with everything needed to build and run my apps baked in.
This comes with some nice side effects, like being able to instantly spin up clean work environments on my laptop, someone else's, or a remote VM.
Maybe don't use JavaScript on the backend.
All right then, keep your secrets.
I didn't read this as separate containers.
I ssh into a second local user and do development there instead with tmux.
I send mail to a demon which runs MsBuild and mails the output back to me.
How are you doing this in practice? These are npm packages. I don't see how you could reasonably pull in Posthog's SDK in a container.
What do you mean? You can drop into bash in a container and run any arbitrary command, so `npm install foo` works just fine. Why would posthog's SDK be a special case?
I think the issue is more about what else has to go into or be connected to that container. Posthog isn't really useful if it's air-gapped. You're going to give it keys to access all kinds of juicy databases and analytics, and those NPM tokens, AWS/GCP/Azure credentials, and environment variables are exactly what it exfiltrates.
I don't run much on the root OS of my dev machine, basically everything is in a container or VM of some kind, but that's more so that I can reproduce my environment by copying a VMDK than in an effort to limit what the container can do to itself and data it has access to. Yeah, even with root access to a VM guest, an attacker won't get my password manager, personal credit card, socials, etc. that I only use from the host OS... But they'll get everything that the container contains or has access to, which is often a lot of data!
You're severely limiting the blast radius. This malware works by exfiltrating secrets during installation, if I understood it correctly. If you properly containerize your app and limit permissions to what is absolutely required, you could be compromised and still suffer little to no consequences.
Of course, this is not a real defense on its own, it's just good practice to limit blast radius, much like not giving everybody admin rights.
Sure, but only the container is affected and it is always your responsibility to grant as little access as possible to the various credentials you may need to supply that environment. AFAICT with this worm, if you don't supply write-level GitHub credentials to the container (and you shouldn't!) and you install infected packages, the exploit goes no further.
Another effective strategy I learned of recently that seems like it would have avoided this is to wait months before using new versions of packages.
Most attacks on popular packages last at most a few months before detection.
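npm can approximate this natively with its `before` config, which makes resolution only consider versions published on or before a given date; a sketch:

    # .npmrc - only install versions that existed as of this date
    before=2025-08-01

    # or one-off on the command line
    npm install --before=2025-08-01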
Because PostHog's "Talk to a human" chat instead gets a grumpy gatekeeping robot (which also doesn't know how to get you to a working urgent support link), and there's nothing prominently on their home page or github about this:
Hey PostHog! What version do we need to avoid?
Have a Slack channel with them; these are the versions they mentioned:
- posthog-node 4.18.1
- posthog-js 1.297.3
- posthog-react-native 4.11.1
- posthog-docusaurus 2.0.6
co-founder here. We mentioned it in the main thread about this: https://news.ycombinator.com/item?id=46032650 and on status.posthog.com
- posthog-node 4.18.1, 5.13.3 and 5.11.3
- posthog-js 1.297.3
- posthog-react-native 4.11.1
- posthog-docusaurus 2.0.6
If you make sure you're on the latest version you should be good.
Your status page isn't clear, but are all versions between the compromised and "safe to install" versions compromised or just the ones listed?
For example, I installed `posthog-react-native` version `4.12.4`, which is between the compromised `4.11.1` and the safe-to-install `4.13.0`. Is that version compromised or not?
The only compromised versions are the ones listed. Any other versions are fine.
Thanks. Also - maybe change "talk to a human" to "talk to a grumpy robot" :)
Hm did you click on "help" (on the right side) -> "Email our support engineer" when logged in?
Hundreds of people had access to publish the Zapier SDK, so it's little surprise they were eventually compromised! (https://bsky.app/profile/benmccann.com/post/3m6fdecsbdk2u)
The e18e community are reducing dependencies in popular libraries and building tools to prevent and reduce the impact of such attacks. Join if you want to help out! https://e18e.dev/
Just this morning, after trying to make the case over the past year, we had a change landed to remove more than a dozen dependencies from typescript-eslint! https://bsky.app/profile/benmccann.com/post/3m6fcjax7ec2h
FYI your first link is the same as your third link. It's correct as the third link, so the Zapier one is missing.
fixed!
What we need is an open-source, industry-standard stdlib equivalent, developed and maintained by MS, GOOG and some open-source players.
All libraries should strive to have dependency only on it.
Perhaps it's time to organize a curated "stable" stream for npm packages.
If I want more stability for my OS I can choose Debian-stable rather than Ubuntu-nightly.
But for npm, there doesn't seem to be the same choice available. Either I sign up to the fire-hose or I don't.
I can choose to only upgrade once a month, but there's a chance I'm still getting a package that dropped 5 minutes before.
pnpm
Typo in title. Current title of HN post says:
> SHA1-Hulud the Second Comming – Postman, Zapier, PostHog All Compromised via NPM
Should be Shai-Hulud, not SHA1-Hulud.
That said, the secrets are uploaded to a repo named `Sha1-Hulud: The Second Coming`
Ah, I missed that detail.
The worm itself is posting the secrets in Github with the name Sha1-hulud: https://github.com/search?q=sha1-hulud&type=repositories
I don't know why you were downvoted. The actual page does not say SHA1, the attack as far as I know is not related to the SHA1 algorithm, and the name of the worm isn't intended as that sort of pun.
"This appears to have limited the imapct of the attack at this time."
typo after the listed affected packages
And this, kids, is why you should vendor your dependencies
Postman getting hit is scary. For many teams, it's effectively an unmanaged password manager for API keys.
Would the adoption of a Deno-like security posture in NPM have mitigated this?
See also: https://news.ycombinator.com/item?id=46005111
As it arguably would have reduced impact
(I'm one of the Renovate maintainers and have recently pushed for this to be more of a widely used feature)
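For anyone who wants to turn it on, a minimal renovate.json along these lines should do it (cooldown length to taste):

    {
      "$schema": "https://docs.renovatebot.com/renovate-schema.json",
      "extends": ["config:recommended"],
      "minimumReleaseAge": "7 days"
    }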
I think everyone just gets hit after 7 days frankly.
Why? Not everyone will use cooldowns, but the key is to have just enough people running brand-new releases to set off a warning, so that the systems which scan dependencies for vulnerabilities go off and the packages get pulled before production builds pick them up.
Monocultures where everyone pulls and builds every brand-new thing for the most minor changes are dangerous.
[dupe] Discussion: https://news.ycombinator.com/item?id=46032539
Dup https://news.ycombinator.com/item?id=46032539
This article has quite a bit more information though.
Please use the word "Dup" for a resubmission of the same link and "See also" for a different submission.
Not a dup, this is a different article about the same event, with different information too.
See also: https://news.ycombinator.com/item?id=46032539 Shai-Hulud Returns: Over 300 NPM Packages Infected (helixguard.ai)
~6 hours ago | 430 comments