33 comments

  • eranation 18 hours ago
    I know people have opinions about cooldowns, but they would have saved you from axios, tanstack, and many other recent npm supply chain attacks. If you have Artifactory / Nexus, you probably already have cooldowns, but it's easy to set up if you don't.

    Why cooldowns? Most npm (or PyPI) compromises were taken down within hours. Cooldowns simply mean: ignore any package whose release date is younger than N days (1 day can work, 3 days is fine, 7 days is a bit of overkill but works too)

    How to set them up?

    - use latest pnpm, they added 1 day cooldown by default https://pnpm.io/supply-chain-security

    - or if you want a one-click fix, use https://depsguard.com (a CLI that adds cooldowns + other recommended settings to npm, pnpm, yarn, bun, uv, and dependabot; disclaimer: I'm the maintainer)

    - or use https://cooldowns.dev which is more focused on, well, cooldowns, with also a script to help set it up locally

    All are open source / free.

    If you know how to edit your ~/.npmrc etc, you don't really need any of them, but if you have a loved one who just needs a one click fix, these can likely save them from the next attack.
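    For the DIY .npmrc route, a minimal sketch, assuming GNU date and npm's built-in `before` config (which makes npm resolve only versions published on or before a given timestamp):

```shell
# Write a rolling 7-day cooldown into the project-level .npmrc via npm's
# `before` config; npm will then only resolve package versions published
# on or before this date. (GNU date syntax; adjust for macOS/BSD date.)
COOLDOWN_DATE=$(date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%SZ)
echo "before=$COOLDOWN_DATE" >> .npmrc
cat .npmrc
```

    The date has to be refreshed periodically (e.g. by CI or a shell hook), which is exactly the chore the tools above automate.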

    Caveat - if you need to patch a new critical CVE, you need to bypass the cooldown, but each of them has a way to do so. In the past few weeks, while I don't have hard numbers, it seems more risk has come from software supply chain attacks (malicious versions pushed) than from new zero-day CVEs (even in the age of Mythos driven vulnerability discovery).

    • kelnos 16 hours ago
      The idea that 7 days is overkill is crazy to me. Unless you need a specific new feature, you should usually be fine with a dependency version that was released months ago when starting a new project. Ditto for doing regular dep upgrades.

      The only issue I see is responding to vulnerabilities, where you want to upgrade immediately. But I think in that case it's fine to require the developer to be explicit in the new version they want.

      • eranation 16 hours ago
        I agree, but in most recent cases a 1 day cooldown would have been enough.

        I added a “how to bypass if you have to patch a zero day CVE” section to depsguard for all supported package managers.

    • justsid 15 hours ago
      Doesn’t that just move the problem 7 days down the road? I always assumed these kinds of things burn themselves out because someone gets infected and notices, not that there is an army of people auditing the changes. If everyone cooldowns for 7 days, doesn't it all just happen later?
      • chowells 15 hours ago
        A large portion of the time, the maintainer notices what happened a few hours later. Maybe they were asleep or off doing other things for a while, but they eventually come back. And these kinds of takeovers frequently aren't complete enough to cover their tracks.

        So at the very least, adding a cooldown raises the difficulty of these attacks above that threshold.

        • Barbing 14 hours ago
          Would be bad for software/progress I guess, but it got me thinking: what if we had an expectation that a dev would post an update's checksum/hash, then follow it up a day later with the update itself...

          (well maybe that leads to kidnappings idk)

          edit - heh, the sibling comment about doing this at the package-manager level is much smarter
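          A rough sketch of that hash-first flow with plain checksum files (file names here are hypothetical):

```shell
# Day 0: the maintainer announces only the checksum of the upcoming release.
printf 'release contents' > release.tgz
sha256sum release.tgz > release.tgz.sha256

# Day 1: the artifact itself ships; users verify it against the
# pre-announced checksum, so a swapped artifact would fail the check.
sha256sum -c release.tgz.sha256 && echo "checksum OK"
```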

          • latexr 11 hours ago
            > Would be bad for software/progress I guess but

            We all need to slow down and get some perspective. “Progress” doesn’t mean “rush everything and do it now now now”. Advancements should be slow, methodical, considered. That’s a good thing, not a weakness.

            • Barbing 7 hours ago
              :)

              I like it. Well, it would be tough for everybody whenever a long-awaited feature arrived but was out of reach, just behind the glass. Maybe it will improve our appreciation of delayed gratification!

          • bot403 13 hours ago
            I fail to see how this isn't just a simple cooldown with more steps. It doesn't seem to add anything to the security posture of the package/update.
            • Barbing 7 hours ago
              Nobody can expose themselves during the danger period

              The dev enforces the cooldown on users, rather than users deciding they want to be safer. The dev has the extra step of checking their accounts every ~23 hours, indefinitely.

              The simple cooldown scenario sees potentially thousands of downloads of a malicious package. The 24 hour developer delay scenario sees zero downloads during the same period.

        • nullhole 14 hours ago
          > A large portion of the time, the maintainer notices what happened a few hours later.

          So add it at the package manager level instead of the user level then?

      • eranation 13 hours ago
        These get detected almost immediately, and removed by npm within hours (axios, tanstack at least)
        • Hackbraten 9 hours ago
          But who will detect them on day one once everyone ignores them for seven days?
          • eranation 57 minutes ago
            There are some companies that specialize in detecting those, they do it for free (and get lots of marketing for it…)
          • bakkoting 6 hours ago
            These things are usually caught by tools specifically scanning npm or by the maintainers noticing their account is compromised, not by people auditing their own installed packages.
          • aoeusnth1 5 hours ago
            AI agents
    • jagged-chisel 7 hours ago
      I'm not so sure cooldown would be effective. Someone still has to override the cooldown to install the (potentially) questionable releases and discover problems. If no one does, you've only delayed the problems by 3/7/10/14 days.

      After thinking more while typing this:

      I think I'd agree we should indeed have a 10-day cooldown (i.e. don't install anything released in the last ten days.) I suppose I just don't think anyone should expect it to be the only mitigation.

      • compel2160 5 hours ago
        I don't think anyone is saying cooldowns are the only thing you need - just that it's a 30-second change that hardens your setup.

        Also, most malicious versions seem to be detected by tools scanning new packages. People updating without cooldowns probably aren't manually inspecting diffs. Giving tools more time to detect things seems pretty obviously good to me. Add to that maintainers reporting they've been pwned, and the bar for sneaking malicious code through is much higher.

    • wesselbindt 17 hours ago
      Seems like you dropped something:

      > Disclaimer: I maintain depsguard

      • eranation 17 hours ago
        Yikes. You are correct. Honest truth: I got a few downvotes (after a few more upvotes) and thought this was the cause, but you’re right. I didn’t think it mattered much; I’ll add it back. Had no idea anyone noticed. Fair enough, thanks for keeping me honest.

        Edit: added it back, inline.

    • oneshtein 15 hours ago
      Why not create a separate distribution or channel (bleeding edge / stable / long term), like Linux distributions do?
    • youre-wrong3 15 hours ago
      NPM doesn’t make it easy to do cooldowns because their APIs prevent it.
    • boredhedgehog 14 hours ago
      > Why cooldowns? Most npm (or pypi) compromises were taken down within hours,

      But won't more people on cooldown mean less likelihood to catch the bug, thus extending the need for cooldowns?

      • reshlo 13 hours ago
        These compromises are usually caught within hours by security researchers performing automated scanning of all published packages.
        • Hackbraten 9 hours ago
          Except that exhaustively scanning for badness is provably impossible.

          It's inevitable that a false negative will slip through one day, and when that happens, it will compromise everyone who installs it, no matter if on day one or day eight.

          • compel2160 5 hours ago
            The idea isn't to comprehensively make malicious code impossible - the idea is to make it difficult to sneak in. If the NSA wants to spend $500 billion to compromise an NPM package, there's very little we can do. But if waiting 3 days for security scans catches even 10% of malicious packages, that's 10% fewer incidents everyone else has to deal with. And now people pwning maintainers must be much more sophisticated so their attacks remain entirely undetected for that period.
    • tkel 18 hours ago
      yes, props to pnpm for adding 1 day cooldown by default in v11.
    • themafia 17 hours ago
      Release escrow.

      Teams should be able to say "at least N developers have to agree to a release before it happens." This should be a policy they can control and lock down with a non developer account.

      • eranation 16 hours ago
        Interesting idea, but there are so many cases of solo maintainers.

        I think that npm can have its own cooldown and automated security scan. Socket.dev, StepSecurity both close a gap here by spending tokens to scan new popular packages. Whether they do it for marketing or out of the goodness of their heart, is irrelevant. They don’t charge for this service, and it’s something I’d expect Microsoft (who owns GitHub who owns npm) to do.

        • kentm 6 hours ago
          Heavy use of packages with solo maintainers is part of the problem here. Having multiple people looped in with proper governance doesn’t completely solve the issue but it makes it much harder to execute supply chain attacks.

          It’s a bitter pill that we collectively don’t want to swallow, because it has a lot of negative connotations on our ability to deliver individual impact quickly.

    • rapind 12 hours ago
      Honestly I'd prefer a system-wide cooldown / age setting across all package managers and installers, with the option to poke holes / allow, and also the option to deny / allow post-installation runners. Something like a global asdf-style installer that tracks and enforces these rules across all of its managed package managers.

      Something like a proxy that intercepts and depending on the source, is intelligent enough to examine the package for age. That would be cool. Already sounds like a cloud product you could sell.

    • 0xbadcafebee 17 hours ago
      This is like buying something from the grocery store and then waiting a week to eat it in case the FDA put out a warning about it.
      • eranation 16 hours ago
        More akin to letting astronauts stay in quarantine for a day in case they caught space bugs.

        If every other week I noticed the FDA recalling a popular brand that would have taken over my brain and transmitted my bank password and SSN to a stranger, I might prefer drinking week-old milk.

        Edit: not dismissing your analogy, it’s pretty much it.

        • shermantanktop 13 hours ago
          If nobody drinks the milk until it’s a week old, that won’t help.

          I do think cooldowns help, it’s more that this analogy doesn’t help.

          The cow has to wake up and look at what milk she’s been putting out, and ideally the milk machine would use an early release channel so that some people will get the brain virus first.

          • hennell 3 hours ago
            Nobody has to drink it, just test it. The analogy is stupid, but it's more like: if there were no FDA, you'd wait a week for food-safety labs to test it, or you'd invest in your own testing.

            The early release channel is sensible, but if you're a bad actor who's compromised a package, you're not going to use the early release channel, are you? You get it straight out there.

      • dpark 16 hours ago
        If there was a good reason to believe the pop tarts you buy might unexpectedly be contaminated with dioxins, waiting a week would be prudent.
      • kelnos 16 hours ago
        No it's not. That's a terrible analogy.
        • 0xbadcafebee 13 hours ago
          It's exactly the same. With both you have no idea if you'll be compromised once you pick up a new item from the store. With both you wait a week, in case the authorities issue a recall. With both you use it after that one week of waiting. Both are relying on luck to be safe.

          The crazy thing is that the risk from food is higher; we just don't really mind, because it's rare that we personally get affected.

          • seba_dos1 11 hours ago
            As much as I dislike this distribution model, this is a completely misapplied analogy. In the npm-with-cooldowns case, you "buy" a thing and get to use it instantly, without any delay; it just won't get improved until a few days later - exactly as if the project you installed used a timed staging channel for testing before making releases, except you're the one who controls the timing here.
  • aselimov3 20 hours ago
    What are the actual guarantees that go/Rust make that Python/npm don’t? It seems like it might just be that Python/npm are juicier targets? I’m starting to try and avoid all third party packages
    • brunoborges 19 hours ago
      It is 100% up to the package manager's steward to control how ownership of packages and namespaces are granted.

      Maven Central has existed for decades, and the number of incidents of people stealing namespaces is minimal.

      One can't simply publish a package under the groupId "com.ycombinator" without having some way to verify that they own the domain ycombinator.com. Then, once a package is published, it is 100% immutable, even if it has malicious code in it. Certainly, that library is flagged everywhere as vulnerable.

      It baffles me that NPM for so long couldn't replicate the same guardrails as Maven Central.

      • SupLockDef 16 hours ago
        Also....

        Maven doesn't have "preinstall, install, postinstall" scripts, or Rust's "build.rs", executing arbitrary code during installation.

        The code that executes with Maven is in your pom.xml, not some hidden code from a transitive dependency.

        That alone is a major design flaw in both npm and cargo.

        Java is boring, because it works. People don't like boring stuff. It's more exciting to play Russian roulette on each install!

        • pkolaczk 14 hours ago
          As a heavy user of Java I can assure you that Java is very, very far from boring, especially when building it with Maven or Gradle. There are millions of ways something can screw up the build. Rust (and Go too) is actually much more boring in comparison - maybe I was just lucky, but the majority of stuff just builds with zero issues.

          Especially the number of times I had to clean all the caches in order for Maven or Gradle to build the project is just far too high for me. That should never be needed if an ecosystem is meant to be considered boring. I feel like Java doesn’t build when I look at it wrong.

          • robotnikman 1 hour ago
            > I feel like Java doesn’t build when I look at it wrong.

            Hah, too true! I guess it is boring in the fact that it is not as... move fast and break things... as NPM. But Java build systems are still certainly fun and challenging in their own ways.

          • MattPalmer1086 13 hours ago
            Yep, sounds boring!
        • panzi 5 hours ago
          How does Maven handle JNI? Is it also a build system for C/C++, or do packages with native bindings require manual build steps?
      • cluckindan 19 hours ago
        How does that protect against credential theft? MFA required to sign published releases?
        • brunoborges 18 hours ago
          That is another important layer. Maven Central is not immune to credential theft. If a publisher token is stolen, an attacker may still be able to publish a malicious new version until the token is revoked or the account is suspended after reporting the problem to Sonatype.

          But in the Maven/Gradle ecosystem, most projects pin exact dependency versions. Support for version ranges and dynamic versions exists, but they are generally avoided because they hurt reproducible builds. That means a malicious new release does not automatically flow into most consumers’ builds just because it was published.

          I'd go as far to say that NPM should:

          1. Enforce scope (namespace) requirement, and require external verification (reverse DNS for example).

          2. Disable version range support out of the box. User must --enable this setting from the command line at all times.

          3. Remove support for install scripts completely. If someone wants to publish a ready-to-run software, there are plenty of other mechanisms.

          • TiddoLangerak 16 hours ago
            You're missing the biggest root cause though, and that significantly hinders how well this translates between languages: the Java community has settled on fewer but larger monolithic dependencies, whereas the JavaScript community has settled on many small composable dependencies (for good historical reasons, but that's a topic in and of itself).

            This directly influences how well e.g. version pinning works. In the Java world, package versions are _relatively_ independent of each other and have few transitive dependencies, so version conflicts are relatively rare. This means you can get away with full pinning of all dependencies, with the occasional manual override of a conflicting transitive dependency.

            This doesn't work in JavaScript. The dependency ecosystem is massively intertwined, if every library would specify exact versions you'd end up with literally hundreds of conflicts to resolve. That's not feasible. As a result, they've chosen the middle ground of using lock files in addition to version ranges.

            This also hurts the effectiveness of verified namespaces: when packages come from hundreds of different sources, you're not going to notice 1 or 2 sketchy ones in there.

            Other consequences of the big monolithic packages in Java are that updates tend to be less frequent, and more often come from large reputable vendors. Both of these help reduce the problem too.

            While the JavaScript toolchain can definitely learn a lot from the Java toolchains, the problems it needs to solve are not the same, and thus solutions don't translate 1-1.

            At least I hope that they'll get rid of install scripts; that's such low-hanging fruit that it really should've been done a decade ago.

            • dns_snek 16 hours ago
              > At least I hope that they'll get rid of install scripts, that's such a low hanging fruit that really should've be done a decade ago.

              How will that help? It's just going to break things that legitimately require them.

              Instead of being infected upon running "npm install", you'll just get infected upon running "npm run" instead. The former is slightly more reliable but fixing that is just kicking the can down the road. Maybe we'll have a few days before the payloads get rewritten.

          • nayroclade 10 hours ago
            Dependency versions are also locked for npm projects via package-lock.json, and this has been the default behaviour for years. The version ranges specified in package.json don't mean you just pick up the latest whenever you run npm install. Unless you delete package-lock.json or run "npm update", you and everyone else gets the exact same dependency tree each time. So it is just as reproducible as a Maven build in that sense.
            • panzi 5 hours ago
              Plus the lock file doesn't just contain the exact versions, it contains hashes, making sure that you actually got the exact same package contents.
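              For the curious, that `integrity` field is a Subresource-Integrity-style string: `sha512-` followed by the base64 of the tarball's raw digest. A sketch of recomputing one by hand (the file here is a stand-in, not a real tarball):

```shell
# Recompute the kind of sha512 integrity value that package-lock.json
# stores for a downloaded tarball (stand-in file instead of a real .tgz).
printf 'example tarball bytes' > pkg.tgz
INTEGRITY="sha512-$(openssl dgst -sha512 -binary pkg.tgz | openssl base64 -A)"
echo "$INTEGRITY"
```

              If the recomputed value doesn't match the lock file, npm refuses the package, which is what makes a tampered tarball detectable.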
          • com2kid 18 hours ago
            > Enforce scope (namespace) requirement, and require external verification (reverse DNS for example).

            Who the heck says everyone who publishes a library has a domain? That seems absurd.

            • brunoborges 18 hours ago
              Sonatype allows "io.github.<username>" as a valid groupId and has a process to verify ownership. I am sure other providers like GitLab can work on this.
            • oarsinsync 14 hours ago
              You can get subdomains for free from a number of places, some of which are more reliable than others.

              This exists because domains (historically) used to be expensive by western standards. .com used to be $75/year back in the day.

            • chadgpt3 15 hours ago
              Why don't you? It costs around $20 per year. Every serious computer nerd should have one, and a web server with at least a basic homepage.
              • whatevaa 15 hours ago
                $20 per year in the US is not the same value across the world. Would you say $60 per year is OK too, if you adjusted for income? $100?

                Don't count other people money.

                • lelanthran 12 hours ago
                  The problem with this counterargument is that it reinforces the point it's arguing against: if a contributor cannot afford the $20/year to publish for a single 12-month period, then they are already a risk - someone could buy their account off them.

                  A small bar of $20/year is also enough to cut down substantially on contributors who sign up with the intention of publishing malicious packages: they have to pay $20/year for each malicious package they want to publish!

            • radlad 18 hours ago
              And domains can change hands legitimately.
              • whatevaa 15 hours ago
                Or forgotten at renewal time, lost, and, depending on the registrar, taken over.
    • nirvdrum 19 hours ago
      Part of the point the article makes is that most other popular languages have a comprehensive standard library. JS has an astonishingly small one. Rather than having one vetted set of libraries that ships with the language, applications either need to roll things themselves or pull from a 3rd-party package repository. We've drilled avoidance of NIH into people, so they tend to reach for packages. That's not necessarily a bad thing, but it often means they're pulling in more code than they need. The JS ecosystem has also favored smaller modules, so you need many of them. And everyone builds on top of that, leading to massive growth in dependency graphs. It's a huge surface area for things to go wrong, intentionally or not.

      With many other languages, you get a lot of functionality out of the box. Certainly, there have been bugs and security issues, but they're a drop in the bucket compared to what you see in the JS ecosystem. With other languages, you have a much smaller external dependency graph, and the core functionality comes from a trusted source.

      • cluckindan 18 hours ago
        What important functionality do you feel is missing from the commonly used JS environments (node and browser) that is causing people to install it as a third party dependency?

        The issue isn’t that the functionality doesn’t exist, it’s always backwards compatibility with versions where it did not yet exist.

        • sidewndr46 7 hours ago
          How can one have a backwards compatibility issue if the solution didn't exist yet?
      • apothegm 19 hours ago
        Why Python, tho, in that case? Its stdlib is quite robust. Surprisingly so in some areas.
        • saghm 18 hours ago
          I'm not convinced that Python should be the standard for package management either. Earlier this week I was trying to publish a Python package for the first time wrapping a Rust library I wrote (for use only on Linux and Python 3.12+), and it literally took me hours to get from "I have a wheel that I can import and it works on my system" to "I have published that wheel and can install the package from PyPI on the set of systems that I'm trying to support and it actually works". Everything I've heard about this indicates that the situation for Python packaging is literally better than it ever has been before with the current tooling, so I can't even imagine how bad it was for the decades before. In comparison, having literally never touched npm before, I was able to publish a wrapper around the same library and validate that it was working in maybe 10 minutes (most of which were spent from not realizing that a certain tool was failing with a vague "file not found" error because I hadn't installed npm yet).

          I'm not saying that npm is doing everything right, but I suspect that beyond the obvious low-hanging fruit that we hear about pretty consistently with npm there's probably a long tail of less obvious stuff that can be exploited that will not be specific to npm. The fundamental problems with supply-chain vulnerabilities aren't going to go away if npm magically became pip or go modules overnight.

          • apothegm 9 hours ago
            I’m not suggesting Python’s package management was good. This thread started with a post about JS and Python, and I was responding to a message saying JS is so vulnerable to package repository attacks because its stdlib is so small. But Python has been vulnerable too, in spite of a robust stdlib.

            And IMO the complaints about Python packaging tooling are overblown. Setuptools on its own was a bit disappointing, but coming from PHP 20 years ago it was a revelation! Virtualenvs and requirements.txt were a further improvement, and so was pip - in an era when most other scripting languages didn’t have pinning for sub-dependencies either; and you could always “pip freeze” to capture everything.

            Later on, pipenv wasn’t perfect, but it was enough. I never ran into any of the headaches people keep saying poetry and uv solve. Poetry on the other hand always gives me one reason or another to beat my head against a wall.

            That said, I’ve never bothered to try to publish anything and can’t comment on that end of it.

          • otabdeveloper4 15 hours ago
            > Python should be the standard for package management

            Python is the antistandard for package management. Or maybe even the eldritch horror of package management.

            • Macha 9 hours ago
              Part of me wonders if the reason we see more npm attacks than pypi attacks is malware authors not wanting to deal with python packaging either
              • fud101 2 hours ago
                Hilarious but would they not abuse LLMs for this if so?
            • dv35z 13 hours ago
              Curious: if we include package managers from operating system distros (example: Debian's apt), what, in your experience, should JavaScript/Python/Rust package managers learn / borrow from them?
            • darkteflon 11 hours ago
              Thanks to uv, all is forgiven.
      • skydhash 17 hours ago
        > Part of the point the article makes is that most other popular languages have a comprehensive standard library.

        Both the browser and the Node.js standard library are fairly extensive. I don't think there's much you can do with other languages that you can't do with Node.js. And as a lot of newer languages have demonstrated (like zig and hare), you don't need an extensive one.

        • themafia 17 hours ago
          It used to be true. The early days of node were pretty paltry. I think a lot of developers and projects have picked up these dependencies by habit and accretion and have never factored them out.
          • skydhash 16 hours ago
            My pet peeve is when a developer picks up a library for just a few lines of code, and it turns out that this library picks up another one that's not even relevant to its core domain. Whenever you get to the leaves of the dependency tree, it usually turns into a joke: byte-sized libraries everywhere.

            Like axios, which decides in turn to depend on the "follow-redirects" library. IMO, the best move would be for axios to vendor that code. Same with "proxy-from-env". Just tiny libraries scattered all over the web. Something like axios should depend purely on the runtime library.

    • jollyllama 19 hours ago
      > It seems like it might just be that Python/npm are juicier targets?

      Attackers go where the victims are. Frontend is a monoculture with the vast majority using NPM; backend, less so. This isn't an excuse for NPM, but another strike against it.

      You could also argue that the attacks make a deeper point about frontend vs backend devs, but I won't go there.

      • bichiliad 18 hours ago
        Why would you even imply something like that?
        • jollyllama 6 hours ago
          The fact that a package manager which keeps separate versions of each package for each dependency that has it became the accepted default that everyone in the frontend community uses as a foundation for their projects speaks to a lack of care or understanding of technical matters within that community.
        • llbbdd 16 hours ago
          They feel the need to compete given that jokes about "backend" devs write themselves
          • lucketone 15 hours ago
            Frontend has a lower barrier to entry and more appeal for beginners, so its bell curve might have a thicker left edge. That affects the average quality of work and the culture of dealing with problems.
            • llbbdd 4 hours ago
              More appeal I agree because it's easier to see useful results and iterate quicker. Lower barrier of entry I disagree with strongly; if the barrier to entry were so low I don't know why I've worked with so many otherwise-talented backend devs that can't wrap their heads around the frontend to save their lives. Frontend forces you to deal with real-world customer problems sooner rather than later; performance is more important, it has to work on more than one environment, you have a frame budget. It's like saying game development has a low barrier to entry; you might be able to get started quickly but you will run into constraints unless you learn fast. On the backend you can just pay another dollar for a VPS twice the size.
        • voidfunc 17 hours ago
          [flagged]
    • lostglass 19 hours ago
      To be honest, Rust has the exact same supply chain attack pattern - it's just newer and more actively maintained at the moment. Give it a decade.
      • marcosdumay 17 hours ago
        Programs in Rust (or almost every other language) normally have fewer dependencies by 2 or 3 orders of magnitude.

        And that number tends to reduce even more when the ecosystem matures.

        • rascul 5 hours ago
          It may be fewer but it still doesn't feel good when cargo pulls in hundreds of deps for a seemingly simple application. But maybe it seems simple because of all the deps...
          • marcosdumay 2 hours ago
            Agreed, but that's the reason why it keeps being a huge problem in JS while other languages only have occasional small troubles.

            But also, almost all of those deps on simple apps are the same ones in Rust. They are largely the same in JS too, but to a lesser extent than in most languages.

          • anthk 5 hours ago
            This. Even medium-sized Go projects (NNCP, Yggdrasil) have about 8-10 deps on average. Rust's dependency chain is unmanageable for a distro packager.
        • anthk 5 hours ago
          Coming from Go, Rust's dependency sizes are closer to npm's than to Go's.
      • aselimov3 10 hours ago
        Yea, I’m a big fan of Rust, but it does feel uncomfortable to see my dependency count blow up into the hundreds when I build.
      • slopinthebag 17 hours ago
        Supply chain attacks are available to every language and framework that uses dependencies or modules you don’t control.
      • nothinkjustai 19 hours ago
        Rust doesn’t have post install scripts
        • est31 19 hours ago
          There is build.rs, proc macros are unsandboxed, and lastly you install the binary so that you can run it. Even if the build and install were fully sandboxed, the binary could still do malicious stuff when run.
          • drdaeman 18 hours ago
            Even without post-install script, a malicious payload could be hiding in some function and just wait until the developer invokes `cargo run`. Not that many people audit the crates they pull into their projects.
          • nothinkjustai 17 hours ago
            Yeah no shit, if you download malicious code from the internet and run it on your computer you will get pwned. No matter if it’s from a package manager a zip file or a submodule.

            However the current npm vulns used a post install script.

            • mort96 16 hours ago
              I maintain that NPM malware use postinstall scripts just because they exist and are convenient. Had NPM not had postinstall scripts, the malware would have used a different mechanism and been almost exactly as effective.
        • fabrice_d 19 hours ago
          It has build.rs that will run as soon as you compile the dependency. That's not the same thing but pretty close to a post install script: it's very likely to run.
        • deeebug 19 hours ago
        • tasn 19 hours ago
          It has build.rs, which has essentially the same problems.
    • panzi 19 hours ago
      Last I checked npm had 2FA for publishing, but cargo didn't. I don't think cargo is any better than npm, just not as attractive a target.
    • cookiengineer 19 hours ago
      I suppose that go's go:generate workflow can also be abused to land a worm like the ones spreading via npm: you can build programs that just scrape the whole hard drive for git projects and patch the go.mod dependencies there, and you could also just write this in Go as a toolchain script, for example.

      NPM's Achilles' heel is the pre/postinstall step, which can run arbitrary commands and shell scripts without the user having any way to intervene.

      Dependencies must be run in isolated chroot sandboxes or, better, inside containers. That would be the only way to mitigate this problem, as the filesystem of the operating system must be separated from the filesystem of the development workflow.

      On top of that, most host-based firewalls are per-binary instead of per-cmdline. That leads to warnings and rules that allowlist network access for e.g. "python" or "nodejs" as a whole, instead of, say, "nodejs myworm.js". So firewalls in general are pretty useless against this type of malware.

      • yegle 19 hours ago
        `go:generate` is for the package provider, the command never runs when someone `go install` or `go get` the package.
        • cookiengineer 19 hours ago
          Note that the NPM worms are spreading because the package providers are developing on their libraries without them noticing a malicious dependency. It is not users/consumers spreading the worm, it is developers spreading it.

          Your mismatch is that you think in policies, not assessments here. Nothing in my normal go workflow will ask me if I want to run "curl download whatever from the internet" when I run go build.

          Though I agree with the difference in workflow, there is not a single mechanism in go catching this. go.mod files can be just patched by the worm, and/or hidden behind a /v123 folder or whatever to play shenanigans on API differences.

      • xena 19 hours ago
        go:generate is done at dev time, not at build time.
        • cookiengineer 19 hours ago
          Actually, bindings are usually generated like that at build time (though with a build cache that corrupts all the time in ways nobody understands).

          Examples that come to mind: webview/webview, webkit, cilium/ebpf and most other CGo projects that I have seen.

          • arccy 8 hours ago
            you only run them for your own project, not the generate directives of your dependencies though
    • raggi 19 hours ago
      none. they just have smaller target populations.
    • 0xbadcafebee 17 hours ago
      "What are the actual guarantees that <guy leaving his keys on his dashboard> make that <guy leaving his keys on an illuminated blinking sign outside his house> don't make?"
    • jiggawatts 19 hours ago
      Generally, other package managers aren't great either. Notably, crates.io / cargo has some of the same issues as NPM and the verbiage of their excuses for not fixing these problems is oddly similar.

      Something fascinating about the design and architecture of programming languages and their surrounding ecosystems is the enormous leverage that they provide to the "core team":

      For every 1 core language developer[1]...

      ... there may be 1,000 popular package developers...

      ... for which there may be 1,000,000 developers writing software...

      ... for over 1,000,000,000 users.

      This means that for every corner that is cut at the top of that pyramid, the harms are massively magnified at the lower tiers. A security vulnerability in a "top one thousand" package like log4j can cause billions of dollars in economic damage, man-centuries of remediation effort, etc.

      However, bizarrely, the funding at the top two levels is essentially a pittance! Most such projects are charities, begging for spare change with hat in hand on a street corner. Some of the most used libraries are often volunteer efforts, despite powering global e-commerce! cough-OpenSSL-cough.

      The result is that the people most empowered to fix the issues are the least funded to do so.

      This is why NPM, Crates.io, etc... flatly refuse to do even the most basic security checks like adding namespaces and verifying the identity of major publishers like Google, Microsoft, and the like.

      That's a non-zero amount of effort, and no matter how trivial to implement technically or how cheap to police, it would likely blow their tiny budget of unreliable donations.

      The exceptions to this rule are package managers with robust financial backing, such as NuGet, which gets reliable funding from Microsoft and supports their internal (for-profit!) workflows almost as much as it does external "free" users.

      "Free and open" is wonderful and all, but you get what you pay for.

      [1] Most of us can name them off the top of our heads: Guido van Rossum, Larry Wall, Kernighan & Ritchie, etc.

      • kibwen 17 hours ago
        You appear to have missed that NPM is owned by Microsoft.

        In addition, crates.io has not flatly refused to support namespaces, there's an entire accepted RFC for it: https://github.com/rust-lang/rfcs/pull/3243

        At the same time, note that namespacing does nothing to prevent any sort of problem here. Namespacing is great for package organization and making provenance more deliberately obvious, but beyond that it's not a security measure.

        • jiggawatts 15 hours ago
          > NPM is owned by Microsoft.

          I did not miss that.

          The "culture" of NPM was firmly established long before the acquisition by Microsoft.

          Similarly, there clearly isn't the same feeling of "ownership" over NPM and its giant pile of anonymously published packages as there is over NuGet where a substantial fraction of the traffic is Microsoft customers downloading Microsoft packages for Microsoft DotNet development on Microsoft Visual Studio for Microsoft Windows Server.

  • joeblubaugh 19 hours ago
    There has been a lot of pain at my various jobs installing a safe global npm config on every developer machine, asking people not to disable it, checking it with mdm tools. A safer out-of-the-box configuration is long overdue.
    • tkel 19 hours ago
      Just dont use npm. Use a package manager which doesn't execute postinstall by default. The switch is incredibly simple.
      • cluckindan 19 hours ago
        Which package manager is that, and what caveats does it offer?
        • timfsu 18 hours ago
          Pnpm - installs are faster to boot. We haven’t missed anything
        • ricardo_lien 18 hours ago
          pnpm
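
          For reference, recent pnpm majors block dependency lifecycle scripts unless they're explicitly allowlisted, and newer releases also add a release-age cooldown. A sketch of the relevant settings (the allowlisted package is just an example, and my reading of the docs is that the cooldown unit is minutes):

          ```yaml
          # pnpm-workspace.yaml (pnpm 10+)
          # Dependency postinstall scripts are blocked unless allowlisted here:
          onlyBuiltDependencies:
            - esbuild
          # Skip any version published less than a day ago (newer pnpm releases):
          minimumReleaseAge: 1440
          ```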
    • 10000truths 16 hours ago
      What do you mean by safe config? If you're trying to mandate a cooldown period or a whitelist/blacklist of packages, the correct approach is to configure a company-controlled registry that pulls from the upstream npm registry while enforcing your desired policies.
      • Macha 9 hours ago
        Whether the config is the registry URL or the cooldown timer you still need it on your dev machines and people to use tools that use it (the latter is especially a problem with docker in my experience, people find out testcontainers or whatever is pulling from docker hub rather than the company registry only when their CI build fails from rate limits)
        • 10000truths 24 minutes ago
          What I'm saying is that the policy should be enforced server-side. So you block the npm registry in the company firewall, and set up a company-specific registry that acts as a blessed proxy to the npm registry but enforces your desired policies. For example, if you configure your registry to refuse to pull packages published less than a week ago, then it doesn't matter if a client disables dependency cooldowns in their npm config - they still won't be able to "npm install totally-new-not-a-virus-pkg".

          People can still bypass these measures if they're determined enough (offline package installs, vendoring dependencies, etc.) but making circumvention impossible to do accidentally and inconvenient to do deliberately solves the problem 99% of the way.
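
          Concretely, the client half of that setup is a one-line registry override; the hostname below is hypothetical:

          ```ini
          # ~/.npmrc (or a checked-in project .npmrc)
          registry=https://registry.internal.example.com/npm/
          ```

          With the public registry blocked at the firewall, installs that bypass the proxy simply fail instead of silently pulling unvetted packages.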

      • tkel 15 hours ago
        Or even just a proxy that can enforce the constraints
  • brooksc 18 hours ago
    Thoughts and Prayers to those affected
    • themafia 17 hours ago
      We wish them well.
  • imrozim 13 hours ago
    Every Node.js project starts with npm install and suddenly you have 500 packages you didn't ask for. Half of them haven't been touched in years.
  • 827a 19 hours ago
    There is no legitimate reason why postinstall scripts need to exist. The npm team needs to grow up and declare "starting with npm version whatever, npm will only run postinstall scripts for versions of packages published before ${today}".
    • tkel 18 hours ago
      I audited several postinstall scripts recently in popular packages. They seem to be mostly about native binaries: downloading them, detecting whether the platform is compatible, linking to them directly instead of having them bootstrapped by Node, working around issues in older versions of npm, etc., since dev toolchains (e.g. esbuild) are now built in compiled languages and distributed as binaries via the npm registry. If you are on a recent version of node/npm and a common/recent OS/platform, you should be able to disable all postinstall scripts without legitimate issue.
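
      For anyone unfamiliar, the hook is just a scripts entry in package.json; the package name and script path below are made up for illustration:

      ```json
      {
        "name": "some-native-binding",
        "scripts": {
          "postinstall": "node scripts/download-prebuilt-binary.js"
        }
      }
      ```

      Setting `ignore-scripts=true` in your .npmrc (or passing `--ignore-scripts` to npm install/ci) makes npm skip these hooks entirely.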
      • tkel 18 hours ago
        [dead]
    • raggi 19 hours ago
      install scripts are a distraction, just like package signatures are a distraction. adding/removing either feature has no significant impact on the wormability of this package ecosystem. installed npm code is run, with nearly zero exceptions.
      • nine_k 19 hours ago
        The installed code may be run in different settings, under a different user, with different privileges. Say, it may not run in CI/CD at all, or run only with the test user's privileges.

        Postinstall scripts run at install time, with installer's privileges.

      • piperswe 19 hours ago
        A lot of it ends up bundled to run in a browser though, and doesn't end up running in Node.js
      • 827a 18 hours ago
        > There's a huge difference, because postinstall scripts are almost guaranteed to run in your CI pipeline. Compromised code probably won't (maybe it will if your test cases test a compromised package). Different attack profile. Worse in some ways (your CI likely has NPM push tokens, which is how this single-package worm become a multi-package self-replicating worm) (your CI pipeline also likely has some level of privileged access to your cloud environment; deployed services are more likely to be highly scoped). But, better in some ways.
        • dns_snek 15 hours ago
          > Compromised code probably won't (maybe it will if your test cases test a compromised package).

          Code runs automatically on import, you don't have to call dependency.infectMePlease()

          Your code imports depA which imports depB which imports depC which imports depD which has been compromised, and boom, malicious code runs before you've even finished resolving the imports.
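
          A minimal sketch of that, simulating a compromised transitive dep on disk rather than pulling anything from the registry:

          ```javascript
          // Write a "compromised" module to a temp dir, then merely require() it.
          const fs = require("fs");
          const os = require("os");
          const path = require("path");

          const dir = fs.mkdtempSync(path.join(os.tmpdir(), "dep-"));
          const depPath = path.join(dir, "dep-d.js");

          // Top-level statements in a module body run at import time.
          fs.writeFileSync(depPath, `
            globalThis.payloadRan = true;          // stand-in for a real payload
            module.exports = { helper: () => 42 };
          `);

          const { helper } = require(depPath);  // no function has been called yet...
          console.log(globalThis.payloadRan);   // → true: the side effect already fired
          console.log(helper());                // → 42
          ```

          No call into the dependency is needed; resolving the import is enough.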

          > your CI likely has NPM push tokens, which is how this single-package worm become a multi-package self-replicating worm

          I've never once seen or worked with a CI pipeline that ran "npm install" that would be any safer if post-install scripts didn't exist. They all run "npm run test" or similar.

      • throwaway27448 19 hours ago
        Surely every layer of defense in depth is a distraction except the one that prevents the problem.
        • dns_snek 15 hours ago
          It's not defense in depth if the mechanism is trivially bypassed.
          • throwaway27448 15 hours ago
              Trivial relative to which perspective? The distinction matters enough to care. Just because your father might give away his phone PIN over the phone doesn't mean we should allow that to grant remote access to his phone.
            • dns_snek 14 hours ago
              Trivial in the sense that in 99.9% of situations, "npm install" is immediately followed by "npm run", "npm test", or some form of execution. Any execution that imports a dependency is enough for a transitive dependency to execute its malicious payload immediately.

              Post-install scripts have a slight edge over executing malicious code on import, i.e. they work 99.95% of the time instead of 99.9% of the time, but removing these scripts wouldn't materially change the situation we're in. You're locking the back door but leaving the front door and all of the windows wide open.

              I'm going to suggest that we might be worse off in the short-medium term if post-install scripts are removed because everyone who thought that disabling post-install scripts was a "good enough" standalone security strategy will get caught with their pants down as attackers modify their payloads.

              • throwaway27448 3 hours ago
                > Post-install scripts have a slight edge over executing malicious code on import, i.e. they work 99.95% of the time instead of 99.9% of the time

                The "instead of" depends very much on the exploit and where it's wedged in the code. I doubt it's anywhere near 99%. Plus, getting the exploit to execute on the developer's machine is difficult to manage even in the best cases.

                > because everyone who thought that disabling post-install scripts was a "good enough" standalone security strategy will get caught with their pants down as attackers modify their payloads.

                Saying "well there are stupid people in the world" seems like a pretty bad justification to leave a hole open.

              • cindyllm 11 hours ago
                [dead]
    • amluto 19 hours ago
      There is also not too much legitimacy to the fact that Rust packages can run unsandboxed when they build themselves.
      • adamnemecek 18 hours ago
        I feel like it's harder to hide malicious stuff in Rust build scripts.
    • Rohansi 19 hours ago
      This doesn't really fix the issue though because package code is also executed at build time and during testing. Just maybe restricts the scope a little bit.
      • 827a 18 hours ago
        There's a huge difference, because postinstall scripts are almost guaranteed to run in your CI pipeline. Compromised code probably won't (maybe it will if your test cases test a compromised package). Different attack profile. Worse in some ways (your CI likely has NPM push tokens, which is how this single-package worm become a multi-package self-replicating worm) (your CI pipeline also likely has some level of privileged access to your cloud environment; deployed services are more likely to be highly scoped). But, better in some ways.

        Its childish to believe that because you can't fix everything you shouldn't fix anything. Defense in depth.

        • Rohansi 17 hours ago
          > There's a huge difference, because postinstall scripts are almost guaranteed to run in your CI pipeline. Compromised code probably won't (maybe it will if your test cases test a compromised package)

          You don't need to test a compromised package to have it execute code. Importing it anywhere in your tests is enough, even transitively.

          It's for sure less likely to run but I doubt it's significantly different in practice.

      • tkel 19 hours ago
        If you look at the last N npm worms, they all used postinstall scripts.
    • akoboldfrying 18 hours ago
      With respect, post-install scripts are a total red herring. You're alarmed by them because they are code controlled by someone else that runs on your box, and they could do something bad -- yes, they are, and yes they could.

      But so is the regular code in those packages! It won't run at install time, but something in there will run -- otherwise it wouldn't have been included in the dependencies.

      Thinking that eliminating post-install scripts will have more than a momentary impact on exploitation rates is a sign of not thinking the issue through. Unfortunately the issue is much more nuanced than TFA implies -- it's not at all a case of "Let's just stop putting the wings-fall-off button next to the light switch", it's that the thing we want to prevent (other people's bad code running on our box) cannot be distinguished from the thing we want (other people's good code running on our box) without a whole lot of painstaking manual effort, and avoiding painstaking manual effort is the only reason we even consider running other people's code in the first place.

      • apf6 18 hours ago
        The time difference does matter though. There were some recent worm attacks in NPM that spread very quickly because they used post-install. I don’t remember how long it took NPM to block the packages but it was probably around 30 minutes or so? If it wasn’t for post-install then that same attack would have a much slower spread and thus a smaller blast radius.
        • dns_snek 15 hours ago
          I don't accept the idea that it would significantly slow down the spread.

          How often do you run "npm install" just for the fun of it, without actively working on the codebase?

          IME 99% of the time the time between "npm install" and some form of execution that pulls in dependencies is less than 30 seconds.

      • 827a 17 hours ago
        > There's a huge difference, because postinstall scripts are almost guaranteed to run in your CI pipeline. Compromised code probably won't (maybe it will if your test cases test a compromised package). Different attack profile. Worse in some ways (your CI likely has NPM push tokens, which is how this single-package worm become a multi-package self-replicating worm) (your CI pipeline also likely has some level of privileged access to your cloud environment; deployed services are more likely to be highly scoped). But, better in some ways.
    • guidedlight 19 hours ago
      Security issues aside, they are a nightmare in enterprise environments where internet and OS access is heavily restricted.
    • nine_k 19 hours ago
      ...and only if you invoke it with --dangerously-run-postinstall-scripts; otherwise it will report an error if a postinstall script is found.

      This is definitely going to affect any packages that need to link to native code and/or compile shims, but these are very few.

  • erikerikson 5 hours ago
    This reads like an onion article

    > residents of the Node.js ecosystem stood unified in their belief that the malicious remote-code execution was a completely unpredictable tragedy

    Does anyone believe that claim? There's been so many counterexamples.

    It's a great dig on the ecosystem's failings but only entertainment. Perhaps a prompt for marketers to present their wares? Kinda like the maintainer of depsguard who removed, re-added, and then re-removed that admission from their post? At the time of this writing they have the top post.

  • spaqin 18 hours ago
    It's a cultural issue, always feeling the urge to update to the newest possible package for things that are already working fine, without even reading the changelog to see if it's applicable. Cooldowns are only a way to force a bit of patience onto the maintainers... and they work.
    • morbicer 14 hours ago
      If you have some sort of compliance requirements, you need to update because of the onslaught of CVE vulnerabilities in the older versions. They are mostly bogus like "regexp DOS" but you have to satisfy the process and update anyway.
    • anonzzzies 18 hours ago
      That, and package owners updating stuff that needs no updating just to not look stale/unmaintained. I can use Lisp packages without changes for 15 years just fine, but a JS one is unmaintained! Oh no! Even though it was finished 15 years ago. So they add nothing, sometimes a breaking change, just to bump a version on npm and GitHub and look maintained. And then everything will update.
  • xiaosong001 14 hours ago
    A 7-day cooldown feels like a low-effort band-aid. The real fix is probably reproducible builds + signed attestations, but most teams won't pay that tax until they've already been burned.
  • ramblurr 14 hours ago
    This link is clearly an AI laundered version of the long running joke from Xe Iaso. Shame.

    https://xeiaso.net/shitposts/no-way-to-prevent-this/CVE-2024...

    https://news.ycombinator.com/item?id=40438408

  • germandiago 18 hours ago
    I use C++ and Conan with my own recipes and pre-built artifacts.

    This mitigates things to a great extent.

    I do not know who thought that having your dependencies depend on the internet with a zillion users doing stuff to each package was a good idea for enterprise environments...

    It is crazy how much things can get endangered this way.

  • yangm97 17 hours ago
    I’m using nix for managing npm dependencies in a project and it seems like I accidentally got some protection from these attacks because of the nix sandbox. Looks like I got more than I begged for.
  • holotherapper 8 hours ago
    Lately the security vulnerabilities around Node.js have been pretty rough, and at work I've been scrambling to deal with them.
  • exabrial 19 hours ago
    I really don't understand why the npm project cannot embrace PGP as an ambulatory 'good enough' solution.
    • loloquwowndueo 19 hours ago
      The NIH mentality in the ecosystem would result in a JavaScript pgp library which itself would be an npm package and subject to supply chain attacks. lol.
      • panzi 19 hours ago
        A good part of it is already implemented in web crypto, which is supported by browsers and node. There is a chance that npm could implement something there without extra dependencies. Maybe I'm too optimistic?
    • Gigachad 19 hours ago
      Would that help? Most of these recent attacks, the attackers have gained access to the system that builds the packages. So it would have just signed the malicious build the same.
      • raggi 19 hours ago
        nope, doesn't help. signatures and removal of script points have zero net effect on the value of the target that the ecosystem has, or how easy/hard it is to write a worm. the package code gets run, this is statistically true, and the exploited developers/environments will sign packages, this is also statistically true.
      • Macha 9 hours ago
        In some ways the push towards trusted publishing has made these attacks more likely as the credentials are sitting in a standardized, always on CI system, rather than in a locked down corporate CI system for big packages or a developers machine or developers head for smaller open source packages.
    • saghm 18 hours ago
      Probably the same reason that pretty much no other package manager (or even major email provider, when email is ostensibly the most famous use-case for it) has adopted it: the UX is atrocious.
      • Macha 9 hours ago
        Basically all Linux distro package managers?
  • dh2022 18 hours ago
    Kudos to the author : this article read like something out of The Onion.
    • peterashford 14 hours ago
      It's a reference to an Onion article about gun violence in the US.
  • slopinthebag 17 hours ago
    I think people are overlooking the fact that the javascript ecosystem is run by perpetual beginners who are probably using 5 different SAAS credential managers and still manage to check their creds into a public git repo. No wonder there are so many breaches. Rust developers otoh are typically experts and don't get pwned so easily.
  • swang 17 hours ago
    Ah yes, only `npm` has ever suffered an attack. Ever.

    RubyGems: https://www.sonatype.com/blog/anatomy-of-the-rubygems-rest-c...

    PyPI: literally the latest attack included publishing malicious packages on PyPI

    XZ Utils, part of nearly every Linux distribution, nearly had code merged to backdoor SSH: https://www.akamai.com/blog/security-research/critical-linux...

    It is just easy pickings to blame npm specifically. Yes, they do share some part of the blame, but no package manager is immune from attack, and certainly not ones where the attackers exploited being able to extract secrets from a developer's environment variables or files. Seems more like developers should be managing their secrets better?

    I also find the meme that this title snowclones to be in bad taste.

    • lucketone 14 hours ago
      XZ attacker spent half a year earning trust, doing real maintenance.

      Different order of magnitude effort spent during XZ attack.

    • skydhash 16 hours ago
      Security doesn't exist in absolutes. It's about relative effort. Exploiting Debian's package management requires quite a bit of effort; NPM, while being funded by Microsoft, only needs a token to be stolen. And postinstall scripts were decried as a security risk for a long time.
  • p-e-w 20 hours ago
    With the recent high-profile attacks on PyPI packages, it’s no longer true that npm is the “only package manager where this regularly happens”.

    In fact, pip is much more dangerous than npm because it lacks a lockfile. uv fixes that, but adoption is proceeding at a snail’s pace.

    • godzillabrennus 19 hours ago
      UV adoption is happening, though. NPM is still the only name in town.
      • manquer 19 hours ago
        Huh? uv is a package manager, not a registry.

        In the JS world there is plenty of competition among package managers: pnpm, yarn, and bun are all viable alternatives to npm the package manager.

        Public registries for languages tend to coalesce around one service. Nobody wants to publish their library to 4 different registries.

    • esafak 19 hours ago
    • fragmede 19 hours ago
      I don't know about snails, but everything I'm in contact with has moved over to uv, and I can't imagine I'm the only one.
    • lateral5 17 hours ago
      [flagged]
  • skeledrew 18 hours ago
    No surprise here. That's what you get when you have a language/ecosystem where core devs refuse to fix fundamental flaws, cuz for them breaking backwards compatibility is the worst crime that can ever be committed. And so all that happens in JS-land will eternally be layering lipstick on the pig in the cesspool. Too afraid of going through something similar to the Python 2 -> 3 fiasco, I guess, because too many web devs and site admins would be incensed at being forced to fix their broken universe; as if it isn't already broken in its current condition.
  • eulgro 19 hours ago
    These satire articles on cybersecurity are really entertaining.

    The other one a few days ago was also good: https://nesbitt.io/2026/02/03/incident-report-cve-2024-yikes...

  • greatgib 9 hours ago
    There was a good old time, before the Rust and Go mindset ruined everything by pushing people to wget|bash --install random crap on the spot, when experienced people used to rely on Linux OS distributions like Debian and co to source packages and libraries, ensuring "stable" and "safe" software procurement for professional and serious infrastructure and deployments.

    But young blood mocked having to wait for manual human review, safe GPG signatures, cooldown periods, and weeks of a "testing" stage before being considered "stable".

    And now most companies' data is leaked and out in the open, and hackers and ransomware are thriving...

    This is crazy when you think about it, because after so many years of software-crafting experience, "modern safe" languages like Go and Rust, typing, ... you would expect most software stacks to be pretty solid and safe compared to 15 years ago.

  • 7e 18 hours ago
    The answer is LLM inspection. Which, sadly, raises the cost of software, especially once evil LLMs start hiding the backdoors better. Long term the answer should be CHERI, in my opinion.
  • joshka 17 hours ago
    ...so far...
  • btown 20 hours ago
  • theuniverseson 10 hours ago
    [flagged]
  • tylerchilds 14 hours ago
    [dead]
  • qrush 20 hours ago
    [flagged]
    • rileymat2 20 hours ago
      I read it as a comparison of the attitude of helplessness around it, not the acts themselves. So it was a bit meta, but unremarkably inoffensive.
    • mikepurvis 19 hours ago
      I don't think it's comparing them directly or arguing for equivalent seriousness. It is identifying a similarity of mindset where those who have their hands on the levers of power that could materially improve the situation act like there's nothing they can do.
    • mrandish 19 hours ago
      But it's not comparing to school shootings, it's satirizing supposedly responsible parties who continue to deny responsibility despite repeated catastrophic failures which are their responsibility.
    • p-e-w 19 hours ago
      You’re right. Major supply chain attacks affect far more people than school shootings do, and can potentially cost more lives through downstream effects.

      It’s 2026. Software is critical infrastructure for global civilization now. Lives and livelihoods depend on it working reliably. The “it’s just bits on a computer” quip has been outdated for 20 years now.

  • numbsafari 19 hours ago
    [flagged]
  • computersuck 17 hours ago
    Do not fucking use npm. Stay the fuck away from it. Want to write JS? AI can now write vanilla JS for you with no libraries. Own your code.
    • kulahan 15 hours ago
      I don't like vanilla JS though. I like easier-to-read abstracted JS.
  • yegle 19 hours ago
    Vendoring using git submodules should be a robust mitigation for this problem.
    • no-name-here 10 hours ago
      Wouldn't locking dependencies be far more likely for dependency-users to do, and be approximately as effective for those that do?
    • raggi 19 hours ago
      subtree is better for this case: you want to encourage actual reading before running. Reading won't catch everything, but it catches a lot, and the burden isn't as high as people always complain about before they try it.
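
      For the curious, a subtree workflow looks roughly like this (local paths stand in for the upstream repo URL so the sketch is self-contained; needs git >= 2.28 with the subtree contrib command installed):

      ```shell
      set -e
      tmp=$(mktemp -d)

      # Stand-in for the upstream dependency repository:
      git init -q -b main "$tmp/dep"
      echo 'module.exports = 1;' > "$tmp/dep/index.js"
      git -C "$tmp/dep" add index.js
      git -C "$tmp/dep" -c user.email=a@b.c -c user.name=a commit -qm 'dep v1'

      # Your project vendors it as a subtree: the code lands in-tree,
      # readable in review, instead of hiding behind a submodule pointer.
      git init -q -b main "$tmp/app"
      git -C "$tmp/app" -c user.email=a@b.c -c user.name=a commit -qm init --allow-empty
      git -C "$tmp/app" -c user.email=a@b.c -c user.name=a subtree add --prefix vendor/dep "$tmp/dep" main --squash
      ls "$tmp/app/vendor/dep"
      ```

      Updating later is `git subtree pull` with the same prefix, which is where the "actually read the diff" habit pays off.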
    • saghm 18 hours ago
      This feels like the modern analog of the king, the mice, and the cheese. What cats do I need to bring in to eat my git submodules?