Modern Node.js Patterns

(kashw1n.com)

434 points | by eustoria 9 hours ago

43 comments

  • simonw 5 hours ago
    Whoa, I didn't know about this:

      # Run with restricted file system access
      node --experimental-permission \
        --allow-fs-read=./data --allow-fs-write=./logs app.js
      
      # Network restrictions
      node --experimental-permission \
        --allow-net=api.example.com app.js
    
    Looks like they were inspired by Deno. That's an excellent feature. https://docs.deno.com/runtime/fundamentals/security/#permiss...
  • farkin88 8 hours ago
    The killer upgrade here isn’t ESM. It’s Node baking fetch + AbortController into core. Dropping axios/node-fetch trimmed my Lambda bundle and shaved about 100 ms off cold-start latency. If you’re still npm i axios out of habit, 2025 Node is your cue to drop the training wheels.
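
    To make that concrete, the built-in combo looks roughly like this (a minimal sketch; the endpoint is hypothetical):

        // abort if the request takes more than 5s (AbortSignal.timeout, Node 17.3+)
        const res = await fetch('https://api.example.com/items', {
          signal: AbortSignal.timeout(5000),
        });
        if (!res.ok) throw new Error(`HTTP ${res.status}`);
        const items = await res.json();
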
    • andai 3 hours ago
      16 years after launch, the JS runtime centered around network requests now supports network requests out of the box.
    • hliyan 1 hour ago
      There has to be something wrong with a tech stack (Node + Lambda) that adds 100ms latency for some requests, just to gain the capability [1] to send out HTTP requests within an environment that almost entirely communicates via HTTP requests.

      [1] convenient capability - otherwise you'd use XMLHttpRequest

    • exhaze 6 hours ago
      Tangential, but thought I'd share since validation and API calls go hand-in-hand: I'm personally a fan of using `ts-rest` for the entire stack since it's the leanest of all the compile + runtime zod/json schema-based validation sets of libraries out there. It lets you plug in whatever HTTP client you want (personally, I use bun, or fastify in a node env). The added overhead is totally worth it (for me, anyway) for shifting basically all type safety correctness to compile time.
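
      For context, a contract looks roughly like this (a from-memory sketch; names and shapes are hypothetical):

          import { initContract } from '@ts-rest/core';
          import { z } from 'zod';

          const c = initContract();

          // shared between server and client, so both sides type-check
          // against the same response shape
          export const contract = c.router({
            getPost: {
              method: 'GET',
              path: '/posts/:id',
              responses: { 200: z.object({ id: z.string(), title: z.string() }) },
            },
          });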

      Curious what other folks think and if there are any other options? I feel like I've searched pretty exhaustively, and it's the only one I found that was both lightweight and had robust enough type safety.

      • jbryu 5 hours ago
        Just last week I was about to integrate `ts-rest` into a project for the same reasons you mentioned above... before I realized they don't have express v5 support yet: https://github.com/ts-rest/ts-rest/issues/715

        I think `ts-rest` is a great library, but the lack of maintenance didn't make me feel confident to invest, even if I wasn't using express. Have you ever considered building your own in-house solution? I wouldn't necessarily recommend this if you already have `ts-rest` setup and are happy with it, but rebuilding custom versions of 3rd party dependencies actually feels more feasible nowadays thanks to LLMs. I ended up building a stripped down version of `ts-rest` and am quite happy with it. Having full control/understanding of the internals feels very good and it surprisingly only took a few days. Claude helped immensely and filled a looot of knowledge gaps, namely with complicated Typescript types. I would also watch out for treeshaking and accidental client zod imports if you decide to go down this route.

        I'm still a bit in shock that I was even able to do this, but yeah building something in-house is definitely a viable option in 2025.

        • hmcdona1 25 minutes ago
          ts-rest doesn't see a lot of support these days. Its lack of adoption of modern tanstack query integration patterns finally drove us to look for alternatives.

          Luckily, oRPC had progressed enough to be viable now. I cannot recommend it over ts-rest enough. It's essentially tRPC but with support for ts-rest style contracts that enable standard OpenAPI REST endpoints.

          - https://orpc.unnoq.com/

          - https://github.com/unnoq/orpc

        • jbryu 59 minutes ago
          nvm I'm dumb lol, `ts-rest` does support express v5: https://github.com/ts-rest/ts-rest/pull/786. Don't listen to my misinformation above!!

          I would say this oversight was a blessing in disguise though, I really do appreciate minimizing dependencies. If I could go back in time knowing what I know now, I still would've gone down the same path.

      • farkin88 5 hours ago
        Type safety for API calls is huge. I haven't used ts-rest but the compile-time validation approach sounds solid. Way better than runtime surprises. How's the experience in practice? Do you find the schema definition overhead worth it or does it feel heavy for simpler endpoints?
        • _heimdall 4 hours ago
          I always try to throw schema validation of some kind in API calls for any codebase I really need to be reliable.

          For prototypes I'll sometimes reach for tRPC. I don't like the level of magic it adds for a production app, but it is really quick to prototype with and we all just use RPC calls anyway.

          For production I'm most comfortable with zod, but there are quite a few good options. I'll have a fetchApi or similar wrapper call that takes in the schema + fetch() params and validates the response.
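
          A minimal sketch of that kind of wrapper (zod; names hypothetical):

              import { z } from 'zod';

              async function fetchApi(schema, url, init) {
                const res = await fetch(url, init);
                if (!res.ok) throw new Error(`HTTP ${res.status}`);
                return schema.parse(await res.json()); // throws if the shape is off
              }

              const User = z.object({ id: z.string(), name: z.string() });
              const user = await fetchApi(User, '/api/users/1');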

          • pnappa 4 hours ago
            How do you supply the schema on the other side?

              I found that keeping the frontend & backend in sync was a challenge, so I wrote a script that reads the schemas from the backend and generates an API file in the frontend.

            • exhaze 1 hour ago
              There are a few ways, but I believe SSOT (single source of truth) is key, as others basically said. Some ways:

              1. Shared TypeScript types

              2. tRPC/ts-rest style: Automagic client w/ compile+runtime type safety

              3. RTK (redux toolkit) query style: codegen'd frontend client

              I personally prefer #3 for its explicitness - you can actually review the code it generates for a new/changed endpoint. It does come w/ downside of more code + as codebase gets larger you start to need a cache to not regenerate the entire API every little change.

              Overall, I find the explicit approach to be worth it, because, in my experience, it saves days/weeks of eng hours later on in large production codebases in terms of not chasing down server/client validation quirks.

              • jedwards1211 49 minutes ago
                What is a validation quirk that would happen when using server side Zod schemas that somehow doesn’t happen with a codegened client?
            • _heimdall 1 hour ago
              I'll almost always lean on separate packages for any shared logic like that (at least if I can use the same language on both ends).

              For JS/TS, I'll have a shared models package that just defines the schemas and types for any requests and responses that both the backend and frontend are concerned with. I can also define migrations there if model migrations are needed for persistence or caching layers.

              It takes a bit more effort, but I find it nicer to own the setup myself and know exactly how it works rather than trusting a tool to wire all that up for me, usually in some kind of build step or transpilation.

            • koolba 1 hour ago
              Write them both in TypeScript and have both the request and response shapes defined as schemas for each API endpoint.

              The server validates request bodies and produces responses that match the type signature of the response schema.

              The client code has an API where it takes the request body as its input shape. And the client can even validate the server responses to ensure they match the contract.

              It’s pretty beautiful in practice as you make one change to the API to say rename a field, and you immediately get all the points of use flagged as type errors.
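
              A minimal sketch of the shape (zod; endpoint hypothetical):

                  import { z } from 'zod';

                  // one shared module per endpoint, imported by both server and client
                  export const CreateUserRequest = z.object({ name: z.string() });
                  export const CreateUserResponse = z.object({ id: z.string(), name: z.string() });

                  // server: CreateUserRequest.parse(req.body) before handling
                  // client: CreateUserResponse.parse(await res.json()) after fetching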

              • jvuygbbkuurx 1 hour ago
                This will break old clients. Having a deployment strategy taking that into account is important.
        • jedwards1211 55 minutes ago
          The schema definition is more efficient than writing input validation from scratch anyway, so it's completely win/win, unless you want to throw caution to the wind and not do any validation.
      • avandekleut 3 hours ago
        Also want to shout out ts-rest. We have a typescript monorepo where the backend and frontend import the api contract from a shared package, making frontend integration both type-safe and dead simple.
      • cassepipe 4 hours ago
        For what it's worth, happy user of ts-rest here. Best solution I landed upon so far.
    • tanduv 6 hours ago
      I never really liked the syntax of fetch: the need to await response.json(), plus implementing additional error handling -

        async function fetchDataWithAxios() {
          try {
            const response = await axios.get('https://jsonplaceholder.typicode.com/posts/1');
            console.log('Axios Data:', response.data);
          } catch (error) {
            console.error('Axios Error:', error);
          }
        }
      
      
      
        async function fetchDataWithFetch() {
          try {
            const response = await fetch('https://jsonplaceholder.typicode.com/posts/1');
      
            if (!response.ok) { // Check if the HTTP status is in the 200-299 range
              throw new Error(`HTTP error! status: ${response.status}`);
            }
      
            const data = await response.json(); // Parse the JSON response
            console.log('Fetch Data:', data);
          } catch (error) {
            console.error('Fetch Error:', error);
          }
        }
      • stevage 2 hours ago
        I usually write it like:

            const data = await fetch(url).then(r => r.json())
        
        
        But it's very easy obviously to wrap the syntax into whatever ergonomics you like.
        • mythz 47 minutes ago
          why not?

              const data = await (await fetch(url)).json()
      • farkin88 6 hours ago
        Yeah, that's the classic bundle size vs DX trade-off. Fetch definitely requires more boilerplate. The manual response.ok check and double await is annoying. For Lambda where I'm optimizing for cold starts, I'll deal with it, but for regular app dev where bundle size matters less, axios's cleaner API probably wins for me.
        • jedwards1211 46 minutes ago
          You’re shooting yourself in the foot if you put naked fetch calls all over the place in your own client SDK though. Or at least going to extra trouble for no benefit
        • hn_throwaway_99 3 hours ago
          Agreed, but I think that in every project I've done I've put at least a minimal wrapper function around axios or fetch - so adding a teeny bit more to make fetch nicer feels like tomayto-tomahto to me.
      • freeopinion 2 hours ago
        I somehow don't get your point.

        The following seems cleaner than either of your examples. But I'm sure I've missed the point.

          fetch(url).then(r=>r.ok ? r.json() : Promise.reject(r.status))
          .then(
            j=>console.log('Fetch Data:', j),
            e=>console.log('Fetch Error:', e)
          );
        
        I share this at the risk of embarrassing myself in the hope of being educated.
    • franciscop 7 hours ago
      As a library author it's the opposite: while fetch() is amazing, ESM has been the painful but definitely worthwhile upgrade. It has all the things the author describes.
      • farkin88 7 hours ago
        Interesting to get a library author's perspective. To be fair, you guys had to deal with the whole ecosystem shift: dual package hazards, CJS/ESM compatibility hell, tooling changes, etc so I can see how ESM would be the bigger story from your perspective.
        • franciscop 6 hours ago
          I'm a small-ish time author, but it was really painful for a while since we were all dual-publishing in CJS and ESM, which was a mess. At some point some prominent authors decided to go full-ESM, and basically many of us followed suit.

          The fetch() change has been big only for the libraries that did need HTTP requests, otherwise it hasn't been such a huge change. Even in those it's been mostly removing some dependencies, which in a couple of cases resulted in me reducing the library size by 90%, but this is still Node.js where that isn't such a huge deal as it'd have been on the frontend.

          Now there's an unresolved one, which is the Node.js streams vs WebStreams, and that is currently a HUGE mess. It's a complex topic on its own, but it's made a lot more complex by having two different streaming standards that are hard to match.
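
          There are at least converters in node:stream now (marked experimental last I checked), roughly:

              import { Readable } from 'node:stream';

              // Node stream -> WHATWG web stream, and back again
              const webStream = Readable.toWeb(Readable.from(['a', 'b']));
              const nodeStream = Readable.fromWeb(webStream);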

          • farkin88 5 hours ago
            What a dual-publishing nightmare. Someone had to break the stalemate first. 90% size reduction is solid even if Node bundle size isn't as critical. The streams thing sounds messy, though. Two incompatible streaming standards in the same runtime is bound to create headaches.
        • bikeshaving 4 hours ago
          The fact that CJS/ESM compatibility issues are going away indicates it was always a design choice and never a technical limitation (most CJS format code can consume ESM and vice versa). So much lost time to this problem.
          • bakkoting 27 minutes ago
            It was neither a design choice nor a technical limitation. It was a big complicated thing which necessarily involved fiddly internal work and coordination between relatively isolated groups. It got done when someone (Joyee Cheung) actually made the fairly heroic effort to push through all of that.

            Joyee has a nice post going into details. Reading this gives a much more accurate picture of why things do and don't happen in big projects like Node: https://joyeecheung.github.io/blog/2024/03/18/require-esm-in...

        • stevage 2 hours ago
          I maintain a library also, and the shift to ESM was incredibly painful, because you still have to ship CJS, only now you have to work out how to write the code in a way that can be bundled either way, can be tested, etc etc.
          • 8n4vidtmkvmk 40 minutes ago
            It was a pain, but rollup can export both if you write the source in esm. The part I find most annoying is exporting the typescript types. There's no tree-shaking for that!
    • yawnxyz 7 hours ago
      node fetch is WAY better than axios (easier to use/understand, simpler); didn't really know people were still using axios
      • reactordev 6 hours ago
        You still see axios used in amateur tutorials and stuff on dev.to and similar sites. There’s also a lot of legacy out there.
        • bravesoul2 6 hours ago
          AI is going to bring that back like an 80s disco playing Wham. If you gonna do it do it wrong...
          • macNchz 5 hours ago
            I've had Claude decide to replace my existing fetch-based API calls with Axios (not installed or present at all in the project), apropos of nothing during an unrelated change.
          • reactordev 5 hours ago
            hahaha, I see it all the time in my responses. I immediately reject.
      • Raed667 7 hours ago
        I do miss the axios extensions tho, it was very easy to add rate-limits, throttling, retry strategies, cache, logging ..

        You can obviously do that with fetch but it is more fragmented and more boilerplate

        • farkin88 7 hours ago
          Totally get that! I think it depends on your context. For Lambda where every KB and millisecond counts, native fetch wins, but for a full app where you need robust HTTP handling, the axios plugin ecosystem was honestly pretty nice. The fragmentation with fetch libraries is real. You end up evaluating 5 different retry packages instead of just grabbing axios-retry.
        • hiccuphippo 7 hours ago
          Sounds like there's space for an axios-like library built on top of fetch.
          • farkin88 7 hours ago
            I think that's the sweet spot. Native fetch performance with axios-style conveniences. Some libraries are moving in that direction, but nothing's really nailed it yet. The challenge is probably keeping it lightweight while still solving the evaluating 5 retry packages problem.
            • crabmusket 6 hours ago
              Is this what you're looking for? https://www.npmjs.com/package/ky

              I haven't used it but the weekly download count seems robust.

              • farkin88 6 hours ago
                Ky is definitely one of the libraries moving in that direction. Good adoption based on those download numbers, but I think the ecosystem is still a bit fragmented. You've got ky, ofetch, wretch, etc. all solving similar problems. But yeah, ky is probably the strongest contender right now, in my opinion.
          • lllllllllllll6 7 hours ago
            [dead]
      • mcv 5 hours ago
        This is all very good news. I just got an alert about a vulnerability in a dependency of axios (it's an older project). Getting rid of these dependencies is a much more attractive solution than merely upgrading them.
        • thewisenerd 4 hours ago
          isn't upgrading node going to be a bigger challenge? (if you're on a node version that's no longer receiving maintenance)
      • farkin88 7 hours ago
        Right?! I think a lot of devs got stuck in the axios habit from before Node 18 when fetch wasn't built-in. Plus axios has that batteries included feel with interceptors, auto-JSON parsing, etc. But for most use cases, native fetch + a few lines of wrapper code beats dragging in a whole dependency.
      • benoau 6 hours ago
        axios got discontinued years ago I thought, nobody should still be using it!
        • creatonez 4 hours ago
          No? Its last update was 12 days ago
    • jedwards1211 58 minutes ago
      This has been the case for quite awhile, most of the things in this article aren’t brand new
    • vinnymac 6 hours ago
      Undici in particular is very exciting as a built-in request library, https://undici.nodejs.org
      • farkin88 6 hours ago
        Undici is solid. Being the engine behind Node's fetch is huge. The performance gains are real and having it baked into core means no more dependency debates. Plus, it's got some great advanced features (connection pooling, streams) if you need to drop down from the fetch API. Best of both worlds.
        • forty 12 minutes ago
            It's in core but not exposed to users directly. You still need to install the npm module if you want to use it, which is required if you need, for example, to go through an outgoing proxy in your production environment.
    • pbreit 6 hours ago
      It has always astonished me that platforms did not have first class, native "http client" support. Pretty much every project in the past 20 years has needed such a thing.

      Also, "fetch" is lousy naming considering most API calls are POST.

      • catlifeonmars 4 hours ago
        “Most” is doing a lot of heavy lifting here. I use plenty of APIs that are GET
      • rendall 4 hours ago
        That's a category error. Fetch just refers to making a request. POST is the method, or the HTTP verb, used when making the request. If you're really keen, you could roll your own

          const post = (url) => fetch(url, {method:"POST"})
        • catlifeonmars 4 hours ago
          I read this as OP commenting on the double meaning of the category. In English, “fetch” is a synonym of “GET”, so it’s silly that “fetch” as a category is independent of the HTTP method
          • rendall 2 hours ago
            That makes sense.
    • TheRealPomax 3 hours ago
      Those... are not mutually exclusive as killer upgrade. No longer having to use a nonsense CJS syntax is absolutely also a huge deal.

      Web parity was "always" going to happen, but the refusal to add ESM support, and then when they finally did, the refusal to have a transition plan for making ESM the default, and CJS the fallback, has been absolutely grating for the last many years.

      • 8n4vidtmkvmk 46 minutes ago
        Especially since it seems perfectly possible to support both simultaneously. Bun does it. If there's an edge case, I still haven't hit it.
    • synergy20 6 hours ago
      axios works for both node and browser in production code, not sure if fetch can do as much as axios in browser though
  • vinnymac 6 hours ago
    You no longer need to install chalk or picocolors either, you can now style text yourself:

    `const { styleText } = require('node:util');`

    Docs: https://nodejs.org/api/util.html#utilstyletextformat-text-op...
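
    Usage is along these lines:

        const { styleText } = require('node:util');

        console.log(styleText('green', 'build passed'));
        console.log(styleText(['bold', 'red'], 'build failed')); // formats can be combined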

    • austin-cheney 1 hour ago
      I never needed those. I would just have an application wide object property like:

                  text: {
                      angry    : "\u001b[1m\u001b[31m",
                      blue     : "\u001b[34m",
                      bold     : "\u001b[1m",
                      boldLine : "\u001b[1m\u001b[4m",
                      clear    : "\u001b[24m\u001b[22m",
                      cyan     : "\u001b[36m",
                      green    : "\u001b[32m",
                      noColor  : "\u001b[39m",
                      none     : "\u001b[0m",
                      purple   : "\u001b[35m",
                      red      : "\u001b[31m",
                      underline: "\u001b[4m",
                      yellow   : "\u001b[33m"
                  }
      
      And then you can call that directly like:

          `${vars.text.green}whatever${vars.text.none}`;
  • tyleo 8 hours ago
    This is great. I learned several things reading this that I can immediately apply to my small personal projects.

    1. Node has built in test support now: looks like I can drop jest!

    2. Node has built in watch support now: looks like I can drop nodemon!
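
    For anyone else curious, the minimal version of both looks something like this (sketch):

        // math.test.js
        import { test } from 'node:test';
        import assert from 'node:assert/strict';

        test('adds', () => {
          assert.equal(1 + 1, 2);
        });

        // run once:         node --test
        // re-run on change: node --test --watch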

    • pavel_lishin 6 hours ago
      I still like jest, if only because I can use `jest-extended`.
      • vinnymac 6 hours ago
        If you haven't tried vitest I highly recommend giving it a go. It is compatible with `jest-extended` and most of the jest matcher libraries out there.
        • pavel_lishin 6 hours ago
          I've heard it recommended; other than speed, what does it have to offer? I'm not too worried about shaving off half-a-second off of my personal projects' 5-second test run :P
          • ezfe 1 hour ago
            Jest is just not modern, it can't handle modern async/ESM/etc. out of the box. Everything just works in Vitest.
          • tkcranny 5 hours ago
            It has native TS and JSX support, excellent spy, module, and DOM mocking, benchmarking, works with vite configs, and parallelises tests to be really fast.
    • hungryhobbit 6 hours ago
      Eh, the Node test stuff is pretty crappy, and the Node people aren't interested in improving it. Try it for a few weeks before diving headfirst into it, and you'll see what I mean (and then if you go file issues about those problems, you'll see the Node team not care).
      • tejohnso 46 minutes ago
        I just looked at the documentation and it seems there's some pretty robust mocking and even custom test reporters. Definitely sounds like a great addition. As you suggest, I'll temper my enthusiasm until I actually try it out.
      • upcoming-sesame 4 hours ago
        still I would rather use that than import mocha, chai, Sinon, istanbul.

        At the end it's just tests; the syntax might be more verbose, but LLMs write it anyway ;-)

        • johnny22 57 minutes ago
          > but Llms write it anyway

          The problem isn't in the writing, but the reading!

  • azangru 7 hours ago
    Matteo Collina says that the node fetch under the hood is the fetch from the undici node client [0]; and that also, because it needs to generate WHATWG web streams, it is inherently slower than the alternative — undici request [1].

    [0] - https://www.youtube.com/watch?v=cIyiDDts0lo

    [1] - https://blog.platformatic.dev/http-fundamentals-understandin...
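
    For comparison, the faster path he's describing is undici's own request API (the npm package), roughly:

        import { request } from 'undici';

        const { statusCode, body } = await request('https://example.com/api');
        // body is an enhanced Node readable with helper mixins, not a WHATWG stream
        console.log(statusCode, await body.json());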

    • vinnymac 6 hours ago
      If anyone is curious how they are measuring these are the benchmarks: https://github.com/nodejs/undici/blob/main/benchmarks/benchm...

      I did some testing on an M3 Max Macbook Pro a couple of weeks ago. I compared the local server benchmark they have against a benchmark over the network. Undici appeared to perform best for local purposes, but Axios had better performance over the network.

      I am not sure why that was exactly, but I have been using Undici with great success for the last year and a half regardless. It is certainly production ready, but often requires some thought about your use case if you're trying to squeeze out every drop of performance, as is usual.

    • sieabahlpark 7 hours ago
      [dead]
  • bravesoul2 6 hours ago
    Anyone else find they discover these sorts of things by accident? I never know when a feature was added, just vague ideas of "that's modern". Feels different to when I only did C# and you'd read the new language features and get all excited. In a polyglot world, and with the rate even individual languages evolve, it's hard to keep up! I usually learn through osmosis or a blog post like this (but that is random learning).
    • moralestapia 5 hours ago
      I'm truly a fan of node (and V8) so once in a while (2-3 months?) I read their release notes and become aware of these things.

      Sometimes I also read the proposals, https://github.com/tc39/proposals

      I really want the pipeline operator to be included.

  • prmph 7 hours ago
    I think slowly Node is shaping up to offer strong competition to Bun.js, Deno, etc. such that there is little reason to switch. The mutual competition is good for the continued development of JS runtimes
  • gabrielpoca118 8 hours ago
    Don’t forget the native typescript transpiler which reduces the complexity a lot for those using TS
    • theThree 5 hours ago
      It's still not ready for use. I don't care about enums. But you cannot import local files without extensions. You cannot define class properties in the constructor.
      • throwitaway1123 3 hours ago
        Enums and parameter properties can be enabled with the --experimental-transform-types CLI option.

        Not being able to import TypeScript files without including the ts extension is definitely annoying. The rewriteRelativeImportExtensions tsconfig option added in TS 5.7 made it much more bearable though. When you enable that option not only does the TS compiler stop complaining when you specify the '.ts' extension in import statements (just like the allowImportingTsExtensions option has always allowed), but it also rewrites the paths if you compile the files, so that the build artifacts have the correct js extension: https://www.typescriptlang.org/docs/handbook/release-notes/t...
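
        For reference, that's just this in tsconfig.json (TS 5.7+):

            {
              "compilerOptions": {
                "rewriteRelativeImportExtensions": true
              }
            }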

      • erikpukinskis 4 hours ago
        Why would you want to do either of those?
        • solarkraft 3 hours ago
          Both are very common Typescript patterns.
          • bapak 43 minutes ago
            Importing without extensions is not a TypeScript thing at all. Node introduced it at the beginning and then stopped when implementing ESM. Being strict is a feature.

            What's true is that they "support TS" but require .ts extensions, which was never even allowed until Node added "TS support". That part is insane.

            TS only ever accepted .js and officially rejected support for .ts appearing in imports. Then came Node and strong-armed them into it.

    • sroussey 5 hours ago
      It strips TS, it does not transpile.

      Things like TS enums will not work.

      • throwitaway1123 3 hours ago
        In Node 22.7 and above you can enable features like enums and parameter properties with the --experimental-transform-types CLI option (not to be confused with the old --experimental-strip-types option).
    • mmcnl 7 hours ago
      Exactly. You don't even need --experimental-strip-types anymore.
  • jmull 3 hours ago
    Something's missing in the "Modern Event Handling with AsyncIterators" section.

    The demonstration code emits events, but nothing receives them. Hopefully some copy-paste error, and not more AI generated crap filling up the internet.

    • yesbabyyes 59 minutes ago
      It's definitely ai slop. See also the nonsensical attempt to conditionally load SQLite twice, in the dynamic imports example.

      The list of features is nice, I suppose, for those who aren't keeping up with new releases, but IMO, if you're working with node and js professionally, you should know about most, if not all of these features.

      • steve_adams_86 15 minutes ago
        Hasn't AsyncIterator been available in Node for several years? I used it extensively—I want to say—around 3 years ago.

        It's definitely awesome but doesn't seem newsworthy. The experimental stuff seems more along the lines of newsworthy.

  • mythz 40 minutes ago
    Good to see Node is catching up, although Bun seems to have more developer effort behind it, so I'll typically default to Bun unless I need it to run in an environment where node is better for compatibility.
  • rco8786 7 hours ago
    I've been away from the node ecosystem for quite some time. A lot of really neat stuff in here.

    Hard to imagine that this wasn't due to competition in the space. With Deno and Bun trying to eat up some of the Node market in the past several years, seems like Node development got kicked into high gear.

  • stevage 2 hours ago
    Huh, I write a fair bit of Node and there was a lot new here for me. Like the built in test stuff.

    Also hadn't caught up with the `node:` namespace.

  • fleebee 7 hours ago
    Nice post! There's a lot of stuff here that I had no idea was built in already.

    I tried making a standalone executable with the command provided, but it produced a .blob which I believe still requires the Node runtime to run. I was able to make a true executable with postject per the Node docs[1], but a simple Hello World resulted in a 110 MB binary. This is probably a drawback worth mentioning.

    Also, seeing those arbitrary timeout limits I can't help but think of the guy in Antarctica who had major headaches about hardcoded timeouts.[2]

    [1]: https://nodejs.org/api/single-executable-applications.html

    [2]: https://brr.fyi/posts/engineering-for-slow-internet

  • austin-cheney 6 hours ago
    I see two classes of emerging features, just like in the browser:

    1. new technologies

    2. vanity layers for capabilities already present

    It’s interesting to watch where people place their priorities given those two segments

    • ctoth 6 hours ago
      One man's "vanity layers?" is another man's ergonomics.
      • spankalee 5 hours ago
        And in many of the cases talked about here, the "vanity layers" are massive interoperability improvements.
  • growbell_social 5 hours ago
    I'm just happy to see Node.js patterns as a #1 on HN after continually being dismissed from 2012-2018.
  • didip 1 hour ago
    Will node one day absorb Typescript and use it as default?
  • zacharyvoase 13 minutes ago
    we cannot escape the AI generated slop can we?

    online writing before 2022 is the low-background steel of the information age. now these models will all be training on their own output. what will the consequences be of this?

  • refulgentis 7 hours ago
    The LLM made this sound so epic: "The node: prefix is more than just a convention—it’s a clear signal to both developers and tools that you’re importing Node.js built-ins rather than npm packages. This prevents potential conflicts and makes your code more explicit about its dependencies."
    • wavemode 7 hours ago
      so in other words, it's a convention
    • bashtoni 6 hours ago
      Agreed. It's surprising to see this sort of slop on the front page, but perhaps it's still worthwhile as a way to stimulate conversation in the comments here?
      • jmkni 5 hours ago
        I learned quite a few new things from this, I don't really care if OP filtered it through an LLM before publishing it
        • refulgentis 5 hours ago
          Same, but, I'm struggling with the idea that even if I learn things I haven't before, at the limit, it'd be annoying if we gave writing like this a free pass continuously - I'd argue filtered might not be the right word - I'd be fine with net reduction. There's something bad about adding fluff (how many game changers were there?)

          An alternative framing I've been thinking about is, there's clearly something bad when you leave in the bits that obviously lower signal to noise ratio for all readers.

          Then throw in the account being new, and, well, I hope it's not a harbinger.*

          * It is and it's too late.

      • jjani 2 hours ago
        I too find it unreadable, I guess that's the downside of working on this stuff every day, you get to really hate seeing it.

        It does tell you that if even 95% of HN can't tell, then 99% of the public can't tell. Which is pretty incredible.

      • refulgentis 5 hours ago
        I have an increasing feeling of doom re: this.

        The forest is darkening, and quickly.

        Here, I'd hazard that 15% of front page posts in July couldn't pass an "avoids well-known LLM shibboleths" check.

        Yesterday night, about 30% of my TikTok for you page was racist and/or homophobic videos generated by Veo 3.

        Last year I thought it'd be beaten back by social convention. (i.e. if you could show it was LLM output, it'd make people look stupid, so there was a disincentive to do this)

        The latest round of releases was smart enough, and has diffused enough, that seemingly we have reached a moment where most people don't know the latest round of "tells" and it passes their Turing test, so there's not enough shame attached to prevent it from becoming a substantial portion of content.

        I commented something similar re: slop last week, but made the mistake of including a side thing about Markdown-formatting. Got downvoted through the floor and a mod spanking, because people bumrushed to say that was mean, they're a new user so we should be nicer, also the Markdown syntax on HN is hard, also it seems like English is their second language.

        And the second half of the article was composed of entirely 4 item lists.

        • zacharyvoase 9 minutes ago
          i am desperately awaiting the butlerian jihad ;_;
        • jjani 2 hours ago
          There's just so many tells in this one though, and they aren't new ones. Like a dozen+, besides just the entire writing style being one, permeating through every word.

          I'm also pretty shocked how HNers don't seem to notice or care, IMO it makes it unreadable.

          I'd write an article about this but all it'd do is make people avoid just those tells and I'm not sure if that's an improvement.

    • Hackbraten 6 hours ago
      Also no longer having to use an IIFE for top-level await is allegedly a „game changer.“
  • serguzest 7 hours ago
    One thing you should add to section 10 is encouraging people to pass `cause` option while throwing new Error instances. For example

    new Error("something bad happened", { cause: innerException })
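
    That keeps the original stack reachable from whatever catches it at the top, e.g. (sketch; loadConfig is hypothetical):

        try {
          await loadConfig();
        } catch (err) {
          // rethrow with context, without losing the original error
          throw new Error("could not initialize app", { cause: err });
        }

        // a handler further up can log both err.message and err.cause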

  • keysdev 8 hours ago
    About time! The whole dragging of feet on ESM adoption is insane. Quite a lot of npm is still stuck on CommonJS. In some way glad jsr came along.
    • DimmieMan 3 hours ago
      I blame tooling folks doing too good of a job abstracting the problem away, and no this of course isn't a jab at them.

      probably 70 to 80% of JS users have barely any idea of the difference because their tooling just makes it work.

  • NackerHughes 5 hours ago
    Be honest. How much of this article did you write, and how much did ChatGPT write?
    • jjani 2 hours ago
      To the latter: Absolutely all of it, though I put my money on Claude, it has more of its prominent patterns.
    • Our_Benefactors 5 hours ago
      What, surely you’re not implying that bangers like the following are GPT artifacts!? “The changes aren’t just cosmetic; they represent a fundamental shift in how we approach server-side JavaScript development.”
      • jameshart 5 hours ago
        And now we need to throw the entire article out because we have no idea whether any of these features are just hallucinations.
        • drewbitt 2 hours ago
          I think we have enough node developers here to know its truthfulness.
  • amclennon 7 hours ago
    Some good stuff in here. I had no idea about AsyncIterators before this article, but I've done similar things with generators in the past.

    A couple of things seem borrowed from Bun (unless I didn't know about them before?). This seems to be the silver lining from the constant churn in the Javascript ecosystem

  • lvl155 8 hours ago
    Thank you for this. Very helpful as I was just starting to dig into node for first time in a few years.
  • fud101 52 minutes ago
    I could never get into node but i've recently been dabbling with bun which is super nice. I still don't think i'll give node a chance but maybe i'm missing out.
  • yawnxyz 7 hours ago
    I feel like node and deno conventions are somehow merging (which is a good thing)
    • upcoming-sesame 4 hours ago
      Yes around web standards
    • mdhb 4 hours ago
      I think this partly at least is coming from the WinterCG efforts.
  • asgr 6 hours ago
    Deno has sandboxing tho
  • ale 5 hours ago
    Why bother with node when bun is a much better alternative for new projects?
    • crabmusket 4 hours ago
      I am being sincere and a little self deprecating when I say: because I prefer Gen X-coded projects (Node, and Deno for that matter) to Gen Z-coded projects (Bun).

      Bun being VC-backed allows me to fig-leaf that emotional preference with a rational facade.

      • DimmieMan 3 hours ago
        I think I kind of get you, there's something I find off putting about Bun like it's a trendy ORM or front end framework where Node and Deno are trying to be the boring infrastructure a runtime should be.

        Not to say Deno doesn't try, some of their marketing feels very "how do you do fellow kids" like they're trying to play the JS hype game but don't know how to.

        • crabmusket 30 minutes ago
          Yes, that's it. I don't want a cute runtime, I want a boring and reliable one.

          Deno has a cute mascot, but everything else about it says "trust me, I'm not exciting". Ryan Dahl himself also brings an "I've done this before" pedigree.

    • presentation 2 hours ago
      I haven't used it for a few months but in my experience, its package/monorepo management features suck compared to pnpm (dependencies leak between monorepo packages, the command line is buggy, etc), bun --bun is stupid, build scripts for packages routinely blow up since they use node so i end up needing to have both node and bun present for installs to work, packages routinely crash because they're not bun-compatible, most of the useful optimizations are making it into Node anyway, and installing ramda or whatever takes 2 seconds and I trust it so all of Bun's random helper libraries are of marginal utility.
    • johnny22 52 minutes ago
      because bun is written in a language that isn't even stable (zig) and uses webkit. None of the developer niceties will cover that up. I also don't know if they'll be able to monetize, which means it might die if funding dries up.
    • tonypapousek 5 hours ago
      Why bother with bun when deno 2 is a much better alternative for new projects?
      • 0x073 5 hours ago
        Why bother with deno 2 when node 22 is a much better alternative for new projects?

        (closing the circle)

    • FearTheFacts 2 hours ago
      [dead]
  • nikanj 5 hours ago
    By the time you finish reading this guide and update your codebase, the state-of-the-art JS best practices have changed at least twice
  • panzi 6 hours ago
    Unless it changed how NodeJS handles this, you shouldn't use Promise.all(). Because if more than one promise rejects, the second rejection will emit an unhandledRejection event, and by default that crashes your server. Use Promise.allSettled() instead.
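
    With allSettled you inspect each outcome yourself, e.g. (taskA/taskB hypothetical):

        // never rejects; every outcome is reported
        const results = await Promise.allSettled([taskA(), taskB()]);
        for (const r of results) {
          if (r.status === 'rejected') console.error(r.reason);
          else console.log(r.value);
        }
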
    • vinnymac 6 hours ago
      Promise.all() itself doesn't inherently cause unhandledRejection events. Any rejected promise that is left unhandled will throw an unhandledRejection, allSettled just collects all rejections, as well as fulfillments for you. There are still legitimate use cases for Promise.all, as there are ones for Promise.allSettled, Promise.race, Promise.any, etc. They each serve a different need.

      Try it for yourself:

      > node

      > Promise.all([Promise.reject()])

      > Promise.reject()

      > Promise.allSettled([Promise.reject()])

      Promise.allSettled never results in an unhandledRejection, because it never rejects under any circumstance.

    • kaoD 5 hours ago
      This didn't feel right so I went and tested.

          process.on("uncaughException", (e) => {
              console.log("uncaughException", e);
          });
      
          try {
              const r = await Promise.all([
                  Promise.reject(new Error('1')),
                  new Promise((resolve, reject) => {
                      setTimeout(() => reject(new Error('2'), 1000));
                  }),
              ]);
      
              console.log("r", r);
          } catch (e) {
              console.log("catch", e);
          }
      
          setTimeout(() => {
              console.log("setTimeout");
          }, 2000);
      
      Produces:

          alvaro@DESKTOP ~/Projects/tests
          $ node -v
          v22.12.0
      
          alvaro@DESKTOP ~/Projects/tests
          $ node index.js 
          catch Error: 1
              at file:///C:/Users/kaoD/Projects/tests/index.js:7:22
              at ModuleJob.run (node:internal/modules/esm/module_job:271:25)
              at async onImport.tracePromise.__proto__ (node:internal/modules/esm/loader:547:26)
              at async asyncRunEntryPointWithESMLoader (node:internal/modules/run_main:116:5)
          setTimeout
      
      So, nope. The promises are just ignored.
      • panzi 2 hours ago
        So they did change it! Good.

        I definitely had a crash like that a long time ago, and you can find multiple articles describing that behavior. It had existed for quite a while, so I didn't think it was something they would fix, and I didn't keep track of it.

      • cluckindan 2 hours ago
        Typo? ”uncaughException”
    • mijkal 5 hours ago
      When using Promise.all(), it won't fail entirely if individual promises have their own .catch() handlers.
      • andrewmcwatters 2 hours ago
        Subtle enough you’ll learn once to not do that again if you’re not looking for that behavior.
  • kfuse 8 hours ago
    Node now has limited support for TypeScript and has SQLite built in, so it becomes really good for small/personal web oriented projects.
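
    The SQLite part (node:sqlite, still experimental, so the API may shift) is roughly:

        const { DatabaseSync } = require('node:sqlite');

        const db = new DatabaseSync(':memory:');
        db.exec('CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)');
        db.prepare('INSERT INTO notes (body) VALUES (?)').run('hello');
        console.log(db.prepare('SELECT * FROM notes').all());
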
  • serguzest 6 hours ago
    I love Node's built-in testing and how it integrates with VSCode's test runner. But I still miss Jest matchers. The Vitest team ported Jest matchers for their own use. I wish there were a similar compatibility between Jest matchers and Node testing as well.
    • vinnymac 6 hours ago
      Currently for very small projects I use the built in NodeJS test tooling.

      But for larger and more complex projects, I tend to use Vitest these days. At 40MBs down, and most of the dependency weight falling to Vite (33MBs and something I likely already have installed directly), it's not too heavy of a dependency.

      • serguzest 4 hours ago
        It is based on Vite, and a bundler has no place in my backend. Vite is based on Rollup, and Rollup uses some other things such as swc. I want to use TypeScript projects and npm workspaces, which Vite doesn't seem to care about.
    • tkzed49 6 hours ago
      assertions in node test feel very "technically correct but kind of ugly" compared to jest, but I'll use it anyway
      • serguzest 3 hours ago
        Yes, but consider this Jest code; replicating it in node testing is painful. Testing code should be DSL-like, should be very easy to read.

                    expect(bar).toEqual(
                        expect.objectContaining({
                            symbol: `BTC`,
                            interval: `hour`,
                            timestamp: expect.any(Number),
                            o: expect.any(Number),
                            h: expect.any(Number),
                            l: expect.any(Number),
                            c: expect.any(Number),
                            v: expect.any(Number)
                        })
                    );
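
        The closest I've gotten in node:test is a pile of manual assertions, a sketch of the same checks:

            assert.strictEqual(bar.symbol, 'BTC');
            assert.strictEqual(bar.interval, 'hour');
            for (const key of ['timestamp', 'o', 'h', 'l', 'c', 'v']) {
              assert.strictEqual(typeof bar[key], 'number');
            }
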
  • insin 6 hours ago
    "SlopDetector has detected 2 x seamlessly and 7 x em-dash, would you like to continue?"
    • presentation 2 hours ago
      fwiw i use em-dashes
    • Lockal 6 hours ago
      Screaming "you’re not just writing contemporary code—you’re building applications that are more maintainable, performant, and aligned"
  • MuffinFlavored 6 hours ago
    Is current node.js a better language than .NET 6/7/8/9, why or why not?
    • bloomca 6 hours ago
      Node.js is a runtime, not a language. It is quite capable, but as per usual, it depends on what you need/have/know, ASP.NET Core is a very good choice too.
      • MuffinFlavored 5 hours ago
        > ASP.NET Core is a very good choice too.

        I have found this to not be true.

        • jiggawatts 4 hours ago
          Recently?

          In my experience ASP.NET 9 is vastly more productive and capable than Node.js. It has a nicer developer experience, it is faster to compile, faster to deploy, faster to start, serves responses faster, it has more "batteries included", etc, etc...

          What's the downside?

          • drewbitt 2 hours ago
            Compile speed and a subjective DX opinion are very debatable.

            The breadth of npm packages is a good reason to use node. It has basically everything.

            • jiggawatts 2 hours ago
              It has terrible half-completed versions of everything, all of which are subtly incompatible with everything else.

              I regularly see popular packages that are developed by essentially one person, or a tiny volunteer team that has priorities other than things working.

              Something else I noticed is that NPM packages have little to no "foresight" or planning ahead... because they're simply an itch that someone needed to scratch. There's no cohesive vision or corporate plan as a driving force, so you get a random mish-mash of support, compatibility, lifecycle, etc...

              That's fun, I suppose, if you enjoy a combinatorial explosion of choice and tinkering with compatibility shims all day instead of delivering boring stuff like "business value".

    • jiggawatts 4 hours ago
      In my experience, no.

      It's still single-threaded, it still uses millions of tiny files (making startup very slow), it still has wildly inconsistent basic management because it doesn't have "batteries included", etc...

      • ninetyninenine 1 hour ago
        You can bundle it all into one file and it's not single threaded anymore. There's this thing called worker_threads.

        But yes there are downsides. But the biggest ones you brought up are not true.

  • rvz 4 hours ago
    Perhaps the technology that you are using is loaded with hundreds of foot-guns if you have to spend time on enforcing these patterns.

    Rather than taking the logical focus on making money, it is wasting time on shuffling around code and being an architecture astronaut with the main focus on details rather than shipping.

    One of the biggest errors one can make is still using Node.js and Javascript on the server in 2025.

  • lightbendover 6 hours ago
    [dead]
  • FearTheFacts 2 hours ago
    [dead]
  • nabwodahs 7 hours ago
    [dead]
  • andrewmcwatters 8 hours ago
    [flagged]
    • mattnewton 8 hours ago
      isn’t this just like one of the few problems that is completely solvable today with LLM coding agents?
      • andrewmcwatters 7 hours ago
        Ideally, a codemod would fix this, but the two module systems are incompatible for dynamic programming reasons.
        • mattnewton 7 hours ago
          right, those transformations are a little too tricky for a codemod, but definitely still mechanical enough for LLMs to chug through is my guess.
    • seattle_spring 8 hours ago
      "Anyone who disagrees with me is a junior engineer."

      Just because a new feature can't always easily be slipped into old codebases doesn't make it a bad feature.

      • andrewmcwatters 8 hours ago
        It’s pretty obviously bad. There was no need to design such a bad module system and basically destroy the work of others for no benefit.

        Yes, it’s 100% junior, amateur mentality. I guess you like pointless toil and not getting things done.

        • wiseowise 7 hours ago
          ESM is literally a standard. You can rant all you want, but you'll adopt it anyway.
          • andrewmcwatters 7 hours ago
            Not really, from everything I can see, authors are basically forced to ship both, so it’s just another schism. Libraries that stopped shipping CJS we just never adopted, because we’re not dropping mature tech for pointless junior attitudes like this.

            No idea why you think otherwise, I’m over here actually shipping.

  • chickenzzzzu 8 hours ago
    Yet more architecture astronaut behavior by people who really should just be focusing on ifs, fors, arrays, and functions.
    • triyambakam 8 hours ago
      Architecture astronaut is a term I hadn't heard but can appreciate. However I fail to see that here. It's a fair overview of newish Node features... Haven't touched Node in a few years so kinda useful.
      • chickenzzzzu 8 hours ago
        It's a good one with some history and growing public knowledge now. I'd encourage a deep dive, it goes all the way back to at least C++ and Smalltalk.

        While I can see some arguments for "we need good tools like Node so that we can more easily write actual applications that solve actual business problems", this seems to me to be the opposite.

        All I should ever have to do to import a bunch of functions from a file is

        "import * from './path'"

        anything more than that is a solution in search of a problem

        • MrJohz 8 hours ago
          Isn't that exactly the syntax being recommended? Could you explain what exactly in the article is a solution in search of a problem?
        • WickyNilliams 7 hours ago
          Did you read the article? Your comments feel entirely disconnected from its contents - mostly low level piece or things that can replace libraries you probably used anyway
    • flufluflufluffy 8 hours ago
      what? This is an overview of modern features provided in a programming language runtime. Are you saying the author shouldn’t be wasting their time writing about them and should be writing for loops instead? Or are you saying the core devs of a language runtime shouldn’t be focused on architecture and should instead be writing for loops?
    • programmarchy 8 hours ago
      One of the core things Node.js got right was streams. (Anyone remember substack’s presentation “Thinking in streams”?) It’s good to see them continue to push that forward.
      • chickenzzzzu 8 hours ago
        Why? Why is a stream better than an array? Why is the concept of a realtime loop and for looping through a buffer not sufficient?
        • bblaylock 8 hours ago
          I think there are several reasons. First, the abstraction of a stream of data is useful when a program does more than process a single realtime loop. For example, adding a timeout to a stream of data, switching from one stream processor to another, splitting a stream into two streams or joining two streams into one, and generally all of the patterns that one finds in the Observable pattern, in unix pipes, and more generally event based systems, are modelled better in push and pull based streams than they are in a real time tight loop.

          Second, for the same reason that looping through an array using map or forEach methods is often favored over a for loop, and for loops are often favored over while loops, and while loops are favored over goto statements. Which is that it reduces the amount of human managed control flow bookkeeping, which is precisely where humans tend to introduce logic errors.

          And lastly, because it almost always takes less human effort to write and maintain stream processing code than it does to write and maintain a real time loop against a buffer.

          Hopefully this helps! :D

        • dwb 8 hours ago
          A stream is not necessarily always better than an array, of course it depends on the situation. They are different things. But if you find yourself with a flow of data that you don't want to buffer entirely in memory before you process it and send it elsewhere, a stream-like abstraction can be very helpful.
        • cluckindan 7 hours ago
          Streams have backpressure, making it possible for downstream to tell upstream to throttle their streaming. This avoids many issues related to queuing theory.

          That also happens automatically, it is abstracted away from the users of streams.

        • WickyNilliams 7 hours ago
          Why is an array better than pointer arithmetic and manually managing memory? Because it's a higher level abstraction that frees you from the low level plumbing and gives you new ways to think and code.

          Streams can be piped, split, joined etc. You can do all these things with arrays but you'll be doing a lot of bookkeeping yourself. Also streams have backpressure signalling
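
          E.g. piping with backpressure handled for you (a sketch; file names hypothetical):

              import { pipeline } from 'node:stream/promises';
              import { createReadStream, createWriteStream } from 'node:fs';
              import { createGzip } from 'node:zlib';

              // streams the file through gzip without buffering it all in memory,
              // pausing the reader whenever the writer falls behind
              await pipeline(
                createReadStream('input.log'),
                createGzip(),
                createWriteStream('input.log.gz'),
              );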

          • chickenzzzzu 6 hours ago
            Backpressure signaling can be handled with your own "event loop" and array syntax.

            Manually managing memory is in fact almost always better than what we are given in node and java and so on. We succeed as a society in spite of this, not because of this.

            There is some diminishing point of returns, say like, the difference between virtual and physical memory addressing, but even then it is extremely valuable to know what is happening, so that when your magical astronaut code doesn't work on an SGI, now we know why.