The killer upgrade here isn’t ESM. It’s Node baking fetch + AbortController into core. Dropping axios/node-fetch trimmed my Lambda bundle and shaved about 100 ms off cold-start latency. If you’re still npm i axios out of habit, 2025 Node is your cue to drop the training wheels.
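For illustration, a minimal sketch of the dependency-free version (URL and timeout are made up; assumes an ES module so top-level await works):

  // Node 18+: fetch, AbortController and AbortSignal are globals, no imports needed.
  const res = await fetch('https://api.example.com/items', {
    signal: AbortSignal.timeout(5000), // abort the request if it takes longer than 5s
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const items = await res.json();
  console.log(items);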
There has to be something wrong with a tech stack (Node + Lambda) that adds 100ms latency for some requests, just to gain the capability [1] to send out HTTP requests within an environment that almost entirely communicates via HTTP requests.
[1] convenient capability - otherwise you'd use XMLHttpRequest
Tangential, but thought I'd share since validation and API calls go hand-in-hand: I'm personally a fan of using `ts-rest` for the entire stack since it's the leanest of all the compile + runtime zod/json schema-based validation sets of libraries out there. It lets you plug in whatever HTTP client you want (personally, I use bun, or fastify in a node env). The added overhead is totally worth it (for me, anyway) for shifting basically all type safety correctness to compile time.
Curious what other folks think and if there are any other options? I feel like I've searched pretty exhaustively, and it's the only one I found that was both lightweight and had robust enough type safety.
Just last week I was about to integrate `ts-rest` into a project for the same reasons you mentioned above... before I realized they don't have express v5 support yet: https://github.com/ts-rest/ts-rest/issues/715
I think `ts-rest` is a great library, but the lack of maintenance didn't make me feel confident to invest, even if I wasn't using express. Have you ever considered building your own in-house solution? I wouldn't necessarily recommend this if you already have `ts-rest` setup and are happy with it, but rebuilding custom versions of 3rd party dependencies actually feels more feasible nowadays thanks to LLMs. I ended up building a stripped down version of `ts-rest` and am quite happy with it. Having full control/understanding of the internals feels very good and it surprisingly only took a few days. Claude helped immensely and filled a looot of knowledge gaps, namely with complicated Typescript types. I would also watch out for treeshaking and accidental client zod imports if you decide to go down this route.
I'm still a bit in shock that I was even able to do this, but yeah building something in-house is definitely a viable option in 2025.
ts-rest doesn't see a lot of support these days. Its lack of adoption of modern TanStack Query integration patterns finally drove us to look for alternatives.
Luckily, oRPC had progressed enough to be viable now. I can't recommend it enough over ts-rest. It's essentially tRPC but with support for ts-rest style contracts that enable standard OpenAPI REST endpoints.
- https://orpc.unnoq.com/
- https://github.com/unnoq/orpc
I would say this oversight was a blessing in disguise though, I really do appreciate minimizing dependencies. If I could go back in time knowing what I know now, I still would've gone down the same path.
Type safety for API calls is huge. I haven't used ts-rest but the compile-time validation approach sounds solid. Way better than runtime surprises. How's the experience in practice? Do you find the schema definition overhead worth it or does it feel heavy for simpler endpoints?
I always try to throw schema validation of some kind in API calls for any codebase I really need to be reliable.
For prototypes I'll sometimes reach for tRPC. I don't like the level of magic it adds for a production app, but it is really quick to prototype with and we all just use RPC calls anyway.
For production I'm most comfortable with zod, but there are quite a few good options. I'll have a fetchApi or similar wrapper call that takes in the schema + fetch() params and validates the response.
I found that keeping the frontend & backend in sync was a challenge, so I wrote a script that reads the schemas from the backend and generates an API file in the frontend. The main approaches I've seen:
1. Shared TypeScript types
2. tRPC/ts-rest style: Automagic client w/ compile+runtime type safety
3. RTK (redux toolkit) query style: codegen'd frontend client
I personally prefer #3 for its explicitness - you can actually review the code it generates for a new/changed endpoint. It does come with the downside of more code, and as the codebase gets larger you start to need a cache to not regenerate the entire API on every little change.
Overall, I find the explicit approach to be worth it, because, in my experience, it saves days/weeks of eng hours later on in large production codebases in terms of not chasing down server/client validation quirks.
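Not the poster's actual code, but a minimal sketch of the kind of wrapper described above, assuming zod and invented names:

  import { z } from 'zod';

  // Wrapper: fetch, check status, parse JSON, validate against the given schema.
  async function fetchApi<T>(schema: z.ZodType<T>, url: string, init?: RequestInit): Promise<T> {
    const res = await fetch(url, init);
    if (!res.ok) throw new Error(`HTTP ${res.status} for ${url}`);
    return schema.parse(await res.json()); // throws if the response drifts from the contract
  }

  const User = z.object({ id: z.string(), name: z.string() });
  const user = await fetchApi(User, '/api/users/42'); // typed as { id: string; name: string }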
I'll almost always lean on separate packages for any shared logic like that (at least if I can use the same language on both ends).
For JS/TS, I'll have a shared models package that just defines the schemas and types for any requests and responses that both the backend and frontend are concerned with. I can also define migrations there if model migrations are needed for persistence or caching layers.
It takes a bit more effort, but I find it nicer to own the setup myself and know exactly how it works rather than trusting a tool to wire all that up for me, usually in some kind of build step or transpilation.
Write them both in TypeScript and have both the request and response shapes defined as schemas for each API endpoint.
The server validates request bodies and produces responses that match the type signature of the response schema.
The client code exposes an API that takes the request body, typed by its schema, as its input. And the client can even validate the server responses to ensure they match the contract.
It’s pretty beautiful in practice as you make one change to the API to say rename a field, and you immediately get all the points of use flagged as type errors.
The schema definition is more efficient than writing input validation from scratch anyway, so it's completely win/win unless you want to throw caution to the wind and not do any validation.
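A sketch of that shape, with zod standing in for whichever schema library you prefer (endpoint and field names invented):

  import { z } from 'zod';

  // Shared module: one request/response schema pair per endpoint, imported by server and client.
  export const RenameUserRequest = z.object({ userId: z.string(), newName: z.string() });
  export const RenameUserResponse = z.object({ userId: z.string(), name: z.string() });
  export type RenameUserResponse = z.infer<typeof RenameUserResponse>;

  // Server side: parse() rejects bad input; the return type is tied to the response schema.
  export function handleRename(rawBody: unknown): RenameUserResponse {
    const body = RenameUserRequest.parse(rawBody);
    return { userId: body.userId, name: body.newName };
  }

  // Client side: re-validate the response so contract drift fails loudly instead of silently.
  export async function renameUser(input: z.infer<typeof RenameUserRequest>) {
    const res = await fetch('/api/users/rename', { method: 'POST', body: JSON.stringify(input) });
    return RenameUserResponse.parse(await res.json());
  }

Rename a field in the shared schema and both sides light up with type errors, which is the point being made above.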
Also want to shout out ts-rest. We have a typescript monorepo where the backend and frontend import the api contract from a shared package, making frontend integration both type-safe and dead simple.
Yeah, that's the classic bundle size vs DX trade-off. Fetch definitely requires more boilerplate. The manual response.ok check and double await is annoying. For Lambda where I'm optimizing for cold starts, I'll deal with it, but for regular app dev where bundle size matters less, axios's cleaner API probably wins for me.
You’re shooting yourself in the foot if you put naked fetch calls all over the place in your own client SDK though. Or at least going to extra trouble for no benefit
Agreed, but I think that in every project I've done I've put at least a minimal wrapper function around axios or fetch - so adding a teeny bit more to make fetch nicer feels like tomayto-tomahto to me.
The following seems cleaner than either of your examples. But I'm sure I've missed the point. I share this at the risk of embarrassing myself in the hope of being educated.
As a library author it's the opposite: while fetch() is amazing, ESM has been the painful but definitely worthwhile upgrade. It has all the things the author describes.
Interesting to get a library author's perspective. To be fair, you guys had to deal with the whole ecosystem shift: dual package hazards, CJS/ESM compatibility hell, tooling changes, etc so I can see how ESM would be the bigger story from your perspective.
I'm a small-ish time author, but it was really painful for a while since we were all dual-publishing in CJS and ESM, which was a mess. At some point some prominent authors decided to go full-ESM, and basically many of us followed suit.
The fetch() change has been big only for the libraries that did need HTTP requests, otherwise it hasn't been such a huge change. Even in those it's been mostly removing some dependencies, which in a couple of cases resulted in me reducing the library size by 90%, but this is still Node.js where that isn't such a huge deal as it'd have been on the frontend.
Now there's an unresolved one, which is the Node.js streams vs WebStreams, and that is currently a HUGE mess. It's a complex topic on its own, but it's made a lot more complex by having two different streaming standards that are hard to match.
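For anyone who hasn't hit that mismatch yet, the workaround today is converting at the boundary with the adapters on node:stream; a small sketch (file name and URL invented):

  import { createReadStream } from 'node:fs';
  import { Readable } from 'node:stream';

  // Node stream -> WHATWG ReadableStream (e.g. to hand to web-flavored APIs)
  const webStream = Readable.toWeb(createReadStream('data.txt'));

  // WHATWG ReadableStream -> Node stream (e.g. to pipe a fetch response into Node APIs)
  const res = await fetch('https://example.com/file');
  const nodeStream = Readable.fromWeb(res.body);
  nodeStream.pipe(process.stdout);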
What a dual-publishing nightmare. Someone had to break the stalemate first. 90% size reduction is solid even if Node bundle size isn't as critical. The streams thing sounds messy, though. Two incompatible streaming standards in the same runtime is bound to create headaches.
The fact that CJS/ESM compatibility issues are going away indicates it was always a design choice and never a technical limitation (most CJS format code can consume ESM and vice versa). So much lost time to this problem.
It was neither a design choice nor a technical limitation. It was a big complicated thing which necessarily involved fiddly internal work and coordination between relatively isolated groups. It got done when someone (Joyee Cheung) actually made the fairly heroic effort to push through all of that.
Joyee has a nice post going into details. Reading this gives a much more accurate picture of why things do and don't happen in big projects like Node: https://joyeecheung.github.io/blog/2024/03/18/require-esm-in...
I maintain a library also, and the shift to ESM was incredibly painful, because you still have to ship CJS, only now you have to work out how to write the code in a way that can be bundled either way, can be tested, etc etc.
It was a pain, but rollup can export both if you write the source in esm. The part I find most annoying is exporting the typescript types. There's no tree-shaking for that!
I've had Claude decide to replace my existing fetch-based API calls with Axios (not installed or present at all in the project), apropos of nothing during an unrelated change.
Totally get that! I think it depends on your context. For Lambda where every KB and millisecond counts, native fetch wins, but for a full app where you need robust HTTP handling, the axios plugin ecosystem was honestly pretty nice. The fragmentation with fetch libraries is real. You end up evaluating 5 different retry packages instead of just grabbing axios-retry.
You can obviously do that with fetch but it is more fragmented and more boilerplate
I think that's the sweet spot. Native fetch performance with axios-style conveniences. Some libraries are moving in that direction, but nothing's really nailed it yet. The challenge is probably keeping it lightweight while still solving the "evaluating 5 retry packages" problem.
I haven't used it but the weekly download count seems robust.
Ky is definitely one of the libraries moving in that direction. Good adoption based on those download numbers, but I think the ecosystem is still a bit fragmented. You've got ky, ofetch, wretch, etc. all solving similar problems. But yeah, ky is probably the strongest contender right now, in my opinion.
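For reference, a hedged sketch of what ky usage looks like (options from memory of its README; URL invented):

  import ky from 'ky';

  // Retries, timeout and JSON parsing handled for you, layered on top of native fetch.
  const user = await ky.get('https://api.example.com/users/42', {
    retry: 2,
    timeout: 5000,
  }).json();
  console.log(user);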
This is all very good news. I just got an alert about a vulnerability in a dependency of axios (it's an older project). Getting rid of these dependencies is a much more attractive solution than merely upgrading them.
Right?! I think a lot of devs got stuck in the axios habit from before Node 18 when fetch wasn't built-in. Plus axios has that batteries included feel with interceptors, auto-JSON parsing, etc. But for most use cases, native fetch + a few lines of wrapper code beats dragging in a whole dependency.
Undici is solid. Being the engine behind Node's fetch is huge. The performance gains are real and having it baked into core means no more dependency debates. Plus, it's got some great advanced features (connection pooling, streams) if you need to drop down from the fetch API. Best of both worlds.
It's in core but not exposed to users directly. You still need to install the npm module if you want to use it directly, which is required if, for example, you need to go through an outgoing proxy in your production environment.
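A small sketch of that proxy case with the npm undici package (proxy URL invented):

  import { fetch, ProxyAgent } from 'undici';

  // Route this request through an outgoing proxy by passing an explicit dispatcher.
  const res = await fetch('https://api.example.com/data', {
    dispatcher: new ProxyAgent('http://proxy.internal:3128'),
  });
  console.log(res.status, await res.json());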
It has always astonished me that platforms did not have first class, native "http client" support. Pretty much every project in the past 20 years has needed such a thing.
Also, "fetch" is lousy naming considering most API calls are POST.
That's a category error. Fetch just refers to making a request. POST is the method, or the HTTP verb, used when making the request. If you're really keen, you could roll your own.
I read this as OP commenting on the double meaning of the category. In English, “fetch” is a synonym of “GET”, so it’s silly that “fetch” as a category is independent of the HTTP method
Those... are not mutually exclusive as killer upgrade. No longer having to use a nonsense CJS syntax is absolutely also a huge deal.
Web parity was "always" going to happen, but the refusal to add ESM support, and then when they finally did, the refusal to have a transition plan for making ESM the default, and CJS the fallback, has been absolutely grating for the last many years.
I've heard it recommended; other than speed, what does it have to offer? I'm not too worried about shaving off half-a-second off of my personal projects' 5-second test run :P
It has native TS and JSX support, excellent spy, module, and DOM mocking, benchmarking, works with vite configs, and parallelises tests to be really fast.
Eh, the Node test stuff is pretty crappy, and the Node people aren't interested in improving it. Try it for a few weeks before diving headfirst into it, and you'll see what I mean (and then if you go to file about those issues, you'll see the Node team not care).
I just looked at the documentation and it seems there's some pretty robust mocking and even custom test reporters. Definitely sounds like a great addition. As you suggest, I'll temper my enthusiasm until I actually try it out.
`const { styleText } = require('node:util');`
Docs: https://nodejs.org/api/util.html#utilstyletextformat-text-op...
1. Node has built in test support now: looks like I can drop jest!
2. Node has built in watch support now: looks like I can drop nodemon!
At the end it's just tests, the syntax might be more verbose but LLMs write it anyway ;-)
The problem isn't in the writing, but the reading!
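Since the built-in runner keeps coming up in this subthread, a minimal node:test sketch with its mock helper (the function under test is invented; run with `node --test`):

  import { test, mock } from 'node:test';
  import assert from 'node:assert/strict';

  // Hypothetical unit under test
  function greet(clock) {
    return clock() < 12 ? 'good morning' : 'good afternoon';
  }

  test('greets in the morning', () => {
    const clock = mock.fn(() => 9); // built-in function mocking, no jest required
    assert.equal(greet(clock), 'good morning');
    assert.equal(clock.mock.callCount(), 1);
  });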
Matteo Collina says that the node fetch under the hood is the fetch from the undici node client [0]; and that also, because it needs to generate WHATWG web streams, it is inherently slower than the alternative — undici request [1].
[0] - https://www.youtube.com/watch?v=cIyiDDts0lo
[1] - https://blog.platformatic.dev/http-fundamentals-understandin...
I did some testing on an M3 Max Macbook Pro a couple of weeks ago. I compared the local server benchmark they have against a benchmark over the network. Undici appeared to perform best for local purposes, but Axios had better performance over the network.
I am not sure why that was exactly, but I have been using Undici with great success for the last year and a half regardless. It is certainly production ready, but often requires some thought about your use case if you're trying to squeeze out every drop of performance, as is usual.
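For comparison, the lower-level path being referred to looks roughly like this (undici's request instead of fetch; URL invented):

  import { request } from 'undici';

  // No WHATWG stream machinery: you get the status, headers and a body with .json()/.text() mixins.
  const { statusCode, headers, body } = await request('https://api.example.com/users/42');
  if (statusCode !== 200) throw new Error(`HTTP ${statusCode}`);
  const user = await body.json();
  console.log(headers['content-type'], user);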
Anyone else find they discover these sorts of things by accident? I never know when a feature was added, just a vague idea of "that's modern". It feels different from when I only did C# and you'd read the new language features and get all excited. In a polyglot world, with the rate even individual languages evolve, it's hard to keep up! I usually learn through osmosis or a blog post like this (but that is random learning).
Sometimes I also read the proposals, https://github.com/tc39/proposals
I really want the pipeline operator to be included.
I think slowly Node is shaping up to offer strong competition to Bun.js, Deno, etc. such that there is little reason to switch. The mutual competition is good for the continued development of JS runtimes
Slowly, yes, definitely welcome changes. I'm still missing Bun's `$` shell functions though. It's very convenient to use JS as a scripting language and I don't really want to run 2 runtimes on my server.
https://github.com/sindresorhus/execa/blob/main/docs/bash.md
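The linked execa docs cover the details; the gist, from memory and so only a sketch, is a tagged-template `$` similar to Bun's:

  import { $ } from 'execa';

  // Runs the command without a shell by default; interpolated values are escaped.
  const { stdout } = await $`git rev-parse --short HEAD`;
  console.log(`current commit: ${stdout}`);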
It's still not ready for use. I don't care about enums. But you cannot import local files without extensions, and you cannot define class properties in the constructor.
Enums and parameter properties can be enabled with the --experimental-transform-types CLI option.
Not being able to import TypeScript files without including the ts extension is definitely annoying. The rewriteRelativeImportExtensions tsconfig option added in TS 5.7 made it much more bearable though. When you enable that option not only does the TS compiler stop complaining when you specify the '.ts' extension in import statements (just like the allowImportingTsExtensions option has always allowed), but it also rewrites the paths if you compile the files, so that the build artifacts have the correct js extension: https://www.typescriptlang.org/docs/handbook/release-notes/t...
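Concretely, the combination described above looks something like this (file and function names invented):

  // tsconfig.json (excerpt)
  {
    "compilerOptions": {
      "rewriteRelativeImportExtensions": true
    }
  }

  // src/main.ts: the explicit .ts extension satisfies Node's loader when run directly,
  // and tsc rewrites it to .js in the compiled output.
  import { parseConfig } from './config.ts';
  console.log(parseConfig());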
Importing without extensions is not a TypeScript thing at all. Node introduced it at the beginning and then stopped when implementing ESM. Being strict is a feature.
What's true is that they "support TS" but require .ts extensions, which was never even allowed until Node added "TS support". That part is insane.
TS only ever accepted .js and officially rejected support for .ts appearing in imports. Then came Node and strong-armed them into it.
Things like TS enums will not work.
In Node 22.7 and above you can enable features like enums and parameter properties with the --experimental-transform-types CLI option (not to be confused with the old --experimental-strip-types option).
Something's missing in the "Modern Event Handling with AsyncIterators" section.
The demonstration code emits events, but nothing receives them. Hopefully some copy-paste error, and not more AI generated crap filling up the internet.
It's definitely ai slop. See also the nonsensical attempt to conditionally load SQLite twice, in the dynamic imports example.
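For the record, the consuming side of that pattern does exist in core; a minimal sketch with events.on (emitter and event name invented):

  import { EventEmitter, on } from 'node:events';

  const orders = new EventEmitter();
  setTimeout(() => orders.emit('order', { id: 1 }), 10);

  // events.on() turns an emitter into an async iterator, so the receiver is a for await loop.
  for await (const [order] of on(orders, 'order')) {
    console.log('received', order);
    break; // on() iterates forever unless you break or pass an AbortSignal
  }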
The list of features is nice, I suppose, for those who aren't keeping up with new releases, but IMO, if you're working with node and js professionally, you should know about most, if not all of these features.
It's definitely awesome but doesn't seem newsworthy. The experimental stuff seems more along the lines of newsworthy.
Good to see Node is catching up although Bun seems to have more developer effort behind it so I'll typically default to Bun unless I need it to run in an environment where node is better for compatibility.
I've been away from the node ecosystem for quite some time. A lot of really neat stuff in here.
Hard to imagine that this wasn't due to competition in the space. With Deno and Bun trying to eat up some of the Node market in the past several years, seems like the Node dev got kicked into high gear.
Nice post! There's a lot of stuff here that I had no idea was built in already.
Also hadn't caught up with the `node:` namespace.
I tried making a standalone executable with the command provided, but it produced a .blob which I believe still requires the Node runtime to run. I was able to make a true executable with postject per the Node docs[1], but a simple Hello World resulted in a 110 MB binary. This is probably a drawback worth mentioning.
Also, seeing those arbitrary timeout limits I can't help but think of the guy in Antarctica who had major headaches about hardcoded timeouts.[2]
[1]: https://nodejs.org/api/single-executable-applications.html
[2]: https://brr.fyi/posts/engineering-for-slow-internet
I have a blog post[1] and accompanying repo[2] that shows how to use SEA to build a binary (and compares it to bun and deno) and strip it down to 67mb (for me, depends on the size of your local node binary).
[1]: https://notes.billmill.org/programming/javascript/Making_a_s...
[2]: https://github.com/llimllib/node-esbuild-executable#making-a...
I hope you can appreciate how utterly insane this sounds to anyone outside of the JS world. Good on you for reducing the size, but my god…
1. new technologies
2. vanity layers for capabilities already present
It’s interesting to watch where people place their priorities given those two segments
online writing before 2022 is the low-background steel of the information age. now these models will all be training on their own output. what will the consequences be of this?
The LLM made this sound so epic: "The node: prefix is more than just a convention—it’s a clear signal to both developers and tools that you’re importing Node.js built-ins rather than npm packages. This prevents potential conflicts and makes your code more explicit about its dependencies."
Agreed. It's surprising to see this sort of slop on the front page, but perhaps it's still worthwhile as a way to stimulate conversation in the comments here?
Same, but I'm struggling with the idea that even if I learn things I haven't before, at the limit it'd be annoying if we gave writing like this a free pass continuously - I'd argue "filtered" might not be the right word - I'd be fine with a net reduction. There's something bad about adding fluff (how many game changers were there?).
An alternative framing I've been thinking about is, there's clearly something bad when you leave in the bits that obviously lower signal to noise ratio for all readers.
Then throw in the account being new, and, well, I hope it's not a harbinger.*
* It is and it's too late.
It does tell you that if even 95% of HN can't tell, then 99% of the public can't tell. Which is pretty incredible.
The forest is darkening, and quickly.
Here, I'd hazard that 15% of front page posts in July couldn't pass an "avoids well-known LLM shibboleths" check.
Yesterday night, about 30% of my TikTok for you page was racist and/or homophobic videos generated by Veo 3.
Last year I thought it'd be beaten back by social convention (i.e. if you could show it was LLM output, it'd make people look stupid, so there was a disincentive to do this).
The latest round of releases was smart enough, and has diffused enough, that seemingly we have reached a moment where most people don't know the latest round of "tells" and it passes their Turing test, so there's not enough shame attached to prevent it from becoming a substantial portion of content.
I commented something similar re: slop last week, but made the mistake of including a side thing about Markdown-formatting. Got downvoted through the floor and a mod spanking, because people bumrushed to say that was mean, they're a new user so we should be nicer, also the Markdown syntax on HN is hard, also it seems like English is their second language.
And the second half of the article was composed of entirely 4 item lists.
There's just so many tells in this one though, and they aren't new ones. Like a dozen+, besides just the entire writing style being one, permeating through every word.
I'm also pretty shocked how HNers don't seem to notice or care, IMO it makes it unreadable.
I'd write an article about this but all it'd do is make people avoid just those tells and I'm not sure if that's an improvement.
new Error("something bad happened", {cause:innerException})
About time! The whole dragging of feet on ESM adoption is insane. The amount of npm still stuck on CommonJS is quite large. In some ways I'm glad JSR came along.
probably 70 to 80% of JS users have barely any idea of the difference because their tooling just makes it work.
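Expanding the `new Error(..., { cause })` one-liner a couple of comments up into a runnable sketch (file name invented):

  import { readFile } from 'node:fs/promises';

  async function loadConfig() {
    try {
      return JSON.parse(await readFile('config.json', 'utf8'));
    } catch (err) {
      // Wrap with context but preserve the original failure instead of swallowing it.
      throw new Error('failed to load config', { cause: err });
    }
  }

  // Callers (and log output) can inspect err.cause to see the underlying ENOENT or SyntaxError.
  loadConfig().catch((err) => console.error(err.message, err.cause));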
What, surely you’re not implying that bangers like the following are GPT artifacts!? “The changes aren’t just cosmetic; they represent a fundamental shift in how we approach server-side JavaScript development.”
Some good stuff in here. I had no idea about AsyncIterators before this article, but I've done similar things with generators in the past.
A couple of things seem borrowed from Bun (unless I didn't know about them before?). This seems to be the silver lining from the constant churn in the Javascript ecosystem
I could never get into node but i've recently been dabbling with bun which is super nice. I still don't think i'll give node a chance but maybe i'm missing out.
I am being sincere and a little self deprecating when I say: because I prefer Gen X-coded projects (Node, and Deno for that matter) to Gen Z-coded projects (Bun).
Bun being VC-backed allows me to fig-leaf that emotional preference with a rational facade.
I think I kind of get you; there's something I find off-putting about Bun, like it's a trendy ORM or front-end framework, whereas Node and Deno are trying to be the boring infrastructure a runtime should be.
Not to say Deno doesn't try, some of their marketing feels very "how do you do fellow kids" like they're trying to play the JS hype game but don't know how to.
Yes, that's it. I don't want a cute runtime, I want a boring and reliable one.
Deno has a cute mascot, but everything else about it says "trust me, I'm not exciting". Ryan Dahl himself also brings an "I've done this before" pedigree.
(closing the circle)
I haven't used it for a few months, but in my experience its package/monorepo management features suck compared to pnpm (dependencies leak between monorepo packages, the command line is buggy, etc). bun --bun is stupid. Build scripts for packages routinely blow up since they use node, so I end up needing to have both node and bun present for installs to work. Packages routinely crash because they're not bun-compatible. Most of the useful optimizations are making it into Node anyway, and installing ramda or whatever takes 2 seconds and I trust it, so all of Bun's random helper libraries are of marginal utility.
because bun is written in a language that isn't even stable (zig) and uses webkit. None of the developer niceties will cover that up. I also don't know if they'll be able to monetize, which means it might die if funding dries up.
Unless it changed how NodeJS handles this, you shouldn't use Promise.all(). Because if more than one promise rejects, then the second rejection will emit an unhandledRejection event, and by default that crashes your server. Use Promise.allSettled() instead.
Promise.all() itself doesn't inherently cause unhandledRejection events. Any rejected promise that is left unhandled will throw an unhandledRejection, allSettled just collects all rejections, as well as fulfillments for you. There are still legitimate use cases for Promise.all, as there are ones for Promise.allSettled, Promise.race, Promise.any, etc. They each serve a different need.
Try it for yourself:
> node
> Promise.all([Promise.reject()])
> Promise.reject()
> Promise.allSettled([Promise.reject()])
Promise.allSettled never results in an unhandledRejection, because it never rejects under any circumstance.
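To make the difference concrete, a small sketch of the collecting pattern (URLs invented):

  const urls = ['https://a.example.com', 'https://b.example.com'];

  // Promise.all() rejects on the first failure; Promise.allSettled() never rejects,
  // you inspect each outcome yourself.
  const results = await Promise.allSettled(urls.map((u) => fetch(u)));
  for (const [i, r] of results.entries()) {
    if (r.status === 'fulfilled') console.log(urls[i], r.value.status);
    else console.error(urls[i], 'failed:', r.reason);
  }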
I definitely had a crash like that a long time ago, and you can find multiple articles describing that behavior. It had existed for quite a while, so I didn't think it was something they would fix, and I didn't keep track of it.
I love Node's built-in testing and how it integrates with VSCode's test runner. But I still miss Jest matchers. The Vitest team ported Jest matchers for their own use. I wish there were a similar compatibility between Jest matchers and Node testing as well.
Currently for very small projects I use the built in NodeJS test tooling.
But for larger and more complex projects, I tend to use Vitest these days. At 40MBs down, and most of the dependency weight falling to Vite (33MBs and something I likely already have installed directly), it's not too heavy of a dependency.
It is based on Vite, and a bundler has no place in my backend. Vite is based on Rollup, and Rollup uses some other things such as swc. I want to use TypeScript projects and npm workspaces, which Vite doesn't seem to care about.
Node.js is a runtime, not a language. It is quite capable, but as per usual, it depends on what you need/have/know, ASP.NET Core is a very good choice too.
I have found this to not be true.
In my experience ASP.NET 9 is vastly more productive and capable than Node.js. It has a nicer developer experience, it is faster to compile, faster to deploy, faster to start, serves responses faster, it has more "batteries included", etc, etc...
What's the downside?
The breadth of npm packages is a good reason to use node. It has basically everything.
It has terrible half-completed versions of everything, all of which are subtly incompatible with everything else.
I regularly see popular packages that are developed by essentially one person, or a tiny volunteer team that has priorities other than things working.
Something else I noticed is that NPM packages have little to no "foresight" or planning ahead... because they're simply an itch that someone needed to scratch. There's no cohesive vision or corporate plan as a driving force, so you get a random mish-mash of support, compatibility, lifecycle, etc...
That's fun, I suppose, if you enjoy a combinatorial explosion of choice and tinkering with compatibility shims all day instead of delivering boring stuff like "business value".
It's still single-threaded, it still uses millions of tiny files (making startup very slow), it still has wildly inconsistent basic management because it doesn't have "batteries included", etc...
But yes there are downsides. But the biggest ones you brought up are not true.
Perhaps the technology that you are using is loaded with hundreds of foot-guns if you have to spend time on enforcing these patterns.
Rather than taking the logical focus on making money, it is wasting time on shuffling around code and being an architecture astronaut with the main focus on details rather than shipping.
One of the biggest errors one can make is still using Node.js and Javascript on the server in 2025.
Not really, from everything I can see, authors are basically forced to ship both, so it’s just another schism. Libraries that stopped shipping CJS we just never adopted, because we’re not dropping mature tech for pointless junior attitudes like this.
Just because a new feature can't always easily be slipped into old codebases doesn't make it a bad feature.
Yes, it’s 100% junior, amateur mentality. I guess you like pointless toil and not getting things done.
No idea why you think otherwise, I’m over here actually shipping.
Architecture astronaut is a term I hadn't heard but can appreciate. However I fail to see that here. It's a fair overview of newish Node features... Haven't touched Node in a few years so kinda useful.
It's a good one with some history and growing public knowledge now. I'd encourage a deep dive; it goes all the way back to at least C++ and Smalltalk.
While I can see some arguments for "we need good tools like Node so that we can more easily write actual applications that solve actual business problems", this seems to me to be the opposite.
All I should ever have to do to import a bunch of functions from a file is
"import * from './path'"
anything more than that is a solution in search of a problem
Did you read the article? Your comments feel entirely disconnected from its contents - mostly low-level pieces and things that can replace libraries you probably used anyway.
what? This is an overview of modern features provided in a programming language runtime. Are you saying the author shouldn’t be wasting their time writing about them and should be writing for loops instead? Or are you saying the core devs of a language runtime shouldn’t be focused on architecture and should instead be writing for loops?
One of the core things Node.js got right was streams. (Anyone remember substack’s presentation “Thinking in streams”?) It’s good to see them continue to push that forward.
I think there are several reasons. First, the abstraction of a stream of data is useful when a program does more than process a single realtime loop. For example, adding a timeout to a stream of data, switching from one stream processor to another, splitting a stream into two streams or joining two streams into one, and generally all of the patterns that one finds in the Observable pattern, in unix pipes, and more generally event based systems, are modelled better in push and pull based streams than they are in a real time tight loop.
Second, for the same reason that looping through an array using map or forEach methods is often favored over a for loop and for loops are often favored over while loops and while loops are favored over goto statements. Which is that it reduces the amount of human managed control flow bookkeeping, which is precisely where humans tend to introduce logic errors.
And lastly, because it almost always takes less human effort to write and maintain stream processing code than it does to write and maintain a real time loop against a buffer.
Hopefully this helps! :D
A stream is not necessarily always better than an array, of course it depends on the situation. They are different things. But if you find yourself with a flow of data that you don't want to buffer entirely in memory before you process it and send it elsewhere, a stream-like abstraction can be very helpful.
Streams have backpressure, making it possible for downstream to tell upstream to throttle their streaming. This avoids many issues related to queuing theory.
That also happens automatically, it is abstracted away from the users of streams.
Why is an array better than pointer arithmetic and manually managing memory? Because it's a higher level abstraction that frees you from the low level plumbing and gives you new ways to think and code.
Streams can be piped, split, joined etc. You can do all these things with arrays but you'll be doing a lot of bookkeeping yourself. Also streams have backpressure signalling
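A minimal example of that plumbing with the promise version of pipeline, which wires up backpressure and error propagation for you (file names invented):

  import { pipeline } from 'node:stream/promises';
  import { createReadStream, createWriteStream } from 'node:fs';
  import { createGzip } from 'node:zlib';

  // Each stage only pulls as fast as the next stage can accept - that's the backpressure signalling.
  await pipeline(
    createReadStream('access.log'),
    createGzip(),
    createWriteStream('access.log.gz'),
  );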
Backpressure signaling can be handled with your own "event loop" and array syntax.
Manually managing memory is in fact almost always better than what we are given in node and java and so on. We succeed as a society in spite of this, not because of this.
There is some diminishing point of returns, say like, the difference between virtual and physical memory addressing, but even then it is extremely valuable to know what is happening, so that when your magical astronaut code doesn't work on an SGI, now we know why.