- Apple Calculator: 32GB RAM leak
- Spotify on macOS: 79GB memory consumption
- CrowdStrike: one missing bounds check = 8.5M crashed computers
- macOS Spotlight: wrote 26TB to SSDs overnight
Meanwhile Big Tech is spending $364B on infrastructure instead of fixing the code.
I wrote up the full analysis with citations: https://techtrenches.substack.com/p/the-great-software-quality-collapse
But the real question: When did we normalize this? What happened to basic quality standards?
What are you seeing in your organizations?
Companies aren’t creating new value; they’re monetizing hope — issuing debt against models that don’t yet work and counting that as “growth.” It’s not innovation anymore. It’s financial theater dressed as progress.
Everything human beings create is ephemeral. That restaurant you love will gradually drop standards and decay. That inspiring startup will take new sources of funding and chase new customers and leave you behind, on its own trajectory of eventual oblivion.
When I frame things this way, I conclude that it's not that "software quality" is collapsing, but the quality of specific programs and companies. Success breeds failure. Apple is almost 50 years old. Seems fair to stipulate that some entropy has entered it. Pressure is increasing for some creative destruction. Whose job is it to figure out what should replace your Apple Calculator or Spotify? I'll put it to you that it's your job, along with everyone else's. If a program doesn't work, go find a better program. Create one. Share what works better. Vote with your attention and your dollars and your actual votes for more accountability for big companies. And expect every team, org, company, country to decay in its own time.
Shameless plug: https://akkartik.name/freewheeling-apps
I've published a blog post urging [0] top programmers to quit for‑profit social media and rebuild better norms away from that noise.
[0] https://abner.page/post/exit-the-feed/
Look at Trappist brewers. Long tradition of consistent quality. You just have to devote your life to the ascetic pursuit of monkhood. It attracts a completely different kind of person.
Could we fuck off with this nihilist shit and leave it on Discord? It's so childish.
We are not ephemeral by nature: we build pyramids that stand for 2,500 years across generations, versus "back in my day (12 years ago)" laments.
> Seems fair to stipulate that some entropy has entered it.
Entropy is a more valid analysis but there is zero reason it can't be kept at bay. No group of people can keep a restaurant clean for 80 years?
Why would it not be complexity? Apple's code base is insanely large.
The entire premise of "software quality collapsing" is probably wrong, but if it's real, the solution is not "go find a better program. Create one. Share what works better."
If Chrome easily told me the memory used by a tab, maybe there would be more pressure to make websites tighter. The hover was a great step forward, but it's still not prominent enough.
At many large companies, there is an incentive to create systems that are as complicated as possible. A side effect of that is gaps in what’s actually observable. This manifests itself in shitty user experiences: partially loading pages, or widgets that take several times longer to load than other parts of the page.
All this is a direct result of large-company communication barriers: requests cross between stacks with no single vertical observability solution. Even at medium-sized companies (<9000), it begins to fall apart. A single user request makes dozens of internal hops to arrive at the final API, and product managers wonder why a response takes several seconds.
This resource allocation strategy seems rational though. We could consume all available resources endlessly polishing things and never get anything new shipped.
Honestly it seems like another typical example of the “cost center” vs “revenue center” problem. How much should we spend on quality? It’s hard to tell up front. You don’t want to spend any more than the minimum to prevent whatever negative outcomes you think poor quality can cause. Is there any actual $ increase from building higher-quality software than “acceptable”?
More revenue -> company grows
Less revenue -> company shrinks
Regarding the new conversation topic: some open source software does have revenue. Forms of revenue include: donations, selling support, selling licenses that are less restrictive than the original open source license, ads, and selling addons. Yes, revenue for open source software is generally less than for-profit software, and despite that the open source software is often higher quality. I didn't claim that a higher quality product will always have more revenue than a lower quality product. I just made a claim about where the money goes.
I'd rather take a step in the right direction than none at all. If management can be convinced that there's more money to be made this way, then that gives us engineers more power to convince them to solve other such problems. If they care about quality, then that gives us back negotiating power. You don't outsource to a third-world software mill or AI when your concern is quality. But you do when you're trying to sell the cheapest piece of shit that people will still buy. So yeah, I'm okay with this.
> You don't outsource to a third world software mill or AI when your concern is quality.
That's a disastrously fallacious set of presuppositions. A good engineer will use AI well to improve their software, whereas a bad engineer will use it to produce junk.
I want to stress that this is a highly complex problem, and that means we need to break it down into smaller, manageable tasks. You're not going to change everything overnight, a single person won't change things, nor will a single action change things. There's no clear, definitive objective that, once met, solves this problem. Nor is there a magic wizard in the tower that needs to be defeated.
In other words, I gave you my explanation for why I think this can be a step in the right direction (in a sister comment I said even more if you want to read that). But you have complained and given no alternative. Your only critique is that it does not solve the problem in one fell swoop. That was never an assumption I made, it is not a reasonable assumption to make (as you yourself are noting), and I explicitly said it is not an assumption being made. Do not invent problems to win an argument. All you've done is attempt to turn a conversation into an argument.
So don't stop after one step. Read more carefully. I did not say "use AI", I said "outsource to AI". There is a huge difference between these two things. Do we need to fight, or can we actually have a discussion to help figure out this problem together? You do not need to agree with me, contention can be beneficial to the process, but you do need to listen. I have no interest in fighting, so I'll leave the choice to you.
As a simple version think about it this way: if a customer can't tell the difference in quality at time of purchase then the only signal they have is price.
I think even here on HN, if we're being honest with ourselves, it's hard to tell quality prior to purchase. Let alone the average nontechnical person. It's crazy hard to evaluate software even hands-on, given how much effort you need to put in these days: the difficulty of differentiating sponsored "reviews" from legitimate ones, all the fake reviews, and how Amazon allows changing a product while inheriting the reviews of the old product.
No one asks you because all the sellers rely too heavily on their metrics. It's not just AI people treat like black boxes, it's algorithms and metrics in general. But you can't use any of that effectively without context.
As engineers, I think we should be a bit more grumpy. Our job is to find problems and fix them. Be grumpy to find them. Don't let the little things slip, because even though a papercut isn't a big deal, a thousand are. Go in and fix bugs without being asked to. Push back against managers who don't understand. You're the technical expert, not them (even if they were once an engineer, those skills atrophy and you get disconnected from a system when you aren't actively working on it). Don't let them make you make arguments about some made-up monetary value for a feature or a fix. It's management's job to worry about money and our job to worry about the product. There needs to be a healthy adversarial process here. When push comes to shove, we should prioritize the product over the profit while they do the opposite. This contention is a feature, not a bug. Because if we always prioritize profits, well, that's a race to the bottom. It kills innovation. It asks "what's the shittiest, cheapest thing we can sell that people will still buy". It enables selling hype rather than selling products. So please, be a grumpy engineer. It's in the best interest of the company. Maybe not for the quarter, but it is for the year and the decade. (You don't need to be an asshole or even fight with your boss. Simply raising concerns about foreseeable bugs can be a great place to start. Filing bug reports for errors you find, too! Or bugs your friends and family find. Or even help draft them with people on HN who raise concerns about a product your company works on. It doesn't need to be your specific team, but file the bug report for someone who can't.)
And as the techies, we should hold high standards. Others rely on us for recommendations. We need to distill the nuances and communicate better with our nontechnical friends and family.
These won't solve everything but I believe they are actionable, do not require large asks, and can push some progress. Better something than nothing, otherwise there will be no quality boots to buy
https://en.wikipedia.org/wiki/Boots_theory
We solve big problems by turning them into many small problems. There is no difference here. One step at a time.
I remember the good old days when nobody unit tested, there were no linters or any focus on quality tools in IDEs, and the Gang of Four patterns we take for granted were considered esoteric gold plating.
Sure, memory usage is high, but hardware is cheap.
In the ’90s, inefficiency meant slower code. Today it means 32GB RAM leaks in calculator apps, billion-dollar outages from a missing array field, and 300% more vulnerabilities in AI-generated code.
We’ve automated guardrails, but we’ve also automated incompetence. The tooling got better, the results didn’t.
Crashes used to be localized, one app, one machine. Now a missing field in a config file can take down 8.5 million Windows systems globally. Spotify leaking 79GB of RAM isn’t a “bug,” it’s normalized waste.
The signal isn’t that bugs exist, it’s that catastrophic ones no longer trigger process change. We’ve accepted systemic failure as normal because hardware and cloud budgets hide the cost.
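The class of bug behind the CrowdStrike outage, an indexed read assuming a fixed field count, is simple to sketch. This is a hypothetical illustration in Python, not the actual sensor code; `parse_record` and the field count of 21 are assumptions for the example:

```python
def parse_record(fields, expected=21):
    """Hypothetical config-record parser illustrating the bug class:
    code that indexes into a record assuming a fixed field count will
    read out of bounds when a record arrives with fewer fields."""
    if len(fields) < expected:
        # This is the guard whose absence causes the failure: without
        # it, fields[expected - 1] reads past the end of the record.
        raise ValueError(f"expected {expected} fields, got {len(fields)}")
    return fields[expected - 1]

print(parse_record(list(range(21))))  # prints 20: last field read safely
```

In a memory-unsafe kernel driver, the unguarded read is not a tidy exception but an invalid memory access, which is how a malformed data file becomes a boot loop.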
The more loudly someone speaks up, the faster they are shown the door. As a result, most people keep their head down, pick their battles carefully, and try to keep their head above water so they can pay the rent.
Every framework is like a regulation: something which solves an ostensible problem but builds up an invisible rot of inefficiency. The more frameworks and layers it takes to make an application, the slower it becomes; small errors are ignored, and abstractions obfuscate actual functionality.
Each layer promises efficiency but adds hidden coordination cost. Ten years ago, a web app meant a framework and a database. Now it’s React → Electron → Chromium → Docker → Kubernetes → managed DB → API gateway - six layers deep to print “Hello, world.”
Every abstraction hides just enough detail to make debugging impossible. We’ve traded control for convenience, and now no one owns the full stack - just their slice of the slowdown.
> How do I make a link in a text submission?
> You can't. This is to prevent people from submitting a link with their comments in a privileged position at the top of the page. If you want to submit a link with comments, just submit the link, then add a regular comment.
https://news.ycombinator.com/newsfaq.html
You said, "I’m not allowed to post links yet, because of new account". There's a reason for that, and you're trying to bypass that restriction by misusing Ask HN instead.
If, on the other hand, you regularly post links to other people's blogs and participate in discussion, you won't have any trouble slipping in a link to your own blog. The key is that you're expected to be part of the community.
All of the above is multiplied 1.3x-1.5x by LLMs accelerating how quickly people get up to speed through iterative indexing of knowledge. I believe we are reliant on those early engineers whose software took a while to build (like a marathon), not the short-sprinted, recyclable software we keep shipping on top of it. The difference is that not a lot of people want to be in those shoes (responsibility/comp tradeoffs).
You mean CrowdStrike still crashes? Spotlight still writes 26TB every night? (Which only happened in a beta, AFAIK...) Of course they are fixing the code. Conflating this with infrastructure spending is not helpful.
The bitter truth is that complex software will always contain some bugs; it's close to impossible to ship completely, mathematically perfect software. It's how we react to bugs, and the report/fix/update pipeline, that truly matters.
We measure coverage instead of correctness, and AI-generated tests just made it worse, they validate syntax, not behavior. The illusion of safety lets teams ship faster while silently compounding technical debt.
The real regression isn’t missing tests, it’s that we stopped thinking during them.
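The coverage-versus-correctness gap is easy to demonstrate with a toy example. Everything here is made up for illustration (the `discount` function and both tests are hypothetical), but it shows how a test can achieve full line coverage while validating nothing:

```python
def discount(price, pct):
    """Apply a percentage discount to a price."""
    return price * (1 - pct / 100)

def test_covers_but_checks_nothing():
    # Executes every line (100% coverage), yet a broken implementation
    # such as `price - pct` would pass just as happily: no assertion.
    discount(100, 10)

def test_checks_behavior():
    # Pins the actual contract: 10% off 100 is 90.
    assert discount(100, 10) == 90

test_covers_but_checks_nothing()
test_checks_behavior()
```

Coverage tools count the first test the same as the second; only the second one would catch a regression.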
We're seeing bugs in bigger slices because technology is, overall, a bigger pie. Full of bugs. The bigger the pie, the easier it is to eat around them.
Another principle at play might be "induced demand," most notoriously illustrated by widening highways, but it might just as well apply to the widening of RAM.
Are we profligate consumers of our rarefied, finite computing substrate? Perhaps, but the Maximum Power Transfer Theorem suggests that anything less than 50% waste heat would slow us down. What's the rush? That's above my pay grade.
I guess what I'm saying is that I don't see any sort of moral, procedural, or ideological decay at fault.
In my circles, QA is still very much a thing, only "shifted left" for tighter integration into CI/CD.
Edit: It's also worth reflecting on "The Mess We're In."[0] Approaches that avoid or mitigate the pitfalls common to writing software must be taught or rediscovered in every generation, or else wallow in the obscure quadrant of unknown-unknowns.
0. https://m.youtube.com/watch?v=lKXe3HUG2l4
Close. Failure-free is simply impossible. And believing the opposite fails even harder and dies out.
This is not "acceptable", because there is no alternative, there is no choice or refutation (non-acceptance). It is a fact of life. Maybe even more so than gravity and mechanical friction.
What’s interesting is that the result is immature developers. This becomes evident in that although the goal is rapid hiring/firing, which is completely hostile to the developer, the impacted developer is somehow convinced such hostility is their primary vector of empowerment. For example, if an employer mandates use of a tool to lower the barrier to entry for less qualified candidates, those candidates are likely to believe the tool is there primarily to benefit them. That makes sense if the given candidate is otherwise completely unqualified, but it’s nonetheless shortsighted and narcissistic.
As a result software quality degrades as the quality of people doing the work degrades while business requirements simultaneously increase in complexity/urgency to compensate.
Teams are optimized for output volume, not outcome quality. Hiring pipelines favor those who can “ship fast,” while the systems they ship into grow exponentially more complex. The result: shallow competence at scale.
AI just poured fuel on it - it lets everyone look 30% more productive while compounding the same underlying brittleness.
Look at the construction industry. Many buildings on this planet were built hundreds, sometimes a thousand or more years ago. They still stand today because their build quality was excellent.
A house built today of cheap materials (i.e., poor-quality software engineers) as quickly as possible (i.e., urgent business timelines) will fall apart in 30 years, while older properties will continue to stand tall long after the "modern" house has crumbled.
These days software is often about being first to market with quality (and cough security) being a distant second priority.
However occasionally software does emerge as high quality and becomes a foundation for further software. Take Linux, FreeBSD and curl as examples of this. Their quality control is very high priority and time has proven this to be beneficial - for every user.
We’ve industrialized the process without industrializing the discipline. The result is mass-produced code built on shaky abstractions, fast to assemble, and faster to decay.
Linux and curl weren’t built on sprints or OKRs. They were built on ownership, long time horizons, and the idea that stability is innovation when everyone else is optimizing for speed.
True. And yet, far more buildings built then are not standing. We just don't notice them, because they aren't still here for us to notice.
So don't think that things were built better then. A few were; most weren't.
Then there's the shiny-object syndrome of humanity in general. Even if we just look at websites, they went through so many different cycles: plain HTML, Flash, everything built with bootstrap.css, then came the frameworks, then back to SSR/SSG, etc., etc.
Both of those are just symptoms of a larger disease, namely that enthusiasm in general has fallen. A lot of it has to do with how demanding day-to-day software jobs have gotten, or how financially unstable the younger generations feel, so they rarely set aside any time for creative endeavors and passion projects.
All sense of teamwork was murdered about a decade ago by people with clipboards and other dead weight staff who don't give a rat's ass about anything.
Most devs under 30 don't have the same enthusiasm previous generations did because the opportunity being proposed just isn't the same. The room for creativity isn't there, and neither is the financial reward. Do more with less and these problems tend to go away.
I could improve the quality infrastructure, write more tests and clean up the code, but the work is not as fulfilling.
I do not use any of the software mentioned in that article, and I also do not have that much RAM in my computer.
The "vibe coding MVP" crowd treats code as disposable—ship fast, validate the idea, and discard it if it doesn't work.
The problem: most MVPs never get thrown away. They get customers, then funding, then "we'll refactor later" becomes "we can't afford downtime to refactor."
That's how you end up with production systems built on prototypes that were never meant to handle real load.
I'm with you: if you're not willing to build it properly, don't build it at all. Technical debt compounds faster than most founders realize.
The Twitter mob defends vibe coding because it worked once for someone. However, for every success story, there are thousands of companies struggling with unfixable codebases.
Do it right or don't do it. There's no "we'll fix it later" in production.
Software quality only matters when users can switch.
Nobody likes thinking critically and admitting that they haven’t achieved a responsible standard of care. If they aren’t forced to do it, why bother?
The rest is just a downhill trend.
My MacBook Pro M1 16" seems to be averaging about 13 watts of power, about the same as previous i7. My house idles at around 200 watts (lots of smart devices, etc). Hardly worth obsessing over it.
How many users does Spotify have? Multiply that by the 79GB mentioned above. Is it still cheaper?
If a user doesn't have enough ram to use Spotify, Spotify doesn't care. That user canceling their service is lost in the normal user churn. Spotify most likely has no idea and doesn't care if resource wastage affects their customers. It isn't an immediate first-order impact on their bottom line so it doesn't matter
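A back-of-envelope version of that multiplication, with openly made-up numbers (the user count and per-user overhead below are assumptions for illustration, not Spotify figures):

```python
# Assumed figures for illustration only: even a modest per-user
# memory overhead becomes enormous in aggregate.
users = 500_000_000          # hypothetical active-user count
extra_bytes = 1 * 1024**3    # assume 1 GiB of avoidable overhead per user

total_pib = users * extra_bytes / 1024**5  # bytes -> pebibytes
print(f"~{total_pib:.0f} PiB of RAM consumed across the user base")
# prints: ~477 PiB of RAM consumed across the user base
```

The point of the exercise: the cost is real, but it lands on users' machines rather than on the vendor's balance sheet, which is exactly why the vendor doesn't feel it.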
Full-featured IDEs like this have always been heavy, as far as I know. It's only the pure text editors without advanced full-code analysis that can get away with low resources.
What metrics specifically?
It's not the case in traditional engineering fields: when you build a dam, the manager cannot say "hmm just use half as much concrete here, it will be faster and nobody will realise". Because people can go to jail for that. The engineers know that they need to make it safe, and the managers know that if the engineers say "it has to be like that for safety", then the manager just accepts it.
In software it's different: nobody is responsible for bad software. Millions of people need to buy a new smartphone because software needs twice as much RAM for no reason? Who cares? So the engineers will be pushed to make what is more profitable: often that's bad software, because users don't have a clue either.
Normal people understand the risks if we talk about a bridge or a dam collapsing. But privacy, security, efficiency, not having to buy a new smartphone every 2 years to load Slack? They have no clue about that. They just want to use what the others use, and they don't want to pay for software.
And when it's not that, it's downright enshittification: users don't have a choice anymore.
Bridges collapse once. Software collapses silently, one abstraction at a time. The incentives are inverted: short-term growth gets rewarded, long-term stability gets deprecated.
When there’s no physical consequence for failure, “good enough” becomes the design philosophy.
This is correlated with "joy" and happiness, or contentment, which brings about patience. It is anti-correlated with pain and stress which brings about restlessness.
In short, good things take time. You cannot hurry the seasons. The competition is too fierce to worry about leaks.
We need to feel safe before we get creative, barring that, we hurry there.
I'm sure they're no better on my iPhone but I don't even have the appropriate tools to gauge it. Except that sometimes when I use them, another app I'm using closes and I lose my state.
There's no pressure to care. Most users can't tell that it's your app that's the lemon. The only reason I know anything about my Macbook is because I paid for iStatMenus to show me the CPU/RAM usage in the global menubar that can quickly show me the top 5 usage apps.
This basic info should be built in to every computer and phone.
I guess a lot depends on which software you focus on.
But in modern political and economic times where number must always go up, too big to fail is a thing and anti-trust enforcement isn't (to say nothing of the FTC mostly just ¯\_(ツ)_/¯ with regards to basically any merger/acquisition of big tech), the current batch of companies just keeps growing and growing instead of being naturally replaced. To say nothing of the fact that a lot of startup culture now sees being acquired as the endgame, rather than even dreaming of competing against these monstrosities.
Sadly, it won't fare well. You'll get a mix of flags and downvotes, along with "There's no problem! This is Fine!".
I feel that software has become vastly more complex, which increases what I call "trouble nodes." These are places where a branch, API junction, abstraction, etc., give space for bugs.
The vast complexity means that software does a lot more, but it also means that it is chock-full of trouble nodes, and that it needs to be tested a lot more rigorously than in the past.
Another huge problem is dependence on dependencies. Abstracting trouble nodes does not make them go away. It simply puts them into an area that we can't test properly and fix.
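One concrete shape a trouble node takes is a dependency wrapper that swallows errors. This Python sketch (a hypothetical function, purely for illustration) shows how an abstraction can convert a loud failure into a silent wrong answer that no longer surfaces where you can test it:

```python
def fetch_price(record):
    """Hypothetical convenience wrapper around a dependency's record."""
    try:
        return float(record["price"])
    except Exception:
        # The trouble node: missing keys, bad types, anything at all
        # gets flattened into a plausible-looking default value.
        return 0.0

print(fetch_price({"price": "19.99"}))  # prints 19.99
print(fetch_price({}))                  # prints 0.0 -- silently wrong
```

The bug hasn't gone away; it has moved behind the abstraction, where a caller sees a valid-looking number and has no reason to investigate.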
The difference isn’t complexity; it’s priorities. Jobs-era Apple had smaller teams building fewer products with obsessive quality standards. Cook-era Apple has massive teams shipping constantly with “good enough” as the bar.
You’re right that testing helps. But when quality becomes optional, no amount of testing infrastructure fixes the cultural problem. We test for “does it work?” not “is it excellent?”
These issues passed all automated tests. They just didn’t pass the “would we be embarrassed to ship this?” test. That test doesn’t exist anymore at scale.
I tend to prefer test harnesses: https://littlegreenviper.com/various/testing-harness-vs-unit...
In the dark, distant past, we wrote programs that ran in kilobytes of memory on a double-digit-MHz CPU. Multiple cores or threads did not exist.
Today, the same program requires gigabytes of RAM and takes multiple seconds to do the same work with 32 4GHz CPUs.
This is truly not an exaggeration. Everyone who actually handled a Windows 95 machine in its natural environment will tell you that the experience of using a computer today is ten times slower and forty times more frustrating. Computers are slower than they ever have been, despite having hardware that is fast beyond the limits of anything we even dared to dream of in the 90s.
Win 95 was not only slow but also prone to hangs of all kinds, and it was hard to keep it up for more than a few days without a crash or a needed reset.
Consuming a huge amount of unnecessary resources is a modern problem, but it doesn't necessarily make software quality any lower, depending on how you define quality.
The bug in the 1980s Therac-25 software killed people, bugs in the Patriot system causing it not to intercept Iraqi Scuds killed people in 1991, and the Mars Pathfinder required an interplanetary software update in 1997 to fix a bug.
It's definitely overused in certain circumstances, when you could just roll out a monolithic code base on a single server, but in many cases now, systems get built that were impossible to build in the past.