  • perrygeo 56 minutes ago
    I'm not sure I would frame unused memory as "waste". At least not necessarily. If the system is operating below max capacity, that memory isn't wasted - it's a reserve capacity to handle surges. The behavior of the system under load (when it matters) might very well depend on that "wasted" memory.

    You want to compare the _maximum_ memory under load - the high-water mark. If you combine Wozz with a benchmark to drive heavy traffic, you could more confidently say any unused memory was truly wasted.
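
    A crude way to get that high-water mark with the same plumbing (the namespace here is hypothetical, and this assumes kubectl top reports memory in Mi) is to sample while the benchmark runs and keep the peak per pod:

        # sample memory usage every 10s while the load test runs (ctrl-c to stop)
        while sleep 10; do
          kubectl top pod -n prod --no-headers >> samples.txt
        done

        # afterwards: the peak observed usage per pod
        awk '{gsub(/Mi/, "", $3); if ($3 + 0 > max[$1]) max[$1] = $3}
             END {for (p in max) print p, max[p] "Mi"}' samples.txt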

  • wozzio 3 hours ago
    I've been consulting on EKS/GKE cost optimization for a few mid-sized companies and kept seeing the same pattern: massive over-provisioning of memory just to be safe.

    I wrote a simple CLI tool (bash wrapper around kubectl) to automate diffing kubectl top metrics against the declared requests in the deployment YAML.
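
    The core of it is roughly the following (namespace is hypothetical; only the first container per pod, for brevity):

        # declared memory requests, per pod
        kubectl get pods -n prod --no-headers \
          -o custom-columns='NAME:.metadata.name,REQ:.spec.containers[0].resources.requests.memory' \
          | sort > requests.txt

        # live memory usage, per pod
        kubectl top pod -n prod --no-headers | awk '{print $1, $3}' | sort > usage.txt

        # pod name, requested, used - the gap between the last two columns is the waste
        join requests.txt usage.txt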

    I ran it across ~500 pods in production. The average "waste" (allocated vs. actually used) by language was interesting:

    Python: ~60% waste (mostly sized for startup spikes, after which the memory sits idle).

    Java: ~48% waste (Devs seem terrified to give the JVM less than 4Gi).

    Go: ~18% waste.
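
    To be concrete about the metric: by "waste" I mean the relative gap between request and usage. E.g., hypothetically, a pod requesting 1Gi but using 400Mi:

        waste = (requested - used) / requested
              = (1024 - 400) / 1024 ≈ 61%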

    The tool is called Wozz. It runs locally, installs no agents, and just uses your current kubecontext to find the gap between what you pay for (Requests) and what you use (Usage).

    It's open source. Feedback welcome.

    (Note: The install is curl | bash for convenience, but the script is readable if you want to audit it first).
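
    For example (the install URL below is a placeholder):

        curl -fsSL https://example.com/wozz/install.sh -o install.sh
        less install.sh    # read it before running
        bash install.sh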

    • pestatije 1 hour ago
      is this an instantaneous measure, or does it cover the whole duration of the process?
    • rvz 2 hours ago
      > I've been consulting on EKS/GKE cost optimization for a few mid-sized companies and kept seeing the same pattern: massive over-provisioning of memory just to be safe.

      Correct.

      Developers keep over-provisioning because they need enough headroom for the app to keep running as demand scales up. And since these languages ship their own runtimes and GCs to manage memory, the runtime pre-allocates lots of RAM before the app does any real work, adding to the bloat.
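
      For the JVM specifically, a lot of this is fixable by telling it how much of the container it may actually use rather than hand-picking 4Gi; a minimal sketch (standard JDK flags, jar name hypothetical):

          # size the heap as a fraction of the container's memory limit
          java -XX:MaxRAMPercentage=75.0 -jar app.jar

          # or pin it explicitly
          java -Xms256m -Xmx512m -jar app.jar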

      Part of the problem is technical (the language may be bloated and inefficient), but a lot of it is purely psychological: developers are scared of their app hitting an out-of-memory error in production.

      As you can see, the languages with the most waste (Python and Java) are the ones that are inefficient in both time (speed) and space (memory): they take up the most RAM, run slower, and cost a lot of money to keep operating.

      I got downvoted for questioning the microservice cargo cult [0], with Java being the darling of that cult. If you imagine a K8s cluster running any of these runtimes, you can see which one will cost the most as demand and provisioning scale up.

      Languages like Go and Rust are the clear winners if you want to save lots of money and care about efficiency.

      [0] https://news.ycombinator.com/item?id=44950060