Hi, former cofounder of Bitnami here. I left VMware quite a while ago, so not involved with this. The technical team at Bitnami is still top notch and great people. I am quite baffled at this business decision.
Is there a company more "Take what you can, give nothing back" than Broadcom? Probably not.
Broadcom's continued ability to perform well while only serving ever more upmarket areas, & cutting everyone else loose (& generally giving no figs) is fantastically impressive.
Broadcom is just private equity buying products to bleed dry. Nobody thinks VMware is the future, but the folks that use it are enterprises with deep pockets who are slow and reluctant to change so you can multiply the price by big numbers and get paid big while your dying acquired product meets its end.
VMware's lock on enterprise virtualization simultaneously made it essentially impossible for anyone else to compete like-for-like using a different platform and probably also doomed VMware's attempt to start pivoting to containers.
I was a service provider of Zimbra and had great relations with VMware folks on Page Mill many moons ago. One of my friends helped move VMware's HQ within Palo Alto just out of college.
Fuck Wall St. greedy morons at Broadcom. Hubris will educate them the hard way as they fade in relevance.
It’s mostly that they don’t understand their own users and potential customers in this particular case of Bitnami. There are so many other ways to increase revenue without alienating the core developer base. Enterprises want stability; breaking changes are a poor way to convince someone to pay you.
My last company was a pretty heavy free user of Bitnami charts for various things, the biggest being Redis clusters. I can't imagine they could convert everything into a cluster using their own charts before this kicks in. Very possibly they end up tossing at least a year's worth of licensing toward Bitnami.
> All existing container images, including older or versioned tags (e.g., 2.50.0, 10.6), will be moved from the public catalog (docker.io/bitnami) to the Bitnami Legacy repository (docker.io/bitnamilegacy). This legacy catalog will receive no further updates or support and should only be used for temporary migration purposes.
This sucks. I used to like the Bitnami container images (didn't need the Helm charts) because the images were consistent and consistently nice (documentation, persistent storage, configuration, sizes), but now I need to move off of them.
Basically, I'll need to move to the regular upstream images for:
* web servers (Apache2 because it's well suited for my needs, but the same would apply to Nginx and Caddy)
* relational DBs (MariaDB, though I'm moving over to MySQL 8 for any software that needs it due to their 11 release having compatibility issues with MySQL drivers; as well as PostgreSQL)
* key value stores (Redis)
* document stores (MongoDB)
* message queues (RabbitMQ and NATS)
* S3 compatible blob stores (MinIO and SeaweedFS)
* utility containers (like Trivy)
(either that, or I'll need to build them myself if the Dockerfiles remain available)
I'll stay away from Broadcom as much as possible.
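In the meantime, a stopgap is to mirror the exact tags you depend on into a registry you control. A minimal sketch (the registry name is a placeholder and the tag list is illustrative); it prints the docker commands so you can review them before piping to `sh`:

```shell
# Print mirror commands for the Bitnami tags still in use.
# REGISTRY is a placeholder for your own registry; the image list is illustrative.
REGISTRY="registry.example.com/mirror"
for img in bitnami/redis:7.2.5 bitnami/mariadb:10.11.8 bitnami/rabbitmq:3.13.3; do
  target="$REGISTRY/${img#bitnami/}"   # strip the bitnami/ prefix
  echo "docker pull docker.io/$img"
  echo "docker tag docker.io/$img $target"
  echo "docker push $target"
done
```

Review the output, then pipe it to `sh` once you trust it.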
Edit:
> Helm charts and container images' open-source code will continue to be maintained up-to-date and accessible on GitHub under the Apache 2 license.
I knew this would happen eventually. The images always looked nice, but were so hopelessly entangled in the Bitnami world there was no chance of forking them, or easily migrating away. Good thing I dodged that bullet… never trust a commercial vendor that trades you convenience for interoperability.
To me the images always looked overly complex, to the point where I frequently felt more at ease just doing an image from scratch myself. They never felt like a good fit for production systems.
I also know a ton of people and projects who are now sort of screwed, because they cannot possibly maintain a fork due to the Bitnami complexity; but there's also a reason why they didn't just do their own image.
I stumbled upon issues when updating from an older MariaDB 10 release to MariaDB 11, when some Go software was trying to connect to it using a MySQL driver. Seems like people have similar issues with other stacks as well: https://bugs.mysql.com/bug.php?id=111697
I could just use MariaDB drivers where available, but honestly MySQL seems more popular and the MariaDB SPAC and layoffs soured my view of them; ofc PostgreSQL is also nice.
This announcement is a little hard to read. They make it seem that the current images under docker.io/bitnami/* get deleted on August 28? But individual chart READMEs seem to say that images will move during a period starting on August 28 and ending two weeks later? But looking at https://hub.docker.com/u/bitnamilegacy images have been copied already?
> Starting August 28th, over two weeks, all existing container images, including older or versioned tags (e.g., 2.50.0, 10.6), will be migrated from the public catalog (docker.io/bitnami) to the “Bitnami Legacy” repository (docker.io/bitnamilegacy), where they will no longer receive updates.
The complete history of Bitnami container images has been copied to the "bitnamilegacy" repository. New tags will continue to be synced there until August 28th. After that date, "bitnamilegacy" will no longer receive updates, and images in the mainline "bitnami" repository will begin to be removed over a period that may take up to two weeks.
Once the cleanup is complete, the mainline "bitnami" repository on DockerHub will contain only a limited subset of Bitnami Secure Images (at this moment available at "bitnamisecure"). These are hardened, security-enhanced containers intended for development or trial use, providing a preview of the full feature set available in the paid offering.
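For anyone who just needs existing deployments to keep pulling during the migration window: Bitnami charts typically split the image reference into registry/repository/tag values, so a stopgap override looks roughly like the following (key names vary per chart, so verify against your chart's values.yaml; the Redis names here are an assumption):

```yaml
# values override for a Bitnami Redis chart -- points pulls at the
# frozen legacy catalog instead of docker.io/bitnami.
# Key names are chart-specific; check your chart's values.yaml.
image:
  registry: docker.io
  repository: bitnamilegacy/redis   # was bitnami/redis
# Newer chart versions may additionally require opting in to
# non-default images, e.g.:
# global:
#   security:
#     allowInsecureImages: true
```

Only a bridge, of course; the legacy catalog receives no updates.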
From the bottom of the post I know what they are hoping users will do:
> Suppose your deployed Helm chart is failing to pull images from docker.io/bitnami. In that case, you can resolve this by subscribing to Bitnami Secure Images, ensuring that the Helm charts receive continued support and security updates.
They don't want to give instructions that are too helpful. They want your company CC to be the easiest way to fix the problem they created.
The removal (or moving) of the Bitnami images from Docker Hub is going to break a ton of systems that depend on them. I helped set up https://www.stablebuild.com/ some years ago to counter these types of issues; it provides (among other things) a transparent cache to Docker Hub which automatically caches image tags and makes them immutable - the underlying tag might be deleted or modified, but you’ll get the exact same original image back.
This is exactly why so many of us have been advocating for private registries and copies of every image you run in production. Pulling straight from Docker Hub was always a risk.
I’ve never used Helm charts. I learned K8S in a shop in which kustomize is the standard and helm is a permitted exception to the standard, but I just never felt any reason to learn helm. Am I missing out?
Sometimes the limitations of kustomize annoy me, but we find ways to live with them
Would you like to count the number of spaces that various items in your manifests are indented and then pass that as an argument to a structure-unaware text file templating engine? Would you like to discover your inevitable yaml file templating errors after submitting those manifests to the cluster? Then yes, you are really missing out!
A couple of years ago I was using https://kapitan.dev/ which I found really good at the time when using jsonnet files to generate the configs. I haven't used it for some time though.
I never understood why people didn’t serialize from JSON if they run a transformation step on the YAML anyway. Read JSON, alter in your favorite language, dump YAML as the last step, deploy. Instant prevention of a lot of sadness.
Helm gives you more than enough rope to hang yourself with. At $dayjob we barely use 3rd party helm charts, and when we do we eventually run into problems with clever code.
We do package our own helm charts, not least because we sign contracts with our customers that we will help them run the software we're selling them. So we package the docker and helm artifacts that we sell, in addition to running them locally.
So we write some charts that don't use most helm features. The one useful thing about Helm that I don't want to live without is the packaging story. We seem to be the only people in the ecosystem that "burn in" the Docker image sha into the Helm chart we package, and set our v1.2.3 version only on the chart. This means we don't have to consider a version matrix between our config and application. Instead we just change the code and config in the same git sha and it just works.
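The CI step is essentially this (names and paths here are hypothetical, but the shape is real): resolve the digest of the image just pushed, write it into the chart's values, and only then package the chart, so the chart version is the single version that matters.

```shell
# Hypothetical CI step: burn the image digest into the chart values,
# then package the chart as the single versioned artifact.
# In real CI, DIGEST would come from `docker inspect` on the pushed image.
DIGEST="registry.example.com/myapp@sha256:0123abcd"
printf 'image: PLACEHOLDER\n' > values.yaml
sed -i "s|^image: .*|image: ${DIGEST}|" values.yaml
cat values.yaml
# followed by: helm package chart/ --version 1.2.3 --app-version 1.2.3
```

The point is that nothing downstream ever picks a tag; the chart pins the exact bytes.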
We just set chart version equal to image version, they live in the same repo and are released and built together (and the chart is only published after successfully publishing the image, so it's always valid). The chart allows to override the image version but we almost never do that, it's for emergencies.
Replacing with hash is a neat idea, might start doing that too.
1. having the ability to create a release artefact helm chart for a version, and store that artefact easily in OCI repositories.
2. being able to uninstall and install a chart and not have to worry about extra state. Generally in Kustomize people just keep applying the yaml and you end up in a state where there’s more deployed than there is in the kustomize config
I wouldn't say you're missing out. If kustomize works for you, keep using it. I personally use helm because I cannot for the life of me wrap my head around kustomize. I've looked at tutorials, read the docs, and it just doesn't make sense to me. Helm, on the other hand, immediately clicked and I was able to pretty effortlessly write charts for our use. It's just a case of different preference in tools, imo.
Kustomize feels like less of a hack to me, without the gotpl madness, but it’s way more painful to get something done in my experience. I’ve landed on just writing real code to craft the objects I want (using the actual types, not text), if I absolutely can’t get by with static manifests.
One thing I haven't seen mentioned in the comments (dunno if Kustomize has something here): Helm is a crappy, but at least real, composition tool. Some way to have resources of various types associated with some top-level idea.
Very very little else seems to bring this basic sense to Kubernetes. Metacontroller kind of could do that. Crossplane's whole business is this, but it's been infra-specialized: but the Crossplane v2.0 release is trying to be much more generally useful. https://docs.crossplane.io/v2.0-preview/whats-new/ . Would love other examples of what does composition in Kube.
- Makes it possible to go from zero to fully running k8s integrated components in 5 seconds by just running 'helm install --repo https://example.com/charts/ mynginx nginx' (very useful: https://artifacthub.io/)
- Gives the ability to transactionally apply k8s configs, and un-apply them if there is a failure along the way (atomic rollbacks)
- Stores copies/versions/etc of each installation in the server so you have metadata for troubleshooting/operations/etc without having to keep it in some external system in a custom way.
- Allows a user who doesn't know anything about K8s to provide some simple variables to customize the installation of a bunch of K8s resources.
- Is composeable, has templates, etc.
So basically Helm has a lot of features, while Kustomize has... one. Very different purposes I think. You can also use both at the same time.
Personally I think Helm's atomic deployment feature is well worth it. I also love how easy it is to install charts. It feels a bit like magic.
> a plain helm install without any values rarely if ever gives you the deployment you need
works for me most of the time
> This is hardly unique to helm.
So what? The guy was asking what is nice about Helm vs Kustomize. Does Kustomize have rollbacks?
> In 2025 you should probably be using gitops
Gitops is literally just "hey I have some configs in Git and I run some command based on a checkout", i.e. infrastructure as code in a git repo. Gitops does not track live server metadata and deployment history. I don't get why people over-inflate this idea.
What do you mean by atomic deployment? There are no transactions in the Kubernetes API. Helm has to make one request for each object it creates or modifies, like any other client.
Kustomize is nice but you’re missing out on objects lifecycle management.
Kustomize had the issue that it would leave objects dangling in the cluster, and you had to manually clean them up if you removed them from your kustomization file.
I work at Grafana, and Jsonnet powers our whole k8s infrastructure. It can get a little baroque sometimes but overall it’s tremendously powerful, and it’s fun to work with.
Write a few Helm charts and you'll understand why people want to stop using it. `nindent` will become a curse word in your vocabulary. It's a fine tool at the user level, but the DX is an atrocity.
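For anyone who hasn't had the pleasure, this is the kind of thing being complained about: a (hypothetical) chart template where correctness depends on hand-counting the spaces at each insertion point and passing the count as a magic number:

```yaml
# templates/deployment.yaml (hypothetical chart)
metadata:
  labels:
    {{- include "mychart.labels" . | nindent 4 }}   # 4 = spaces counted by eye
spec:
  template:
    metadata:
      labels:
        {{- include "mychart.labels" . | nindent 8 }}   # same content, new magic number
```

Get the number wrong and you find out at render (or apply) time, not at write time, because the templating engine has no idea it is emitting YAML.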
I'm using either opentofu(terraform) or plain yaml. I'm not a huge fan of HCL but at least it is structured and easily manipulated without worrying about whitespace.
I used kustomize to build an ArgoCD install at a previous company, and I was impressed at how powerful it was. Our setup was pretty involved, and kustomize was able to handle all the per-environment differences easily, and the code was easy to work with.
We use kustomize because we have four environments that run basically the same stuff (dev with k3s, test, and two cloud regions). If we didn’t use kustomize, we’d be forced to reinvent it to avoid duplicating so much yaml.
Consuming one that is well written isn't too much pain, IME. But writing or modifying one can be really annoying. Aiui the values.yaml has no type schema, just vibes. The whole thing is powered off using text templating with yaml (a whitespace sensitive language), which is error prone and often hard to read. That's basically the main issues in a nutshell, it may not sound like much, but helm doesn't exactly do a whole lot and it does that limited set of stuff poorly.
Ah, I must have met some lazy charts then. Thanks for the correction. Still, it seems like that schema would end up a little inconvenient to integrate into your editor for writing the templates...
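For reference, the mechanism being discussed: charts can ship a `values.schema.json` (JSON Schema) next to `values.yaml`, and `helm install`/`upgrade`/`lint` validate the supplied values against it; many charts simply don't include one. A minimal sketch:

```json
{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "replicaCount": { "type": "integer", "minimum": 1 },
    "image": {
      "type": "object",
      "properties": {
        "repository": { "type": "string" },
        "tag": { "type": "string" }
      },
      "required": ["repository"]
    }
  },
  "required": ["replicaCount"]
}
```

Note it validates the values a user supplies, not the templates themselves, so it indeed doesn't help the editor much while writing templates.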
I suggest checking out Anemos (https://github.com/ohayocorp/anemos), the new kid on the block. It is an open source single-binary tool written in Go that lets you define your manifests in JavaScript/TypeScript using templates, an object-oriented approach, and YAML node manipulation.
On that note, I'm already looking at migrating my codebase off of Spring. Just testing the waters with Quarkus, Helidon, Micronaut, Pekko, Vert.x, and plain Jakarta EE right now.
Red Hat effectively killed their JBoss/Middleware team and the rest of it moved to IBM https://www.redhat.com/en/blog/evolving-our-middleware-strat... Quarkus and other tools were pushed to CommonHaus/Apache. I believe Vert.x was also mostly developed by an RH team, although it moved to the Eclipse Foundation a decade ago.
Oracle also ended up somehow sponsoring 2 frameworks: Helidon & Micronaut.
I'd bet Spring is still the safest choice next to Jakarta EE standards that all are built on top of nowadays.
Yeah my old colleagues who work on Kroxylicious are now IBM. I keep asking them if they're wearing a blue tie to the office yet, they still don't think it's funny.
I quite like Micronaut, especially the ability to use its compile time DI as a standalone library in a non-Micronaut app.
Quarkus is pretty similar, but is built on top of Vert.x so a lot of the fun of Vert.x (don't block the event loop!) is still present. It also does compile time DI.
And as popular and widely used as Spring is, that would 100% happen. To me at least, I wouldn't count this as a particularly huge risk. But in an enterprise setting, with mandatory auditing and stuff, I can understand why there would be a requirement to at least pre-identify alternative(s).
Probably a bit of overreaction given that Broadcom is now in charge of Spring. At the end of the day it’s a wildly popular open source project — it has a path forward if Broadcom pulls shenanigans.
That said, I have noticed that the free support window for any given version is super short these days. I.e. if you’re not on top of constantly upgrading you’re looking at paid support if you want security patches.
If there's no money in it for them - reduction of staff or funding leading to slower releases and bugfixes
Moving some features like Spring Cloud / Spring Integration, or new development behind a paywall (think RHEL)
Big users (like Netflix, Walmart, JPMorgan, LinkedIn/Microsoft, etc) would likely be able to pay for it (until they moved off), but smaller companies and individual developers not so much
I think it would be more of a Redis situation - steward changes the license, someone large enough to maintain a fork creates one, and everyone moves to the fork. In Redis's case, Amazon forked it into Valkey.
Spring is so widely used that there are multiple "large enough" companies who could do this
"Helm charts and container images' open-source code will continue to be maintained up-to-date and accessible on GitHub under the Apache 2 license."
Doesn't this mean everything is still available, just in source form instead of binary? Could a situation like AlmaLinux/Rocky Linux etc. spin up where folks build a community-supported set of binaries from source?
Bitnami images have been problematic for a little while, especially given their core focus on security while still shipping a CVSS 9.4 vulnerability in Pgpool recently, one that ended up in the underlying infrastructure of a bunch of cloud hosts:
That's what Bitnami Secure Images aims to solve. Bitnami regularly updates its images with the latest system packages; however, certain CVEs may persist until they are patched in the OS (Debian 12) or the application itself. Additionally, some CVEs remain unfixed due to the absence of available patches. In vulnerability scanners like Trivy, you can use the `--ignore-unfixed` flag to ignore such CVEs.
In the case of Bitnami Secure Images, the underlying distro is PhotonOS, which targets zero CVEs.
I mean, I understand that's the goal, but in this specific CVE it looks like the issue was introduced in Bitnami's own scripts sitting on top of everything, so an ideally-zero-CVE underlying OS isn't going to solve that problem at all.
It also seems like this set of changes was made in this specific way to forcibly disrupt anyone using the existing images, many of which were made off the backs of previously existing non-bitnami open source projects, so I assume you can understand why people are annoyed.
But again, anyone with any knowledge or experience of Broadcom saw this coming, so...
The source code for Bitnami containers and Helm charts remains publicly available on GitHub and continues to be licensed under Apache 2.
What’s changing is that Bitnami will no longer publish the full catalog of container images to DockerHub. If you need any image, you can still build/package it yourself from the open-source GitHub repositories.
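A sketch of what "build it yourself" looks like, assuming the current layout of the bitnami/containers repo (one directory per app/branch/distro; verify the exact path in the repo before relying on it). This prints the commands rather than running them:

```shell
# Compose the build-context path from app/branch/distro. The layout is an
# assumption; check https://github.com/bitnami/containers for the real tree.
app=redis
branch=7.4
distro=debian-12
ctx="containers/bitnami/${app}/${branch}/${distro}"
echo "git clone --depth 1 https://github.com/bitnami/containers.git"
echo "docker build -t myorg/${app}:${branch} ${ctx}"
```

You then own the rebuild cadence, which is the real cost: tracking upstream security releases yourself.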
Check out Artifact Hub, the CNCF-hosted charts from projects like Prometheus/Grafana, or the official k8s-at-home charts as solid alternatives to Bitnami.
RPi doesn't exist due to Broadcom. It exists despite Broadcom.
Using RPis can be a huge PITA if you'd like to do something a bit more complex with the hardware. HDMI and the video decoders are all behind closed doors, with blobs on top of blobs and NDAs.
RPi SoCs are some of the weirdest out there. It boots from the GPU ffs.
All gone now. Sad.
Hmmm: https://github.com/bitnami/containers/tree/main/bitnami/mari... and https://github.com/bitnami/containers/commit/7651d48119a1f3f...
This did feel inevitable.
Could you tell more?
From ticket https://github.com/bitnami/charts/issues/35164:
> Now – August 28th, 2025: Plan your migration: Update CI/CD pipelines, Helm repos, and image references
> August 28th, 2025: Legacy assets are archived in the Bitnami Legacy repository.
From README https://github.com/bitnami/charts/blob/4973fd08dd7e95398ddcc...:
> Starting August 28th, over two weeks, all existing container images, including older or versioned tags (e.g., 2.50.0, 10.6), will be migrated from the public catalog (docker.io/bitnami) to the “Bitnami Legacy” repository (docker.io/bitnamilegacy), where they will no longer receive updates.
What are users expected to do exactly?
- Bitnami: https://hub.docker.com/u/bitnami
- Bitnami Legacy: https://hub.docker.com/u/bitnamilegacy
- Bitnami Secure Images: https://hub.docker.com/u/bitnamisecure
https://aws.amazon.com/marketplace/pp/prodview-pwqgz3mnvxvok...
You can always follow the "contact sales" form and see if they give you a higher or lower number than that.
But damn oh damn does Broadcom feel like a good fit for this statement.
Yet somehow, all we have is YAML templating?
Realistically, a plain helm install without any values rarely if ever gives you the deployment you need, so you have to study the chart anyways.
> rollback on failure
This is hardly unique to helm.
> history metadata without (...) some external system
In 2025 you should probably be using gitops anyways, in which case the git repo is your history.
In true GitOps, I think it should be on by default.
I might use Helm charts for initial deploys of operators, but that's about it.
Kustomize is, IMO, a better approach if you need to dynamically modify the YAML of your resources and tools like ArgoCD support it.
You can read a comparison with Helm here: https://www.ohayocorp.com/anemos/docs/comparison/helm
P.S. I am the author of the tool.
The second-highest risk is using a USA-based cloud, at 66/100.
The first one was using Spring Boot everywhere, at 77/100. By the end of 2025 we need to have a migration path to something else, with 2 PoCs done.
How do I reconcile this statement with VMWare holding the copyright which you will find unambiguously littered in the official Spring Boot repository?
Since you contend the contrary, who does in fact hold the copyright?
- license change -> restricting features behind a paid tier (https://spring.io/blog/2025/04/21/spring-cloud-data-flow-com...)
- reducing headcount -> slow security patching + not following industry standards
- all eggs in one basket :)
- cut from major clouds (Azure Spring apps)
[pgpool] Unauthenticated access to postgres through pgpool · Advisory · bitnami/charts https://share.google/JcgDCtktG8dE2TZY8
I don't know why but Artifact Hub never shows up in Google search when you search for "web site with helm charts". Hopefully this gives it a boost.
I’m surprised anybody works at bcom these days.