Hacker News

"Solutions such as Kubernetes tout the benefits of zero-downtime deployments but ignore that their inherent complexity causes availability issues."

This is completely accurate. I've seen several teams do kubernetes, only to both spend 50% of their dev time on ops, AND cause outages due to kubernetes complexity. They do this all while boasting about zero downtime deployments. It's comical really.



Well then you just need multiple Kubernetes clusters for redundancy :-) :-)

From another thread on the home page right now:

The current trend goes to multi-cluster environments, because it's way too easy to destroy a single k8s cluster due to bugs, updates or human error. Just like it's not a very unlikely event to kill a single host in the network, e.g. due to updates/maintenance.

For instance, we had several outages when upgrading the Kubernetes version in our clusters. If you have many small clusters, it's much easier and safer to apply cluster-wide updates, one cluster at a time.

https://news.ycombinator.com/item?id=26106353

:-( :-(


This reads like a bad joke.

Where did the KISS principle go?


Web developers buried it under a mountain of trash


I'm a web dev and I agree. Complexity justifies paychecks.


k8s is about as anti "keep it simple" as it can get.


It truly is. I see the complexity stems from replicating the many OS services applications require. As a result, the containerized ecosystem comes full circle, but reinvented with a leaky abstraction that generates complexity. To me the comical part is how the IT team fails to acknowledge this.

Naturally it spreads like cancer. Non-k8s-native infrastructure is now abandonware. All that tech built over the last ten years is no longer seeing investment, even though it solves real problems, and rather well. Now it's a candidate to be reinvented, and under the guise of reducing complexity the reinvention throws users under the bus, does the opposite, and costs a fortune.

When you step back and see bare metal, VMs, Docker, k8s, a stack of k8s plugins and tools, on-prem and multi-cloud all running concurrently... the IT team is really great at creating work and justifying their existence. Management needs to stop patting them on the back and hold them accountable for the mess they're generating.

It's easy for me to complain, I guess; I'm not smart enough to understand how to kill this hydra. But I care about users and their experience, and so maybe that's what's missing from this new frontier.


> the IT team is really great at creating work and justifying their existence.

Once you've said this, there is no longer any need to say more. I could not agree more.


I call these "aspirational features".

The tech aspires to provide that feature, and _technically_ can, but in practice you're always on a wild goose chase (or spending so much on ops time/people that it becomes an invalid option; of course this becomes clear only after you've fully invested in it).


I have always wondered if PostgreSQL for the data layer + HAProxy (in front of multiple instances of web services) would be enough in most cases where Kubernetes is used. HAProxy supports blue-green deployment strategies for web applications for continuous deployment. The database is best kept outside of a distributed system anyway.

A good weekend project and some learning ahead.
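For what it's worth, a blue-green setup in HAProxy can be as small as one frontend and two backends, with cutover done by flipping the default backend and doing a hitless reload. This is only a hypothetical sketch; the backend names, addresses, and ports are made up:

```
# Minimal blue-green haproxy.cfg sketch (illustrative, not production-ready).
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend web
    bind *:80
    # Deploy the new version to the idle color, health-check it, then
    # flip this line to 'green' and soft-reload (haproxy -sf <oldpid>),
    # which drains old connections instead of dropping them.
    default_backend blue

backend blue
    server app1 10.0.0.10:8000 check

backend green
    server app2 10.0.0.11:8000 check
```

The same cutover can also be driven without a reload via HAProxy's runtime API (e.g. disabling servers to drain one color), but the edit-and-reload approach is the simplest weekend-project version.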


It was a few years ago now, but my team at Amazon just deployed to a bunch of servers behind a load balancer and very rarely experienced downtime due to infrastructure. I'm working on a system that isn't live right now, and the infrastructure is so much more complex than that without a single customer. I spend a huge amount of dev time debugging issues that have their root cause in a flaky Kubernetes cluster.



