
> Of course, nobody wants to incur the costs associated with "How could we keep business continuity without AWS?" because it gets very expensive very quickly for anything non trivial.

I think that mostly depends on how much data you generate in a day. If sending backups out increases your bandwidth use by 5% then it's pretty easy to throw that into cold storage on google or azure or local or all three.
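The "5% of bandwidth" intuition is easy to sanity-check. A minimal sketch, with all numbers (data volume, compression ratio, replica count, baseline egress) as illustrative assumptions rather than figures from the thread:

```python
# Back-of-the-envelope check: does shipping daily backups off-AWS stay
# within a small fraction of existing egress? All numbers below are
# illustrative assumptions, not measurements.

def backup_overhead_pct(daily_new_data_gb: float,
                        compression_ratio: float,
                        replica_count: int,
                        daily_egress_gb: float) -> float:
    """Percent increase in daily egress from shipping compressed
    backups to `replica_count` external providers."""
    shipped_gb = daily_new_data_gb / compression_ratio * replica_count
    return 100.0 * shipped_gb / daily_egress_gb

# e.g. 40 GB of new data/day, 4:1 compression, copies to Google,
# Azure, and a local NAS, against 2 TB/day of normal egress:
overhead = backup_overhead_pct(40, 4.0, 3, 2000)
print(f"{overhead:.1f}% extra egress")  # 1.5% extra egress
```

For a site generating terabytes of new data a day, the same arithmetic quickly pushes the overhead past the point where "just copy it everywhere" is cheap.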



It's not just the data (that's an easy enough problem to solve).

It's the infrastructure.

Even if you've only got a reasonably simple platform, say some redundant EC2 app servers behind a load balancer and a multi-AZ RDS database, with some S3 storage and a CloudFront distribution serving static assets - you've probably also got Route53 DNS hosting, ACM SSL certs, deployment AMIs, CloudWatch monitoring/alerting, and a bunch of other "incidental" but effectively proprietary AWS stuff - because it's there.

How do you get all that stood up "right now" in Azure or GCP or DigitalOcean or wherever, unless you've already put the time/effort into making that happen?

How many "single points of failure" are locked inside your AWS account? (For my stuff, Route53 is the thing that occasionally keeps me up at night. If we lost access to our domains registered/hosted in AWS, we'd need to pick new domain names and update all our apps...)
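One way to make the lock-in question concrete is to inventory the stack described above and tag each piece by whether another provider has a drop-in substitute. The classifications and equivalents below are a rough, illustrative sketch, not a definitive mapping:

```python
# Hypothetical inventory of the stack in the comment above, tagging
# each AWS service with its rough equivalent elsewhere and whether a
# drop-in substitute exists. Classifications are illustrative.

STACK = {
    "EC2 app servers":            {"equivalent": "GCE / Azure VMs",          "drop_in": True},
    "Load balancer":              {"equivalent": "Cloud Load Balancing",     "drop_in": True},
    "RDS (multi-AZ)":             {"equivalent": "Cloud SQL / Azure DB",     "drop_in": True},
    "S3":                         {"equivalent": "GCS / Blob Storage",       "drop_in": True},
    "CloudFront":                 {"equivalent": "Cloud CDN / Azure CDN",    "drop_in": True},
    "Route53 (registrar + DNS)":  {"equivalent": "external registrar/DNS",   "drop_in": False},
    "ACM certificates":           {"equivalent": "Let's Encrypt etc.",       "drop_in": False},
    "Deployment AMIs":            {"equivalent": "provider machine images",  "drop_in": False},
    "CloudWatch alarms":          {"equivalent": "Cloud Monitoring",         "drop_in": False},
}

def lock_in_points(stack: dict) -> list[str]:
    """Services with no drop-in substitute: the ones that need
    advance work before a failover is even possible."""
    return [name for name, info in stack.items() if not info["drop_in"]]

print(lock_in_points(STACK))
```

Notice that the "incidental" items are exactly the ones that cluster at the bottom: compute and storage move easily, the glue around them doesn't.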


I guess it depends on how much you need "right now".

It doesn't take crazy amounts of effort to set up app servers, a load balancer, a database, and S3-compatible storage somewhere else.

If you had one person working on that two days a month, you could keep a warm fallback system ready to go. That does require keeping a map of what your cloud services are actually doing, which is a good idea anyway.



