Cloud outages show multicloud is essential
Something is rotten in the state of Denmark (in all of Europe, actually), and Amazon has been tight-lipped about it. It seems there may have been a hack or a well-executed denial-of-service attack. I realize this was back in October, but type “AWS DDoS attack” into Google and autocomplete suggests following it with a year. These things happen frequently.
Denial-of-service attacks are as old as, if not older than, the internet, and so is the lack of candor on the part of your data center operator or hosting provider. The thing that protected us all in the past from watching the whole net go black is the same thing that will protect us again: multiple data centers run by different providers. That is to say, multicloud.
A multicloud strategy starts with the obvious: deploying (or maintaining your ability to deploy) on multiple vendors’ clouds. Meaning you keep your software on AWS and Azure, and maybe even on GCP. You forgo any vendor services that would compromise your ability to move, and you pursue a data architecture that lets you scale across data centers.
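To make the portability part concrete, here is a minimal sketch in Python of the kind of abstraction that keeps you movable. The names (DocumentStore, InMemoryStore, build_store) are hypothetical, not from any vendor SDK: the point is that application code talks only to a small interface you own, and each cloud gets an adapter behind it.

```python
# Sketch of the "avoid lock-in" idea: the app sees one small interface,
# and each cloud provider is hidden behind an adapter.
from abc import ABC, abstractmethod


class DocumentStore(ABC):
    """The only storage API the application is allowed to see."""

    @abstractmethod
    def put(self, key: str, doc: dict) -> None: ...

    @abstractmethod
    def get(self, key: str) -> dict | None: ...


class InMemoryStore(DocumentStore):
    """Stand-in backend. A real deployment would add one adapter per
    cloud (say, one wrapping Google Cloud Datastore, one wrapping
    DynamoDB), each implementing the same two methods."""

    def __init__(self) -> None:
        self._docs: dict[str, dict] = {}

    def put(self, key: str, doc: dict) -> None:
        self._docs[key] = doc

    def get(self, key: str) -> dict | None:
        return self._docs.get(key)


def build_store(provider: str) -> DocumentStore:
    # Swapping clouds becomes a config change, not a rewrite.
    if provider == "memory":
        return InMemoryStore()
    raise ValueError(f"no adapter for {provider!r}")


store = build_store("memory")
store.put("user:1", {"name": "Ada"})
print(store.get("user:1"))
```

The design choice is the usual one: you trade some of each vendor’s conveniences for the ability to move your workload when one of their regions goes dark.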
Single-cloud advantages and drawbacks
Relying on a single vendor’s cloud lets you eat at the buffet of sometimes lower-cost services that provider offers, and adding them is usually seamless. If you’re an AWS customer, you use Amazon Elasticsearch Service instead of building your own search cluster. If you’re on Google, you use its document database, Google Cloud Datastore, instead of rolling your own.
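As a rough illustration of that seamlessness, assuming the elasticsearch-py 7.x client, a managed Amazon Elasticsearch Service domain looks the same to application code as a cluster you run yourself; only the endpoint changes. The hostnames below are placeholders, and authentication setup is omitted since it varies with your access policy.

```python
# Managed vs. self-hosted search: same client, same calls, different endpoint.
# Assumes elasticsearch-py 7.x; hostnames are placeholders.
from elasticsearch import Elasticsearch

# Self-hosted: a cluster you build, patch, and scale yourself.
self_hosted = Elasticsearch("http://search.internal.example.com:9200")

# Managed: Amazon Elasticsearch Service exposes a compatible HTTPS endpoint.
managed = Elasticsearch("https://my-domain.us-east-1.es.amazonaws.com")

# Application code is identical either way.
managed.index(index="articles", id="1", body={"title": "Cloud outages"})
print(managed.get(index="articles", id="1"))
```

That sameness is exactly the lure: the switch costs you almost nothing on day one, and the lock-in only shows up when you try to leave.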