Public Cloud Managed Kubernetes: Our Hands-On Experience
With Kubernetes firmly established as the way to orchestrate container-based applications, all major cloud providers have rolled out their managed Kubernetes products. In a recent post, we covered why this is good, yet unsurprising, news — and how this will accelerate the emergence of the pipeline problem.
On a technical level, we run our own stack on Kubernetes (see case study), and use mostly homegrown container ops tooling around the deployment pipeline on one side, and lifecycle management on the other. But offering some of that tooling to our customers means that we routinely test it — e.g. our container deployment pipeline Cloud 66 Skycap — with various Kubernetes-based services, so we can continue to build our roadmap. This gives us a great vantage point, so in this post, we’ll take a step back and attempt to do a quick review of the major clouds’ managed Kubernetes services. We are going to keep this review updated as new products come about and the existing ones get updated. If we’ve left anything out, please let us know! (Note that for efficiency, we’ve focused on services from what we see as the leading clouds.)
Amazon Web Services is the largest public cloud provider by far, with both native and Kubernetes-based Container-as-a-Service offerings:
Amazon Elastic Container Service, or ECS for short, was the first container-based runtime product AWS released. ECS acts as a runtime that orchestrates containers on top of EC2 instances in your account. ECS works with Docker containers but is not based on Kubernetes (the ECS scheduler is AWS's own proprietary one). As an AWS product, ECS has been a popular choice for teams already running non-container workloads on AWS who wanted to adopt containers. However, in the last several months, we've seen mostly net-new workloads shift towards EKS, AWS's service built on Kubernetes. There are some technical differences between ECS and EKS, mostly around load balancing (e.g. the lack of a node-based load balancer like kube-proxy in ECS means tighter integration with AWS-specific load balancers like ALB).
- Easy to get started if you’re familiar with AWS products.
- Deeper and more seamless integration with AWS load balancers, security groups, IAM and Elastic Network Interfaces (ENI).
- Proprietary orchestrator means a smaller community compared to Kubernetes, and a higher reliance on AWS Support.
- AWS seems to be shifting its attention for the future of containers to Kubernetes and therefore EKS. Whether this will have an effect on the roadmap remains to be seen.
- Not portable between clouds.
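As a rough sketch of what the ECS workflow looks like with the AWS CLI (the cluster, service and task names here are placeholders, and the task definition JSON is assumed to exist locally):

```shell
# Create a cluster (illustrative name).
aws ecs create-cluster --cluster-name demo-cluster

# Register a task definition from a local JSON file describing your container(s).
aws ecs register-task-definition --cli-input-json file://task-def.json

# Run the task as a long-running service on the cluster.
aws ecs create-service \
  --cluster demo-cluster \
  --service-name demo-service \
  --task-definition demo-task \
  --desired-count 2
```

Note that none of this is kubectl: the ECS API and object model (task definitions, services) are AWS-specific, which is where the portability concern above comes from.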
AWS Elastic Kubernetes Service (officially, Amazon Elastic Container Service for Kubernetes) is AWS's official Kubernetes offering. EKS runs on top of your own EC2 instances and delivers a managed Kubernetes experience, while giving you access to the underlying EC2 instances that power it. EKS can be managed via kubectl (the native Kubernetes CLI client), which is a great benefit compared to ECS. Setting up EKS is not easy if you are unfamiliar with IAM, VPC and other AWS components, and can be quite challenging if you are starting out with both AWS and Kubernetes. This complexity increases the chances of making mistakes that are harder to fix later on, and might even introduce security vulnerabilities into your infrastructure.
- Native upstream Kubernetes with all the relevant benefits.
- Deep integration with other AWS products like IAM and VPC as well as storage and load balancers.
- The most difficult to set up from scratch, compared to other cloud providers.
- Charges for the cluster as well as the underlying EC2 instances.
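Once the hard part (IAM, VPC, the control plane) is done, pointing kubectl at an EKS cluster is straightforward. A sketch, assuming a recent AWS CLI and a placeholder cluster name:

```shell
# Merge the EKS cluster's endpoint and credentials into your local kubeconfig.
aws eks update-kubeconfig --name demo-cluster --region us-east-1

# From here on, it's plain upstream Kubernetes.
kubectl get nodes
kubectl apply -f deployment.yaml
```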
AWS Fargate is a fully managed Containers-as-a-Service, offered for both ECS and EKS. It removes the need to launch and maintain the EC2 instances needed to power your cluster, and is managed by AWS when it comes to upgrades and maintenance. At the time of writing, AWS Fargate is not available in all AWS regions.
- Fully managed Kubernetes (or ECS) with no EC2 instances to worry about.
- Native upstream Kubernetes.
- Operates differently from other fully managed Kubernetes services and takes some learning: Fargate does not give you an endpoint and a client certificate to get you started.
- Not yet available in all regions.
AWS Elastic Container Registry is a Docker registry hosted and managed by AWS. You can use it to store your built Docker images for use by ECS, EKS, Fargate or your own clusters, just like any other hosted Docker image repository. However, while compatible with Docker images, ECR has its own slightly different way of dealing with the Docker client: you will need to regenerate temporary credentials for the repository to use it with the native Docker client for a simple docker pull or docker push. While this is a good security measure and compensates for some of Docker's authentication/authorization shortcomings, it is yet another example of an AWS process that is ever-so-slightly different from open tools, which might make your life a bit more difficult if you would like to use ECR in conjunction with non-AWS components.
- Hosted by AWS and backed by S3 which means high service levels.
- Integration with other AWS components.
- Might require you to re-tool or change your toolchain flow to use it alongside non-AWS components.
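To illustrate the temporary-credential dance described above (the account ID and region are placeholders, and the exact subcommand depends on your AWS CLI version):

```shell
# Fetch a temporary password and pipe it into the standard Docker login flow.
# ECR authorization tokens expire after 12 hours, so this needs repeating.
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Once logged in, the native Docker client works as usual.
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
```

It's that extra login step, and its expiry, that tends to require re-tooling in CI systems built around plain Docker registries.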
Azure Kubernetes Service is Azure’s official Kubernetes offering. This gives you fully managed Kubernetes with a wide range of different versions supported and deep integration with other Azure services, including Azure Active Directory (AzureAD) and Service Principals which could be very useful for controlling access to the cluster for enterprises that use Microsoft Active Directory.
As you would expect from Microsoft, AKS is very flexible, but it can involve a steep learning curve, as you'll need to familiarise yourself with other Azure concepts around storage, networking and identity management before setting up a production cluster.
Recently Azure had a major availability incident which affected many of their services, including AzureAD and AKS (which they have started debriefing here). While such incidents are not rare in the cloud space, the length of the downtime and the speed at which Azure recovered from it can be a cause for caution when choosing Azure to run high-SLA production workloads (though you could also argue that Azure is now less likely to repeat the same mistakes).
- Managed Kubernetes with a wide version support
- Deep AzureAD integration
- Complicated setup
- Windows support is not available yet.
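Once the cluster exists, fetching credentials with the az CLI is a one-liner (resource group and cluster name below are placeholders):

```shell
# Merge the AKS cluster's credentials into your local kubeconfig.
az aks get-credentials --resource-group demo-rg --name demo-cluster

# Standard upstream Kubernetes from here.
kubectl get nodes
```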
Azure Container Registry provides a decent private Docker registry to be used with Azure and non-Azure Docker workloads. If you are using Microsoft Active Directory, Container Registry is a good candidate thanks to its full integration with AzureAD. Container Registry also integrates natively with the Docker client, so you can follow your normal development and CI flow when using it.
- Native Docker registry support
- Deep AzureAD support
- Unnecessary complication if you’re not using AzureAD
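A sketch of how that native Docker client integration looks in practice (the registry name is a placeholder):

```shell
# az acr login wires your AzureAD credentials into the native Docker client.
az acr login --name demoregistry

# After that, standard Docker commands work against the registry.
docker push demoregistry.azurecr.io/my-app:latest
```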
Google Cloud Platform
Google Cloud Platform (GCP) is a full-service cloud provider alongside AWS and Azure but, unsurprisingly for the original developer of the core Kubernetes technology (Kubernetes was open sourced by Google in 2014), it has the most mature product in this space.
Google Kubernetes Engine is Google's official managed Kubernetes-as-a-Service product. GKE is by far the most complete and mature product in this field, with support for the latest stable Kubernetes releases available within a very short time of their release, and constant updates and patching applied to the platform. GKE offers vanilla upstream Kubernetes, which makes it very easy to get started with if you are familiar with Kubernetes: you will get an endpoint and your cluster credentials and can start using kubectl right away.
- Mature product with support for federation and fully managed Kubernetes including upgrades and updates.
- Complete upstream Kubernetes
- Deep integration with other GCP products.
- Google do not charge for the cluster, just for the cores.
- GCP in general is going through its own growing pains and has had some reliability issues, compared to AWS having set the bar very high on that front.
- GCP service accounts and the gcloud CLI, and how they work with GKE, can be somewhat confusing for those who are not familiar with GCP.
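Getting from a fresh GKE cluster to a working kubectl is short (cluster name and zone below are placeholders):

```shell
# Fetch the cluster endpoint and credentials into your local kubeconfig.
gcloud container clusters get-credentials demo-cluster --zone europe-west1-b

# Vanilla upstream Kubernetes from this point on.
kubectl get nodes
```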
Container Registry is GCP’s hosted and managed Docker container registry. It is a standard implementation of a Docker registry with integration into GCP authentication and authorization. This integration runs across the entire GCP product set which allows your team to use their G-Suite credentials to access GCP infrastructure.
GCP’s Container Registry works very well with the native Docker client, without the need for much customization to Docker’s native authentication flow.
- Native docker registry experience.
- Integration with the rest of GCP and GSuite products.
- Use of different regions means different image names with different URLs, which can sometimes cause confusion.
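To illustrate that region-prefixed naming (the project ID and image name are placeholders): the same logical image lives under a different registry hostname per region, so the full image name changes with it:

```shell
# gcr.io stores images multi-regionally in the US by default;
# regional variants use a host prefix (eu.gcr.io, asia.gcr.io, ...).
docker tag my-app:latest gcr.io/demo-project/my-app:latest
docker tag my-app:latest eu.gcr.io/demo-project/my-app:latest

docker push gcr.io/demo-project/my-app:latest
docker push eu.gcr.io/demo-project/my-app:latest
```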
GCP Cloud Build is a service that builds Docker images automatically and can be integrated with your git repository to build them on every commit. Cloud Build was recently “re-announced” at the Next ’18 conference with some improvements, but at its core it is a simple Docker build system hosted by GCP.
- Simple to get started with.
- Generous free tier (for now).
- Limited workflow features.
- No support for use of secrets during the build process.
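The simplest way in is a one-off build from the command line, without any git integration (the project ID is a placeholder):

```shell
# Upload the current directory, build its Dockerfile on Cloud Build,
# and push the result to Container Registry.
gcloud builds submit --tag gcr.io/demo-project/my-app:latest .
```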
DigitalOcean is a popular cloud provider among developers and has been improving its service quality and product range for a while now, making it increasingly worthy of consideration as a first-tier cloud provider for production workloads.
DigitalOcean has a primary focus on simplicity across all of their products, and in that context their managed Kubernetes offering doesn't disappoint. While still in beta, DOK (not the official name for DigitalOcean Kubernetes, but what I'm going to call it here!) is very simple to set up and get started with. Once you have a cluster, you will get an endpoint (managed by DO and not on your own Droplets, i.e. servers) and a cluster certificate and can get started. DOK provides native Kubernetes as a managed service, where you pay for the worker nodes and possibly something for the cluster itself (it's in beta and free at the moment).
Like other cloud providers that have integrated cloud components like load balancers or persistent block storage, DO has integrated DigitalOcean Load Balancers and Block Storage into their managed Kubernetes service. The result is a good user experience which hopefully will be matched by good availability and service level.
- Very easy to set up and get started with.
- Native Kubernetes plus native DO component integration.
- Historically, patchy service level and availability track record when compared to the big players.
- Limited availability to the public; pricing unclear.
Note: DigitalOcean Kubernetes is out of beta, with limited availability from 1st October.
The prominence and ubiquity of managed Kubernetes is great news for developers and operators everywhere: worrying about upgrading Kubernetes or its components should be the last thing you need to think about when you are focusing on adding value to your business. On the other side of this equation is a need for better tooling to get you from git to kubectl in a way that is automatable, reliable, repeatable and easily maintainable, as well as tools to manage the lifecycle of applications on top of those Kubernetes clusters.
Originally published at blog.cloud66.com on September 25, 2018.