Serverless or Kubernetes?

K - Aug 27 '20 - Dev Community

I had to dive into Kubernetes (K8s) for some articles I wrote in the last few weeks, so I did a few online courses on the topic to see what K8s brings to the table.

First things first: there are managed services for K8s that try to make it more serverless-ish, for example AWS's Elastic Kubernetes Service (EKS) combined with Fargate, its automated worker node provisioning service. But after I saw how much infrastructure a basic example deployed, it didn't have nearly the same feeling as a serverless deployment.

I'm mostly a serverless dev, never got the appeal of K8s. After all, why would anyone manage their own infrastructure in 2020?

Well, it turns out there are at least three main problems with serverless.

1. Cloud Provider Lock-In

The argument goes: if you implement against the AWS interface, you'll never get away from it again; if you implement against the K8s interface, you can move freely between cloud providers.

The idea is that the main way a cloud provider locks you in is through its non-standardized APIs.

While I think this problem can be mitigated rather well with good architectural design, the concern isn't unfounded.
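
To make the lock-in concrete, here's a minimal sketch (the handler names and route are made up): the first function is written against Lambda's event and response shapes from API Gateway, while the second is a plain Flask handler that runs in any container on any K8s cluster.

```python
import json
from flask import Flask, jsonify, request

# Coupled to AWS: the event shape comes from API Gateway's proxy
# integration, and the return value has to match what API Gateway expects.
def lambda_handler(event, context):
    name = json.loads(event["body"])["name"]
    return {"statusCode": 200, "body": json.dumps({"greeting": f"Hello {name}"})}

# Portable: a plain HTTP handler that runs in any container,
# on any K8s cluster, regardless of the cloud provider.
app = Flask(__name__)

@app.route("/greet", methods=["POST"])
def greet():
    return jsonify(greeting=f"Hello {request.json['name']}")
```

The business logic is trivial either way; the point is that the first version only makes sense inside AWS, while the second only depends on HTTP.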

2. Missing Features

Serverless isn't perfect; you can't just put every idea into managed services, glue them together with a few functions, and hope that everything goes well.

I have yet to see a realtime multiplayer game (RMG) running with a completely serverless architecture. Such systems are highly stateful and send thousands of messages in a very short time, both requirements that serverless doesn't solve well.

Sure, there are managed services like AppSync, API Gateway, and Pusher that help with WebSocket connections, but their prices blow up your bill when you try to put the network stack of an RMG on them.

Things are getting better here, but it takes time.

3. Legacy Systems

LIFT AND SHIFT!
LIFT AND SHIFT!
LIFT AND SHIFT!

A prevalent cloud adoption strategy for companies with legacy systems: they lift existing infrastructure out of their own data center and shift it into the cloud.

After all, owning a data center and hardware running in it can be considerably more expensive than just spinning up a few VMs.

Serverless just isn't suited for this.

For example, AWS Lambda has a 250 MB code size limit (for the unzipped deployment package) that some backend projects eat for breakfast.

New additions to Lambda, like support for Elastic File System, make it more flexible in that regard. But some legacy systems have such long startup times that you can't just start and stop them every few minutes.
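
For a rough idea of what the EFS integration looks like, here's a hedged boto3 sketch; the function name, access point ARN, and mount path are placeholders, and the VPC and access point setup around it is left out.

```python
import boto3

lambda_client = boto3.client("lambda")

# Attach an existing EFS access point to a Lambda function so it can load
# large artifacts from the file system instead of the deployment package.
lambda_client.update_function_configuration(
    FunctionName="my-legacy-wrapper",  # placeholder
    FileSystemConfigs=[
        {
            # placeholder ARN of an EFS access point in the same VPC
            "Arn": "arn:aws:elasticfilesystem:eu-west-1:123456789012:access-point/fsap-0123456789abcdef0",
            "LocalMountPath": "/mnt/data",  # Lambda requires a path under /mnt/
        }
    ],
)
```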

K8s to the Rescue!

Containers solve these problems.

They can run indefinitely, hold network connections for as long as you like, and with K8s, you can move your microservice cluster between cloud providers like nobody's business.

But at what cost?

Well, if you compare the per-second prices of FaaS with those of a K8s deployment, the raw compute is usually cheaper on K8s.

Ten Lambda seconds are more expensive than ten K8s seconds.
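
A rough back-of-the-envelope calculation makes that concrete. The prices below are assumptions, loosely based on us-east-1 list prices around the time of writing, so check the current numbers yourself:

```python
# Compare raw compute cost per GB-second (all prices are assumptions).
LAMBDA_PER_GB_SECOND = 0.0000166667  # USD, Lambda on-demand duration price
T3_MEDIUM_PER_HOUR = 0.0416          # USD, EC2 t3.medium: 2 vCPUs, 4 GB RAM

ec2_per_gb_second = T3_MEDIUM_PER_HOUR / 3600 / 4

print(f"Lambda:          ${LAMBDA_PER_GB_SECOND:.7f} per GB-second")
print(f"EC2 (t3.medium): ${ec2_per_gb_second:.7f} per GB-second")
print(f"Lambda costs roughly {LAMBDA_PER_GB_SECOND / ec2_per_gb_second:.0f}x as much per GB-second")
```

The catch: the EC2 node bills around the clock whether it's busy or not, while Lambda only bills while your code runs, so the comparison only holds if you keep your worker nodes well utilized.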

The price you pay for running K8s usually comes in the form of ops salaries and time.

Plain K8s

In terms of "work done by your employees," a plain K8s deployment is probably the most intensive.

You build your cloud infrastructure on VMs, link them together in VPCs, and deploy and maintain K8s yourself.

You need VMs for your control plane and worker nodes; these have operating systems and K8s installations that need to be kept up to date, and K8s has to be configured 100% manually.

But it's also the most flexible one. You can choose whatever K8s version and plugin you like.

Only go for this solution if the managed K8s services would get in your way.

Managed K8s

Managed K8s is a middle ground. Choose a cloud provider that offers a managed service of your liking and let them take care of your control plane.

The control plane of K8s alone is a whole story in itself, and getting rid of that work frees your ops personnel to do work that is closer to the actual product you're trying to sell.

Go for this solution if you know your workloads. You still provision your worker nodes yourself, so you can save some money compared to the next option, which is about on-demand pricing for worker nodes.

Managed K8s with automatic Worker Node Provisioning

Using automatic worker node provisioning is the most serverless K8s gets. The cloud provider takes care of all the hardware-related stuff. Control plane and worker nodes are provisioned as needed. You define your deployments and pods and are good to go.

Go for this if you don't know what to expect: you know serverless won't cut it, but you don't know how high the load will be.

If you provision worker nodes yourself, you might over- or under-provision. Solutions like AWS Fargate shoulder that risk for you by offering on-demand pricing.
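
To give a sense of how little you define yourself here, this is a hedged boto3 sketch of a Fargate profile for an EKS cluster; the cluster name, role ARN, subnets, and namespace are placeholders.

```python
import boto3

eks = boto3.client("eks")

# Every pod scheduled into the "apps" namespace runs on Fargate capacity
# that AWS provisions on demand; there are no worker nodes to manage.
eks.create_fargate_profile(
    fargateProfileName="on-demand-workers",  # placeholder
    clusterName="my-cluster",                # placeholder
    podExecutionRoleArn="arn:aws:iam::123456789012:role/eks-fargate-pods",  # placeholder
    subnets=["subnet-0abc123", "subnet-0def456"],  # placeholder private subnets
    selectors=[{"namespace": "apps"}],
)
```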

Serverless

If the problems above don't concern you, serverless is the way.

While I'm writing about AWS here most of the time, it's also essential to look at other cloud providers when choosing a serverless stack. They all offer the same categories of services but differ in the details.

Sometimes a problem you have with serverless is specific to AWS and wouldn't be a problem with Cloudflare. The same goes for managed K8s services: while EKS+Fargate is a reliable solution, Google made K8s, so its offerings can be more serverless than what AWS offers.

As I wrote initially, solutions like AWS EKS+Fargate are more serverless than plain K8s, but they still deploy a fair amount of supporting infrastructure around your K8s cluster.

Seeing VPCs, subnets, and internet gateways in a CloudFormation template struck me as an anachronism.

What do you think?


Fullstack Frontend: Switching to a 2 Week Cadence

I've been doing Fullstack Frontend for two months now and have the feeling I can't deliver the quality I want if I write an article every week. That's why I'll switch to writing every two weeks from September on.

But I also added a new format called #cloudsnack where I will post concise know-how about cloud services on social media at least once a week.

If you're interested in that, you can also follow me on my social media accounts.
