Kubernetes Deployment & StatefulSets simply explained🚀

Archit Sharma - Aug 16 '22 - Dev Community

This article will use the same Node with two Pods example as the previous two articles.

Deployment

So everything is now working perfectly, and a user may visit our application via a browser.

Example of Node with two Pods in Kubernetes
What if my-app Pod dies, crashes, or I need to restart it because I built a new container image?🤔

In this case, we would experience downtime, where users would be unable to access our application, which is certainly something we do not want to happen in production.

This is exactly the advantage of Distributed Systems and Containers.
So, rather than depending on just one application Pod, one Database Pod, and so on, we replicate everything across multiple servers: we will have another Node where a replica of our my-app Pod runs, and that replica will also be connected to the Service.

We know that a Service provides a stable IP address with a DNS name, so you don't have to constantly adjust the endpoint when a Pod dies. But a Service is also a load balancer, which means it will also take care of forwarding each request to one of the Pod replicas.
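As a reminder from the previous articles, a minimal Service manifest could look roughly like this; the name, selector label, and ports are placeholders chosen for our my-app example:

```yaml
# Hypothetical Service for the my-app Pods (name, label, and ports are placeholders).
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app          # matches the label on the my-app Pods
  ports:
    - protocol: TCP
      port: 80           # port the Service listens on
      targetPort: 8080   # port the my-app container listens on
```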

Deployments in Kubernetes
To create a second replica of the my-app Pod, you would not define a second Pod yourself; instead, you define a Blueprint for the my-app Pod and specify how many replicas of that Pod you want to run.
This component, or Blueprint, is called a Deployment.

A Deployment is another Kubernetes component, and in practice you will not work with Pods or create Pods directly; you will create Deployments, because there you can specify the number of replicas you need, and you can also scale that number of Pod replicas up or down.
Just as a Pod is a layer of abstraction on top of containers, a Deployment is a layer of abstraction on top of Pods that makes it easier to interact with, duplicate, and configure Pods.
So you will mostly work with Deployments, not with Pods.

If one of your my-app Pod replicas fails, the Service will forward the request to another one, ensuring that your application remains accessible to the user.
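To make this concrete, here is a minimal sketch of what such a Deployment could look like; the names, labels, and container image are assumptions made for our my-app example, not a definitive setup:

```yaml
# Hypothetical Deployment for my-app (image and labels are placeholders).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2                 # how many my-app Pods we want running
  selector:
    matchLabels:
      app: my-app
  template:                   # the Pod blueprint
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0   # placeholder container image
          ports:
            - containerPort: 8080
```

You would apply it with `kubectl apply -f deployment.yaml`, and you could later change the number of replicas with `kubectl scale deployment my-app --replicas=3`.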

StatefulSets

StatefulSets in Kubernetes
You may be wondering: what about the Database Pod? If your Database Pod dies, your application will also become inaccessible, so we need a database replica as well. However, we cannot replicate the database using a Deployment, because a database has state, namely its data. All replicas of the database would need to access the same shared data storage, and that requires some mechanism that manages which Pods are currently writing to that storage and which Pods are reading from it, in order to avoid data inconsistencies. This feature, in addition to the replication feature, is offered by another Kubernetes component called StatefulSet.

StatefulSets are meant specifically for stateful applications like databases, so MySQL, MongoDB, Elasticsearch, and similar applications should be created using a StatefulSet and not a Deployment.

A StatefulSet, just like a Deployment, will take care of replicating the Pods and scaling them up or down, while also making sure that database reads and writes are synchronized so that no data inconsistencies occur.
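For comparison, a minimal StatefulSet sketch could look like this; the database image, storage size, and headless Service name are assumptions for illustration, and a production database would need considerably more configuration:

```yaml
# Hypothetical StatefulSet for a database (image, storage size, and names are placeholders).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-db
spec:
  serviceName: my-db          # headless Service giving each Pod a stable network identity
  replicas: 2
  selector:
    matchLabels:
      app: my-db
  template:
    metadata:
      labels:
        app: my-db
    spec:
      containers:
        - name: my-db
          image: mongo:6.0    # placeholder database image
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: data
              mountPath: /data/db
  volumeClaimTemplates:       # each replica gets its own PersistentVolumeClaim
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

Unlike a Deployment, the StatefulSet gives each Pod a stable name (my-db-0, my-db-1, ...) and its own persistent volume, which is what makes it suitable for stateful workloads.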

One thing I'd like to point out is that deploying a database application using a StatefulSet in a Kubernetes cluster can be tedious, and it is definitely more difficult to work with StatefulSets than with Deployments, which is why it is also common practice to host the database outside of the Kubernetes cluster.

Now we have two replicas of both the my-app Pod and the Database Pod, and they are both load balanced. As a result, our setup is more robust: even if Node 1 fails or reboots, we still have Node 2 with our Pods running on it, and the application remains accessible to the user.

So with this we have learned about most of the core Kubernetes components, and using just these core components you can actually build a pretty powerful Kubernetes cluster.

In the upcoming articles we are going to learn how to create these Kubernetes components.
