Understanding Pods in Kubernetes

Pods

In the VMware world, the atomic unit of deployment is the virtual machine (VM). In the Docker world, it’s the container. Well… in the Kubernetes world, it’s the Pod.

VM, Container and Pods


Pods and containers

It’s true that Kubernetes runs containerized apps. But those containers always run inside of Pods! You cannot run a container directly on a Kubernetes cluster.
However, it’s a bit more complicated than that. The simplest model is to run a single container inside of a Pod, but there are advanced use-cases where you can run multiple containers inside of a single Pod. These multi-container Pods are beyond the scope of this book, but common examples include the following:

  • web containers supported by a helper container that ensures the latest content is available to the web server.
  • web containers with a tightly coupled log scraper that tails the logs off to a logging service somewhere else.
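As a rough sketch, the second pattern might look like the following manifest. The names and images here are purely illustrative, and a real log scraper would ship the tailed lines to an external logging service rather than just tailing them:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-scraper        # hypothetical name
spec:
  volumes:
    - name: logs
      emptyDir: {}              # scratch volume shared by both containers
  containers:
    - name: web
      image: nginx:1.25         # example image
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-scraper
      image: busybox:1.36       # stand-in for a real log-shipping agent
      command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
```

Both containers mount the same volume, so the scraper sees the web server's log files the instant they are written – exactly the kind of tight coupling that justifies co-locating them in one Pod.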
Pods and containers

Pod anatomy

At the highest level, a Pod is a ring-fenced environment to run containers. The Pod itself doesn’t actually run anything; it’s just a sandbox to run containers in. Keeping it high level, you ring-fence an area of the host OS, build a network stack, create a bunch of kernel namespaces, and run one or more containers in it – that’s a Pod.
If you’re running multiple containers in a Pod, they all share the same environment – things like the IPC namespace, shared memory, volumes, network stack etc. As an example, this means that all containers in the same Pod will share the same IP address (the Pod’s IP).

Pod's IP interface.

If those containers need to talk to each other (container-to-container within the Pod) they can use the Pod's localhost interface.
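A minimal sketch of this, again with illustrative names and images: because both containers share the Pod's network namespace, the helper reaches the web server on localhost without needing a Service or even the Pod's IP:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: localhost-demo          # hypothetical name
spec:
  containers:
    - name: web
      image: nginx:1.25         # serves on port 80 inside the Pod
    - name: helper
      # The helper polls the web container over the shared
      # localhost interface every five seconds.
      image: busybox:1.36
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 5; done"]
```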

Pod's localhost interface.

This means that multi-container Pods are ideal when you have requirements for tightly coupled containers – maybe they need to share memory and storage etc. However, if you don’t need to tightly couple your containers, you should put them in their own Pods and loosely couple them over the network.

Tightly coupled pods

Figure shows two tightly coupled containers sharing memory and storage inside a single Pod.

Loosely coupled pods

Figure shows two loosely coupled containers in separate Pods on the same network.

Pods as the atomic unit

Pods are also the minimum unit of scaling in Kubernetes. If you need to scale your app, you do so by adding or removing Pods. You do not scale by adding more of the same containers to an existing Pod! Multi-container Pods are for two complementary containers that need to be intimate – they are not for scaling. The figure below shows how to scale the nginx front-end of an app using multiple Pods as the unit of scaling.

Scaling with pods

The deployment of a Pod is an all-or-nothing job. You never get to a situation where you have a partially deployed Pod servicing requests. The entire Pod either comes up and it’s put into service, or it doesn’t, and it fails. A Pod is never declared as up and available until every part of it is up and running.
A Pod can only exist on a single node. This is true even of multi-container Pods, making them ideal when complementary containers need to be scheduled side-by-side on the same node.

Pod lifecycle

Pods are mortal. They’re born, they live, and they die. If they die unexpectedly, we don’t bother trying to bring them back to life! Instead, Kubernetes starts another one in its place – but it’s not the same Pod, it’s a shiny new one that just happens to look, smell, and feel exactly like the one that just died.
Pods should be treated as cattle – don’t build your Kubernetes apps to be emotionally attached to their Pods so that when one dies you get sad and try to nurse it back to life. Build your apps so that when their Pods fail, a totally new one (with a new ID and IP address) can pop up somewhere else in the cluster and take its place.

Deploying Pods

We normally deploy Pods indirectly as part of something bigger, such as a ReplicaSet or Deployment (more on these later).
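That said, a Pod can be deployed directly from a manifest. A minimal single-container example might look like this (the name, label, and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod               # hypothetical name
  labels:
    app: hello
spec:
  containers:
    - name: web
      image: nginx:1.25         # example image
      ports:
        - containerPort: 80
```

You would post this to the cluster with `kubectl apply -f pod.yml`. A Pod deployed this way is unmanaged, though – if it dies, nothing brings a replacement up – which is exactly why the higher-level objects below exist.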

Deploying Pods via ReplicaSets

Before moving on to talk about Services, we need to give a quick mention to ReplicaSets (rs).
A ReplicaSet is a higher-level Kubernetes object that wraps around a Pod and adds features. As the name suggests, it takes a Pod template and deploys a desired number of replicas of it. It also instantiates a background reconciliation loop that checks to make sure the right number of replicas are always running – desired state vs actual state.
ReplicaSets can be deployed directly. But more often than not, they are deployed indirectly via even higher-level objects such as Deployments.
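A sketch of a ReplicaSet manifest, with illustrative names and images, shows the two key pieces – a replica count and an embedded Pod template:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs                  # hypothetical name
spec:
  replicas: 3                   # desired state: three identical Pods
  selector:
    matchLabels:
      app: web                  # must match the labels in the template
  template:                     # Pod template stamped out per replica
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25     # example image
```

If a Pod dies, the reconciliation loop notices that observed state (2 replicas) no longer matches desired state (3) and stamps out a replacement from the template.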
