I have been working with AWS for a little more than two years. Recently, though, I decided to explore Kubernetes on Google Kubernetes Engine. Kubernetes is an open-source container-orchestration system originally developed by Google. I took advantage of the twelve-month free trial Google offers for Google Cloud Platform products to learn Kubernetes. Before diving in, I did a quick Google search on what makes Kubernetes so popular. The question brought up many different results, and I quickly realized it is one of the most frequently asked questions among newcomers.
Kubernetes has attracted attention across many industries because it is not locked to one particular vendor or platform. You may then ask: Why use Google Cloud Platform instead of another provider? GKE is a fully integrated, mature hosted service with some of the most advanced features currently available in the market. Google's GKE handles production workloads better than some of the competing offerings, such as EKS (AWS) or AKS (Azure). As developers, this gives us the advantage of adopting new features early and making better choices when designing applications and infrastructure.
The Need for Kubernetes
Most organizations have started moving away from big monolithic applications and have instead begun building microservices. Unlike monolithic applications, microservices are smaller components that can be deployed and scaled independently. Microservices help deliver quickly against agile requirements because these components are faster to develop, easier to maintain, loosely coupled, and highly cohesive. Each microservice has a well-focused purpose and can be deployed across multiple servers.
Microservices leverage containerization. Here are some reasons for preferring containers over virtual machines:
Containers on a physical machine share the host's operating system; in contrast, virtual machines each run a distinct operating system on the same hardware.
Compared with virtual machines, containers are conservative, consuming only the resources the application needs.
Docker is the most popular container platform for building, delivering, and running applications. Some benefits of Docker containers are quick startup, a lightweight footprint, easier deployment, portability, process isolation, and the ability to replicate a server in a local development environment. These qualities make Docker containers well suited for microservices.
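As a concrete illustration, a microservice can be packaged with a small Dockerfile. This is a minimal sketch for a hypothetical Python service; the file names, port, and base image are assumptions, not part of the original article.

```dockerfile
# Hypothetical Dockerfile for a small Python microservice.
# Slim base image keeps the container lightweight.
FROM python:3.11-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application code itself.
COPY . .
# The service is assumed to listen on port 8080.
EXPOSE 8080
CMD ["python", "app.py"]
```

Building and running it locally (`docker build -t my-service . && docker run -p 8080:8080 my-service`) reproduces the same environment that will later run in the cluster.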
Microservices architectures are supported by other cloud providers as well. For example, AWS has its own Docker container orchestrator, Amazon ECS.
But what makes Kubernetes so special? With a growing number of microservices, the challenge becomes managing these components and running them efficiently. Kubernetes provides developers with a platform for deploying and running applications. It makes life easier for both developers and operations by automating configuration, scheduling and monitoring applications, handling communication between components, and recovering from failures.
Learning About the Kubernetes Cluster
A Kubernetes cluster is composed of a master node and worker nodes. The master node controls, schedules, and manages the state of the cluster, while the worker nodes run the deployed applications. Figure 1.0 shows a simple representation of a Kubernetes cluster.
Kubernetes manages its containers through pods. According to the Kubernetes documentation, "a pod represents a running process on your cluster. A pod is a group of one or more containers." A notable fact is that the containers of the same pod share an IP address and port space. The cuboids in Figure 1.0 represent the pods, and c1, c2, c3, and c4 represent the containers within them. Pods can be replicated on other nodes as well.
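A multi-container pod like the ones in the figure can be declared with a short manifest. This is a hypothetical sketch; the pod name, labels, and images are illustrative and not taken from the article.

```yaml
# Hypothetical pod running two containers, c1 and c2.
# Both containers share one IP address and port space.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: example        # label used later by a Service selector
spec:
  containers:
  - name: c1
    image: nginx:1.25   # illustrative image
    ports:
    - containerPort: 80
  - name: c2
    image: busybox:1.36
    # Because the containers share a network namespace,
    # c2 can reach c1 at localhost:80.
    command: ["sh", "-c", "sleep 3600"]
```

Applying it with `kubectl apply -f pod.yaml` schedules the pod onto one of the worker nodes.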
Figure 1.0, below, shows a pod with containers c1 and c2 replicated across two worker nodes. Similarly, the pod running container c3 is replicated across three worker nodes. The IP addresses of the pods are not exposed externally. Instead, Services are used to expose a logical set of pods through a cluster IP. Figure 1.0 shows that the pods with containers c1 and c2 are exposed through 'Service A;' the pods with container c3 are exposed through 'Service B;' and the pod with container c4 is exposed through 'Service C'.
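Something like 'Service A' in the figure could be declared as follows. This is a minimal sketch, assuming the pods it selects carry the label `app: example`; the names and ports are illustrative.

```yaml
# Hypothetical Service ("Service A") fronting the replicated pods.
apiVersion: v1
kind: Service
metadata:
  name: service-a
spec:
  selector:
    app: example        # matches the pods' label, wherever they run
  ports:
  - port: 80            # port exposed on the Service's cluster IP
    targetPort: 80      # container port on the selected pods
  type: ClusterIP       # internal cluster IP only; use LoadBalancer
                        # to expose the Service outside the cluster
```

Other pods in the cluster can then reach the set of replicas through the stable Service address rather than any individual pod IP.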