Blog November 1, 2018

Where do I start with Kubernetes on GKE?

CapTech

My Journey Learning Kubernetes

I have been working with AWS for a little more than two years. Recently, though, I decided to explore Kubernetes on Google Kubernetes Engine (GKE). Kubernetes is an open-source container orchestration system originally developed by Google. To learn it, I took advantage of the twelve-month free trial Google offers for Google Cloud Platform. Before I began, I did a quick Google search on what makes Kubernetes so popular. The question brought up many different results, and I quickly realized it is the question newcomers ask most often.

Kubernetes has attracted the attention of many industries because it is not locked to one particular vendor or platform. So why use Google Cloud Platform instead of another provider? GKE is a fully integrated, mature hosted service with some of the most advanced features currently available in the market. GKE handles production workloads better than competing offerings such as EKS (AWS) or AKS (Azure). As developers, this gives us the advantage of adopting new features early and making better choices when designing applications and infrastructure.

The Need for Kubernetes

Most organizations have started moving away from big monolithic applications and instead are building microservices. Unlike monolithic applications, microservices are smaller components that can be deployed and scaled independently. Microservices support agile delivery because these components are faster to develop, easier to maintain, loosely coupled, and highly cohesive. Each microservice has a well-focused purpose and can be deployed to multiple servers.

Microservices leverage containerization. Here are some reasons for preferring containers over virtual machines:

  • Containers on a physical machine share the same operating system; virtual machines, in contrast, run distinct operating systems that share the same hardware.
  • Compared with virtual machines, containers consume only the resources the application needs.

Docker is the most popular container platform for building, delivering, and running applications. Benefits of Docker containers include quick startup, a lightweight footprint, easier deployment, portability, process isolation, and the ability to replicate a server in a local development environment. These qualities make Docker containers well suited for microservices.
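As a quick illustration, the same Nginx image I deploy to Kubernetes later in this post can be run locally with one Docker command (a minimal sketch; the container name and host port are arbitrary):

 # Start Nginx in a detached container, mapping host port 8080 to container port 80
 docker run --name local-nginx -d -p 8080:80 nginx
 # Verify the server responds
 curl http://localhost:8080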

Microservices architectures are supported by other cloud providers as well. For example, AWS has its own Docker container orchestrator, AWS ECS.

But what makes Kubernetes so special? With a growing number of microservices, the challenge is managing these components and running them efficiently. Kubernetes provides developers with a platform for deploying and running applications. It makes life easier for both developers and operations by automating configuration, scheduling and monitoring applications, handling communication between components, and recovering from failures.

Learning About the Kubernetes Cluster

A Kubernetes cluster is composed of a master node and worker nodes. The master node controls, schedules, and manages the state of the cluster, while the worker nodes run the deployed applications. Figure 1.0 shows a simple representation of a Kubernetes cluster.

Kubernetes manages its containers through pods. According to the Kubernetes documentation, "a pod represents a running process on your cluster. A pod is a group of one or more containers." Notably, containers in the same pod share an IP address and port space. The cuboids in Figure 1.0 represent the pods, and c1, c2, c3, and c4 represent the containers within the pods. The pods can be replicated on other nodes as well.
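To make the shared IP and port space concrete, here is a minimal, hypothetical manifest for a pod with two containers; the names and images are illustrative, and the second container reaches the first at localhost because both share the pod's network namespace:

 apiVersion: v1
 kind: Pod
 metadata:
   name: two-container-pod
 spec:
   containers:
   - name: web                # serves on port 80
     image: nginx
     ports:
     - containerPort: 80
   - name: sidecar            # shares the pod's IP and port space with 'web'
     image: busybox
     command: ["sh", "-c", "while true; do wget -qO- http://localhost:80; sleep 30; done"]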

Figure 1.0, below, shows a pod with containers c1 and c2 replicated across two worker nodes. Similarly, the pod running container c3 is replicated across three worker nodes. The IP addresses of the pods are not exposed externally. Instead, services are used to expose a cluster IP: a service exposes a logical set of pods running in the cluster. Figure 1.0 shows that the pods with containers c1 and c2 are exposed through 'Service A'; the pods with container c3 are exposed through 'Service B'; and the pod with container c4 is exposed through 'Service C'.

Kubernetes Cluster
Figure 1.0: Kubernetes Cluster

Creating a Kubernetes Cluster

I chose Minikube to understand how pods and containers work on a single node. Once I understood the workings of a single-node cluster, I decided to look into Google Kubernetes Engine's multi-node Kubernetes clusters. I set up a Google Cloud project and installed the necessary SDKs, the Kubernetes client (kubectl), and the command-line tools.
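If you want to start the same way, a local single-node cluster takes just two commands (a minimal sketch, assuming Minikube and kubectl are already installed):

 minikube start
 kubectl get nodes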

The simplest way to deploy an application is with the command-line tools. I created a cluster using the "gcloud container clusters create" command, which accepts a list of parameters, and deployed the Nginx Docker image to it. The snippet below shows the commands to create the cluster, run the image, list the pods, expose the service, and list the services.

 gcloud container clusters create the-cluster --num-nodes 2 --machine-type n1-standard-2 --zone us-east1-b
 kubectl run the-cluster --image=nginx --port 80 --requests="cpu=200m"
 kubectl get pods
 kubectl expose deployment/the-cluster --type="NodePort" --port 80
 kubectl get services
Listing 1.0: Simplest commands for deploying an app to Kubernetes

Another way of configuring the properties is to use YAML or JSON. Some benefits of using YAML include: configuring properties without limitation; improved readability for both the user and the system; a good data structure; and the ability to store files in a version-control system. One thing I had to be careful about was the compatibility of the resources used in the YAML with the Kubernetes release version. For example, the Ingress resource I use in the YAML below is not available in Kubernetes releases prior to 1.1. The YAML below uses three key kinds of objects for deploying an application: Deployment, Service, and Ingress.

Here are some key terminologies to know before applying configuration using YAML:

  • A ReplicationController is responsible for creating, monitoring, replacing, and running a specified number of pods.
  • A ReplicaSet is responsible for running one or more pods. It is the next-generation replacement for the ReplicationController.
  • LoadBalancer is a type of service that allows connecting to the pods from outside the cluster through the load balancer's public IP.
  • A Deployment is an alternative to the ReplicationController for deploying an application. It is backed by a ReplicaSet, which manages and replicates the pods as needed. The specification includes:
    • Replicas - the number of pods to create during deployment
    • Image - the Docker image to be deployed
    • Template - the instructions for creating Kubernetes pods
  • A Service exposes the pods. It accepts connections on a port and routes each connection to the target port. The IP address of a service doesn't change as long as it is alive. The specification below uses NodePort to expose the pods to clients by reserving a port.
  • An Ingress is an alternative to a LoadBalancer service. An Ingress has a single IP address and maps different services to different paths of the same or different hosts. The GKE Ingress Controller takes care of load balancing the services. In the snippet below, the Ingress exposes the service by mapping my.sample.com to the sample service.

Instructions for Using YAML To Deploy Nginx

Step 1: Start with a clean slate by removing the existing cluster.

 gcloud container clusters delete the-cluster --zone us-east1-b

Step 2: Create nginx-sample.yaml and save it to your local folder.

 apiVersion: apps/v1beta1
 kind: Deployment
 metadata:
   name: sample-deploy
 spec:
   replicas: 1
   template:
     metadata:
       name: sample
       labels:
         app: sample
     spec:
       containers:
       - image: nginx:latest
         name: sample-server
         ports:
         - containerPort: 80
           protocol: TCP
         resources:
           requests:
             memory: "4Gi"
             cpu: "200m"
 ---
 apiVersion: v1
 kind: Service
 metadata:
   name: sample-svc
 spec:
   type: NodePort
   selector:
     app: sample
   ports:
   - port: 80
     targetPort: 80
 ---
 apiVersion: extensions/v1beta1
 kind: Ingress
 metadata:
   name: sample-ing
 spec:
   rules:
   - host: my.sample.com
     http:
       paths:
       - path: /
         backend:
           serviceName: sample-svc
           servicePort: 80
Listing 1.1: nginx-sample.yaml

Step 3: Create the cluster using the command below.

 gcloud container clusters create the-cluster --num-nodes 1 --machine-type n1-standard-2 --zone us-east1-b

Step 4: The nginx-sample.yaml file contains the instructions to create the Deployment, Service, and Ingress. Apply the configuration by file name.

 kubectl apply -f nginx-sample.yaml

Step 5: Use the 'get ingress' command to get the external address (it may take a few minutes for GKE to assign one).

 kubectl get ingress

 NAME         HOSTS           ADDRESS        PORTS   AGE
 sample-ing   my.sample.com   35.241.0.213   80      1m

Step 6: Map the host name to the IP address by adding an entry to the /etc/hosts file. Open /etc/hosts and add the following line:

 35.241.0.213 my.sample.com

Note that this is the address returned by 'kubectl get ingress' in Step 5.

Step 7: Open a browser and enter http://my.sample.com/ to see the Nginx welcome page.

Options for Creating a Stack

Some options I explored for creating a custom stack include Chef, Puppet, Ansible, and Terraform. Chef, Puppet, and Ansible are configuration management tools, while Terraform is an orchestration tool. Throughout my research I used Docker images to deploy the application to Kubernetes, and because the Docker image has everything configured, most configuration management requirements were already handled. The only aspect I was concerned with was orchestration: provisioning the servers that run the application. So I chose Terraform.

Examples of Terraform usage can be found in a GitHub project developed by Artem Starostenko. Below are the Terraform resources, from the Terraform documentation, worth reading about before creating the infrastructure:

  • Create a GKE cluster using "google_container_cluster"
  • Create firewall access to and from the instances using "google_compute_firewall"
  • Manage a node pool resource using "google_container_node_pool"
  • Create a VPC using "google_compute_network" and create a subnet using "google_compute_subnetwork" resource
  • Create DNS service resources "google_dns_managed_zone" and "google_dns_record_set"
  • Create modules that reuse the resources
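As a brief sketch of how the first few resources fit together, here is a hypothetical configuration in the Terraform 0.11-era syntax; the project ID and resource names are placeholders:

 provider "google" {
   project = "my-project-id"    # placeholder project ID
   zone    = "us-east1-b"
 }

 # A minimal GKE cluster; its nodes are managed by the node pool below.
 resource "google_container_cluster" "the_cluster" {
   name               = "the-cluster"
   zone               = "us-east1-b"
   initial_node_count = 1
 }

 # A managed node pool attached to the cluster above.
 resource "google_container_node_pool" "default_pool" {
   name       = "default-pool"
   cluster    = "${google_container_cluster.the_cluster.name}"
   zone       = "us-east1-b"
   node_count = 2

   node_config {
     machine_type = "n1-standard-2"
   }
 }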

Design Decisions

I learned that I have to take a few things into consideration before designing the infrastructure. As a designer, here are some questions I had to answer:

  • Are there benefits of deploying different applications to the same pod?
  • Should pods be scheduled closer or further away?
  • Should pods be scheduled to a specific node or not?
  • Should pods be co-located in the same geographic region and/or availability zone?
  • Should a pod mount a volume that references an external persistent disk?
  • Should different services be mapped to different paths of the same host or different hosts?
  • How should existing and new connections be handled after a SIGTERM signal is received? (See the sketch after this list.)
  • How long should we wait to close inactive connections before shutting down completely?
  • Smaller images work well with Kubernetes. What are the necessary tools that will help in troubleshooting issues when running an image in a container, and still keep the image small?
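The SIGTERM questions above can be answered partly in the pod spec itself: Kubernetes sends SIGTERM, waits up to terminationGracePeriodSeconds, then sends SIGKILL, and a preStop hook can delay shutdown while in-flight connections drain. Here is a minimal, hypothetical snippet; the durations are arbitrary:

 apiVersion: v1
 kind: Pod
 metadata:
   name: graceful-sample
 spec:
   terminationGracePeriodSeconds: 60    # time allowed between SIGTERM and SIGKILL
   containers:
   - name: web
     image: nginx
     lifecycle:
       preStop:
         exec:
           # Pause before shutdown so the load balancer stops routing
           # new connections and existing ones can finish.
           command: ["sh", "-c", "sleep 10"]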

Kubernetes provides many features, and it is up to the designer to choose the right design. It addresses the challenges faced by applications deployed in the cloud. This open-source platform provides great value in managing microservices and running them efficiently, but it comes with a learning curve for both developers and operations.
