ARTH TASK 19

Adithya Gangadhar Shetty
9 min read · Jan 12, 2022


Ansible Role to Configure Kubernetes Multi Node Cluster over AWS Cloud

Task Description :
📌 Ansible Role to Configure K8S Multi Node Cluster over AWS Cloud.
🔅 Create an Ansible playbook to launch 3 AWS EC2 instances.
🔅 Create an Ansible playbook to configure Docker over those instances.
🔅 Create a playbook to configure the K8S master and K8S worker nodes on the above-created EC2 instances using kubeadm.
🔅 Convert the playbooks into roles.

What is Kubernetes?

Kubernetes, also known as K8s, is an open-source container-orchestration system for automating the deployment, scaling, and management of containerized applications. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.

What can Kubernetes do for you?

With modern web services, users expect applications to be available 24/7, and developers expect to deploy new versions of those applications several times a day. Containerization helps package software to serve these goals, enabling applications to be released and updated without downtime. Kubernetes helps you make sure those containerized applications run where and when you want, and helps them find the resources and tools they need to work. Kubernetes is a production-ready, open source platform designed with Google’s accumulated experience in container orchestration, combined with best-of-breed ideas from the community.

Why you need Kubernetes and what it can do

Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn’t it be easier if this behavior was handled by a system?

That’s how Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.

Kubernetes provides you with:

  • Service discovery and load balancing: Kubernetes can expose a container using a DNS name or its own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment is stable.
  • Storage orchestration: Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and more.
  • Automated rollouts and rollbacks: You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all their resources into the new containers.
  • Automatic bin packing: You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. You tell Kubernetes how much CPU and memory (RAM) each container needs, and Kubernetes fits containers onto your nodes to make the best use of your resources.
  • Self-healing: Kubernetes restarts containers that fail, replaces containers, kills containers that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve.
  • Secret and configuration management: Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.

Kubernetes Clusters

Kubernetes coordinates a highly available cluster of computers that are connected to work as a single unit. The abstractions in Kubernetes allow you to deploy containerized applications to a cluster without tying them specifically to individual machines. To make use of this model of deployment, applications need to be packaged in a way that decouples them from individual hosts: they need to be containerized. Containerized applications are more flexible and available than in past deployment models, where applications were installed directly onto specific machines as packages deeply integrated into the host. Kubernetes automates the distribution and scheduling of application containers across the cluster in a more efficient way.

Kubernetes Components

When you deploy Kubernetes, you get a cluster.

A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.

The worker node(s) host the Pods that are the components of the application workload. The control plane manages the worker nodes and the Pods in the cluster. In production environments, the control plane usually runs across multiple computers and a cluster usually runs multiple nodes, providing fault-tolerance and high availability.

Nodes

Kubernetes runs your workload by placing containers into Pods to run on Nodes. A node may be a virtual or physical machine, depending on the cluster. Each node is managed by the control plane and contains the services necessary to run Pods.

Typically you have several nodes in a cluster; in a learning or resource-limited environment, you might have only one node.

What are pods in Kubernetes?

Pods are the smallest, most basic deployable objects in Kubernetes. A Pod represents a single instance of a running process in your cluster. Pods contain one or more containers, such as Docker containers. When a Pod runs multiple containers, the containers are managed as a single entity and share the Pod’s resources.
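
For illustration, here is a minimal Pod manifest (the names and the nginx image are arbitrary examples, not part of this task):

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod            # arbitrary name for the Pod
spec:
  containers:
    - name: web             # a single container inside the Pod
      image: nginx          # image the container runs
      ports:
        - containerPort: 80 # port the container listens on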

Containers

Each container that you run is repeatable; the standardization from having dependencies included means that you get the same behavior wherever you run it.

Containers decouple applications from underlying host infrastructure. This makes deployment easier in different cloud or OS environments.

What is a Multi-Node Kubernetes Cluster?

A Kubernetes cluster is a set of node machines for running containerized applications. If you’re running Kubernetes, you’re running a cluster. At a minimum, a cluster contains a control plane and one or more compute machines, or nodes. The nodes actually run the applications and workloads.

A multi-node Kubernetes cluster is a group of such nodes in which one is the master node and the rest are worker (slave) nodes.

Now, let’s jump into the task:

Step 1: Set up the Ansible configuration file and the inventory. My setup is built upon a dynamic inventory.

For details, see: Automating HAProxy using Ansible by Gursimar Singh (Medium, Mar 2021).

Configuration file ansible.cfg:
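
As a rough sketch, an ansible.cfg for this dynamic-inventory setup can look like the following (the inventory path, remote user, and key path are assumptions for an Amazon Linux setup):

[defaults]
inventory = ./ec2.py                  # dynamic inventory script downloaded below
remote_user = ec2-user                # default user on Amazon Linux AMIs (assumed)
private_key_file = ./testingkey.pem   # key pair used for the instances (assumed path)
host_key_checking = False

[privilege_escalation]
become = True
become_method = sudo
become_user = root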

To set up the dynamic inventory for AWS EC2 instances, download the ec2.py and ec2.ini files to the controller node using the wget command:

$ wget https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.py
$ wget https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.ini

Install boto3, the AWS SDK for Python:

$ pip3 install boto3

Make these 2 files executable:

$ chmod +x ec2.py
$ chmod +x ec2.ini

Export the following environment variables with the values for your AWS account; in my case, I chose the region ap-south-1.
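
These are the standard credential variables read by boto (the values below are placeholders; the region can also be set in ec2.ini):

$ export AWS_ACCESS_KEY_ID='xxxxxxxxxxxxxx'
$ export AWS_SECRET_ACCESS_KEY='xxxxxxxxxxxxxxxxxxxxxxxxxx'
$ export AWS_REGION='ap-south-1'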

Step 2: Create 3 roles using the ansible-galaxy init command, namely:

  • aws_ec2 : To set up 3 AWS EC2 instances for the multi-node setup.
  • k8s_master : To set up the Kubernetes master on one instance.
  • k8s_worker : To set up the Kubernetes workers on the remaining instances.
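
The skeleton for a role is created with the ansible-galaxy init command; for example, for the EC2 role (which the later playbooks reference by the directory name ec2-launch):

$ ansible-galaxy init ec2-launch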

Step 3: Create a playbook in the role aws_ec2 with the corresponding modules to launch 3 AWS EC2 instances. Run this playbook and, after that, run the ./ec2.py command to verify the dynamic inventory setup explained above in step 1.
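
To verify, the inventory script can be queried directly; --list prints all discovered hosts grouped by tags and regions:

$ ./ec2.py --list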

  • Vars file of the playbook:
---
# vars file for ec2-launch
image: "ami-089c6f2e3866f0f14"
instance_type: "t2.micro"
region: "us-east-2"
key: testingkey
vpc_subnet_id: "subnet-2321516f"
security_group_id: "sg-07a58bacace819405"
OS_Names:
  - "K8S_Master"
  - "K8S_Slave1"
  - "K8S_Slave2"
akey: 'xxxxxxxxxxxxxx'
skey: 'xxxxxxxxxxxxxxxxxxxxxxxxxx'

Playbook for the setup:

  • Playbook in the tasks directory of our ec2-launch role:
---
# tasks file for ec2-launch
- name: "launching ec2 instances..."
  ec2:
    image: "{{ image }}"
    instance_type: "{{ instance_type }}"
    region: "{{ region }}"
    key_name: "{{ key }}"
    wait: yes
    count: 1
    state: present
    vpc_subnet_id: "{{ vpc_subnet_id }}"
    group_id: "{{ security_group_id }}"
    aws_access_key: "{{ akey }}"
    aws_secret_key: "{{ skey }}"
    instance_tags:
      Name: "{{ item }}"
  loop: "{{ OS_Names }}"
  • The main playbook ec2_setup.yml:
- hosts: localhost
  roles:
    - role: "/wstask19/ec2-launch"
  • Run the playbook through the role aws_ec2 (a sketch of the commands follows this list):
  • Status at the Web UI after the successful execution of the playbook:
  • Now, let’s check the connectivity.
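
A sketch of those commands (ec2_setup.yml is the main playbook above; ping is Ansible’s standard connectivity-check module):

$ ansible-playbook ec2_setup.yml
$ ansible all -m ping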

Step 4: Setting up the multi-node K8S cluster

  • Create 2 roles, one to configure the K8s master node and one to configure the K8s slave nodes:
$ ansible-galaxy role init k8s-master
$ ansible-galaxy role init k8s-slaves
  • Configuring the K8s master (a sketch of the tasks file follows):
$ vim k8s-master/tasks/main.yml
  • The join token for the slaves will be displayed on the screen by the debug module.
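
A minimal sketch of what k8s-master/tasks/main.yml can contain for a kubeadm-based setup (the yum repo URL, pod CIDR, flannel manifest, and the preflight overrides for t2.micro instances are assumptions; adjust for your distribution):

---
# tasks file for k8s-master (sketch)
- name: "Install docker"
  package:
    name: docker
    state: present

- name: "Configure the Kubernetes yum repository"
  yum_repository:
    name: kubernetes
    description: "Kubernetes repo"
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    gpgcheck: no

- name: "Install kubeadm, kubelet and kubectl"
  package:
    name: [kubeadm, kubelet, kubectl]
    state: present

- name: "Start and enable docker and kubelet"
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop: [docker, kubelet]

- name: "Pull the control-plane images"
  command: kubeadm config images pull

- name: "Initialize the cluster (resource preflight checks relaxed for t2.micro)"
  command: kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU,Mem

- name: "Set up kubeconfig for the admin user"
  shell: |
    mkdir -p $HOME/.kube
    cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

- name: "Install the flannel network add-on"
  command: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

- name: "Generate the join command for the workers"
  command: kubeadm token create --print-join-command
  register: join_command

- name: "Show the join command via the debug module"
  debug:
    msg: "{{ join_command.stdout }}"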
  • Configuring the K8s slaves (a sketch of the tasks file follows):
$ vim k8s-slaves/tasks/main.yml
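
A matching sketch for k8s-slaves/tasks/main.yml, assuming the join command registered on the master is read back through hostvars (that wiring is an assumption, not shown in the original):

---
# tasks file for k8s-slaves (sketch)
- name: "Install docker, kubeadm and kubelet"
  package:
    name: [docker, kubeadm, kubelet]
    state: present

- name: "Start and enable docker and kubelet"
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop: [docker, kubelet]

- name: "Allow bridged traffic through iptables"
  sysctl:
    name: net.bridge.bridge-nf-call-iptables
    value: "1"
    state: present

- name: "Join the cluster using the join command generated on the master"
  command: "{{ hostvars[groups['tag_Name_K8S_Master'][0]]['join_command']['stdout'] }}"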
  • Main Playbook for setting up K8s cluster:
- hosts: ["tag_Name_K8S_Master"]
roles:
- name: "config master node.."
role: "/wstask19/k8s-master"
- hosts: ["tag_Name_K8S_Slave1", "tag_Name_K8S_Slave2"]
roles:
- name: "config slave nodes.."
role: "/wstask19/k8s-slaves"
  • Let’s run the playbook to configure and set up the multi-node cluster.
  • The playbook ran successfully.
  • Now, let’s check the status of the cluster by logging in to our EC2 master node (the checks are sketched after this list).
  • The kubelet service is active and running ($ systemctl status kubelet).
  • Docker is also active and running ($ systemctl status docker).
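
The checks on the master node, sketched as commands (kubectl relies on the kubeconfig created during kubeadm init):

$ kubectl get nodes
$ systemctl status kubelet
$ systemctl status docker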

Let us upload these roles to Ansible Galaxy

  • Creating an SSH key
$ ssh-keygen
  • Read and copy the public SSH key
$ cat <filename>.pub
  • Go to Settings in GitHub and click SSH and GPG keys.
  • Click on Add new and paste the SSH key.
  • Verify the SSH login to GitHub from the shell
$ ssh -T git@github.com
  • Go to GitHub WebUI and create a repository.
  • Initialize the directory and add all the files to the staging area:
$ git init
$ git add ./*/*
$ git status
  • Commit, create a branch, add your remote origin, and finally push your code to the GitHub repository (a sketch of the commands follows). The files will then appear in the repository.
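
A sketch of that sequence (the commit message, branch name, and repository URL are placeholders):

$ git commit -m "ansible roles for k8s multi-node cluster"
$ git branch -M main
$ git remote add origin git@github.com:<username>/<repository>.git
$ git push -u origin main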

Now,

  • Go to your Ansible Galaxy account, choose My Content, and click Add Content.
  • Then choose Import Role from GitHub and select the repository to import.
  • After a moment, your roles will be successfully uploaded.
  • Follow the same steps for all three roles.
  • And that’s it. The roles are successfully uploaded!
