ARTH TASK 19 [📌 Ansible Role to Configure K8S Multi Node Cluster over AWS Cloud. 🔅 Create Ansible Playbook to launch 3 AWS EC2 Instance 🔅 Create Ansible Playbook to configure Docker over those instances. 🔅 Create Playbook to configure K8S Master, K8S Worker Nodes on the above created EC2 Instances using kubeadm. 🔅 Convert Playbook into roles and Upload those role on your Ansible Galaxy. 🔅 Also Upload all the YAML code over your GitHub Repository. 🔅 Create a README.md document using markdown language describing your Task in creative manner. 🔅 Create blog about task and share on your LinkedIN profile.]

 Step 1 : Set up the Ansible configuration file and the inventory. My setup is built upon a dynamic inventory.

For details, see my earlier article: Automating HAProxy using Ansible | by Gursimar Singh | Mar 2021 | Medium

Configuration File ansible.cfg :
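The contents of ansible.cfg appear as a screenshot in the original post; a minimal sketch of what it might contain for this setup (the inventory path, remote user, and private key path are assumptions) is:

```
[defaults]
inventory = ./ec2.py
remote_user = ec2-user
private_key_file = ./testingkey.pem
host_key_checking = False

[privilege_escalation]
become = true
become_method = sudo
become_user = root
```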

To set up the dynamic inventory for AWS EC2 instances, download the ec2.py and ec2.ini files to the controller node using the wget command:

$ wget https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.py
$ wget https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.ini

Install boto3, the AWS SDK for Python:

$ pip3 install boto3

Make these 2 files executable:

$ chmod +x ec2.py
$ chmod +x ec2.ini

Export the following environment variables with the values for your particular AWS account. In my case, I have chosen the region ap-south-1.
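These are the standard credential environment variables that boto3 (and therefore ec2.py) reads; the values below are placeholders, not real keys:

```shell
# Placeholder IAM credentials -- substitute your own values
export AWS_ACCESS_KEY_ID='XXXXXXXXXXXXXXXXXXXX'
export AWS_SECRET_ACCESS_KEY='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
export AWS_DEFAULT_REGION='ap-south-1'
```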

Step 2 : Create 3 roles using the ansible-galaxy init command, namely:

  • aws_ec2 : to launch 3 AWS EC2 instances for the multi-node setup.
  • k8s_master : to configure the Kubernetes master on one instance.
  • k8s_worker : to configure the Kubernetes workers on the other instances.

Step 3 : Create a playbook in the role aws_ec2 with the modules needed to launch 3 AWS EC2 instances. Run the playbook, then run the ./ec2.py command to verify the dynamic inventory set up in Step 1.

  • Vars file of playbook :
---
# vars file for ec2-launch
image: "ami-089c6f2e3866f0f14"
instance_type: "t2.micro"
region: "us-east-2"
key: testingkey
vpc_subnet_id: "subnet-2321516f"
security_group_id: "sg-07a58bacace819405"
OS_Names:
- "K8S_Master"
- "K8S_Slave1"
- "K8S_Slave2"
akey: 'xxxxxxxxxxxxxx'
skey: 'xxxxxxxxxxxxxxxxxxxxxxxxxx'

Playbook for setup :

  • Playbook in the tasks directory of our ec2-launch role.
---
# tasks file for ec2-launch
- name: "launching ec2 instances..."
  ec2:
    image: "{{ image }}"
    instance_type: "{{ instance_type }}"
    region: "{{ region }}"
    key_name: "{{ key }}"
    wait: yes
    count: 1
    state: present
    vpc_subnet_id: "{{ vpc_subnet_id }}"
    group_id: "{{ security_group_id }}"
    aws_access_key: "{{ akey }}"
    aws_secret_key: "{{ skey }}"
    instance_tags:
      Name: "{{ item }}"
  loop: "{{ OS_Names }}"
  • The main playbook ec2_setup.yml :
- hosts: localhost
  roles:
    - role: "/wstask19/ec2-launch"
  • Run the playbook through the role aws_ec2 :
  • Status at Web UI after the successful execution of the playbook :
  • Now, let’s check the connectivity
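On the controller node, the run and connectivity check come down to a few commands (the ping check assumes the key-based SSH login configured in ansible.cfg):

```
$ ansible-playbook ec2_setup.yml   # launch the three instances
$ ./ec2.py --list                  # the dynamic inventory should now list them
$ ansible all -m ping              # connectivity check against the new nodes
```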

Step 4 : Setting up the Multi-Node K8S cluster

  • Create 2 roles, one to configure the K8s master node and one to configure the K8s slave nodes:
$ ansible-galaxy role init k8s-master
$ ansible-galaxy role init k8s-slaves
  • Configuring k8s master
$ vim k8s-master/tasks/main.yml
  • The join token for the slave will be displayed on the screen by the debug module.
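The master tasks file itself appears as a screenshot in the original post; as an illustrative sketch (package names assume an RHEL/CentOS-style AMI, and the pod CIDR and Flannel manifest URL are assumptions), the configuration could look roughly like:

```
---
# tasks file for k8s-master (illustrative sketch, not the exact original)
- name: "Install docker, kubeadm, kubelet and kubectl"
  package:
    name: ["docker", "kubeadm", "kubelet", "kubectl"]
    state: present

- name: "Start and enable docker and kubelet"
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop: ["docker", "kubelet"]

- name: "Initialize the control plane"
  command: kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all

- name: "Copy the admin kubeconfig for kubectl"
  shell: |
    mkdir -p $HOME/.kube
    cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

- name: "Install the Flannel CNI plugin"
  command: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

- name: "Generate the join command for the slaves"
  command: kubeadm token create --print-join-command
  register: join_cmd

- name: "Display the join token"
  debug:
    msg: "{{ join_cmd.stdout }}"
```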
  • Configuring K8s Slaves
$ vim k8s-slaves/tasks/main.yml
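Again, the screenshot is omitted here; a matching sketch for the slaves (the join_command variable is hypothetical -- in practice you would pass in the join command printed by the master role):

```
---
# tasks file for k8s-slaves (illustrative sketch, not the exact original)
- name: "Install docker, kubeadm and kubelet"
  package:
    name: ["docker", "kubeadm", "kubelet"]
    state: present

- name: "Start and enable docker and kubelet"
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop: ["docker", "kubelet"]

- name: "Join the cluster"
  command: "{{ join_command }}"  # e.g. kubeadm join <master_ip>:6443 --token ... --discovery-token-ca-cert-hash ...
```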
  • Main Playbook for setting up K8s cluster:
- hosts: ["tag_Name_K8S_Master"]
  roles:
    - name: "config master node.."
      role: "/wstask19/k8s-master"

- hosts: ["tag_Name_K8S_Slave1", "tag_Name_K8S_Slave2"]
  roles:
    - name: "config slave nodes.."
      role: "/wstask19/k8s-slaves"
  • Let’s run the playbook to configure and set up the multi-node cluster.
  • The playbook ran successfully.
  • Now, let’s check the status of the cluster by logging in to the EC2 master node.
  • The kubelet service is active and running. ($ systemctl status kubelet)
  • Docker is also active and running. ($ systemctl status docker)
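Logged in to the master node, the checks above look like:

```
$ sudo systemctl status kubelet    # should report active (running)
$ sudo systemctl status docker     # should report active (running)
$ kubectl get nodes                # all three nodes should eventually show Ready
```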

Let us upload these roles to Ansible Galaxy. Since Galaxy imports roles from a GitHub repository, we first push the code to GitHub.

  • Creating SSH key
$ ssh-keygen
  • Read and Copy the SSH key
$ cat <filename>.pub
  • Go to Settings in GitHub and click SSH and GPG keys.
  • Click New SSH key and paste the public key.
  • Verify the SSH connection to GitHub:
$ ssh -T git@github.com
  • Go to GitHub WebUI and create a repository.
  • Initialize the directory and add all the files to the staging area
$ git init
$ git add ./*/*
$ git status
  • Commit, create a branch, add your remote origin, and finally push your code to the GitHub repository. The files will be added to the repository.
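Once the repository is on GitHub, the roles can be imported into Ansible Galaxy. A sketch with the Galaxy CLI (the username and repository name are placeholders; on older Ansible versions the subcommand is ansible-galaxy import):

```
$ ansible-galaxy role import <github_user> <github_repo>
```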
