.. description: Deploying a kubernetes cluster locally on KVM.
.. type: text
I wanted to explore *kubernetes* some more, both for myself and for this blog. I've worked on pieces of this at work, but not the whole picture, which I would like to understand for myself. I also wanted to explore new tools and new ways to leverage the power of *kubernetes*.
So far, I have been using *minikube* for my deployments, but there is an inherent restriction that comes with using a single bundled node. Sure, it is easy to get it up and running, but at some point I had to use a ``NodePort`` service to get around the IP restriction. That restriction exists in an actual *kubernetes* cluster as well, and I will show you later how to get around it. For now, let's just get a local cluster up and running.
.. TEASER_END
Objective
=========
I needed a local *kubernetes* cluster, built entirely on open source tools and easy to deploy. So I went with *KVM* as the hypervisor layer and installed ``virt-manager`` for basic VM management. As an OS, I wanted something light and made for *kubernetes*. Since I already know of *Rancher* (an easy way to deploy *kubernetes*, and they have done a great job since the launch of Rancher 2.0), I decided to try *RancherOS*. So let's see how all of that works together.
Requirements
============
Let's start by thinking about what we actually need. *Rancher*, the dashboard and manager they offer, is going to need a VM of its own, and they `recommend <https://rancher.com/docs/rancher/v2.x/en/quick-start-guide/deployment/quickstart-vagrant/>`_ *4GB of RAM* for it. I only have *16GB of RAM* on my machine, so I have to do the math to see how much I can afford to give this *dashboard* and *manager*. Looking at the *RancherOS* hardware `requirements <https://rancher.com/docs/os/v1.x/en/>`_, I can tell that by giving each node *2GB* of RAM I should be able to host a *3 node cluster*, and with *2GB* more for the *dashboard* that puts me right at *8GB of RAM*. So we need to create *4 VMs* with *2GB of RAM* each.
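If you prefer the command line over clicking through the ``virt-manager`` wizard four times, something along these lines should do it; the ISO path, disk size, and network name are assumptions based on a default *libvirt* setup, so adjust them to yours.

.. code:: bash

    #!/bin/bash
    # Create 4 VMs: one for the Rancher dashboard and three cluster nodes.
    # Assumes the RancherOS ISO sits in ~/isos and the default libvirt
    # NAT network is in use.
    for name in rancher node1 node2 node3; do
        virt-install \
            --name "${name}" \
            --memory 2048 \
            --vcpus 2 \
            --cdrom ~/isos/rancheros.iso \
            --disk size=20 \
            --os-variant generic \
            --network network=default \
            --noautoconsole
    done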
Installing RancherOS
====================
Once all *4 VMs* have been created, boot each one into the *RancherOS* `ISO <https://rancher.com/docs/os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso/>`_ and do the following.
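On the live *ISO* you land in a shell as the default ``rancher`` user. To copy the configuration file over from your workstation, give that user a password first; the IP below is a placeholder for whatever address the VM got from DHCP.

.. code:: bash

    # On the RancherOS live ISO: set a password for the default rancher
    # user so that ssh/scp from the workstation works.
    sudo passwd rancher

    # From the workstation: copy the cloud-config over (placeholder IP).
    scp cloud-config.yml rancher@192.168.122.111:~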
If you read the *RancherOS* `documentation <https://rancher.com/docs/os/v1.x/en/>`_, you'll find that you can configure the *OS* with a ``YAML`` configuration file, so let's do that.
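Here is a minimal ``cloud-config.yml`` sketch for the first node. The static addresses assume the default *libvirt* network (``192.168.122.0/24``) and the key is a placeholder, so change both to match your setup.

.. code:: yaml

    #cloud-config
    hostname: node1
    ssh_authorized_keys:
      # Replace with your own public ssh key.
      - ssh-rsa AAAAB3NzaC1yc2E...REPLACE_ME user@workstation
    rancher:
      network:
        dns:
          nameservers:
            - 192.168.122.1
        interfaces:
          eth0:
            address: 192.168.122.111/24
            gateway: 192.168.122.1
            dhcp: false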
Make sure that your **public** *ssh key* replaces the placeholder in the example above, and if you have a different network configuration for your VMs, adjust the network section accordingly.
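With the file on the box, the installation to disk is a single command.

.. code:: bash

    # Install RancherOS to disk using the config above.
    # The disk is usually /dev/sda, but check with `lsblk` first.
    sudo ros install -c cloud-config.yml -d /dev/sda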
After *RancherOS* has been installed on all the nodes, you will need to configure ``/etc/hosts``; it should look like the following if you are working from the *Rancher* box.
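Something along these lines, assuming the hostnames and static addresses used in the earlier example.

.. code:: text

    127.0.0.1        localhost
    192.168.122.110  rancher
    192.168.122.111  node1
    192.168.122.112  node2
    192.168.122.113  node3

The *dashboard* VM also needs the *Rancher* server itself, which ships as a single container; per the *Rancher* quick-start it boils down to one ``docker run``.

.. code:: bash

    # Start the Rancher 2.x server on the dashboard VM.
    sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher

Once it is up, browse to ``https://rancher``, set the admin password, and add a new **Custom** cluster; that is where the next steps happen.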
Optionally, you can choose your **Network Provider**; in my case I chose **Calico**. Then I clicked on **show advanced** at the bottom right corner and expanded the newly shown tab, **Advanced Cluster Options**.
We will disable the **Nginx Ingress** and the **Pod Security Policy Support** for the time being; why will, hopefully, become apparent in a future post. Then hit **Next**.
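For what it's worth, if you prefer the **Edit as YAML** view, the same two toggles live roughly here in the cluster YAML; treat this as a sketch of the shape, not a full config.

.. code:: yaml

    # Sketch: the RKE options behind the two UI toggles.
    rancher_kubernetes_engine_config:
      ingress:
        provider: none               # disables the Nginx Ingress
      services:
        kube-api:
          pod_security_policy: false # disables PSP support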
Make sure that you select all **3 Node Roles**. Set the **Public Address** and the **Node Name** to match the first node, then copy the command and paste it on the *first* node.
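The generated command is just a ``docker run`` of the *rancher-agent* image. It looks roughly like the one below, but the image tag, server URL, token, and checksum are unique to your installation, so always copy the real one from the UI.

.. code:: bash

    # Shape of the command Rancher generates (all values are placeholders).
    sudo docker run -d --privileged --restart=unless-stopped --net=host \
        -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
        rancher/rancher-agent:<TAG> \
        --server https://rancher \
        --token <REGISTRATION_TOKEN> \
        --ca-checksum <CA_CHECKSUM> \
        --address 192.168.122.111 \
        --node-name node1 \
        --etcd --controlplane --worker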
Do the same for *all the rest*. Once the first docker image gets downloaded and run, you should see a message pop up at the bottom.
Do **NOT** click *done* until you see all *3 nodes registered*.
Finalizing
==========
Now that you have *3 registered nodes*, click **Done** and go grab yourself a cup of coffee. Maybe take a long walk; this will take time. Or, if you are curious like me, you'll be looking at the logs and checking the containers in a quad pane ``tmux`` session.
After a long time has passed, our story ends with a refresh and a welcoming cluster overview page.
At this point, you can check that all the nodes are healthy, as shown below, and you got yourself a *kubernetes* cluster. In future blog posts, we will explore a way to deploy *multiple ingress controllers* on the same cluster, all on the same ``port: 80``, by giving each of them an IP external to the cluster.
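*Rancher* lets you download a *kubeconfig* for the cluster from the UI; with that saved locally, a quick sanity check from the workstation looks like this (the file name is whatever you saved it as).

.. code:: bash

    # Point kubectl at the kubeconfig downloaded from the Rancher UI.
    export KUBECONFIG=~/Downloads/kube_config.yml
    # All three nodes should report Ready with the roles
    # controlplane,etcd,worker.
    kubectl get nodes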
But for now, you got yourself a kubernetes cluster to play with. Enjoy.