#+BEGIN_COMMENT
.. title: Minikube Setup
.. date: 2019-02-09
.. updated: 2019-07-02
.. status: published
.. tags: minikube, kubernetes, ingress, ingress-controller
.. category: kubernetes
.. slug: minikube-setup
.. authors: Elia el Lazkani
.. description: A quick and dirty minikube setup.
.. type: text
#+END_COMMENT

If you have ever worked with /kubernetes/, you know that /minikube/ out of the box does not give you everything you need for a quick setup. Sure, you can run =minikube start=, everything's up... Great... =kubectl get pods -n kube-system=... It works, let's move on...

But what if it's not "let's move on to something else"? We should treat this as a local test environment and explore its capabilities. We can learn a lot from it before applying anything to the lab. But, as always, there are a few tweaks we need to perform to give it the magic it needs to be a real environment.

{{{TEASER_END}}}

* Prerequisites

If you are looking into /kubernetes/, I assume you already know your linux ABCs and can install and configure /minikube/ and its prerequisites before beginning this tutorial.

You can find the guide to install and configure /minikube/ on the /minikube/ [[https://kubernetes.io/docs/setup/minikube/][webpage]].

In any case, make sure you have /minikube/ and /kubectl/ installed, along with whatever driver dependencies you need to run it under that driver. In my case, I am using /kvm2/, which will be reflected in the commands given to start /minikube/.

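A quick sanity check before we start, just to confirm both tools are installed and on your =PATH=:

#+BEGIN_EXAMPLE
$ minikube version
$ kubectl version --client
#+END_EXAMPLE
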
* Starting /minikube/

Let's start /minikube/.

#+BEGIN_EXAMPLE
$ minikube start --vm-driver=kvm2
Starting local Kubernetes v1.13.2 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Stopping extra container runtimes...
Starting cluster components...
Verifying apiserver health ...
Kubectl is now configured to use the cluster.
Loading cached images from config file.

Everything looks great. Please enjoy minikube!
#+END_EXAMPLE

Great... At this point we have a running cluster. Let's verify. In my case, since I'm running under /kvm2/, I can check with =virsh=; if you used /VirtualBox/, you can check there instead.

#+BEGIN_EXAMPLE
$ virsh list
 Id    Name                           State
----------------------------------------------------
 3     minikube                       running
#+END_EXAMPLE

We can also test with =kubectl=.

#+BEGIN_EXAMPLE
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:08:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:28:14Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
#+END_EXAMPLE

Now what? Well, now we deploy a few add-ons that we would also need to deploy in production for a functioning /kubernetes/ cluster.

Let's check the list of add-ons available out of the box.

#+BEGIN_EXAMPLE
$ minikube addons list
- addon-manager: enabled
- dashboard: enabled
- default-storageclass: enabled
- efk: disabled
- freshpod: disabled
- gvisor: disabled
- heapster: enabled
- ingress: enabled
- kube-dns: disabled
- metrics-server: enabled
- nvidia-driver-installer: disabled
- nvidia-gpu-device-plugin: disabled
- registry: disabled
- registry-creds: disabled
- storage-provisioner: enabled
- storage-provisioner-gluster: disabled
#+END_EXAMPLE

Make sure you have /dashboard/, /heapster/, /ingress/ and /metrics-server/ *enabled*. You can enable add-ons with =minikube addons enable=.

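For example, if /ingress/ shows up as disabled in your list, you can turn it on like this:

#+BEGIN_EXAMPLE
$ minikube addons enable ingress
#+END_EXAMPLE
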
* What's the problem then?

Here's the problem that comes next: how do you access the dashboard, or anything else running in the cluster? Everyone online suggests you proxy a port and access the dashboard that way. Is that really how it should work? Is that how production systems do it?

The answer is, of course, not. They use the different types of /ingresses/ at their disposal. In this case, /minikube/ was kind enough to provide one for us: the default /kubernetes ingress controller/, a solid option that's good enough for production use. Fine, a lot of babble. Yes, sure, but this babble is important. So how do we access stuff on a cluster?

To answer that question, we need to understand a few things. Yes, you can use a =NodePort= on your service and access it that way, as sketched below. But do you really want to manage all those ports? Which are in use and which are not? Besides, wouldn't it be better if you could use one port for all of the services? How, you may ask?

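To make the trade-off concrete, here is a minimal sketch of what exposing a service over a =NodePort= looks like; the service name and the port =30080= are made up for illustration. Every such service needs its own port picked from the node port range, and you get to keep track of all of them.

#+BEGIN_SRC yaml
---
apiVersion: v1
kind: Service
metadata:
  # Hypothetical service, for illustration only.
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80          # port on the ClusterIP
      targetPort: 8080  # port the pod listens on
      nodePort: 30080   # the port you have to pick and track on every node
#+END_SRC
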
We've been doing it for years, and by /we/ I mean /ops/ and /devops/ people. You have to understand that the kubernetes ingress controller is simply an /nginx/ under the covers. We've always been able to configure /nginx/ to listen for a specific /hostname/ and redirect it where we want to. It shouldn't be that hard to do, right?

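For the sake of the argument, this is roughly the kind of /nginx/ virtual host we used to write by hand; the hostname and the upstream address are made up for illustration.

#+BEGIN_SRC conf
server {
    listen 80;
    # Only answer requests for this hostname...
    server_name dashboard.example.com;

    location / {
        # ...and hand them to the backend hiding behind nginx.
        proxy_pass http://10.0.0.42:8080;
        proxy_set_header Host $host;
    }
}
#+END_SRC
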
Well, this is exactly what an ingress controller does. It uses the default ports to route traffic from the outside to the right service, according to the hostname called. Let's look at our cluster and see what we need.

#+BEGIN_EXAMPLE
$ kubectl get services --all-namespaces
NAMESPACE     NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
default       kubernetes             ClusterIP   10.96.0.1        <none>        443/TCP             17m
kube-system   default-http-backend   NodePort    10.96.77.15      <none>        80:30001/TCP        17m
kube-system   heapster               ClusterIP   10.100.193.109   <none>        80/TCP              17m
kube-system   kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP       17m
kube-system   kubernetes-dashboard   ClusterIP   10.106.156.91    <none>        80/TCP              17m
kube-system   metrics-server         ClusterIP   10.103.137.86    <none>        443/TCP             17m
kube-system   monitoring-grafana     NodePort    10.109.127.87    <none>        80:30002/TCP        17m
kube-system   monitoring-influxdb    ClusterIP   10.106.174.177   <none>        8083/TCP,8086/TCP   17m
#+END_EXAMPLE

In my case, you can see that a few services are in =NodePort= configuration, and those you can access on their respective ports. But the /kubernetes-dashboard/ is a =ClusterIP= and we can't get to it. So let's change that by adding an ingress to the service.

* Ingress

An ingress is an object of kind =Ingress= that configures the ingress controller of your choice.

#+BEGIN_SRC yaml
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: dashboard.kube.local
      http:
        paths:
          - path: /
            backend:
              serviceName: kubernetes-dashboard
              servicePort: 80
#+END_SRC

Save that to a file called =kube-dashboard-ingress.yaml= or something, then run:

#+BEGIN_EXAMPLE
$ kubectl apply -f kube-dashboard-ingress.yaml
ingress.extensions/kubernetes-dashboard created
#+END_EXAMPLE

And now we get this:

#+BEGIN_EXAMPLE
$ kubectl get ingress --all-namespaces
NAMESPACE     NAME                   HOSTS                  ADDRESS   PORTS   AGE
kube-system   kubernetes-dashboard   dashboard.kube.local             80      17s
#+END_EXAMPLE

Now all we need to know is the IP of our /kubernetes/ cluster of /one/. Don't worry, /minikube/ makes it easy for us.

#+BEGIN_EXAMPLE
$ minikube ip
192.168.39.79
#+END_EXAMPLE

Now let's add that host to our =/etc/hosts= file.

#+BEGIN_EXAMPLE
192.168.39.79   dashboard.kube.local
#+END_EXAMPLE

Now, if you go to [[http://dashboard.kube.local]] in your browser, you will be welcomed with the dashboard. How is that so? Well, as I explained, the ingress controller routes on the hostname: point the proper hostname at the nodes of the cluster and it works.

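You can see the same mechanism without touching =/etc/hosts= at all: ask for the cluster IP directly but present the right =Host= header, and the ingress controller routes you just the same. The IP below is the one =minikube ip= returned for me; yours will differ.

#+BEGIN_EXAMPLE
$ curl --header "Host: dashboard.kube.local" http://192.168.39.79/
#+END_EXAMPLE
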
You can deploy multiple services that can be accessed this way. You can also integrate this with a service mesh or a service discovery tool, which could find the up and running nodes and keep pointing you to them at all times. But this is the clean way to expose services outside the cluster.

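As a sketch of that "multiple services" point, here is the same pattern applied to the /monitoring-grafana/ service from the table above. The hostname =grafana.kube.local= is one I just made up, and it would need its own =/etc/hosts= entry; grafana is already reachable through its =NodePort=, but this gives it a proper hostname on port 80 like everything else.

#+BEGIN_SRC yaml
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: monitoring-grafana
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    # Hypothetical hostname; add it to /etc/hosts like the dashboard one.
    - host: grafana.kube.local
      http:
        paths:
          - path: /
            backend:
              serviceName: monitoring-grafana
              servicePort: 80
#+END_SRC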