+++
title = "Your First Minikube Helm Deployment"
author = ["Elia el Lazkani"]
date = 2019-02-10
lastmod = 2019-06-21
tags = ["minikube", "ingress", "helm", "prometheus", "grafana"]
categories = ["kubernetes"]
draft = false
+++

In the last post, we configured a basic minikube cluster. In this post, we will deploy a few items we will need in the cluster, and maybe experiment with them a bit in future posts.

## Prerequisite

During this post, and probably during future posts, we will be using helm to deploy to our minikube cluster. Some charts are offered by the helm team, others by the community, and maybe we will write our own. Either way, we need helm installed on our machine. It should be as easy as downloading the binary, but if you can find it in your package manager, go that route.
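As a rough sketch, either of the following should do the trick; the package name and the installer script URL are assumptions from memory, so check your distribution's repositories and the helm documentation if they do not match.

```shell
# Option 1: the package manager route (example: Arch Linux)
sudo pacman -S helm

# Option 2: the upstream installer script (URL assumed, verify against the helm docs)
curl -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get
chmod +x get_helm.sh
./get_helm.sh

# Either way, make sure the client answers
helm version --client
```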

## Deploying Tiller

Before we can start deploying with helm, we need to deploy tiller. It's the server-side component of helm that the client talks to, and it manages our deployments inside the cluster.

```
 $ helm init --history-max=10
 Creating ~/.helm
 Creating ~/.helm/repository
 Creating ~/.helm/repository/cache
 Creating ~/.helm/repository/local
 Creating ~/.helm/plugins
 Creating ~/.helm/starters
 Creating ~/.helm/cache/archive
 Creating ~/.helm/repository/repositories.yaml
 Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
 Adding local repo with URL: http://127.0.0.1:8879/charts
 $HELM_HOME has been configured at ~/.helm.

 Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

 Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
 To prevent this, run `helm init` with the --tiller-tls-verify flag.
 For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
```

Tiller is now deployed; give it a few minutes for its pod to come up.
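If you want to watch that happen, the tiller pod lands in the kube-system namespace; the label selector below is an assumption based on how helm usually tags its tiller-deploy pod, so adjust it if nothing shows up.

```shell
# Wait until the tiller pod reports Running (labels are an assumption)
kubectl get pods -n kube-system -l app=helm,name=tiller
```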

## Deploy Prometheus

We often need an easy way to monitor multiple aspects of the cluster, and sometimes we even want our applications to (let's say) publish metrics to prometheus. I said 'let's say' because, technically, our application only exposes an endpoint with its metrics; the prometheus server scrapes that endpoint regularly, so prometheus pulls, we don't push. Anyway, let's deploy prometheus.

```
 $ helm install stable/prometheus-operator --name prometheus-operator --namespace kube-prometheus
 NAME:   prometheus-operator
 LAST DEPLOYED: Sat Feb  9 18:09:43 2019
 NAMESPACE: kube-prometheus
 STATUS: DEPLOYED

 RESOURCES:
 ==> v1/Secret
 NAME                                           TYPE    DATA  AGE
 prometheus-operator-grafana                    Opaque  3     4s
 alertmanager-prometheus-operator-alertmanager  Opaque  1     4s

 ==> v1beta1/ClusterRole
 NAME                                              AGE
 prometheus-operator-kube-state-metrics            3s
 psp-prometheus-operator-kube-state-metrics        3s
 psp-prometheus-operator-prometheus-node-exporter  3s

 ==> v1/Service
 NAME                                          TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)    AGE
 prometheus-operator-grafana                   ClusterIP  10.107.125.114         80/TCP     3s
 prometheus-operator-kube-state-metrics        ClusterIP  10.99.250.30           8080/TCP   3s
 prometheus-operator-prometheus-node-exporter  ClusterIP  10.111.99.199          9100/TCP   3s
 prometheus-operator-alertmanager              ClusterIP  10.96.49.73            9093/TCP   3s
 prometheus-operator-coredns                   ClusterIP  None                   9153/TCP   3s
 prometheus-operator-kube-controller-manager   ClusterIP  None                   10252/TCP  3s
 prometheus-operator-kube-etcd                 ClusterIP  None                   4001/TCP   3s
 prometheus-operator-kube-scheduler            ClusterIP  None                   10251/TCP  3s
 prometheus-operator-operator                  ClusterIP  10.101.253.101         8080/TCP   3s
 prometheus-operator-prometheus                ClusterIP  10.107.117.120         9090/TCP   3s

 ==> v1beta1/DaemonSet
 NAME                                          DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR  AGE
 prometheus-operator-prometheus-node-exporter  1        1        0      1           0                   3s

 ==> v1/Deployment
 NAME                          DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
 prometheus-operator-operator  1        1        1           0          3s

 ==> v1/ServiceMonitor
 NAME                                         AGE
 prometheus-operator-alertmanager             2s
 prometheus-operator-coredns                  2s
 prometheus-operator-apiserver                2s
 prometheus-operator-kube-controller-manager  2s
 prometheus-operator-kube-etcd                2s
 prometheus-operator-kube-scheduler           2s
 prometheus-operator-kube-state-metrics       2s
 prometheus-operator-kubelet                  2s
 prometheus-operator-node-exporter            2s
 prometheus-operator-operator                 2s
 prometheus-operator-prometheus               2s

 ==> v1/Pod(related)
 NAME                                                     READY  STATUS             RESTARTS  AGE
 prometheus-operator-prometheus-node-exporter-fntpx       0/1    ContainerCreating  0         3s
 prometheus-operator-grafana-8559d7df44-vrm8d             0/3    ContainerCreating  0         2s
 prometheus-operator-kube-state-metrics-7769f5bd54-6znvh  0/1    ContainerCreating  0         2s
 prometheus-operator-operator-7967865bf5-cbd6r            0/1    ContainerCreating  0         2s

 ==> v1beta1/PodSecurityPolicy
 NAME                                          PRIV   CAPS      SELINUX           RUNASUSER  FSGROUP    SUPGROUP  READONLYROOTFS  VOLUMES
 prometheus-operator-grafana                   false  RunAsAny  RunAsAny          RunAsAny   RunAsAny   false     configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
 prometheus-operator-kube-state-metrics        false  RunAsAny  MustRunAsNonRoot  MustRunAs  MustRunAs  false     secret
 prometheus-operator-prometheus-node-exporter  false  RunAsAny  RunAsAny          MustRunAs  MustRunAs  false     configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim,hostPath
 prometheus-operator-alertmanager              false  RunAsAny  RunAsAny          MustRunAs  MustRunAs  false     configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
 prometheus-operator-operator                  false  RunAsAny  RunAsAny          MustRunAs  MustRunAs  false     configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
 prometheus-operator-prometheus                false  RunAsAny  RunAsAny          MustRunAs  MustRunAs  false     configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim

 ==> v1/ConfigMap
 NAME                                           DATA  AGE
 prometheus-operator-grafana-config-dashboards  1     4s
 prometheus-operator-grafana                    1     4s
 prometheus-operator-grafana-datasource         1     4s
 prometheus-operator-etcd                       1     4s
 prometheus-operator-grafana-coredns-k8s        1     4s
 prometheus-operator-k8s-cluster-rsrc-use       1     4s
 prometheus-operator-k8s-node-rsrc-use          1     4s
 prometheus-operator-k8s-resources-cluster      1     4s
 prometheus-operator-k8s-resources-namespace    1     4s
 prometheus-operator-k8s-resources-pod          1     4s
 prometheus-operator-nodes                      1     4s
 prometheus-operator-persistentvolumesusage     1     4s
 prometheus-operator-pods                       1     4s
 prometheus-operator-statefulset                1     4s

 ==> v1/ClusterRoleBinding
 NAME                                            AGE
 prometheus-operator-grafana-clusterrolebinding  3s
 prometheus-operator-alertmanager                3s
 prometheus-operator-operator                    3s
 prometheus-operator-operator-psp                3s
 prometheus-operator-prometheus                  3s
 prometheus-operator-prometheus-psp              3s

 ==> v1beta1/Role
 NAME                         AGE
 prometheus-operator-grafana  3s

 ==> v1beta1/Deployment
 NAME                                    DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
 prometheus-operator-kube-state-metrics  1        1        1           0          3s

 ==> v1/Alertmanager
 NAME                              AGE
 prometheus-operator-alertmanager  3s

 ==> v1/ServiceAccount
 NAME                                          SECRETS  AGE
 prometheus-operator-grafana                   1        4s
 prometheus-operator-kube-state-metrics        1        4s
 prometheus-operator-prometheus-node-exporter  1        4s
 prometheus-operator-alertmanager              1        4s
 prometheus-operator-operator                  1        4s
 prometheus-operator-prometheus                1        4s

 ==> v1/ClusterRole
 NAME                                     AGE
 prometheus-operator-grafana-clusterrole  4s
 prometheus-operator-alertmanager         3s
 prometheus-operator-operator             3s
 prometheus-operator-operator-psp         3s
 prometheus-operator-prometheus           3s
 prometheus-operator-prometheus-psp       3s

 ==> v1/Role
 NAME                                   AGE
 prometheus-operator-prometheus-config  3s
 prometheus-operator-prometheus         2s
 prometheus-operator-prometheus         2s

 ==> v1beta1/RoleBinding
 NAME                         AGE
 prometheus-operator-grafana  3s

 ==> v1beta2/Deployment
 NAME                         DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
 prometheus-operator-grafana  1        1        1           0          3s

 ==> v1/Prometheus
 NAME                            AGE
 prometheus-operator-prometheus  2s

 ==> v1beta1/ClusterRoleBinding
 NAME                                              AGE
 prometheus-operator-kube-state-metrics            3s
 psp-prometheus-operator-kube-state-metrics        3s
 psp-prometheus-operator-prometheus-node-exporter  3s

 ==> v1/RoleBinding
 NAME                                   AGE
 prometheus-operator-prometheus-config  3s
 prometheus-operator-prometheus         2s
 prometheus-operator-prometheus         2s

 ==> v1/PrometheusRule
 NAME                                                      AGE
 prometheus-operator-alertmanager.rules                    2s
 prometheus-operator-etcd                                  2s
 prometheus-operator-general.rules                         2s
 prometheus-operator-k8s.rules                             2s
 prometheus-operator-kube-apiserver.rules                  2s
 prometheus-operator-kube-prometheus-node-alerting.rules   2s
 prometheus-operator-kube-prometheus-node-recording.rules  2s
 prometheus-operator-kube-scheduler.rules                  2s
 prometheus-operator-kubernetes-absent                     2s
 prometheus-operator-kubernetes-apps                       2s
 prometheus-operator-kubernetes-resources                  2s
 prometheus-operator-kubernetes-storage                    2s
 prometheus-operator-kubernetes-system                     2s
 prometheus-operator-node.rules                            2s
 prometheus-operator-prometheus-operator                   2s
 prometheus-operator-prometheus.rules                      2s

 NOTES: The Prometheus Operator has been installed. Check its status by
 running: kubectl --namespace kube-prometheus get pods -l
 "release=prometheus-operator"

 Visit https://github.com/coreos/prometheus-operator for
 instructions on how to create & configure Alertmanager and Prometheus
 instances using the Operator.
```

At this point, prometheus has been deployed to the cluster. Give it a few minutes for all the pods to come up (you can watch them, as shown below), then let's keep working on getting access to the consoles offered by the prometheus deployment.
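To watch the pods come up, the command from the NOTES above does the job; adding a watch flag saves some repeated typing.

```shell
# Watch the release's pods until they all report Running
kubectl --namespace kube-prometheus get pods -l "release=prometheus-operator" --watch
```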

## Prometheus Console

Let's write an ingress configuration to expose the prometheus console. First off, we need to find the service deployed for prometheus and the port it's listening on.
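If you want to see everything the chart created first, listing the services in the namespace is a good starting point; the one we are after is prometheus-operator-prometheus.

```shell
# List every service the chart deployed in the namespace
kubectl get services -n kube-prometheus
```

From there, we can dump the one we care about as YAML and find the port it exposes.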

```
 $ kubectl get service prometheus-operator-prometheus -o yaml -n kube-prometheus
 apiVersion: v1
 kind: Service
 metadata:
   creationTimestamp: "2019-02-09T23:09:55Z"
   labels:
     app: prometheus-operator-prometheus
     chart: prometheus-operator-2.1.6
     heritage: Tiller
     release: prometheus-operator
   name: prometheus-operator-prometheus
   namespace: kube-prometheus
   resourceVersion: "10996"
   selfLink: /api/v1/namespaces/kube-prometheus/services/prometheus-operator-prometheus
   uid: d038d6fa-2cbf-11e9-b74f-48ea5bb87c0b
 spec:
   clusterIP: 10.107.117.120
   ports:
   - name: web
     port: 9090
     protocol: TCP
     targetPort: web
   selector:
     app: prometheus
     prometheus: prometheus-operator-prometheus
   sessionAffinity: None
   type: ClusterIP
 status:
   loadBalancer: {}
```

As we can see from the service above, its name is prometheus-operator-prometheus and it's listening on port 9090. So let's write the ingress configuration for it.

```yaml
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus-dashboard
  namespace: kube-prometheus
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: prometheus.kube.local
    http:
      paths:
      - path: /
        backend:
          serviceName: prometheus-operator-prometheus
          servicePort: 9090
```
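A small aside: the rewrite-target annotation above assumes the nginx ingress controller is running in the cluster. If it wasn't already enabled as part of the previous post's minikube setup, enabling the addon should be enough.

```shell
# Enable the nginx ingress controller bundled with minikube
minikube addons enable ingress
```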

Save the file as kube-prometheus-ingress.yaml or some such and deploy.

```
 $ kubectl apply -f kube-prometheus-ingress.yaml
 ingress.extensions/prometheus-dashboard created
```
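Optionally, we can check that the ingress object was created before moving on.

```shell
# The new ingress should show up with the prometheus.kube.local host
kubectl get ingress -n kube-prometheus
```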

And then add the host to our /etc/hosts, pointing at the minikube IP.

```
 192.168.39.78   prometheus.kube.local
```
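Your IP will most likely differ; `minikube ip` prints it, so a one-liner along these lines (a convenience, adapt to taste) saves looking it up.

```shell
# Append the host entry using whatever IP the minikube VM got
echo "$(minikube ip)   prometheus.kube.local" | sudo tee -a /etc/hosts
```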

Now you can access http://prometheus.kube.local from your browser.
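If a browser is not handy, a quick curl against the prometheus UI path should confirm the ingress is wired up; getting HTML back is a good sign.

```shell
# Sanity check the ingress from the command line
curl -s http://prometheus.kube.local/graph | head
```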

## Grafana Console

Much like what we did with the prometheus console previously, we need to do the same for the grafana dashboard.

First step, let's check the service.

```
 $ kubectl get service prometheus-operator-grafana -o yaml -n kube-prometheus
```

This gives you the following output.

```yaml
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-02-09T23:09:55Z"
  labels:
    app: grafana
    chart: grafana-1.25.0
    heritage: Tiller
    release: prometheus-operator
  name: prometheus-operator-grafana
  namespace: kube-prometheus
  resourceVersion: "10973"
  selfLink: /api/v1/namespaces/kube-prometheus/services/prometheus-operator-grafana
  uid: cffe169b-2cbf-11e9-b74f-48ea5bb87c0b
spec:
  clusterIP: 10.107.125.114
  ports:
  - name: service
    port: 80
    protocol: TCP
    targetPort: 3000
  selector:
    app: grafana
    release: prometheus-operator
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
```

We get prometheus-operator-grafana and port 80. Next is the ingress configuration.

```yaml
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus-grafana
  namespace: kube-prometheus
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: grafana.kube.local
    http:
      paths:
      - path: /
        backend:
          serviceName: prometheus-operator-grafana
          servicePort: 80
```

Then we deploy.

```
 $ kubectl apply -f kube-grafana-ingress.yaml
 ingress.extensions/prometheus-grafana created
```

And let's not forget /etc/hosts.

```
 192.168.39.78   grafana.kube.local
```

And the grafana dashboard should appear if you visit http://grafana.kube.local.
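One last note: the grafana instance deployed by the chart is password protected. The credentials live in the prometheus-operator-grafana secret we saw in the install output; the key names used below (admin-user and admin-password) are an assumption based on the upstream grafana chart, so inspect the secret if they come back empty.

```shell
# Key names are an assumption; 'kubectl describe secret' will show the real ones
kubectl get secret prometheus-operator-grafana -n kube-prometheus \
  -o jsonpath='{.data.admin-user}' | base64 --decode; echo
kubectl get secret prometheus-operator-grafana -n kube-prometheus \
  -o jsonpath='{.data.admin-password}' | base64 --decode; echo
```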