#+BEGIN_COMMENT
.. title: Deploying Helm in your Kubernetes Cluster
.. date: 2019-03-16
.. updated: 2019-07-02
.. status: published
.. tags: kubernetes, helm, tiller,
.. category: kubernetes
.. slug: deploying-helm-in-your-kubernetes-cluster
.. authors: Elia el Lazkani
.. description: Post explaining how to deploy helm in your kubernetes cluster.
.. type: text
#+END_COMMENT

In the previous post in the /kubernetes/ series, we deployed a small /kubernetes/ cluster locally on /KVM/. In future posts, we will be deploying more things into the cluster. This will enable us to test different projects from the open source community, built specifically for /kubernetes/: ingresses, service meshes, and more. To help with this future quest, we will be leveraging a /kubernetes/ package manager. You read that right, helm is a /kubernetes/ package manager. Let's get started, shall we?

{{{TEASER_END}}}
* Helm
As mentioned above, helm is a /kubernetes/ package manager. You can read more about the helm project on their [[https://helm.sh/][homepage]]. It offers a way to /Go/ template the deployment of services and bundle them into a portable package that can be installed using the =helm= command line.

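To make the templating idea a bit more concrete, here is a rough sketch of what a templated manifest inside a chart could look like. The chart layout, names, and values below are made up for illustration; the =.Values= entries would normally come from the chart's =values.yaml=.

#+BEGIN_SRC yaml
# templates/deployment.yaml in a hypothetical chart.
# {{ .Release.Name }} is filled in by helm at install time and the
# .Values.* entries come from the chart's values.yaml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-web
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-web
    spec:
      containers:
        - name: web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
#+END_SRC
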
Generally, you would install the =helm= binary on your machine and then initialize /helm/ into the cluster. In our case, the /RBACs/ deployed in the /kubernetes/ cluster by /rancher/ prevent the default installation from working. Not a problem, we can work around that, and we will in this post. This is a win for us because it gives us the opportunity to learn more about helm and /kubernetes/.

#+BEGIN_EXPORT html
<div class="admonition note">
<p class="admonition-title">Note</p>
#+END_EXPORT
This is not a production-recommended way to deploy helm. I would *NOT* deploy helm this way on a production cluster. I would restrict the permissions of any =ServiceAccount= deployed in the cluster to its bare minimum requirements.
#+BEGIN_EXPORT html
</div>
#+END_EXPORT

* What are we going to do?
We need to understand a bit of what's going on and what we are trying to do. To be able to do that, we need to understand how /helm/ works. At a high level, the =helm= command line tool deploys a service called /Tiller/ as a =Deployment=.

The /Tiller/ service talks to the /kubernetes/ /API/ and manages the deployment process, while the =helm= command line tool talks to /Tiller/ from its end. So a proper deployment of /Tiller/, in a /kubernetes/ sense, is to create a =ServiceAccount=, give that =ServiceAccount= the permissions it needs to do its job, and you have yourself a working /Tiller/.

* Service Account
We start by creating a =ServiceAccount=. The =ServiceAccount= looks like this.

#+BEGIN_SRC yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
#+END_SRC

We then deploy the =ServiceAccount= to the cluster. Save it to =ServiceAccount.yaml= and run the following.

#+BEGIN_EXAMPLE
$ kubectl apply -f ServiceAccount.yaml
serviceaccount/tiller created
#+END_EXAMPLE

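If we want to double check, we can ask /kubernetes/ for the =ServiceAccount= we just created. This is only a sanity check and assumes the manifest above was applied as-is.

#+BEGIN_EXAMPLE
$ kubectl get serviceaccount tiller --namespace kube-system
#+END_EXAMPLE
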
#+BEGIN_EXPORT html
<div class="admonition note">
<p class="admonition-title">Note</p>
#+END_EXPORT
To read more about =ServiceAccounts= and their uses, please visit the /kubernetes/ documentation page on the [[https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/][topic]].
#+BEGIN_EXPORT html
</div>
#+END_EXPORT

* Cluster Role Binding
We now have the /Tiller/ =ServiceAccount= deployed in the =kube-system= =namespace=. We need to give it access.

** Option 1
We have the option of creating a =Role= which would restrict /Tiller/ to the current =namespace=, and then tying the two together with a =RoleBinding=.

This option will restrict /Tiller/ to that =namespace=, and that =namespace= only.

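For completeness, here is a rough sketch of what that could look like. The =staging= =namespace= and the =tiller-manager= name are made up for illustration, and the rules are intentionally broad; a real =Role= should be trimmed down to what /Tiller/ actually needs.

#+BEGIN_SRC yaml
---
# A namespaced Role granting broad access inside the (hypothetical) staging namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller-manager
  namespace: staging
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources: ["*"]
    verbs: ["*"]
---
# Bind the tiller ServiceAccount (living in kube-system) to that Role.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-binding
  namespace: staging
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tiller-manager
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
#+END_SRC
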
** Option 2
Another option is to create a =ClusterRole= and tie the =ServiceAccount= to that =ClusterRole= with a =ClusterRoleBinding=; this gives /Tiller/ access across /namespaces/.

** Option 3
In our case, we already know that the =ClusterRole= =cluster-admin= exists in the cluster, so we are going to give /Tiller/ =cluster-admin= access.

#+BEGIN_SRC yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
#+END_SRC

Save the above in =ClusterRoleBinding.yaml= and then apply it.

#+BEGIN_EXAMPLE
$ kubectl apply -f ClusterRoleBinding.yaml
clusterrolebinding.rbac.authorization.k8s.io/tiller created
#+END_EXAMPLE

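We can also verify that the binding points where we expect it to. This is only a sanity check of what we just applied.

#+BEGIN_EXAMPLE
$ kubectl describe clusterrolebinding tiller
#+END_EXAMPLE
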
* Deploying Tiller
Now that we have all the basics deployed, we can finally deploy /Tiller/ in the cluster.
#+BEGIN_EXAMPLE
$ helm init --service-account tiller --tiller-namespace kube-system --history-max 10
Creating ~/.helm
Creating ~/.helm/repository
Creating ~/.helm/repository/cache
Creating ~/.helm/repository/local
Creating ~/.helm/plugins
Creating ~/.helm/starters
Creating ~/.helm/cache/archive
Creating ~/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at ~/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
#+END_EXAMPLE

#+BEGIN_EXPORT html
<div class="admonition note">
<p class="admonition-title">Note</p>
#+END_EXPORT
Please make sure you read the helm installation documentation if you are deploying this in a production environment. You can find out how to make it more secure [[https://helm.sh/docs/using_helm/#securing-your-helm-installation][there]].
#+BEGIN_EXPORT html
</div>
#+END_EXPORT

After a few minutes, your /Tiller/ deployment, or as it's commonly known a =helm install= or a =helm init=, should be up and running. If you want to check that everything has been deployed properly, you can run the following.

#+BEGIN_EXAMPLE
$ helm version
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
#+END_EXAMPLE

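We can also ask /kubernetes/ itself about the /Tiller/ =Deployment= and its pod. The commands below assume the default name =tiller-deploy= that =helm init= gives the =Deployment=.

#+BEGIN_EXAMPLE
$ kubectl get deployment tiller-deploy --namespace kube-system
$ kubectl get pods --namespace kube-system | grep tiller
#+END_EXAMPLE
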
Everything seems to be working properly. In future posts, we will be leveraging the power and convenience of helm to expand our cluster's capabilities and learn more about what we can do with kubernetes.