
+++
title = "Deploying Helm in your Kubernetes Cluster"
author = ["Elia el Lazkani"]
date = 2019-07-02T21:00:00+02:00
lastmod = 2021-06-28T00:00:58+02:00
tags = ["helm", "tiller"]
categories = ["kubernetes"]
draft = false
+++

In the previous post in the kubernetes series, we deployed a small kubernetes cluster locally on KVM. In future posts, we will be deploying more things into the cluster. This will enable us to test different projects, ingresses, service meshes, and more from the open source community, built specifically for kubernetes. To help with this future quest, we will be leveraging a kubernetes package manager. You read that right: helm is a kubernetes package manager. Let's get started, shall we?

## Helm

As mentioned above, helm is a kubernetes package manager. You can read more about the helm project on their homepage. It offers a way to template the deployment of services using Go templates and bundle them into a portable package that can be installed with the helm command line.

Generally, you would install the helm binary on your machine and use it to install helm into the cluster. In our case, the RBACs deployed in the kubernetes cluster by rancher prevent the default installation from working. Not a problem; we can work around it, and we will in this post. This is a win for us, because it gives us the opportunity to learn more about helm and kubernetes.

> **Note**: This is not a production-recommended way to deploy helm. I would NOT deploy helm this way on a production cluster. I would restrict the permissions of any ServiceAccount deployed in the cluster to its bare minimum requirements.

## What are we going to do?

We need to understand a bit of what's going on and what we are trying to do. To be able to do that, we need to understand how helm works. At a high level, the helm command line tool deploys a service called Tiller as a Deployment.

The Tiller service talks to the kubernetes API and manages the deployment process, while the helm command line tool talks to Tiller from its end. So a proper deployment of Tiller, in a kubernetes sense, is to create a ServiceAccount, give that ServiceAccount the permissions it needs to do its job, and you've got yourself a working Tiller.

## Service Account

We start by creating a ServiceAccount, which looks like this:

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
```

Save it to ServiceAccount.yaml, then deploy it to the cluster:

```text
$ kubectl apply -f ServiceAccount.yaml
serviceaccount/tiller created
```
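If you want to double-check that it landed, you can ask kubectl to list it back; you should see a tiller ServiceAccount in the output:

```text
$ kubectl get serviceaccount tiller --namespace kube-system
```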

> **Note**: To read more about ServiceAccounts and their uses, please visit the kubernetes documentation page on the topic.

## Cluster Role Binding

We now have the tiller ServiceAccount deployed in the kube-system namespace. Next, we need to give it access.

### Option 1

One option is to create a Role, which would restrict Tiller to the current namespace, then tie the two together with a RoleBinding.

This option restricts Tiller to that namespace, and that namespace only.
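As an illustration, here is a minimal sketch of what that could look like, assuming we want to confine Tiller to kube-system. The tiller-manager and tiller-binding names are hypothetical, and the wide-open rules are for brevity; a real deployment would narrow apiGroups, resources, and verbs down to what Tiller actually needs.

```yaml
---
# Hypothetical Role confining Tiller to the kube-system namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller-manager
  namespace: kube-system
rules:
  # Illustrative only: grants everything within this namespace.
  - apiGroups: ["", "apps", "batch", "extensions"]
    resources: ["*"]
    verbs: ["*"]
---
# RoleBinding tying the Role above to the tiller ServiceAccount.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-binding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tiller-manager
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```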

### Option 2

Another option is to create a ClusterRole and tie the ServiceAccount to it with a ClusterRoleBinding; this gives Tiller access across namespaces.
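A sketch of that approach, with a hypothetical tiller-cluster ClusterRole; the ClusterRoleBinding tying it to the ServiceAccount would mirror the manifest shown in Option 3 below, with tiller-cluster in roleRef instead of cluster-admin:

```yaml
---
# Hypothetical ClusterRole; scope the rules to your actual needs.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tiller-cluster
rules:
  - apiGroups: ["", "apps", "batch", "extensions"]
    resources: ["*"]
    verbs: ["*"]
```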

### Option 3

In our case, we already know that the ClusterRole cluster-admin exists in the cluster, so we are going to give Tiller cluster-admin access:

```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```

Save the above in ClusterRoleBinding.yaml, then apply it:

```text
$ kubectl apply -f ClusterRoleBinding.yaml
clusterrolebinding.rbac.authorization.k8s.io/tiller created
```
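To verify the binding, kubectl describe should show cluster-admin in the role section and the tiller ServiceAccount under the subjects:

```text
$ kubectl describe clusterrolebinding tiller
```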

## Deploying Tiller

Now that we have all the basics deployed, we can finally deploy Tiller in the cluster.

```text
$ helm init --service-account tiller --tiller-namespace kube-system --history-max 10
Creating ~/.helm
Creating ~/.helm/repository
Creating ~/.helm/repository/cache
Creating ~/.helm/repository/local
Creating ~/.helm/plugins
Creating ~/.helm/starters
Creating ~/.helm/cache/archive
Creating ~/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at ~/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
```

> **Note**: Please make sure you read the helm installation documentation if you are deploying this in a production environment. It covers how to make the installation more secure.

After a few minutes, your Tiller deployment, commonly known as a helm install or a helm init, should be up and running. If you want to check that everything has been deployed properly, you can run:

```text
$ helm version
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
```
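As an extra sanity check, you can ask kubectl for the Tiller pod directly. This assumes the app=helm label that, as far as I can tell, helm init puts on the tiller-deploy Deployment by default:

```text
$ kubectl get pods --namespace kube-system -l app=helm
```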

Everything seems to be working properly. In future posts, we will be leveraging the power and convenience of helm to expand our cluster's capabilities and learn more about what we can do with kubernetes.