+++
title = "Raspberry Pi, Container Orchestration and Swarm right at home"
author = ["Elia el Lazkani"]
date = 2022-08-24
lastmod = 2022-08-25
tags = ["docker", "linux", "arm", "ansible", "swarm", "raspberry-pi"]
categories = ["container"]
draft = false
+++
When I started looking into solutions for my home container orchestration, I wanted a solution that runs on my two Raspberry Pis. These beasts have 4 virtual CPUs and a whopping 1GB of memory each. In other words, not a lot of resources to go around. What can I run on these? I wonder!
## Considerations
If we look at the state of container orchestration today, we see that Kubernetes dominates the space. Kubernetes is awesome, but will it run on my Pis? I doubt it.
Fret not! There are other, more lightweight, solutions out there. Let's discuss them briefly.
### K3s
I have experience with K3s. I even wrote a blog [post]({{< relref "building-k3s-on-a-pi" >}}) on it. Unfortunately, I found that K3s consumes almost half of the Pis' memory just to run. That's too much overhead to lose.
### MicroK8s
MicroK8s is a Canonical project. It is similar to K3s in its easy deployment and lightweight focus. The end result is also extremely similar to K3s in resource usage.
### Nomad
Nomad is a HashiCorp product and, just like all their other products, it is very well designed, very robust and extremely versatile. Running it on the Pis was a breeze; it barely used any resources.
It sounds great so far, doesn't it? Well, sort of. The deployment and configuration of Nomad is a bit tricky and involves a few moving components, though those can eventually be automated with Ansible. Aside from that, Nomad requires extra configuration to install and enable CNI and service discovery.
Finally, it has a steep learning curve for deploying containers into the cluster, and you have HCL to deal with.
### Swarm
I was surprised to find that not only is Docker Swarm still alive, it has also become a mode that has shipped with docker for a few years now.
I also found out that Swarm has great Ansible integration, both for initializing the cluster and for deploying stacks and services into it. After all, if you are already familiar with docker-compose, you'll feel right at home.
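To give a taste of that integration, here is a minimal sketch of deploying a compose file as a Swarm stack with the `community.docker.docker_stack` module; the stack name and the compose file path are my own illustrative choices, not part of the setup described in this post.

```yaml
---
# Deploy a docker-compose file as a Swarm stack.
# The stack name (home) and the compose file path are hypothetical.
- name: Deploy a stack to the Swarm cluster
  community.docker.docker_stack:
    state: present
    name: home
    compose:
      - /opt/stacks/docker-compose.yml
```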
## Setting up a Swarm cluster
I set out to deploy my Swarm cluster and manage it using Ansible. I didn't want to redo the work in the future, so I went the IaC (Infrastructure as Code) route, as should you.
At this stage, I have to make a few assumptions. I assume that you already have at least 2 machines with a Linux distribution installed on them. I also assume that docker is already installed and running on both machines. Finally, I assume that all the dependencies required to run the Ansible docker modules are installed on both hosts (`python3-docker` and `python3-jsondiff` on Ubuntu).
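If you still need those dependencies, a task like the following could install them. This is only a minimal sketch of what the `common.yml` referenced later might contain, assuming a Debian/Ubuntu host; it is not the actual file from my setup.

```yaml
---
# Install the Python libraries the community.docker modules rely on.
# A hypothetical sketch of common.yml, assuming a Debian/Ubuntu host.
- name: Install Ansible docker module dependencies
  ansible.builtin.apt:
    name:
      - python3-docker
      - python3-jsondiff
    state: present
    update_cache: true
```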
There are two types of nodes in a Swarm cluster: `manager` and `worker`. The first node used to initialize the cluster is the *leader* node, which is also a `manager` node.
### Leader
For the `leader` node, our task is going to be initializing the cluster. Before we do so, let's create our quick and dirty Ansible `inventory` file.
```yaml
---
all:
  hosts:
  children:
    leader:
      hosts:
        node001:
          ansible_host: 192.168.0.100
          ansible_user: user
          ansible_port: 22
          ansible_become: yes
          ansible_become_method: sudo
    manager:
    worker:
      hosts:
        node002:
          ansible_host: 192.168.0.101
          ansible_user: user
          ansible_port: 22
          ansible_become: yes
          ansible_become_method: sudo
```
> **Warning:** This isn't meant to be deployed in production in a professional setting. It goes without saying, the `leader` is static, not highly available, and prone to failure. The `manager` and `worker` node tasks also depend on the successful run of the initialization task on the `leader`.
Now that we've taken care of categorizing the nodes and writing the Ansible `inventory`, let's initialize a Swarm cluster.
```yaml
---
- name: Init a new swarm cluster
  community.docker.docker_swarm:
    state: present
    advertise_addr: "{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}"
  register: clustering_swarm_cluster
```
> **Note:** We use `hostvars[inventory_hostname]['ansible_default_ipv4']['address']`, which returns the IP address of the node itself. This is the IP address used to advertise.
> **Note:** We use `register` to save the response returned from the cluster initialization into a new variable we called `clustering_swarm_cluster`. This will come in handy later.
This should take care of initializing a new Swarm cluster.
You can verify that Swarm is running:
```shell
$ docker system info 2>&1 | grep Swarm
Swarm: active
```
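If you'd rather keep the verification in Ansible as well, the `community.docker.docker_swarm_info` module can perform the same check. This is a sketch of an optional extra step, not part of the original tasks.

```yaml
---
# Query the local docker daemon for Swarm state and fail if inactive.
# A hypothetical verification step, not part of the original tasks.
- name: Gather Swarm information
  community.docker.docker_swarm_info:
  register: swarm_info

- name: Verify that Swarm is active
  ansible.builtin.assert:
    that:
      - swarm_info.docker_swarm_active
```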
### Manager
If you have a larger number of nodes, you might require more than one `manager` node. To join more managers to the cluster, we can use the power of Ansible again.
```yaml
---
- name: Add manager node to Swarm cluster
  community.docker.docker_swarm:
    state: join
    advertise_addr: "{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}"
    join_token: "{{ hostvars[groups['leader'][0]]['clustering_swarm_cluster']['swarm_facts']['JoinTokens']['Manager'] }}"
    remote_addrs: [ "{{ hostvars[groups['leader'][0]]['ansible_default_ipv4']['address'] }}:2377" ]
```
> **Note:** We access the token we saved earlier on the `leader` to join a `manager` to the cluster, using `hostvars[groups['leader'][0]]['clustering_swarm_cluster']['swarm_facts']['JoinTokens']['Manager']`.
> **Note:** If we can get a *hostvar* from a different node, we can also get the IP of such a node with `hostvars[groups['leader'][0]]['ansible_default_ipv4']['address']`.
Now that we've taken care of the `manager` nodes, let's work on the `worker` nodes.
### Worker
Just as easily as we created the task to join a `manager` node to the cluster, we do the same for the `worker`.
```yaml
---
- name: Add worker node to Swarm cluster
  community.docker.docker_swarm:
    state: join
    advertise_addr: "{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}"
    join_token: "{{ hostvars[groups['leader'][0]]['clustering_swarm_cluster']['swarm_facts']['JoinTokens']['Worker'] }}"
    remote_addrs: [ "{{ hostvars[groups['leader'][0]]['ansible_default_ipv4']['address'] }}:2377" ]
```
> **Note:** Déjà vu when it comes to the `join_token`, except that we use the `worker` token instead.
The glue code that ties it all together and does the magic is the following.
```yaml
---
- name: Bootstrap Swarm dependencies
  include_tasks: common.yml

- name: Bootstrap leader node
  include_tasks: leader.yml
  when: inventory_hostname in groups['leader']

- name: Bootstrap manager node
  include_tasks: manager.yml
  when: inventory_hostname in groups['manager']

- name: Bootstrap worker node
  include_tasks: worker.yml
  when: inventory_hostname in groups['worker']
```
Each of the tasks described above should live in its own file, as shown in the glue code, and each will only run on the hosts in the group it is meant for; a sketch of wiring this into a runnable playbook follows.
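To actually execute these tasks, they need to be wrapped in a playbook. Here's a minimal sketch, assuming the glue code above lives in `main.yml` next to the playbook and the inventory is saved as `inventory.yml`; both file names are illustrative, not taken from my actual layout.

```yaml
---
# site.yml -- a hypothetical wrapper playbook for the tasks above.
- name: Deploy a Docker Swarm cluster
  hosts: all
  tasks:
    - name: Run the Swarm bootstrap tasks
      ansible.builtin.include_tasks: main.yml
```

A run would then look something like `ansible-playbook -i inventory.yml site.yml`.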
Following these tasks, I ended up with the cluster below.
```shell
# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
h4scu4nry2r9p129rsdt88ae2 *   node001    Ready     Active         Leader           20.10.17
uyn43a9tsdn2n435untva9pae     node002    Ready     Active                          20.10.17
```
There, we see both nodes and they both seem to be in a `Ready` state.
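As a final smoke test, you could deploy a throwaway service and watch Swarm schedule it across the nodes. This is a hedged sketch using the `community.docker.docker_swarm_service` module; the service name, image, and port are chosen purely for illustration.

```yaml
---
# Deploy a small test service; name, image and port are hypothetical.
- name: Deploy a test web service on the Swarm cluster
  community.docker.docker_swarm_service:
    name: hello-web
    image: nginx:alpine
    replicas: 2
    publish:
      - published_port: 8080
        target_port: 80
```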
## Conclusion
If you're outside a professional setting and you find yourself needing to run a container orchestration platform, some platforms might be overkill. Docker Swarm has great community support in Ansible, making the management of small clusters on low-resource devices extremely easy. It comes with the added bonus of built-in service discovery and networking. Give it a try, you might be pleasantly surprised like I was.