+++
title = "Deploying Traefik and Pihole on the Swarm home cluster"
author = ["Elia el Lazkani"]
date = 2022-08-25
lastmod = 2022-08-25
tags = ["docker", "linux", "arm", "ansible", "traefik", "pihole", "swarm", "raspberry-pi"]
categories = ["container"]
draft = false
+++
In the [previous post]({{< relref "raspberry-pi-container-orchestration-and-swarm-right-at-home" >}}), we set up a _Swarm_ cluster. That's fine and dandy, but that
cluster, as far as we're concerned, is useless. Let's change that.
<!--more-->
## Traefik {#traefik}
I've talked about, and played with, _Traefik_ on this blog before, and here we
go again with another orchestration technology. As always, we need an ingress
into our cluster, and _Traefik_ makes a great ingress that's easily configurable with `labels`.
Let's not forget, we're working with _Swarm_ this time around; _Swarm_ stacks
look very similar to `docker-compose` manifests.
But, before we do that, there is a small piece of information that we need to be
aware of. For _Traefik_ to be able to route traffic to our services, both
_Traefik_ and the service need to be on the same network. Let's make this a bit
more predictable and manage that network ourselves.
<div class="admonition warning">
<p class="admonition-title">warning</p>
Only `leader` and `manager` nodes will allow interaction with the _Swarm_
cluster. The `worker` nodes will not give you any useful information about the
cluster.
</div>
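To see what that looks like in practice, we can ask a `worker` node about the
cluster; the daemon will refuse with something along these lines.

```shell
# Run on a worker node; only managers can answer cluster queries.
$ docker node ls
Error response from daemon: This node is not a swarm manager. Worker nodes can't be used to view or modify cluster state. Please run this command on a manager node or promote the current node to a manager.
```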
### Network Configuration {#network-configuration}
We started with _Ansible_ and we shall continue with _Ansible_. We begin by
creating the network.
```yaml
---
- name: Create a Traefik Ingress network
  community.docker.docker_network:
    name: traefik-ingress
    driver: overlay
    scope: swarm
```
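To make sure the network actually exists before we deploy anything on it, we
can list the overlay networks. A quick sanity check, run from a `manager` node
(remember the warning above):

```shell
# Overlay networks are only visible from manager nodes.
$ docker network ls --filter driver=overlay
```

We should see `traefik-ingress` listed with the `swarm` scope.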
### Ingress {#ingress}
Once the network is in place, we can go ahead and deploy _Traefik_.
<div class="admonition warning">
<p class="admonition-title">warning</p>
This setup is not meant to be deployed in a **production** setting. **SSL**
certificates require extra configuration steps that might come in a future post.
</div>
```yaml
---
- name: Deploy Traefik Stack
  community.docker.docker_stack:
    state: present
    name: Traefik
    compose:
      - version: '3'
        services:
          traefik:
            image: traefik:latest
            restart: unless-stopped
            command:
              - --entrypoints.web.address=:80
              - --providers.docker=true
              - --providers.docker.swarmMode=true
              - --accesslog
              - --log.level=INFO
              - --api
              - --api.insecure=true
            ports:
              - "80:80"
            volumes:
              - "/var/run/docker.sock:/var/run/docker.sock:ro"
            networks:
              - traefik-ingress
            deploy:
              replicas: 1
              resources:
                limits:
                  cpus: '1'
                  memory: 80M
                reservations:
                  cpus: '0.5'
                  memory: 40M
              placement:
                constraints:
                  - node.role == manager
              labels:
                - traefik.protocol=http
                - traefik.docker.network=traefik-ingress
                - traefik.http.routers.traefik-api.rule=Host(`traefik.our-domain.com`)
                - traefik.http.routers.traefik-api.service=api@internal
                - traefik.http.services.traefik-api.loadbalancer.server.port=8080
        networks:
          traefik-ingress:
            external: true
```
<div class="admonition note">
<p class="admonition-title">Note</p>
Even though these are _Ansible_ tasks, plain _Swarm_ stack manifests are not
much different, as I'm mostly using the raw format.
</div>
Let's talk a bit about what we did.

`--providers.docker=true` and `--providers.docker.swarmMode=true`
: We configure _Traefik_ to enable both the _docker_ and _swarm mode_ providers.

`--api` and `--api.insecure=true`
: We enable the API, which offers the UI, and we allow it to run insecure.

The rest, I believe, has been explained in the previous blog post.
If everything went well, and we configured our _DNS_ properly, we should be
welcomed by a _Traefik_ dashboard on `traefik.our-domain.com`.
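If we want a sanity check before fiddling with _DNS_, we can ask _Swarm_ about
the stack and hit the entrypoint directly. A quick sketch, assuming
`192.168.1.100` stands in for one of our nodes; substitute your own IP.

```shell
# Make sure the service is up and its replica is running.
$ docker stack services Traefik

# Hit the dashboard through the `web` entrypoint by faking the Host header.
$ curl -s -o /dev/null -w '%{http_code}\n' \
    -H 'Host: traefik.our-domain.com' http://192.168.1.100/dashboard/
```

A `200` tells us the router rule matched and the dashboard is being served.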
## Pi-hole {#pi-hole}
Now I know most people install the _Pi-hole_ straight on the _Pi_. Well, I'm not
most people and I'd like to deploy it in a container. I feel it's easier all
around than installing it on the system, you'll see.
```yaml
---
- name: Deploy PiHole Stack
  community.docker.docker_stack:
    state: present
    name: PiHole
    compose:
      - version: '3'
        services:
          pihole:
            image: pihole/pihole:latest
            restart: unless-stopped
            ports:
              - "53:53"
              - "53:53/udp"
            cap_add:
              - NET_ADMIN
            environment:
              TZ: "Europe/Vienna"
              VIRTUAL_HOST: pihole.our-domain.com
              VIRTUAL_PORT: 80
            healthcheck:
              test: ["CMD", "curl", "-f", "http://localhost:80/"]
              interval: 30s
              timeout: 20s
              retries: 3
            volumes:
              - /opt/pihole/data/pihole-config:/etc/pihole
              - /opt/pihole/data/pihole-dnsmasq.d:/etc/dnsmasq.d
            networks:
              - traefik-ingress
            deploy:
              replicas: 1
              placement:
                constraints:
                  - node.role == worker
              labels:
                - traefik.docker.network=traefik-ingress
                - traefik.http.routers.pihole-http.entrypoints=web
                - traefik.http.routers.pihole-http.rule=Host(`pihole.our-domain.com`)
                - traefik.http.routers.pihole-http.service=pihole-http
                - traefik.http.services.pihole-http.loadbalancer.server.port=80
                - traefik.http.routers.pihole-http.middlewares=pihole-main
                - traefik.http.middlewares.pihole-main.chain.middlewares=frame-deny,browser-xss-filter
                - traefik.http.middlewares.frame-deny.headers.framedeny=true
                - traefik.http.middlewares.browser-xss-filter.headers.browserxssfilter=true
        networks:
          traefik-ingress:
            external: true
```
We make sure to publish port `53` for **DNS**, which the _Swarm_ routing mesh
exposes on all nodes, and we attach the proper `labels` to our service so that
_Traefik_ can pick it up.
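Before testing, it's worth checking where _Swarm_ actually scheduled the
container; given the placement constraint above, it should be sitting on a
`worker` node. From a `manager` node:

```shell
# List the tasks of the PiHole stack and the node each one landed on.
$ docker stack ps PiHole
```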
Once deployed, and with your _DNS_ pointing properly, `pihole.our-domain.com`
is waiting for you. This also shows us that the networking between nodes works
properly. Let's test it out.
```shell
$ nslookup duckduckgo.com pihole.our-domain.com
Server: pihole.our-domain.com
Address: 192.168.1.100#53

Non-authoritative answer:
Name: duckduckgo.com
Address: 52.142.124.215
```
Alright, seems that our _Pi-hole_ works.
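All that's left is pointing clients at it. As a minimal sketch, on a Linux box
you could hard-code the resolver, assuming `192.168.1.100` is one of the
cluster nodes; in practice, handing it out through your router's _DHCP_ is the
saner option.

```shell
# /etc/resolv.conf: send all DNS queries to the Pi-hole.
# Any node's IP works; the Swarm routing mesh forwards port 53.
nameserver 192.168.1.100
```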
## Conclusion {#conclusion}
On these small Raspberry Pis, the cluster seems to be working very well. The
_Pi-hole_ has been running without any issues for a few days, serving my internal
_DNS_. There are a few improvements that can be made to this setup, mainly the
deployment of an _SSL_ cert. That may come in the future, time permitting. Stay
safe, until the next one!