chore: Publishing a blog post about deploying Traefik and Pi-hole on
the Pi Swarm cluster

parent 526b27b280
commit 4a1580f113
2 changed files with 444 additions and 0 deletions
@ -2812,6 +2812,223 @@ clusters on low resource devices extremely easy. It comes with the added bonus of
having built-in /service discovery/ and /networking/. Give it a try, you might
be pleasantly surprised like I was.


*** DONE Deploying Traefik and Pihole on the /Swarm/ home cluster :docker:linux:arm:ansible:traefik:pihole:swarm:raspberry_pi:
:PROPERTIES:
:EXPORT_HUGO_LASTMOD: 2022-08-25
:EXPORT_DATE: 2022-08-25
:EXPORT_FILE_NAME: deploying-traefik-and-pihole-on-the-swarm-home-cluster
:CUSTOM_ID: deploying-traefik-and-pihole-on-the-swarm-home-cluster
:END:

In the [[#raspberry-pi-container-orchestration-and-swarm-right-at-home][previous post]], we set up a /Swarm/ cluster. That's fine and dandy, but that
cluster, as far as we're concerned, is useless. Let's change that.

#+hugo: more

**** Traefik
I've talked about and played with /Traefik/ previously on this blog, and here we go
again with another orchestration technology. As always, we need an ingress to
our cluster. /Traefik/ makes a great ingress that's easily configurable with ~labels~.

Let's not forget, we're working with /Swarm/ this time around. /Swarm/ stacks
look very similar to ~docker-compose~ manifests.

But before we do that, there is a small piece of information we need to be
aware of. For /Traefik/ to be able to route traffic to our services, both
/Traefik/ and the service need to be on the same network. Let's make this a bit
more predictable and manage that network ourselves.

#+BEGIN_EXPORT html
<div class="admonition warning">
<p class="admonition-title">warning</p>
#+END_EXPORT
Only ~leader~ and ~manager~ nodes will allow interaction with the /Swarm/
cluster. The ~worker~ nodes will not give you any useful information about the
cluster.
#+BEGIN_EXPORT html
</div>
#+END_EXPORT

***** Network Configuration
We started with /Ansible/ and we shall continue with /Ansible/. We begin by
creating the network.

#+begin_src yaml
---
- name: Create a Traefik Ingress network
  community.docker.docker_network:
    name: traefik-ingress
    driver: overlay
    scope: swarm
#+end_src
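
Before moving on, we can double-check that the overlay network actually exists. Here's a minimal sketch of such a check (the ~docker_network_info~ lookup and the ~assert~ task are my own additions, not part of the original playbook, and they have to run against a ~manager~ node):

#+begin_src yaml
---
# Hypothetical verification tasks; run these against a manager node.
- name: Inspect the Traefik Ingress network
  community.docker.docker_network_info:
    name: traefik-ingress
  register: ingress_network

- name: Make sure the network exists and is an overlay
  ansible.builtin.assert:
    that:
      - ingress_network.exists
      - ingress_network.network.Driver == 'overlay'
#+end_src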

***** Ingress
Once the network is in place, we can go ahead and deploy /Traefik/.

#+BEGIN_EXPORT html
<div class="admonition warning">
<p class="admonition-title">warning</p>
#+END_EXPORT
This setup is not meant to be deployed in a *production* setting. *SSL*
certificates require extra configuration steps that might come in a future post.
#+BEGIN_EXPORT html
</div>
#+END_EXPORT

#+begin_src yaml
---
- name: Deploy Traefik Stack
  community.docker.docker_stack:
    state: present
    name: Traefik
    compose:
      - version: '3'
        services:
          traefik:
            image: traefik:latest
            restart: unless-stopped
            command:
              - --entrypoints.web.address=:80
              - --providers.docker=true
              - --providers.docker.swarmMode=true
              - --accesslog
              - --log.level=INFO
              - --api
              - --api.insecure=true
            ports:
              - "80:80"
            volumes:
              - "/var/run/docker.sock:/var/run/docker.sock:ro"
            networks:
              - traefik-ingress
            deploy:
              replicas: 1
              resources:
                limits:
                  cpus: '1'
                  memory: 80M
                reservations:
                  cpus: '0.5'
                  memory: 40M
              placement:
                constraints:
                  - node.role == manager
              labels:
                - traefik.protocol=http
                - traefik.docker.network=traefik-ingress
                - traefik.http.routers.traefik-api.rule=Host(`traefik.our-domain.com`)
                - traefik.http.routers.traefik-api.service=api@internal
                - traefik.http.services.traefik-api.loadbalancer.server.port=8080
        networks:
          traefik-ingress:
            external: true
#+end_src

#+BEGIN_EXPORT html
<div class="admonition note">
<p class="admonition-title">Note</p>
#+END_EXPORT
Even though these are /Ansible/ tasks, /Swarm/ stack manifests are not much
different, as I'm mostly using the raw format.
#+BEGIN_EXPORT html
</div>
#+END_EXPORT

Let's talk a bit about what we did.
- ~--providers.docker=true~ and ~--providers.docker.swarmMode=true~ :: We
  configure /Traefik/ to enable both the /docker/ and /swarm/ mode providers.
- ~--api~ and ~--api.insecure=true~ :: We enable the API, which offers the UI,
  and we allow it to run insecure.

The rest, I believe, has been explained in the previous blog post.

If everything went well, and we configured our /DNS/ properly, we should be
welcomed by the /Traefik/ dashboard on ~traefik.our-domain.com~.
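
If /DNS/ isn't set up yet, the routing can still be smoke-tested by overriding the ~Host~ header. A sketch of how that could look with /Ansible/ (~swarm_manager_ip~ is an assumed variable, not something we defined earlier):

#+begin_src yaml
---
# Hypothetical smoke test; `swarm_manager_ip` stands in for the
# address of one of the manager nodes.
- name: Check that the Traefik dashboard answers on the web entrypoint
  ansible.builtin.uri:
    url: "http://{{ swarm_manager_ip }}/dashboard/"
    headers:
      Host: traefik.our-domain.com
    status_code: 200
#+end_src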

**** Pi-hole
Now I know most people install /Pi-hole/ straight on the /Pi/. Well, I'm not
most people and I'd like to deploy it in a container. I feel it's easier all
around than installing it on the system, you'll see.
#+begin_src yaml
---
- name: Deploy PiHole Stack
  community.docker.docker_stack:
    state: present
    name: PiHole
    compose:
      - version: '3'
        services:
          pihole:
            image: pihole/pihole:latest
            restart: unless-stopped
            ports:
              - "53:53"
              - "53:53/udp"
            cap_add:
              - NET_ADMIN
            environment:
              TZ: "Europe/Vienna"
              VIRTUAL_HOST: pihole.our-domain.com
              VIRTUAL_PORT: 80
            healthcheck:
              test: ["CMD", "curl", "-f", "http://localhost:80/"]
              interval: 30s
              timeout: 20s
              retries: 3
            volumes:
              - /opt/pihole/data/pihole-config:/etc/pihole
              - /opt/pihole/data/pihole-dnsmasq.d:/etc/dnsmasq.d
            networks:
              - traefik-ingress
            deploy:
              replicas: 1
              placement:
                constraints:
                  - node.role == worker
              labels:
                - traefik.docker.network=traefik-ingress
                - traefik.http.routers.pihole-http.entrypoints=web
                - traefik.http.routers.pihole-http.rule=Host(`pihole.our-domain.com`)
                - traefik.http.routers.pihole-http.service=pihole-http
                - traefik.http.services.pihole-http.loadbalancer.server.port=80
                - traefik.http.routers.pihole-http.middlewares=pihole-main
                - traefik.http.middlewares.pihole-main.chain.middlewares=frame-deny,browser-xss-filter
                - traefik.http.middlewares.frame-deny.headers.framedeny=true
                - traefik.http.middlewares.browser-xss-filter.headers.browserxssfilter=true
        networks:
          traefik-ingress:
            external: true
#+end_src

We make sure to expose port ~53~ for *DNS* on all nodes, and configure the
proper ~labels~ on our service so that /Traefik/ can pick it up.
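
The middleware chain attached above should also be observable from the outside: ~framedeny~ adds an ~X-Frame-Options: DENY~ header to responses. A sketch of checking for it (hypothetical tasks; ~swarm_node_ip~ is an assumed placeholder):

#+begin_src yaml
---
# Hypothetical check; `swarm_node_ip` stands in for any node's address.
- name: Fetch the Pi-hole page through Traefik
  ansible.builtin.uri:
    url: "http://{{ swarm_node_ip }}/"
    headers:
      Host: pihole.our-domain.com
    status_code: 200
  register: pihole_response

- name: Assert the frame-deny middleware is applied
  ansible.builtin.assert:
    that:
      - pihole_response.x_frame_options == 'DENY'
#+end_src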

Once deployed, and once your /DNS/ is pointing properly, ~pihole.our-domain.com~
is waiting for you. This also shows us that the networking between nodes works
properly. Let's test it out.

#+begin_src shell
$ nslookup duckduckgo.com pihole.our-domain.com
Server:         pihole.our-domain.com
Address:        192.168.1.100#53

Non-authoritative answer:
Name:    duckduckgo.com
Address: 52.142.124.215
#+end_src

Alright, it seems that our /Pi-hole/ works.

**** Conclusion
On these small Raspberry Pis, the cluster seems to be working very well. The
/Pi-hole/ has been running my internal /DNS/ without any issues for a few days.
There are a few improvements that can be made to this setup, mainly the
deployment of an /SSL/ cert. That may come in the future, time permitting. Stay
safe, until the next one!
** K3s :@k3s:
*** DONE Building k3s on a Pi :arm:kubernetes:
:PROPERTIES:


@ -0,0 +1,227 @@

+++
title = "Deploying Traefik and Pihole on the Swarm home cluster"
author = ["Elia el Lazkani"]
date = 2022-08-25
lastmod = 2022-08-25
tags = ["docker", "linux", "arm", "ansible", "traefik", "pihole", "swarm", "raspberry-pi"]
categories = ["container"]
draft = false
+++

In the [previous post]({{< relref "raspberry-pi-container-orchestration-and-swarm-right-at-home" >}}), we set up a _Swarm_ cluster. That's fine and dandy, but that
cluster, as far as we're concerned, is useless. Let's change that.

<!--more-->


## Traefik {#traefik}

I've talked about and played with _Traefik_ previously on this blog, and here we go
again with another orchestration technology. As always, we need an ingress to
our cluster. _Traefik_ makes a great ingress that's easily configurable with `labels`.

Let's not forget, we're working with _Swarm_ this time around. _Swarm_ stacks
look very similar to `docker-compose` manifests.

But before we do that, there is a small piece of information we need to be
aware of. For _Traefik_ to be able to route traffic to our services, both
_Traefik_ and the service need to be on the same network. Let's make this a bit
more predictable and manage that network ourselves.

<div class="admonition warning">
<p class="admonition-title">warning</p>

Only `leader` and `manager` nodes will allow interaction with the _Swarm_
cluster. The `worker` nodes will not give you any useful information about the
cluster.

</div>

### Network Configuration {#network-configuration}

We started with _Ansible_ and we shall continue with _Ansible_. We begin by
creating the network.

```yaml
---
- name: Create a Traefik Ingress network
  community.docker.docker_network:
    name: traefik-ingress
    driver: overlay
    scope: swarm
```
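
Before moving on, we can double-check that the overlay network actually exists. Here's a minimal sketch of such a check (the `docker_network_info` lookup and the `assert` task are my own additions, not part of the original playbook, and they have to run against a `manager` node):

```yaml
---
# Hypothetical verification tasks; run these against a manager node.
- name: Inspect the Traefik Ingress network
  community.docker.docker_network_info:
    name: traefik-ingress
  register: ingress_network

- name: Make sure the network exists and is an overlay
  ansible.builtin.assert:
    that:
      - ingress_network.exists
      - ingress_network.network.Driver == 'overlay'
```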


### Ingress {#ingress}

Once the network is in place, we can go ahead and deploy _Traefik_.

<div class="admonition warning">
<p class="admonition-title">warning</p>

This setup is not meant to be deployed in a **production** setting. **SSL**
certificates require extra configuration steps that might come in a future post.

</div>

```yaml
---
- name: Deploy Traefik Stack
  community.docker.docker_stack:
    state: present
    name: Traefik
    compose:
      - version: '3'
        services:
          traefik:
            image: traefik:latest
            restart: unless-stopped
            command:
              - --entrypoints.web.address=:80
              - --providers.docker=true
              - --providers.docker.swarmMode=true
              - --accesslog
              - --log.level=INFO
              - --api
              - --api.insecure=true
            ports:
              - "80:80"
            volumes:
              - "/var/run/docker.sock:/var/run/docker.sock:ro"
            networks:
              - traefik-ingress
            deploy:
              replicas: 1
              resources:
                limits:
                  cpus: '1'
                  memory: 80M
                reservations:
                  cpus: '0.5'
                  memory: 40M
              placement:
                constraints:
                  - node.role == manager
              labels:
                - traefik.protocol=http
                - traefik.docker.network=traefik-ingress
                - traefik.http.routers.traefik-api.rule=Host(`traefik.our-domain.com`)
                - traefik.http.routers.traefik-api.service=api@internal
                - traefik.http.services.traefik-api.loadbalancer.server.port=8080
        networks:
          traefik-ingress:
            external: true
```

<div class="admonition note">
<p class="admonition-title">Note</p>

Even though these are _Ansible_ tasks, _Swarm_ stack manifests are not much
different, as I'm mostly using the raw format.

</div>

Let's talk a bit about what we did.

`--providers.docker=true` and `--providers.docker.swarmMode=true`
: We configure _Traefik_ to enable both the _docker_ and _swarm_ mode providers.

`--api` and `--api.insecure=true`
: We enable the API, which offers the UI, and we allow it to run insecure.

The rest, I believe, has been explained in the previous blog post.

If everything went well, and we configured our _DNS_ properly, we should be
welcomed by the _Traefik_ dashboard on `traefik.our-domain.com`.
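
If _DNS_ isn't set up yet, the routing can still be smoke-tested by overriding the `Host` header. A sketch of how that could look with _Ansible_ (`swarm_manager_ip` is an assumed variable, not something we defined earlier):

```yaml
---
# Hypothetical smoke test; `swarm_manager_ip` stands in for the
# address of one of the manager nodes.
- name: Check that the Traefik dashboard answers on the web entrypoint
  ansible.builtin.uri:
    url: "http://{{ swarm_manager_ip }}/dashboard/"
    headers:
      Host: traefik.our-domain.com
    status_code: 200
```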


## Pi-hole {#pi-hole}

Now I know most people install _Pi-hole_ straight on the _Pi_. Well, I'm not
most people and I'd like to deploy it in a container. I feel it's easier all
around than installing it on the system, you'll see.

```yaml
---
- name: Deploy PiHole Stack
  community.docker.docker_stack:
    state: present
    name: PiHole
    compose:
      - version: '3'
        services:
          pihole:
            image: pihole/pihole:latest
            restart: unless-stopped
            ports:
              - "53:53"
              - "53:53/udp"
            cap_add:
              - NET_ADMIN
            environment:
              TZ: "Europe/Vienna"
              VIRTUAL_HOST: pihole.our-domain.com
              VIRTUAL_PORT: 80
            healthcheck:
              test: ["CMD", "curl", "-f", "http://localhost:80/"]
              interval: 30s
              timeout: 20s
              retries: 3
            volumes:
              - /opt/pihole/data/pihole-config:/etc/pihole
              - /opt/pihole/data/pihole-dnsmasq.d:/etc/dnsmasq.d
            networks:
              - traefik-ingress
            deploy:
              replicas: 1
              placement:
                constraints:
                  - node.role == worker
              labels:
                - traefik.docker.network=traefik-ingress
                - traefik.http.routers.pihole-http.entrypoints=web
                - traefik.http.routers.pihole-http.rule=Host(`pihole.our-domain.com`)
                - traefik.http.routers.pihole-http.service=pihole-http
                - traefik.http.services.pihole-http.loadbalancer.server.port=80
                - traefik.http.routers.pihole-http.middlewares=pihole-main
                - traefik.http.middlewares.pihole-main.chain.middlewares=frame-deny,browser-xss-filter
                - traefik.http.middlewares.frame-deny.headers.framedeny=true
                - traefik.http.middlewares.browser-xss-filter.headers.browserxssfilter=true
        networks:
          traefik-ingress:
            external: true
```

We make sure to expose port `53` for **DNS** on all nodes, and configure the
proper `labels` on our service so that _Traefik_ can pick it up.
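
The middleware chain attached above should also be observable from the outside: `framedeny` adds an `X-Frame-Options: DENY` header to responses. A sketch of checking for it (hypothetical tasks; `swarm_node_ip` is an assumed placeholder):

```yaml
---
# Hypothetical check; `swarm_node_ip` stands in for any node's address.
- name: Fetch the Pi-hole page through Traefik
  ansible.builtin.uri:
    url: "http://{{ swarm_node_ip }}/"
    headers:
      Host: pihole.our-domain.com
    status_code: 200
  register: pihole_response

- name: Assert the frame-deny middleware is applied
  ansible.builtin.assert:
    that:
      - pihole_response.x_frame_options == 'DENY'
```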

Once deployed, and once your _DNS_ is pointing properly, `pihole.our-domain.com`
is waiting for you. This also shows us that the networking between nodes works
properly. Let's test it out.

```shell
$ nslookup duckduckgo.com pihole.our-domain.com
Server:         pihole.our-domain.com
Address:        192.168.1.100#53

Non-authoritative answer:
Name:    duckduckgo.com
Address: 52.142.124.215
```

Alright, it seems that our _Pi-hole_ works.


## Conclusion {#conclusion}

On these small Raspberry Pis, the cluster seems to be working very well. The
_Pi-hole_ has been running my internal _DNS_ without any issues for a few days.
There are a few improvements that can be made to this setup, mainly the
deployment of an _SSL_ cert. That may come in the future, time permitting. Stay
safe, until the next one!