enhance(): Rewriting the blog in ox-hugo
commit fd4d4df06f

@@ -0,0 +1,2 @@
(("content-org/"
  . ((org-mode . ((eval . (org-hugo-auto-export-mode)))))))

@@ -0,0 +1 @@
*.png filter=lfs diff=lfs merge=lfs -text

@@ -0,0 +1,3 @@
[submodule "themes/cactus"]
	path = themes/cactus
	url = https://github.com/monkeyWzr/hugo-theme-cactus.git
@@ -0,0 +1,14 @@
#+TITLE: TODO
#+AUTHOR: Elia el Lazkani
#+DESCRIPTION: List of TODOs
#+TAGS: TODO

* Work left to be done

** TODO Switch to =cactus=

The recommendation was to use ~cactus~ instead of ~smol~.
I will need to reconfigure a few things with cactus.

- [ ] Remove the favicon
- [ ] Configure homepage
@@ -0,0 +1,6 @@
---
title: "{{ replace .Name "-" " " | title }}"
date: {{ .Date }}
draft: true
---
@@ -0,0 +1,52 @@
baseURL: https://blog.lazkani.io
theme: cactus
title: The DevOps Blog
author: Elia el Lazkani
copyright: Elia el Lazkani
languageCode: en-US
enableRobotsTXT: true
pygmentsUseClasses: false
pygmentsCodefences: true
pygmentsStyle: monokai
params:
  description: 'The DevOps Blog'
  title: The DevOps Blog
  show_updated: true
  showReadTime: true
markup:
  highlight:
    anchorLineNos: false
    codeFences: true
    guessSyntax: false
    hl_Lines: ""
    lineAnchors: ""
    lineNoStart: 1
    lineNos: false
    lineNumbersInTable: true
    noClasses: true
    tabWidth: 4
  goldmark:
    renderer:
      unsafe: true
menu:
  main:
    - identifier: "home"
      name: "Home"
      url: "/"
      weight: 1
    - identifier: "posts"
      name: "Posts"
      url: "/posts/"
      weight: 2
    - identifier: "categories"
      name: "Categories"
      url: "/categories/"
      weight: 3
    - identifier: "tags"
      name: "Tags"
      url: "/tags/"
      weight: 4
    - identifier: "rss"
      name: "RSS"
      url: "/posts/index.xml"
      weight: 5
File diff suppressed because it is too large
BIN content-org/images/calendar-organization-with-org/01-calendar-overview.png (Stored with Git LFS) Normal file. Binary file not shown.
BIN content-org/images/calendar-organization-with-org/02-calendar-day-overview.png (Stored with Git LFS) Normal file. Binary file not shown.
BIN content-org/images/calendar-organization-with-org/03-calendar-day-closed-item-overview.png (Stored with Git LFS) Normal file. Binary file not shown.
BIN content-org/images/linux-containers/container-neofetch-fedora.png (Stored with Git LFS) Normal file. Binary file not shown.
BIN content-org/images/linux-containers/container-neofetch-ubuntu.png (Stored with Git LFS) Normal file. Binary file not shown.
BIN content-org/images/local-kubernetes-cluster-on-kvm/01-add-cluster.png (Stored with Git LFS) Normal file. Binary file not shown.
BIN content-org/images/local-kubernetes-cluster-on-kvm/02-custom-cluster.png (Stored with Git LFS) Normal file. Binary file not shown.
BIN content-org/images/local-kubernetes-cluster-on-kvm/03-calico-networkProvider.png (Stored with Git LFS) Normal file. Binary file not shown.
BIN content-org/images/local-kubernetes-cluster-on-kvm/04-nginx-ingressDisabled.png (Stored with Git LFS) Normal file. Binary file not shown.
BIN content-org/images/local-kubernetes-cluster-on-kvm/05-customize-nodes.png (Stored with Git LFS) Normal file. Binary file not shown.
BIN content-org/images/local-kubernetes-cluster-on-kvm/06-registered-nodes.png (Stored with Git LFS) Normal file. Binary file not shown.
BIN content-org/images/local-kubernetes-cluster-on-kvm/07-kubernetes-cluster.png (Stored with Git LFS) Normal file. Binary file not shown.
BIN content-org/images/my-path-down-the-road-of-cloudflare-s-redirect-loop/flexible-encryption.png (Stored with Git LFS) Normal file. Binary file not shown.
BIN content-org/images/my-path-down-the-road-of-cloudflare-s-redirect-loop/full-encryption.png (Stored with Git LFS) Normal file. Binary file not shown.
BIN content-org/images/my-path-down-the-road-of-cloudflare-s-redirect-loop/too-many-redirects.png (Stored with Git LFS) Normal file. Binary file not shown.
BIN content-org/images/simple-cron-monitoring-with-healthchecks/borgbackup-healthchecks-logs.png (Stored with Git LFS) Normal file. Binary file not shown.
BIN content-org/images/simple-cron-monitoring-with-healthchecks/borgbackup-healthchecks.png (Stored with Git LFS) Normal file. Binary file not shown.
BIN content-org/images/weechat-ssh-and-notification/01-weechat-weenotify.png (Stored with Git LFS) Normal file. Binary file not shown.
BIN content-org/images/yet-another-rss-reader-move/01-elfeed-org-configuration.png (Stored with Git LFS) Normal file. Binary file not shown.
BIN content-org/images/yet-another-rss-reader-move/02-elfeed-search.png (Stored with Git LFS) Normal file. Binary file not shown.

@@ -0,0 +1,40 @@
+++
title = "About"
author = ["Elia el Lazkani"]
lastmod = 2021-06-27T23:58:31+02:00
draft = false
weight = 2002
noauthor = true
nocomment = true
nodate = true
nopaging = true
noread = true
[menu.main]
  weight = 2002
  identifier = "about"
+++

## Who am I ? {#who-am-i}

I am a DevOps cloud engineer with a passion for technology, automation, Linux and OpenSource.
I've been on Linux since the _early_ 2000's and have contributed, in some small capacity, to some open source projects along the way.

I dabble in this space and I blog about it. This is how I learn, this is how I evolve.


## Contact Me {#contact-me}

If, for some reason, you'd like to get in touch, you have several options.

- Find me on [libera](https://libera.chat/) in `#LearnAndTeach`.
- Email me at `blog[at]lazkani[dot]io`

If you use _GPG_, and you should, my public key is `2383 8945 E07E 670A 4BFE 39E6 FBD8 1F2B 1F48 8C2B`


## Projects {#projects}

- [blog.lazkani.io](https://gitea.project42.io/Elia/blog.lazkani.io): The DevOps [blog](https://blog.lazkani.io)
- [weenotify](https://gitlab.com/elazkani/weenotify): an official [weechat](https://weechat.org) notification plugin.
- [go-cmw](https://gitlab.com/elazkani/go-cmw): a terminal weather application. It can be easily integrated into `tmux` or used in all sorts of ways.
- [rundeck-resources](https://gitlab.com/elazkani/rundeck-resources): a Python tool to query resources from different sources and export them into a data structure that [Rundeck](https://www.rundeck.com/open-source) can consume. This tool can be found on [PyPI](https://pypi.org/project/rundeck-resources/).
@@ -0,0 +1,46 @@
+++
title = "FAQ"
author = ["Elia el Lazkani"]
lastmod = 2021-06-27T23:58:28+02:00
draft = false
weight = 2001
noauthor = true
nocomment = true
nodate = true
nopaging = true
noread = true
[menu.main]
  weight = 2001
  identifier = "faq"
+++

## What is this ? {#what-is-this}

This is my humble blog where I post things related to DevOps, in the hope that I, or someone else, might benefit from it.


## Wait what ? What is DevOps ? {#wait-what-what-is-devops}

[Duckduckgo](https://duckduckgo.com/?q=what+is+devops+%3F&t=ffab&ia=web&iax=about) defines DevOps as:

> DevOps is a software engineering culture and practice that aims at unifying
> software development and software operation. The main characteristic of the
> DevOps movement is to strongly advocate automation and monitoring at all
> steps of software construction, from integration, testing, releasing to
> deployment and infrastructure management. DevOps aims at shorter development
> cycles, increased deployment frequency, and more dependable releases,
> in close alignment with business objectives.

In short, we build an infrastructure that is easily deployable, maintainable and, in all forms, makes the lives of the developers a breeze.


## What do you blog about ? {#what-do-you-blog-about}

Anything and everything related to DevOps. The field is very big and complex, with a lot of different tools and technologies involved.

I try to blog about interesting and new things as much as possible, when time permits.


## Does this blog have **RSS** ? {#does-this-blog-have-rss}

Yup, here's the [link](/posts/index.xml).
@@ -0,0 +1,228 @@
+++
title = "A Python Environment Setup"
author = ["Elia el Lazkani"]
date = 2021-06-17T21:00:00+02:00
lastmod = 2021-06-28T00:01:06+02:00
tags = ["python", "pipx", "pyenv", "virtual-environment", "virtualfish"]
categories = ["misc"]
draft = false
+++

I've been told that `python` package management is bad. I have seen some really bad practices online, asking you to run commands here and there, without an understanding of the bigger picture or of what they do, and sometimes with escalated privileges.

Over the years, I have compiled a list of practices I follow and a list of tools I use. I hope to be able to share some of the knowledge I've acquired and show you a different way of doing things. You might learn about a new tool, or a new use for a tool. Come along for the ride !

<!--more-->


## Python {#python}

As most know, [Python](https://www.python.org/) is an interpreted programming language. I am not going to go into the details of the language in this post, I will only talk about management.

If you want to develop in Python, you need to install libraries. You can find _some_ in your package manager but, let's face it, `pip` is your way.

The majority of _Linux_ distributions will have Python installed, as a lot of system packages now rely on it; even some package managers do.

Okay, this is the last time I actually use the system's Python. What ? Why ? you ask !


## pyenv {#pyenv}

I introduce you to [pyenv](https://github.com/pyenv/pyenv). Pyenv is a Python version management tool; it allows you to install and manage different versions of Python as a _user_.

Beautiful, music to my ears.

Let's get it from the package manager; this is a great use of the package manager, if it offers an up-to-date version of the package.

```bash
sudo pacman -S pyenv
```

If you're not using an _Archlinux_ based distribution, follow the instructions on their [webpage](https://github.com/pyenv/pyenv#installation).

Alright ! Now that we've got ourselves pyenv, let's configure it real quick.

Following the docs, I created `~/.config/fish/config.d/pyenv.fish` and put the following in it.

```fish
# Add pyenv executable to PATH by running
# the following interactively:

set -Ux PYENV_ROOT $HOME/.pyenv
set -U fish_user_paths $PYENV_ROOT/bin $fish_user_paths

# Load pyenv automatically by appending
# the following to ~/.config/fish/config.fish:

status is-login; and pyenv init --path | source
```

Open a new shell and you're locked, loaded and ready to go !


### Setup the environment {#setup-the-environment}

This is the first building block of my environment. We start by querying for the Python versions available to us.

```bash
pyenv install --list
```

Then, we install the latest Python version. Yes, even if it's an upgrade; I'll cover the upgrade path as well, as we go along.
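
The post never spells the install command out, so for completeness, a sketch; the version number is illustrative (3.9.5 was the latest at the time of writing), pick yours from the list above.

```bash
pyenv install 3.9.5
```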

Set everything up to use the newly installed version.

First, we set the global Python version for our _user_.

```bash
pyenv global 3.9.5
```

Then, we switch our current shell's Python version, instead of opening a new shell.

```bash
pyenv shell 3.9.5
```

That was easy. We test that everything works as expected by checking the version.

```bash
pyenv version
```

Now, if you do a `which` on the `python` executable, you will find that it is in the `pyenv` shims directory.
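
The shim trick is simply PATH precedence: pyenv puts its shims directory first, so its `python` wins over the system one. A toy illustration of the mechanism, using a throwaway `/tmp` directory rather than pyenv itself:

```bash
# Create a fake "shims" directory with its own python in it
mkdir -p /tmp/shims
printf '#!/bin/sh\necho "shimmed python"\n' > /tmp/shims/python
chmod +x /tmp/shims/python

# With that directory prepended to PATH, lookup finds the shim first
PATH="/tmp/shims:$PATH" python
# prints: shimmed python
```

pyenv does the same thing, except its shims dispatch to whichever real interpreter is currently selected.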


### Upgrade {#upgrade}

In the **future**, the upgrade path is exactly the same as the setup path shown above. You query for the list of available Python versions, choose the latest and move on from there.
Very easy, very simple.


## pip {#pip}

[pip](https://pypi.org/project/pip/) is the package installer for Python.

At this stage, you have to understand that you are using a Python version installed by _pyenv_ as your _user_. The pip provided, if you do a `which`, is also in the same shims directory.

Using `pip` at this stage as a _user_ is better than running it as _root_; it doesn't touch your system, only your user. But we can do **one** better. I'm going to use `pip` as a _user_ exactly once !

I know, you will have a lot of questions at this point as to why. You will see; patience is a virtue.


## pipx {#pipx}

Meet [pipx](https://github.com/pypa/pipx). This tool is an **amazing** companion for _DevOps_ engineers and _developers_ alike. Why ? you would ask.

It, basically, creates Python _virtual environments_ for packages you want to have access to _globally_. For example, I'd like to have access to a Python **LSP** server on the go.
This way my text editor has access to it too and, of course, can make use of it freely. Anyway, let's cut this short and show you; you will understand better.

Let's use our only `pip` command as a _user_ to install `pipx`.

```bash
pip install --user pipx
```

<div class="admonition warning">
<p class="admonition-title">warning</p>

You are setting yourself up for a **world of hurt** if you use `sudo` with `pip` or run it as `root`. **ONLY** run commands as `root` or with escalated privileges when you know what you're doing.

</div>


### LSP Server {#lsp-server}

As I gave the **LSP** server as an example, let's go ahead and install it, along with some other Python packages needed globally by things like _emacs_.

```bash
pipx install black
pipx install ipython
pipx install isort
pipx install nose
pipx install pytest
pipx install python-lsp-server
```

Now each one is in its own happy little _virtual environment_, separated from any other dependency but its own. Isn't that lovely ?

If you try to run `ipython`, you will see that it actually works. If you look deeper at it, you will see that it points to `~/.local/bin/ipython`, which is a symlink to the actual package in a _pipx_ _virtual environment_.


### Upgrade {#upgrade}

After you **set** a new Python version with _pyenv_, you simply reinstall everything.

```bash
pipx reinstall-all
```

And like magic, everything gets recreated using the _newly_ set version of Python.


## virtualfish {#virtualfish}

Now that _pipx_ is installed, let's go ahead and install something to manage our Python _virtual environments_ on demand, for use whenever we need to, for targeted projects.

Some popular choices people use are [Pipenv](https://pipenv.pypa.io/en/latest/), [Poetry](https://python-poetry.org/), [virtualenv](https://virtualenv.pypa.io/en/latest/) and plain and simple Python with the `venv` module.
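
For reference, the plain `venv` route that these tools build on looks like this; a minimal sketch, where the `/tmp` path is illustrative and `--without-pip` just keeps the example self-contained:

```bash
# Create a bare virtual environment and show it is isolated:
# the environment's interpreter reports the venv as its prefix.
python3 -m venv --without-pip /tmp/demo-venv
/tmp/demo-venv/bin/python -c 'import sys; print(sys.prefix)'
# prints: /tmp/demo-venv
```

The higher-level tools add naming, activation and bookkeeping on top of exactly this.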

You're welcome to play with all of them. Considering I use _fish_ as my default _shell_, I like to use [virtualfish](https://virtualfish.readthedocs.io/en/latest/).

Let's install it.

```bash
pipx install virtualfish
```

This offers me a new command: `vf`. With `vf`, I can create Python _virtual environments_ and they will all be saved in a directory of my choosing.


### Setup {#setup}

Let's create one for [Ansible](https://docs.ansible.com/ansible/latest/index.html).

```bash
vf new ansible
```

This should **activate** it. Then, we install _Ansible_.

```bash
pip install ansible molecule docker
```

At this stage, you will notice that you have `ansible` installed. You will also notice that all the _pipx_ packages are still available.

If you want to tie _virtualfish_ to a specific directory, use `vf connect`.
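
A hypothetical session (the directory name is made up for the example): with an environment active, `vf connect` ties it to the current directory, so entering that directory activates it again.

```text
$ cd ~/projects/ansible-work
$ vf activate ansible
$ vf connect
```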


### Upgrade {#upgrade}

To _upgrade_ the Python version of all of our _virtual environments_, _virtualfish_ makes it as easy as

```bash
vf upgrade
```

And we're done !


## Workflow {#workflow}

At this stage, you have an idea about the tools I use and where their scope falls. I like them because they are _limited_ to their own scope; each has its own little domain where it reigns.

- I use **pyenv** to install and manage different versions of Python for testing purposes, while I stay on the latest.
- I use **pipx** for the commands that I need access to _globally_ as a user.
- I use **virtualfish** to create one or more _virtual environments_ per project I work on.

With this setup, I can test against different versions of Python by creating several _virtual environments_, each with a different version; or against two versions of the tool I'm testing, while keeping the Python version static.
It could also be different versions of a library, testing forward compatibility for example.

At each step, I have an upgrade path to keep all my environments running the latest versions. I also get a lot of flexibility through `requirements.txt` files, and their _development_ or _testing_ counterparts.
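
As a sketch of that last point, a project can pin its runtime dependencies in `requirements.txt` and keep development-only extras in a separate file; the file paths, names and pins below are illustrative:

```bash
# A hypothetical project's pinned runtime dependencies
printf 'ansible==4.1.0\nmolecule==3.3.4\n' > /tmp/requirements.txt
# Development-only extras live in their own file
printf 'pytest\nblack\n' > /tmp/requirements-dev.txt

# Inside an activated virtual environment you would then run:
#   pip install -r /tmp/requirements.txt -r /tmp/requirements-dev.txt
cat /tmp/requirements.txt
```

This keeps each environment reproducible while letting you install only what a given task needs.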


## Conclusion {#conclusion}

As you can see, with a little bit of knowledge and by standing on the shoulders of giants, you can easily manage a Python environment entirely as a _user_.
You have full access to a wide array of Python distributions to play with. Endless different versions of packages, _globally_ and _locally_ installed.
If you create _virtual environments_ for each of your projects, you won't fall into the common pitfalls of versioning hell.
Keep your _virtual environments_ numerous, small and dedicated to projects, and you won't face any major problems keeping your system clean yet up to date.
@@ -0,0 +1,167 @@
+++
title = "A Quick ZFS Overview on Linux"
author = ["Elia el Lazkani"]
date = 2020-01-27T21:00:00+01:00
lastmod = 2021-06-28T00:01:00+02:00
tags = ["zfs", "file-system"]
categories = ["misc"]
draft = false
+++

I have, for years, been interested in _file systems_; specifically, a _file system_ to run my personal systems on. For most people **Ext4** is good enough, and that is totally fine. But, as a power user, I like to have more control, more features and more options out of my file system.

I have played with most of the file systems on Linux, and have been using **Btrfs** for a few years now. I have worked with NAS systems running on **ZFS** and have been very impressed by it. The only problem was that **ZFS** wasn't well supported on Linux at the time. **Btrfs** promised to be the native **ZFS** replacement for Linux, especially as it was backed by giants like Oracle and RedHat. My decision at that point was made; and yes, that was before RedHat's support for **XFS**, which is impressive on its own. Recently though, a new project gave everyone hope. [OpenZFS](http://www.open-zfs.org/wiki/Main%5FPage) came to life and so did [ZFS on Linux](https://zfsonlinux.org/).

<!--more-->

Linux has had **ZFS** support for a while now, but mostly to manage a **ZFS** _file system_, so I kept watching until I saw a blog post by **Ubuntu** entitled [Enhancing our ZFS support on Ubuntu 19.10 -- an introduction](https://ubuntu.com/blog/enhancing-our-zfs-support-on-ubuntu-19-10-an-introduction).

In the blog post above, I read the following:

> We want to support ZFS on root as an experimental installer option, initially for desktop, but keeping the layout extensible for server later on. The desktop will be the first beneficiary in Ubuntu 19.10. Note the use of the term 'experimental' though!

My eyes widened at this point. I knew that **Ubuntu** has had native **ZFS** support since 2016, but now I could install it with one click. At that point I was all in, and I went back to **Ubuntu**.


## Ubuntu on root ZFS {#ubuntu-on-root-zfs}

You heard me right, the **Ubuntu** installer offers an 'experimental' install on **ZFS**. I made the decision based on the well-tested stability of **ZFS** in production environments and its ability to offer me flexibility and an easy way to back up and recover my data.
In other words, if **Ubuntu** doesn't work out, **ZFS** is still there and I can install whatever I like on top. If you are familiar with **ZFS**, you know exactly what I mean; and I have barely scratched the surface of its capabilities.

So here I was, with **Ubuntu** installed on my laptop on root **ZFS**. I just had to take a look.

```text
# zpool status -v
  pool: bpool
 state: ONLINE
status: The pool is formatted using a legacy on-disk format. The pool can
	still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
	pool will no longer be accessible on software that does not support
	feature flags.
  scan: none requested
config:

	NAME         STATE     READ WRITE CKSUM
	bpool        ONLINE       0     0     0
	  nvme0n1p4  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: none requested
config:

	NAME         STATE     READ WRITE CKSUM
	rpool        ONLINE       0     0     0
	  nvme0n1p5  ONLINE       0     0     0

errors: No known data errors
```

<div class="admonition note">
<p class="admonition-title">Note</p>

I have read somewhere in a blog about **Ubuntu** that I should not run an upgrade on the boot pool.

</div>

and it's running on...

```text
# uname -s -v -i -o
Linux #28-Ubuntu SMP Wed Dec 18 05:37:46 UTC 2019 x86_64 GNU/Linux
```

Well, that was pretty easy.


## ZFS Pools {#zfs-pools}

Let's take a look at how the installer has configured the _pools_.

```text
# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
bpool  1,88G   158M  1,72G        -         -      -     8%  1.00x  ONLINE  -
rpool   472G  7,91G   464G        -         -     0%     1%  1.00x  ONLINE  -
```

So it creates a _boot_ pool and a _root_ pool. Maybe looking at the **datasets** would give us a better idea.


## ZFS Datasets {#zfs-datasets}

Let's look at a sanitized version of the datasets.

```text
# zfs list
NAME                                               USED  AVAIL  REFER  MOUNTPOINT
bpool                                              158M  1,60G   176K  /boot
bpool/BOOT                                         157M  1,60G   176K  none
bpool/BOOT/ubuntu_xxxxxx                           157M  1,60G   157M  /boot
rpool                                             7,92G   449G    96K  /
rpool/ROOT                                        4,53G   449G    96K  none
rpool/ROOT/ubuntu_xxxxxx                          4,53G   449G  3,37G  /
rpool/ROOT/ubuntu_xxxxxx/srv                        96K   449G    96K  /srv
rpool/ROOT/ubuntu_xxxxxx/usr                       208K   449G    96K  /usr
rpool/ROOT/ubuntu_xxxxxx/usr/local                 112K   449G   112K  /usr/local
rpool/ROOT/ubuntu_xxxxxx/var                      1,16G   449G    96K  /var
rpool/ROOT/ubuntu_xxxxxx/var/games                  96K   449G    96K  /var/games
rpool/ROOT/ubuntu_xxxxxx/var/lib                  1,15G   449G  1,04G  /var/lib
rpool/ROOT/ubuntu_xxxxxx/var/lib/AccountServices    96K   449G    96K  /var/lib/AccountServices
rpool/ROOT/ubuntu_xxxxxx/var/lib/NetworkManager    152K   449G   152K  /var/lib/NetworkManager
rpool/ROOT/ubuntu_xxxxxx/var/lib/apt              75,2M   449G  75,2M  /var/lib/apt
rpool/ROOT/ubuntu_xxxxxx/var/lib/dpkg             36,5M   449G  36,5M  /var/lib/dpkg
rpool/ROOT/ubuntu_xxxxxx/var/log                  11,0M   449G  11,0M  /var/log
rpool/ROOT/ubuntu_xxxxxx/var/mail                   96K   449G    96K  /var/mail
rpool/ROOT/ubuntu_xxxxxx/var/snap                  128K   449G   128K  /var/snap
rpool/ROOT/ubuntu_xxxxxx/var/spool                 112K   449G   112K  /var/spool
rpool/ROOT/ubuntu_xxxxxx/var/www                    96K   449G    96K  /var/www
rpool/USERDATA                                    3,38G   449G    96K  /
rpool/USERDATA/user_yyyyyy                        3,37G   449G  3,37G  /home/user
rpool/USERDATA/root_yyyyyy                        7,52M   449G  7,52M  /root
```

<div class="admonition note">
<p class="admonition-title">Note</p>

The installer has created some random IDs that I have not figured out whether they are totally random or mapped to something, so I have sanitized them.
I also sanitized the user, of course. ;)

</div>

It looks like the installer created a bunch of datasets with their respective mountpoints.


## ZFS Properties {#zfs-properties}

**ZFS** has a long list of features, tunable in different ways; one of those ways is through properties. Let's have a look.

```text
# zfs get all rpool
NAME   PROPERTY       VALUE                 SOURCE
rpool  type           filesystem            -
rpool  creation       vr jan 24 23:04 2020  -
rpool  used           7,91G                 -
rpool  available      449G                  -
rpool  referenced     96K                   -
rpool  compressratio  1.43x                 -
rpool  mounted        no                    -
rpool  quota          none                  default
rpool  reservation    none                  default
rpool  recordsize     128K                  default
rpool  mountpoint     /                     local
...
```

This gives us an idea of the properties set on the specified dataset; in this case, the _rpool_ root dataset.
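
If you only care about one property, `zfs get` also takes a property name instead of `all`; a hypothetical example, with a made-up value, just to show the shape of the output:

```text
# zfs get compression rpool
NAME   PROPERTY     VALUE  SOURCE
rpool  compression  lz4    local
```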


## Conclusion {#conclusion}

I read in a blog post that the **Ubuntu** team responsible for **ZFS** support has followed all the **ZFS** best practices in the installer.
I have no way of verifying that, as I am not a **ZFS** expert, but I'll be happy to take their word for it until I learn more.
What is certain for now is that I am running on **ZFS**, and I will be enjoying its features to the fullest.
@@ -0,0 +1,502 @@
+++
title = "Ansible testing with Molecule"
author = ["Elia el Lazkani"]
date = 2019-06-21T21:00:00+02:00
lastmod = 2021-06-28T00:00:33+02:00
tags = ["ansible", "molecule"]
categories = ["configuration-management"]
draft = false
+++

When I first started using [ansible](https://www.ansible.com/), I did not know about [molecule](https://molecule.readthedocs.io/en/latest/). It was a bit daunting to start a _role_ from scratch and try to develop it without having the ability to test it. Then a co-worker of mine told me about molecule, and everything changed.

<!--more-->

I do not have any of the tools I need installed on this machine, so I will go through, step by step, how I set up ansible and molecule on any new machine I come across for writing ansible roles.


## Requirements {#requirements}

What we are trying to achieve in this post is a working ansible role that can be tested inside a docker container. To achieve that, we need docker installed on the system. Follow the instructions on [installing docker](https://docs.docker.com/install/) found on the docker website.


## Good Practices {#good-practices}

First things first. Let's start by making sure that we have Python installed properly on the system.

```text
$ python --version
Python 3.7.1
```

Because, in this case, I have _python3_ installed, I can create a _virtualenv_ more easily, without the use of external tools.

```text
# Create the directory to work with
$ mkdir -p sandbox/test-roles
# Navigate to the directory
$ cd sandbox/test-roles/
# Create the virtualenv
~/sandbox/test-roles $ python -m venv .ansible-venv
# Activate the virtualenv
~/sandbox/test-roles $ source .ansible-venv/bin/activate
# Check that your virtualenv activated properly
(.ansible-venv) ~/sandbox/test-roles $ which python
/home/elijah/sandbox/test-roles/.ansible-venv/bin/python
```

At this point, we can install the required dependencies.

```text
$ pip install ansible molecule docker
Collecting ansible
  Downloading https://files.pythonhosted.org/packages/56/fb/b661ae256c5e4a5c42859860f59f9a1a0b82fbc481306b30e3c5159d519d/ansible-2.7.5.tar.gz (11.8MB)
    100% |████████████████████████████████| 11.8MB 3.8MB/s
Collecting molecule
  Downloading https://files.pythonhosted.org/packages/84/97/e5764079cb7942d0fa68b832cb9948274abb42b72d9b7fe4a214e7943786/molecule-2.19.0-py3-none-any.whl (180kB)
    100% |████████████████████████████████| 184kB 2.2MB/s

...

Successfully built ansible ansible-lint anyconfig cerberus psutil click-completion tabulate tree-format pathspec future pycparser arrow
Installing collected packages: MarkupSafe, jinja2, PyYAML, six, pycparser, cffi, pynacl, idna, asn1crypto, cryptography, bcrypt, paramiko, ansible, pbr, git-url-parse, monotonic, fasteners, click, colorama, sh, python-gilt, ansible-lint, pathspec, yamllint, anyconfig, cerberus, psutil, more-itertools, py, attrs, pluggy, atomicwrites, pytest, testinfra, ptyprocess, pexpect, click-completion, tabulate, future, chardet, binaryornot, poyo, urllib3, certifi, requests, python-dateutil, arrow, jinja2-time, whichcraft, cookiecutter, tree-format, molecule, docker-pycreds, websocket-client, docker
Successfully installed MarkupSafe-1.1.0 PyYAML-3.13 ansible-2.7.5 ansible-lint-3.4.23 anyconfig-0.9.7 arrow-0.13.0 asn1crypto-0.24.0 atomicwrites-1.2.1 attrs-18.2.0 bcrypt-3.1.5 binaryornot-0.4.4 cerberus-1.2 certifi-2018.11.29 cffi-1.11.5 chardet-3.0.4 click-6.7 click-completion-0.3.1 colorama-0.3.9 cookiecutter-1.6.0 cryptography-2.4.2 docker-3.7.0 docker-pycreds-0.4.0 fasteners-0.14.1 future-0.17.1 git-url-parse-1.1.0 idna-2.8 jinja2-2.10 jinja2-time-0.2.0 molecule-2.19.0 monotonic-1.5 more-itertools-5.0.0 paramiko-2.4.2 pathspec-0.5.9 pbr-4.1.0 pexpect-4.6.0 pluggy-0.8.1 poyo-0.4.2 psutil-5.4.6 ptyprocess-0.6.0 py-1.7.0 pycparser-2.19 pynacl-1.3.0 pytest-4.1.0 python-dateutil-2.7.5 python-gilt-1.2.1 requests-2.21.0 sh-1.12.14 six-1.11.0 tabulate-0.8.2 testinfra-1.16.0 tree-format-0.1.2 urllib3-1.24.1 websocket-client-0.54.0 whichcraft-0.5.2 yamllint-1.11.1
```

## Creating your first ansible role {#creating-your-first-ansible-role}

Once all the steps above are complete, we can start by creating our first ansible role.

```text
$ molecule init role -r example-role
--> Initializing new role example-role...
Initialized role in /home/elijah/sandbox/test-roles/example-role successfully.

$ tree example-role/
example-role/
├── defaults
│   └── main.yml
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── molecule
│   └── default
│       ├── Dockerfile.j2
│       ├── INSTALL.rst
│       ├── molecule.yml
│       ├── playbook.yml
│       └── tests
│           ├── __pycache__
│           │   └── test_default.cpython-37.pyc
│           └── test_default.py
├── README.md
├── tasks
│   └── main.yml
└── vars
    └── main.yml

9 directories, 12 files
```

You can find what each directory is for and how ansible works by visiting [docs.ansible.com](https://docs.ansible.com).

### `meta/main.yml` {#meta-main-dot-yml}

The meta file needs to be modified and filled with information about the role. This file is not required if you are keeping the role for yourself, for example. But it is a good idea to have as much information as possible in it if the role is going to be released. In my case, I don't need any fanciness as this is just sample code.

```yaml
---
galaxy_info:
  author: Elia el Lazkani
  description: This is an example ansible role to showcase molecule at work
  license: license (BSD-2)
  min_ansible_version: 2.7
  galaxy_tags: []
dependencies: []
```

### `tasks/main.yml` {#tasks-main-dot-yml}

This is where the magic is set in motion. Tasks are the smallest entities in a role; they perform small, idempotent actions. Let's write a few simple tasks to create a user and install a service.

```yaml
---
# Create the user example
- name: Create 'example' user
  user:
    name: example
    comment: Example user
    shell: /bin/bash
    state: present
    create_home: yes
    home: /home/example

# Install nginx
- name: Install nginx
  apt:
    name: nginx
    state: present
    update_cache: yes
  notify: Restart nginx
```

### `handlers/main.yml` {#handlers-main-dot-yml}

If you noticed, we are notifying a handler after installing _nginx_. All notified handlers run after all the tasks complete, and each handler runs only once. This is a good way to make sure that you don't restart _nginx_ multiple times if you notify the handler more than once.
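
The run-once semantics of notified handlers can be sketched in plain Python. This is a hypothetical illustration of the idea, not Ansible's actual implementation: notifications are collected while tasks run, de-duplicated, and flushed only once at the end.

```python
# Hypothetical sketch of handler de-duplication: tasks that report a change
# queue a notification, but each distinct handler runs only once, after all
# tasks have completed.

def run_play(tasks, handlers):
    """Run tasks, collect unique notifications, then flush handlers once."""
    notified = []
    for task in tasks:
        changed, notify = task()
        if changed and notify and notify not in notified:
            notified.append(notify)
    # Handlers flush at the end of the play, one run per distinct handler.
    return [handlers[name]() for name in notified]

handlers = {"Restart nginx": lambda: "nginx restarted"}
tasks = [
    lambda: (True, "Restart nginx"),  # Install nginx -> changed, notifies
    lambda: (True, "Restart nginx"),  # another change, same notification
]

print(run_play(tasks, handlers))  # the handler runs only once
```

Even though two tasks notified the handler, the play restarts _nginx_ a single time.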

```yaml
---
# Handler to restart nginx
- name: Restart nginx
  service:
    name: nginx
    state: restarted
```

### `molecule/default/molecule.yml` {#molecule-default-molecule-dot-yml}

It's time to configure molecule to do what we need. We want to run our role against an ubuntu docker container, so we need to specify that in the molecule YAML file. All we need to do is change the image line to specify that we want an `ubuntu:bionic` image.

```yaml
---
dependency:
  name: galaxy
driver:
  name: docker
lint:
  name: yamllint
platforms:
  - name: instance
    image: ubuntu:bionic
provisioner:
  name: ansible
  lint:
    name: ansible-lint
scenario:
  name: default
verifier:
  name: testinfra
  lint:
    name: flake8
```

### `molecule/default/playbook.yml` {#molecule-default-playbook-dot-yml}

This is the playbook that molecule will run. Make sure that you have all the steps that you need here. I will keep this as is.

```yaml
---
- name: Converge
  hosts: all
  roles:
    - role: example-role
```

## First Role Pass {#first-role-pass}

It is time to test our role and see what's going on.

```text
(.ansible-venv) ~/sandbox/test-roles/example-role/ $ molecule converge
--> Validating schema /home/elijah/sandbox/test-roles/example-role/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── dependency
    ├── create
    ├── prepare
    └── converge

--> Scenario: 'default'
--> Action: 'dependency'
Skipping, missing the requirements file.
--> Scenario: 'default'
--> Action: 'create'

PLAY [Create] ******************************************************************

TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)

TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Build an Ansible compatible image] ***************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Create docker network(s)] ************************************************

TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=5 changed=4 unreachable=0 failed=0

--> Scenario: 'default'
--> Action: 'prepare'
Skipping, prepare playbook not configured.
--> Scenario: 'default'
--> Action: 'converge'

PLAY [Converge] ****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [example-role : Create 'example' user] ************************************
changed: [instance]

TASK [example-role : Install nginx] ********************************************
changed: [instance]

RUNNING HANDLER [example-role : Restart nginx] *********************************
changed: [instance]

PLAY RECAP *********************************************************************
instance : ok=4 changed=3 unreachable=0 failed=0
```

It looks like the **converge** step succeeded.

## Writing Tests {#writing-tests}

It is always good practice to write unit tests when you're writing code. Ansible roles should not be an exception. Molecule offers a way to run tests, which you can think of as unit tests, to make sure that what the role gives you is what you were expecting. This helps future development of the role and keeps you from falling into previously solved traps.

### `molecule/default/tests/test_default.py` {#molecule-default-tests-test-default-dot-py}

Molecule leverages the [testinfra](https://testinfra.readthedocs.io/en/latest/) project to run its tests. You can use other tools if you so wish, and there are many. In this example, we will be using _testinfra_.

```python
import os

import testinfra.utils.ansible_runner

testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
    os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('all')


def test_hosts_file(host):
    f = host.file('/etc/hosts')

    assert f.exists
    assert f.user == 'root'
    assert f.group == 'root'


def test_user_created(host):
    user = host.user("example")
    assert user.name == "example"
    assert user.home == "/home/example"


def test_user_home_exists(host):
    user_home = host.file("/home/example")
    assert user_home.exists
    assert user_home.is_directory


def test_nginx_is_installed(host):
    nginx = host.package("nginx")
    assert nginx.is_installed


def test_nginx_running_and_enabled(host):
    nginx = host.service("nginx")
    assert nginx.is_running
```

<div class="admonition warning">
<p class="admonition-title">warning</p>

Uncomment `truthy: disable` in `.yamllint` found at the base of the role.

</div>

```text
(.ansible-venv) ~/sandbox/test-roles/example-role $ molecule test
--> Validating schema /home/elijah/sandbox/test-roles/example-role/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /home/elijah/sandbox/test-roles/example-role/...
Lint completed successfully.
--> Executing Flake8 on files found in /home/elijah/sandbox/test-roles/example-role/molecule/default/tests/...
/home/elijah/.virtualenvs/world/lib/python3.7/site-packages/pycodestyle.py:113: FutureWarning: Possible nested set at position 1
EXTRANEOUS_WHITESPACE_REGEX = re.compile(r'[[({] | []}),;:]')
Lint completed successfully.
--> Executing Ansible Lint on /home/elijah/sandbox/test-roles/example-role/molecule/default/playbook.yml...
Lint completed successfully.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0

--> Scenario: 'default'
--> Action: 'dependency'
Skipping, missing the requirements file.
--> Scenario: 'default'
--> Action: 'syntax'

playbook: /home/elijah/sandbox/test-roles/example-role/molecule/default/playbook.yml

--> Scenario: 'default'
--> Action: 'create'

PLAY [Create] ******************************************************************

TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)

TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Build an Ansible compatible image] ***************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Create docker network(s)] ************************************************

TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=5 changed=4 unreachable=0 failed=0

--> Scenario: 'default'
--> Action: 'prepare'
Skipping, prepare playbook not configured.
--> Scenario: 'default'
--> Action: 'converge'

PLAY [Converge] ****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [example-role : Create 'example' user] ************************************
changed: [instance]

TASK [example-role : Install nginx] ********************************************
changed: [instance]

RUNNING HANDLER [example-role : Restart nginx] *********************************
changed: [instance]

PLAY RECAP *********************************************************************
instance : ok=4 changed=3 unreachable=0 failed=0

--> Scenario: 'default'
--> Action: 'idempotence'
Idempotence completed successfully.
--> Scenario: 'default'
--> Action: 'side_effect'
Skipping, side effect playbook not configured.
--> Scenario: 'default'
--> Action: 'verify'
--> Executing Testinfra tests found in /home/elijah/sandbox/test-roles/example-role/molecule/default/tests/...
============================= test session starts ==============================
platform linux -- Python 3.7.1, pytest-4.1.0, py-1.7.0, pluggy-0.8.1
rootdir: /home/elijah/sandbox/test-roles/example-role/molecule/default, inifile:
plugins: testinfra-1.16.0
collected 5 items

tests/test_default.py ..... [100%]

=============================== warnings summary ===============================

...

==================== 5 passed, 7 warnings in 27.37 seconds =====================
Verifier completed successfully.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0
```

I have a few warning messages (likely because I am using _python 3.7_ and some of the libraries do not yet fully support the new standards released with it), but all my tests passed.

## Conclusion {#conclusion}

Molecule is a great tool for testing ansible roles quickly while developing them. It also comes bundled with a bunch of other features from different projects that will test all aspects of your ansible code. I suggest you start using it when writing new ansible roles.

+++
title = "Automating Borg"
author = ["Elia el Lazkani"]
date = 2020-02-02T21:00:00+01:00
lastmod = 2021-06-28T00:00:27+02:00
tags = ["borgmatic", "borgbackup", "borg"]
categories = ["backup"]
draft = false
+++

In the previous blog post, entitled [BorgBackup]({{< relref "borgbackup" >}}), I talked about **borg**.
If you read that post, you will have noticed that **borg** has a lot of features.
With a lot of features comes a lot of automation.

If you were thinking about using **borg**, you would either have to set up a _simple cron_ or write an elaborate script to take care of all the different steps.

What if I told you there's another way? An easier way! The **Borgmatic** way... What would you say?

<!--more-->

## Borgmatic {#borgmatic}

**Borgmatic** is defined on their [website](https://torsion.org/borgmatic/) as follows.

> borgmatic is simple, configuration-driven backup software for servers
> and workstations. Protect your files with client-side encryption.
> Backup your databases too. Monitor it all with integrated third-party
> services.

When you get down to it, **borgmatic** uses **borg**'s _API_ to automate a list of configurable _tasks_.
This way, it saves you the trouble of writing your own scripts to automate these steps.

**Borgmatic** uses a _YAML_ configuration file. Let's configure a few tasks.

## Location {#location}

First, let's start by configuring the locations that **borg** is going to be working with.

```yaml
location:
  source_directories:
    - /home/

  repositories:
    - user@backupserver:sourcehostname.borg

  one_file_system: true

  exclude_patterns:
    - /home/*/.cache
    - '*.pyc'
```

This tells **borg** that we need to back up our `/home` directories, excluding a few patterns.
Let's not forget that we told **borg** where the repository is located.

## Storage {#storage}

We need to configure the storage next.

```yaml
storage:
  # Recommended
  # encryption_passcommand: secret-tool lookup borg-repository repo-name

  encryption_passphrase: "ReallyStrongPassphrase"
  compression: zstd,15
  ssh_command: ssh -i /path/to/private/key
  borg_security_directory: /path/to/base/config/security
  archive_name_format: 'borgmatic-{hostname}-{now}'
```

In this section, we tell borg a little bit of information about our repository:
what the credentials are, where it can find them, etc.

The easy way is to go with a `passphrase`, but I recommend using an `encryption_passcommand` instead.
I also use `zstd` for compression instead of `lz4`; do your research before you change the default.
I also recommend, just as they do, the use of a security directory.

## Retention {#retention}

We can configure a retention policy for our backups, if we like.

```yaml
retention:
  keep_hourly: 7
  keep_daily: 7
  keep_weekly: 4
  keep_monthly: 6
  keep_yearly: 2

  prefix: "borgmatic-"
```

What to keep, from _hourly_ to _yearly_, is self-explanatory.
I would like to point out the `prefix` part, as it is important.
This is the _prefix_ that **borgmatic** uses to decide which backups to consider for **pruning**.

<div class="admonition warning">
<p class="admonition-title">warning</p>

Watch out for the retention `prefix`

</div>
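
The reason the `prefix` matters can be sketched conceptually: only archives whose names start with the prefix are candidates for pruning, so archives created outside **borgmatic** (or with a different `archive_name_format`) are left alone. This is an illustrative Python sketch of the idea, not borgmatic's actual code, and the archive names are made up:

```python
# Conceptual sketch: prefix-based selection of archives for pruning.
# Archives that do not match the prefix are never considered.

def archives_considered_for_pruning(archives, prefix="borgmatic-"):
    """Return only the archive names the retention policy applies to."""
    return [a for a in archives if a.startswith(prefix)]

archives = [
    "borgmatic-home-2020-01-30T22:01:30",
    "manual-snapshot-2020-01-15",  # hypothetical hand-made archive
    "borgmatic-home-2020-01-31T22:02:12",
]

print(archives_considered_for_pruning(archives))
```

The hand-made archive survives pruning untouched, which is exactly why a mismatched prefix can silently stop your retention policy from cleaning anything up.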

## Consistency {#consistency}

After the backups run, we'd like to check them for consistency.

```yaml
consistency:
  checks:
    - repository
    - archives

  check_last: 3

  prefix: "borgmatic-"
```

<div class="admonition warning">
<p class="admonition-title">warning</p>

Watch out, again, for the consistency `prefix`

</div>

## Hooks {#hooks}

Finally, hooks.

I'm going to talk about hooks a bit. Hooks can be used to back up **MySQL**, **PostgreSQL** or **MariaDB** databases.
There are also hooks for `on_error`, `before_backup`, `after_backup`, `before_everything` and `after_everything`.
You can also hook into third-party services, which you can read about on their webpage.

I deployed my own service, so I configured my own hooks.
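
As an illustration, a minimal `hooks` section might look like the following. The commands here are placeholders of my own, not from my actual setup; check the borgmatic documentation for the exact options available:

```yaml
hooks:
  before_backup:
    - echo "Starting a backup."
  after_backup:
    - echo "Backup finished."
  on_error:
    - echo "Error while creating a backup."
```

Each entry is a list of shell commands that borgmatic runs at that point in its lifecycle.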

## Borgmatic Configuration {#borgmatic-configuration}

Let's put everything together now.

```yaml
location:
  source_directories:
    - /home/

  repositories:
    - user@backupserver:sourcehostname.borg

  one_file_system: true

  exclude_patterns:
    - /home/*/.cache
    - '*.pyc'

storage:
  # Recommended
  # encryption_passcommand: secret-tool lookup borg-repository repo-name

  encryption_passphrase: "ReallyStrongPassphrase"
  compression: zstd,15
  ssh_command: ssh -i /path/to/private/key
  borg_security_directory: /path/to/base/config/security
  archive_name_format: 'borgmatic-{hostname}-{now}'

retention:
  keep_hourly: 7
  keep_daily: 7
  keep_weekly: 4
  keep_monthly: 6
  keep_yearly: 2

  prefix: "borgmatic-"

consistency:
  checks:
    - repository
    - archives

  check_last: 3

  prefix: "borgmatic-"
```

Now that we have everything together, let's save it as `/etc/borgmatic.d/home.yaml`.

## Usage {#usage}

If you have **borg** and **borgmatic** already installed on your system, and the **borgmatic** configuration file in place, you can test it out.

You can create the repository.

```text
# borgmatic init -v 2
```

You can list the backups for the repository.

```text
# borgmatic list --last 5
borgmatic-home-2020-01-30T22:01:30 Thu, 2020-01-30 22:01:42 [0000000000000000000000000000000000000000000000000000000000000000]
borgmatic-home-2020-01-31T22:02:12 Fri, 2020-01-31 22:02:24 [0000000000000000000000000000000000000000000000000000000000000000]
borgmatic-home-2020-02-01T22:01:34 Sat, 2020-02-01 22:01:45 [0000000000000000000000000000000000000000000000000000000000000000]
borgmatic-home-2020-02-02T16:01:22 Sun, 2020-02-02 16:01:32 [0000000000000000000000000000000000000000000000000000000000000000]
borgmatic-home-2020-02-02T18:01:36 Sun, 2020-02-02 18:01:47 [0000000000000000000000000000000000000000000000000000000000000000]
```

You could run a check.

```text
# borgmatic check -v 1
/etc/borgmatic.d/home.yaml: Pinging Healthchecks start
/borg/home: Running consistency checks
Remote: Starting repository check
Remote: Starting repository index check
Remote: Completed repository check, no problems found.
Starting archive consistency check...
Analyzing archive borgmatic-home-2020-02-01T22:01:34 (1/3)
Analyzing archive borgmatic-home-2020-02-02T16:01:22 (2/3)
Analyzing archive borgmatic-home-2020-02-02T18:01:36 (3/3)
Orphaned objects check skipped (needs all archives checked).
Archive consistency check complete, no problems found.

summary:
/etc/borgmatic.d/home.yaml: Successfully ran configuration file
```

But most of all, if you simply run `borgmatic` without any parameters, it will run through the whole configuration and apply all the steps.

At this point, you can simply add the `borgmatic` command to a **cron** job to run on an interval.
The other option is to configure a `systemd` **timer** and **service** to run it on an interval.
The latter is usually provided to you if you used your **package manager** to install **borgmatic**.
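
If your package does not ship one, a minimal timer and service pair can be sketched as follows. The unit file names and the `/usr/bin/borgmatic` path are assumptions; adjust them to your distribution:

```text
# /etc/systemd/system/borgmatic.service (path is an assumption)
[Unit]
Description=borgmatic backup

[Service]
Type=oneshot
ExecStart=/usr/bin/borgmatic

# /etc/systemd/system/borgmatic.timer
[Unit]
Description=Run borgmatic backup daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now borgmatic.timer` and the backup will run on the schedule set by `OnCalendar`.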

## Conclusion {#conclusion}

If you've checked out **borg** and found it too much work to script, give **borgmatic** a try.
I've been using borgmatic for a few weeks now with no issues at all.
I recently hooked it up to a monitoring system, so I have a better view of when it runs and how much time each run takes.
Also, if any of my backups fail, I get notified by email. I hope you enjoy **borg** and **borgmatic** as much as I do.

+++
title = "Bookmark with Org-capture"
author = ["Elia el Lazkani"]
date = 2021-05-27T21:00:00+02:00
lastmod = 2021-06-28T00:01:54+02:00
tags = ["org-mode", "emacs", "org-capture", "org-web-tools", "org-cliplink"]
categories = ["text-editors"]
draft = false
+++

I was reading, and watching, [Mike Zamansky](https://cestlaz.github.io/about/)'s blog post [series](https://cestlaz.github.io/stories/emacs/) about _org-capture_ and how he manages his bookmarks. His blog and video series are a big recommendation from me; he teaches me tons every time I watch his videos. His inspirational videos are what made me dig into how I could do what he's doing but... my way...

I stumbled across [this](https://dewaka.com/blog/2020/04/08/bookmarking-with-org-mode/) blog post that describes the process of using `org-cliplink` to insert the _title_ of the post into an _org-mode_ link. Basically, what I wanted to do is provide a link and get an _org-mode_ link. Sounds simple enough. Let's dig in.

<!--more-->

## Org Capture Templates {#org-capture-templates}

I will assume that you went through Mike's [part 1](https://cestlaz.github.io/posts/using-emacs-23-capture-1/) and [part 2](https://cestlaz.github.io/posts/using-emacs-24-capture-2/) posts to understand what `org-capture-templates` are and how they work. I essentially learned it from him and I do not think I can do a better job than a teacher.

Now that we understand where we need to start from, let's explain the situation. We need to find a way to call `org-capture` and provide it with a _template_. This _template_ will need to take a _url_ and add an _org-mode_ _url_ to our bookmarks. It will look something like the following.

```emacs-lisp
(setq org-capture-templates
      '(("b" "Bookmark (Clipboard)" entry (file+headline "~/path/to/bookmarks.org" "Bookmarks")
         "** %(some-function-here-to-call)\n:PROPERTIES:\n:TIMESTAMP: %t\n:END:%?\n" :empty-lines 1 :prepend t)))
```

I formatted it a bit so it would have some properties. I used `%t` to record the _timestamp_ of when I took the bookmark, and `%?` to drop me at the end for editing. Then `some-function-here-to-call` is the function to call to generate our _bookmark section_ with a title.

The blog post I alluded to earlier solved it by using [org-cliplink](https://github.com/rexim/org-cliplink). While `org-cliplink` is great for getting _titles_ and manipulating them, I don't really need that functionality. I can do it manually. Sometimes, though, I would like to copy a page... Maybe if there is a project that _could_ attempt to do someth... Got it... [org-web-tools](https://github.com/alphapapa/org-web-tools).
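
For reference, the _org-mode_ link format we are after is simple enough to sketch. This is a hypothetical Python illustration of the `[[url][title]]` shape, not the actual _org-web-tools_ implementation:

```python
# Hypothetical sketch: an org-mode link is just the URL and the page title
# wrapped in [[url][title]] form.

def org_link_for_url(url, title):
    """Build an org-mode link string from a URL and a title."""
    return f"[[{url}][{title}]]"

print(org_link_for_url("https://cestlaz.github.io/stories/emacs/",
                       "Using Emacs Series - C'est la Z"))
```

What _org-web-tools_ adds on top is fetching the page and extracting the title for us.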

### Configuring _org-capture_ with _org-web-tools_ {#configuring-org-capture-with-org-web-tools}

You would assume that you could just pop `(org-web-tools-insert-link-for-url)` into the previous block and be all done. But uhhh....

```text
Wrong number of arguments: (1 . 1), 0
```

No dice. What would seem to be the problem?

We look at the definition and we find this.

```emacs-lisp
(defun org-web-tools-insert-link-for-url (url)
  "Insert Org link to URL using title of HTML page at URL.
If URL is not given, look for first URL in `kill-ring'."
  (interactive (list (org-web-tools--get-first-url)))
  (insert (org-web-tools--org-link-for-url url)))
```

I don't know exactly why it doesn't work when called straight away, because I do not know _emacs-lisp_ at all. If you do, let me know. I suspect it has something to do with `(interactive)` and the list provided to it as arguments.

Anyway, I can see it is using `org-web-tools--org-link-for-url`, which the documentation suggests does the same thing as `org-web-tools-insert-link-for-url`, but is not exposed with `(interactive)`. Okay, we have the bits and pieces of the puzzle. Let's put it together.

First, we create the function.

```emacs-lisp
(defun org-web-tools-insert-link-for-clipboard-url ()
  "Extend =org-web-tools-insert-link-for-url= to take URL from clipboard or kill-ring"
  (interactive)
  (org-web-tools--org-link-for-url (org-web-tools--get-first-url)))
```

Then, we set our `org-capture-templates` variable to a list with our _only_ item.

```emacs-lisp
(setq org-capture-templates
      '(("b" "Bookmark (Clipboard)" entry (file+headline "~/path/to/bookmarks.org" "Bookmarks")
         "** %(org-web-tools-insert-link-for-clipboard-url)\n:PROPERTIES:\n:TIMESTAMP: %t\n:END:%?\n" :empty-lines 1 :prepend t)))
```
|
||||
|
||||
Now if we copy a link into the _clipboard_ and then call `org-capture` with the option `b`, we get prompted to edit the following before adding it to our _bookmarks_.
|
||||
|
||||
```org
|
||||
** [[https://cestlaz.github.io/stories/emacs/][Using Emacs Series - C'est la Z]]
|
||||
:PROPERTIES:
|
||||
:TIMESTAMP: <2020-09-17 do>
|
||||
:END:
|
||||
```
|
||||
|
||||
Works like a charm.
|
||||
|
||||
|
||||
### Custom URL {#custom-url}

What if we need to modify the URL in some way before providing it? I have that use case. All I needed to do was create a function that takes _input_ from the user and provides it to `org-web-tools--org-link-for-url`. How hard can that be?! Uh-oh! I said the cursed phrase, didn't I?

```emacs-lisp
(defun org-web-tools-insert-link-for-given-url ()
  "Extend =org-web-tools-insert-link-for-url= to take a user given URL"
  (interactive)
  (let ((url (read-string "Link: ")))
    (org-web-tools--org-link-for-url url)))
```

We can then hook the whole thing up to our `org-capture-templates` and we get the following.

```emacs-lisp
(setq org-capture-templates
      '(("b" "Bookmark (Clipboard)" entry (file+headline "~/path/to/bookmarks.org" "Bookmarks")
         "** %(org-web-tools-insert-link-for-clipboard-url)\n:PROPERTIES:\n:TIMESTAMP: %t\n:END:%?\n" :empty-lines 1 :prepend t)
        ("B" "Bookmark (Paste)" entry (file+headline "~/path/to/bookmarks.org" "Bookmarks")
         "** %(org-web-tools-insert-link-for-given-url)\n:PROPERTIES:\n:TIMESTAMP: %t\n:END:%?\n" :empty-lines 1 :prepend t)))
```

If we use `B` this time, it will prompt us for input.

### Configure _org-capture_ with _org-cliplink_ {#configure-org-capture-with-org-cliplink}

Recently, this setup started to fail and a friend contacted me, pointing me to my own blog post. So I decided to fix it.
My old setup used _org-cliplink_, but I moved away from it for some reason I cannot remember. It is time to move back to it.

In this setup, I got rid of the _custom function_ to enter the link manually. I believe that is why I moved away, but I cannot be certain.
Anyway, nothing worked, so why keep something that isn't working right?

All this means is that we only need to set up our `org-capture-templates`. We can do so as follows.

```emacs-lisp
(setq org-capture-templates
      '(("b" "Bookmark (Clipboard)" entry (file+headline "~/path/to/bookmarks.org" "Bookmarks")
         "** %(org-cliplink)\n:PROPERTIES:\n:TIMESTAMP: %t\n:END:%?\n" :empty-lines 1 :prepend t)))
```

Now you should have a working setup... `org-cliplink` willing!

## Conclusion {#conclusion}

I thought this was going to be harder to pull off but, alas, it was simple to figure out, even for someone who doesn't know _emacs-lisp_. I hope to get more familiar with _emacs-lisp_ over time and be able to do more. Until next time, I recommend you hook `org-capture` into your workflow. Make sure it fits your work style, otherwise you will not use it, and make your path a more productive one.

+++
title = "BorgBackup"
author = ["Elia el Lazkani"]
date = 2020-01-30T21:00:00+01:00
lastmod = 2021-06-28T00:00:24+02:00
tags = ["borg", "borgbackup"]
categories = ["backup"]
draft = false
+++

I usually lurk around **Freenode** in a few projects that I use, can learn from and/or help with. This is a great opportunity to learn new things _all the time_.

This story starts the same way, but that's where the similarities end. Someone in `#Weechat` asked a question that caught my attention because it was somewhat off topic: how do you back up your stuff?

<!--more-->

I mean, if I were asked that, I would've mentioned revision-controlled off-site repositories for the code that I have.
For the personal stuff, on the other hand, I would've admitted to simple, rudimentary solutions like `rsync`, `tar` and external drives.
So I was sort of happy with my backup solution; it has worked. Plain and simple.

I have to admit that, by modern standards, it might not offer the ability to go back in time to a certain point.
But I use _file systems_ that offer _snapshot_ capabilities. I can recover from previous snapshots and send them somewhere safe.
Archiving and encrypting those is not a simple process; I wish it was. That limits storage possibilities if you care to keep your data private.

But if you know me, you'd know that I'm always open to new ways of doing things.

I can't remember the conversation exactly, but the name **BorgBackup** was mentioned (thank you, whoever you are). That's when things changed.

## BorgBackup {#borgbackup}

[Borg](https://www.borgbackup.org/) is defined as a

> Deduplicating archiver with compression and encryption

Although this is a very accurate and encompassing definition, it doesn't really show you how _AWESOME_ this thing is.

I had to go to the docs first before I stumbled upon this video.

[BorgBackup demo on asciinema](https://asciinema.org/a/133292)

It can be a bit difficult to follow the video, I understand.

This is why I decided to write this post, to explain to you how **Borg** can back up your stuff.

## Encryption {#encryption}

Oh yeah, that's the **first** thing I look at when I consider any suggested backup solution. **Borg** offers built-in _encryption_ and _authentication_. You can read about them in detail in the [docs](https://borgbackup.readthedocs.io/en/stable/usage/init.html#encryption-modes).

So that's a check.

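As a sketch, the encryption mode is chosen when the repository is first initialized (the repository path below is a placeholder):

```text
# Create a new repository; "repokey" stores the key inside the repository,
# protected by a passphrase of your choosing.
$ borg init --encryption=repokey /path/to/repo
```
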
## Compression {#compression}

This is another thing I look for in a suggested backup solution. And I'm happy to report that **Borg** has this under its belt as well.
**Borg** currently supports _LZ4_, _zlib_, _LZMA_ and _zstd_. You can also tune the level of compression. Pretty neat!

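For illustration (repository path and archive name are placeholders), the algorithm and level are picked per backup run:

```text
# Back up ~/documents into a new archive, compressed with zstd at level 10.
$ borg create --compression zstd,10 /path/to/repo::my-archive ~/documents
```
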
## Full Backup {#full-backup}

I've watched a few videos and read a bit of their documentation, and they talk about **FULL BACKUP**.
That means every time you run **Borg**, it takes a full backup of your stuff; a full backup at that point in time, don't forget.
The implication is that you have a versioned list of your backups, and you can go back in time to any of them.

Yes, you read that right. **Borg** does a full backup every time you run it. That's a pretty neat feature.

If you're a bit ahead of me, you were gonna say, whoa there bud! I have **Gigabytes** of data, what do you mean **FULL BACKUP**? You keep saying **FULL BACKUP**.

I mean **FULL BACKUP**. Wait until you hear about the next feature.

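Every run produces a named archive in the repository; assuming a placeholder repository like before, you can list them and pick any point in time to go back to:

```text
# Every archive listed is a full, restorable point-in-time backup.
$ borg list /path/to/repo
```
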
## Deduplication {#deduplication}

Booyah! It has deduplication. Ain't that awesome? I've watched a presentation by the project's original maintainer explaining this.
I have one thing to say. It's pretty good. How good, you may ask?

My answer would be: good enough to fool me into thinking that it was taking snapshots of my data.

```text
-----------------------------------------------------------------------------
                      Original size      Compressed size    Deduplicated size
All archives:              34.59 GB              9.63 GB              1.28 GB

                      Unique chunks         Total chunks
Chunk index:                  47772               469277
```

It wasn't until I dug deeper into the matter that I understood that it was a full backup every time, with deduplication taking care of the rest.

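Statistics like the ones above come from Borg itself; assuming the same placeholder repository, they can be pulled at any time:

```text
# Show repository statistics, including compressed and deduplicated sizes.
$ borg info /path/to/repo
```
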
## Check {#check}

**Borg** offers a way to verify the consistency of the repository and the archives within. This way, you can make sure that your backups haven't been corrupted.

This is a very good feature, and in my opinion a must-have in a backup solution. **Borg** has _YOU_ covered.

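A quick sketch of a consistency check against the placeholder repository:

```text
# Verify repository metadata; --verify-data additionally checks
# the integrity of all data chunks (slower, but thorough).
$ borg check --verify-data /path/to/repo
```
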
## Restore {#restore}

A backup solution is nothing if you can't get your data back.
**Borg** gives you a few ways to get your data out.
You can create an _archive_ file out of a backup, or export a file, a directory or the whole directory tree from a backup.
You can also, if you like, mount a backup and copy things out.

<div class="admonition warning">
<p class="admonition-title">warning</p>

Mounting a **Borg** backup is done using _fuse_

</div>

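Both approaches, with placeholder repository and archive names:

```text
# Extract a whole archive into the current directory...
$ borg extract /path/to/repo::my-archive

# ...or mount it and browse it like a regular filesystem (requires fuse).
$ borg mount /path/to/repo::my-archive /mnt/backup
$ borg umount /mnt/backup
```
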
## Conclusion {#conclusion}

**Borg** is a great backup tool. It comes as an easily installable, self-contained binary, so you can use it pretty much anywhere, giving you no excuse _whatsoever_ not to use it.
The documentation is very good, and **Borg** is easy to use.
It offers all the features you need to do off-site and on-site backups of all your important data.

I'll be testing **Borg** on my data moving forward. I'll make sure to report back anything I find on the subject in the future.
+++
title = "Building k3s on a Pi"
author = ["Elia el Lazkani"]
date = 2020-08-09T21:00:00+02:00
lastmod = 2021-06-28T00:00:45+02:00
tags = ["arm", "kubernetes"]
categories = ["k3s"]
draft = false
+++

I have had a **Pi** lying around, used for a simple task, for a while now.
A few days ago, I was browsing the web, learning more about privacy, when I stumbled upon [AdGuard Home](https://adguard.com/en/welcome.html).

I have been using it as my internal DNS on top of the security and privacy layers I add to my machine.
Its benefits can be argued, but it is a DNS server after all, and I wanted to see what else it could do for me.
Anyway, I digress. I searched to see if I could find a container for **AdGuard Home**, and I did.

At this point, I started thinking about what I could do to make the [Pi](https://www.raspberrypi.org/) more useful.

That's when [k3s](https://k3s.io/) came into the picture.

<!--more-->

## Pre-requisites {#pre-requisites}

As this is not a **Pi** tutorial, I am going to assume that you have a _Raspberry Pi_ with **Raspberry Pi OS** _Buster_ installed on it.
That assumption does not mean you cannot install any other OS on the Pi and run this setup.
It only means that I have tested this on _Buster_ and that your mileage may vary.

## Prepare the Pi {#prepare-the-pi}

Now that you have _Buster_ installed, let's go ahead and [fix](https://rancher.com/docs/k3s/latest/en/advanced/#enabling-legacy-iptables-on-raspbian-buster) a small default configuration issue with it.

**K3s** uses `iptables` to route things around correctly. _Buster_ uses `nftables` by default; let's switch it to `iptables`.

```text
$ sudo iptables -F
$ sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
$ sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
$ sudo reboot
```
