Second commit

4  .gitignore  vendored  Normal file
@@ -0,0 +1,4 @@
__pycache__/
output/
cache/
.doit.db
25  LICENSE  Normal file
@@ -0,0 +1,25 @@
BSD 2-Clause License

Copyright (c) 2019, Elia El Lazkani
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this
  list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice,
  this list of conditions and the following disclaimer in the documentation
  and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
4  README.rst  Normal file
@@ -0,0 +1,4 @@
The DevOps blog
===============

This is the source code for `The DevOps blog <https://blog.lazkani.io>`_.
25  files/assets/css/custom.css  Normal file
@@ -0,0 +1,25 @@
.literal {
    border: 1px solid #ccc;
    color: #999;
    background-color: #272822;
    border-radius: 3px;
    font-family: Monaco, Menlo, Consolas, "Courier New", monospace;
    white-space: nowrap;
    font-size: 12px;
    padding: 2px 4px;
}

div.note {
    word-wrap: break-word;
    background-color: rgb(34,34,34);
    border: 1px solid #258cd1;
}

div.admonition, div.hint, div.important, div.note, div.tip, div.sidebar, div.attention, div.caution, div.danger, div.error, div.warning, div.system-message {
    background-color: rgb(34,34,34);
}

div.note p.admonition-title {
    background-color: #258cd1 !important;
    border-bottom: 1px solid #258cd1;
}
BIN  images/local_kubernetes_cluster_on_kvm/01-add_cluster.png  Normal file  (96 KiB)
BIN  images/local_kubernetes_cluster_on_kvm/02-custom_cluster.png  Normal file  (70 KiB)
BIN  images/local_kubernetes_cluster_on_kvm/03-calico_networkProvider.png  Normal file  (54 KiB)
BIN  images/local_kubernetes_cluster_on_kvm/04-nginx_ingressDisabled.png  Normal file  (46 KiB)
BIN  images/local_kubernetes_cluster_on_kvm/05-customer_nodes.png  Normal file  (65 KiB)
BIN  images/local_kubernetes_cluster_on_kvm/06-registered_nodes.png  Normal file  (63 KiB)
BIN  images/local_kubernetes_cluster_on_kvm/07-kubernetes_cluster.png  Normal file  (91 KiB)
BIN  images/weechat_ssh_and_notification/01-weechat_weenotify.png  Normal file  (17 KiB)
20  pages/about_me.rst  Normal file
@@ -0,0 +1,20 @@
.. title: About me
.. date: 2019-06-21
.. status: published
.. authors: Elijah Lazkani

I am a DevOps engineer with a passion for technology, automation, Linux and open source. I love learning new tricks and challenging myself with the new tools released every month around *kubernetes* and *configuration management*. In my free time, I like to write automation tools and packages, which can be found on PyPI, or tinker with new things around *kubernetes*. I blog about all of that *here*; I figure that if I can write a blog post about something, I understand it well enough to have an opinion about it. It all comes in handy when the business need arises. I play with technologies all day long by deploying, configuring, managing and maintaining every part of the infrastructure below the application layer. I have also dabbled in "architecting" parts of different infrastructures end to end, and I can say I have a knack for it and enjoy it whenever possible.

Experience
==========

Here's a quick and dirty list of some of the technologies I've gotten my hands dirty with.

- **Networking**: Configuring routers and switches (Brocade, Cisco, Dell).
- **Infrastructure**: Finding, automating, deploying and managing key infrastructure services. Too many to mention.
- **Virtualization**: Building infrastructures for virtualization (Hyper-V, libvirt, Proxmox, RHEV, VMware).
- **Configuration Management**: Ansible, Chef, Puppet.
- **CI/CD**: Gitlab-CI, Jenkins.
- **Cloud**: AWS.
- **Development**: Python packages that plug different technologies together for automation.
- **Containers**: Docker and Kubernetes deployment, management and support of team deployments.
51  pages/index.rst  Normal file
@@ -0,0 +1,51 @@
.. title: Welcome to the DevOps blog
.. slug: index
.. date: 2019-06-23
.. tags:
.. category:
.. description:
.. type: text


What is this ?
==============

This is my humble blog where I post things related to DevOps in the hope that I, or someone else, might benefit from it.


Wait, what ? What is DevOps ?
==============================

`Duckduckgo <https://duckduckgo.com/?q=what+is+devops+%3F&t=ffab&ia=web&iax=about>`_ defines DevOps as:

    DevOps is a software engineering culture and practice that aims at unifying
    software development and software operation. The main characteristic of the
    DevOps movement is to strongly advocate automation and monitoring at all
    steps of software construction, from integration, testing, releasing to
    deployment and infrastructure management. DevOps aims at shorter development
    cycles, increased deployment frequency, and more dependable releases,
    in close alignment with business objectives.

In short, we build an infrastructure that is easily deployable, maintainable and, in all forms, makes the lives of the developers a breeze.


What do you blog about ?
========================

Anything and everything related to DevOps. The field is very big and complex, with a lot of different tools and technologies involved.
I try to blog about interesting and new things as much as possible, when time permits.

Here's a short list of the latest posts.

.. post-list::
   :start: 0
   :stop: 3


Projects
========

- `blog.lazkani.io <https://gitlab.com/elazkani/blog.lazkani.io>`_: The DevOps `blog <https://blog.lazkani.io>`_.
- `weenotify <https://gitlab.com/elazkani/weenotify>`_: an official `weechat <https://weechat.org>`_ notification plugin.
- `rundeck-resources <https://gitlab.com/elazkani/rundeck-resources>`_: a python tool that queries resources from different sources and exports them into a data structure that `Rundeck <https://www.rundeck.com/open-source>`_ can consume. This tool can be found on `PyPI <https://pypi.org/project/rundeck-resources/>`_.
- `get_k8s_resources <https://gitlab.com/elazkani/get-k8s-resources>`_: a small python script that returns a list of kubernetes resources.
500  posts/configuration-management/ansible_testing_with_molecule.rst  Normal file
@@ -0,0 +1,500 @@
.. title: Ansible testing with Molecule
|
||||
.. date: 2019-01-11
|
||||
.. slug: ansible-testing-with-molecule
|
||||
.. updated: 2019-06-21
|
||||
.. status: published
|
||||
.. tags: configuration management, ansible, molecule,
|
||||
.. category: configuration management
|
||||
.. authors: Elijah Lazkani
|
||||
.. description: A fast way to create a testable ansible role using molecule.
|
||||
.. type: text
|
||||
|
||||
|
||||
When I first started using `ansible <https://www.ansible.com/>`_, I did not know about `molecule <https://molecule.readthedocs.io/en/latest/>`_. It was a bit daunting to start a *role* from scratch and try to develop it without having the ability to test it. Then a co-worker of mine told me about molecule and everything changed.
|
||||
|
||||
.. TEASER_END
|
||||
|
||||
I do not have any of the tools I need installed on this machine, so I will go through, step by step, how I set up ansible and molecule on any new machine I come across for writing ansible roles.
|
||||
|
||||
Requirements
|
||||
============
|
||||
|
||||
What we are trying to achieve in this post, is a working ansible role that can be tested inside a docker container. To be able to achieve that, we need to install docker on the system. Follow the instructions on `installing docker <https://docs.docker.com/install/>`_ found on the docker website.
|
||||
|
||||
Good Practices
|
||||
==============
|
||||
|
||||
First things first: let's make sure that we have python installed properly on the system.
|
||||
|
||||
.. code:: text
|
||||
|
||||
$ python --version
|
||||
Python 3.7.1
|
||||
|
||||
Because in this case I have *python3* installed, I can create a *virtualenv* more easily, without the use of external tools.
|
||||
|
||||
.. code:: text
|
||||
|
||||
# Create the directory to work with
|
||||
$ mkdir -p sandbox/test-roles
|
||||
# Navigate to the directory
|
||||
$ cd sandbox/test-roles/
|
||||
# Create the virtualenv
|
||||
~/sandbox/test-roles $ python -m venv .ansible-venv
|
||||
# Activate the virtualenv
|
||||
~/sandbox/test-roles $ source .ansible-venv/bin/activate
|
||||
# Check that your virtualenv activated properly
|
||||
(.ansible-venv) ~/sandbox/test-roles $ which python
|
||||
/home/elijah/sandbox/test-roles/.ansible-venv/bin/python
|
||||
|
||||
At this point, we can install the required dependencies.
|
||||
|
||||
.. code:: text
|
||||
|
||||
$ pip install ansible molecule docker
|
||||
Collecting ansible
|
||||
Downloading https://files.pythonhosted.org/packages/56/fb/b661ae256c5e4a5c42859860f59f9a1a0b82fbc481306b30e3c5159d519d/ansible-2.7.5.tar.gz (11.8MB)
|
||||
100% |████████████████████████████████| 11.8MB 3.8MB/s
|
||||
Collecting molecule
|
||||
Downloading https://files.pythonhosted.org/packages/84/97/e5764079cb7942d0fa68b832cb9948274abb42b72d9b7fe4a214e7943786/molecule-2.19.0-py3-none-any.whl (180kB)
|
||||
100% |████████████████████████████████| 184kB 2.2MB/s
|
||||
|
||||
...
|
||||
|
||||
Successfully built ansible ansible-lint anyconfig cerberus psutil click-completion tabulate tree-format pathspec future pycparser arrow
|
||||
Installing collected packages: MarkupSafe, jinja2, PyYAML, six, pycparser, cffi, pynacl, idna, asn1crypto, cryptography, bcrypt, paramiko, ansible, pbr, git-url-parse, monotonic, fasteners, click, colorama, sh, python-gilt, ansible-lint, pathspec, yamllint, anyconfig, cerberus, psutil, more-itertools, py, attrs, pluggy, atomicwrites, pytest, testinfra, ptyprocess, pexpect, click-completion, tabulate, future, chardet, binaryornot, poyo, urllib3, certifi, requests, python-dateutil, arrow, jinja2-time, whichcraft, cookiecutter, tree-format, molecule, docker-pycreds, websocket-client, docker
|
||||
Successfully installed MarkupSafe-1.1.0 PyYAML-3.13 ansible-2.7.5 ansible-lint-3.4.23 anyconfig-0.9.7 arrow-0.13.0 asn1crypto-0.24.0 atomicwrites-1.2.1 attrs-18.2.0 bcrypt-3.1.5 binaryornot-0.4.4 cerberus-1.2 certifi-2018.11.29 cffi-1.11.5 chardet-3.0.4 click-6.7 click-completion-0.3.1 colorama-0.3.9 cookiecutter-1.6.0 cryptography-2.4.2 docker-3.7.0 docker-pycreds-0.4.0 fasteners-0.14.1 future-0.17.1 git-url-parse-1.1.0 idna-2.8 jinja2-2.10 jinja2-time-0.2.0 molecule-2.19.0 monotonic-1.5 more-itertools-5.0.0 paramiko-2.4.2 pathspec-0.5.9 pbr-4.1.0 pexpect-4.6.0 pluggy-0.8.1 poyo-0.4.2 psutil-5.4.6 ptyprocess-0.6.0 py-1.7.0 pycparser-2.19 pynacl-1.3.0 pytest-4.1.0 python-dateutil-2.7.5 python-gilt-1.2.1 requests-2.21.0 sh-1.12.14 six-1.11.0 tabulate-0.8.2 testinfra-1.16.0 tree-format-0.1.2 urllib3-1.24.1 websocket-client-0.54.0 whichcraft-0.5.2 yamllint-1.11.1
|
||||
|
||||
Creating your first ansible role
|
||||
================================
|
||||
|
||||
Once all the steps above are complete, we can start by creating our first ansible role.
|
||||
|
||||
.. code:: text
|
||||
|
||||
$ molecule init role -r example-role
|
||||
--> Initializing new role example-role...
|
||||
Initialized role in /home/elijah/sandbox/test-roles/example-role successfully.
|
||||
|
||||
$ tree example-role/
|
||||
example-role/
|
||||
├── defaults
|
||||
│ └── main.yml
|
||||
├── handlers
|
||||
│ └── main.yml
|
||||
├── meta
|
||||
│ └── main.yml
|
||||
├── molecule
|
||||
│ └── default
|
||||
│ ├── Dockerfile.j2
|
||||
│ ├── INSTALL.rst
|
||||
│ ├── molecule.yml
|
||||
│ ├── playbook.yml
|
||||
│ └── tests
|
||||
│ ├── __pycache__
|
||||
│ │ └── test_default.cpython-37.pyc
|
||||
│ └── test_default.py
|
||||
├── README.md
|
||||
├── tasks
|
||||
│ └── main.yml
|
||||
└── vars
|
||||
└── main.yml
|
||||
|
||||
9 directories, 12 files
|
||||
|
||||
You can find what each directory is for and how ansible works by visiting docs.ansible.com.
|
||||
|
||||
``meta/main.yml``
|
||||
-----------------
|
||||
|
||||
The meta file needs to be modified and filled with information about the role. This file does not have to be modified if you are keeping the role to yourself, for example, but it is a good idea to provide as much information as possible if the role is going to be released. In my case, I don't need any fanciness as this is just sample code.
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
---
galaxy_info:
  author: Elijah Lazkani
  description: This is an example ansible role to showcase molecule at work
  license: license (BSD-2)
  min_ansible_version: 2.7
  galaxy_tags: []
dependencies: []
|
||||
|
||||
``tasks/main.yml``
|
||||
------------------
|
||||
|
||||
This is where the magic is set in motion. Tasks are the smallest entities in a role that do small and idempotent actions. Let's write a few simple tasks to create a user and install a service.
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
---
# Create the user example
- name: Create 'example' user
  user:
    name: example
    comment: Example user
    shell: /bin/bash
    state: present
    create_home: yes
    home: /home/example

# Install nginx
- name: Install nginx
  apt:
    name: nginx
    state: present
    update_cache: yes
  notify: Restart nginx
|
||||
|
||||
``handlers/main.yml``
|
||||
---------------------
|
||||
|
||||
If you noticed, we are notifying a handler to be called after installing *nginx*. All handlers notified will run after all the tasks complete and each handler will only run once. This is a good way to make sure that you don't restart *nginx* multiple times if you call the handler more than once.
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
---
# Handler to restart nginx
- name: Restart nginx
  service:
    name: nginx
    state: restarted
|
||||
|
||||
``molecule/default/molecule.yml``
|
||||
---------------------------------
|
||||
|
||||
It's time to configure molecule to do what we need. We need to start an ubuntu docker container, so we need to specify that in the molecule YAML file. All we need to do is change the image line to specify that we want an ``ubuntu:bionic`` image.
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
---
dependency:
  name: galaxy
driver:
  name: docker
lint:
  name: yamllint
platforms:
  - name: instance
    image: ubuntu:bionic
provisioner:
  name: ansible
  lint:
    name: ansible-lint
scenario:
  name: default
verifier:
  name: testinfra
  lint:
    name: flake8
|
||||
|
||||
``molecule/default/playbook.yml``
|
||||
---------------------------------
|
||||
|
||||
This is the playbook that molecule will run. Make sure that you have all the steps that you need here. I will keep this as is.
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
---
- name: Converge
  hosts: all
  roles:
    - role: example-role
|
||||
|
||||
First Role Pass
|
||||
===============
|
||||
|
||||
It is time to test our role and see what's going on.
|
||||
|
||||
.. code:: text
|
||||
|
||||
(.ansible-role) ~/sandbox/test-roles/example-role/ $ molecule converge
|
||||
--> Validating schema /home/elijah/sandbox/test-roles/example-role/molecule/default/molecule.yml.
|
||||
Validation completed successfully.
|
||||
--> Test matrix
|
||||
|
||||
└── default
|
||||
├── dependency
|
||||
├── create
|
||||
├── prepare
|
||||
└── converge
|
||||
|
||||
--> Scenario: 'default'
|
||||
--> Action: 'dependency'
|
||||
Skipping, missing the requirements file.
|
||||
--> Scenario: 'default'
|
||||
--> Action: 'create'
|
||||
|
||||
PLAY [Create] ******************************************************************
|
||||
|
||||
TASK [Log into a Docker registry] **********************************************
|
||||
skipping: [localhost] => (item=None)
|
||||
|
||||
TASK [Create Dockerfiles from image names] *************************************
|
||||
changed: [localhost] => (item=None)
|
||||
changed: [localhost]
|
||||
|
||||
TASK [Discover local Docker images] ********************************************
|
||||
ok: [localhost] => (item=None)
|
||||
ok: [localhost]
|
||||
|
||||
TASK [Build an Ansible compatible image] ***************************************
|
||||
changed: [localhost] => (item=None)
|
||||
changed: [localhost]
|
||||
|
||||
TASK [Create docker network(s)] ************************************************
|
||||
|
||||
TASK [Create molecule instance(s)] *********************************************
|
||||
changed: [localhost] => (item=None)
|
||||
changed: [localhost]
|
||||
|
||||
TASK [Wait for instance(s) creation to complete] *******************************
|
||||
changed: [localhost] => (item=None)
|
||||
changed: [localhost]
|
||||
|
||||
PLAY RECAP *********************************************************************
|
||||
localhost : ok=5 changed=4 unreachable=0 failed=0
|
||||
|
||||
|
||||
--> Scenario: 'default'
|
||||
--> Action: 'prepare'
|
||||
Skipping, prepare playbook not configured.
|
||||
--> Scenario: 'default'
|
||||
--> Action: 'converge'
|
||||
|
||||
PLAY [Converge] ****************************************************************
|
||||
|
||||
TASK [Gathering Facts] *********************************************************
|
||||
ok: [instance]
|
||||
|
||||
TASK [example-role : Create 'example' user] ************************************
|
||||
changed: [instance]
|
||||
|
||||
TASK [example-role : Install nginx] ********************************************
|
||||
changed: [instance]
|
||||
|
||||
RUNNING HANDLER [example-role : Restart nginx] *********************************
|
||||
changed: [instance]
|
||||
|
||||
PLAY RECAP *********************************************************************
|
||||
instance : ok=4 changed=3 unreachable=0 failed=0
|
||||
|
||||
It looks like the **converge** step succeeded.
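If you want to poke around the instance yourself before writing tests, molecule can drop you into the container; the commands inside it below are only examples of things one might check by hand.

.. code:: text

    # Open a shell inside the running molecule instance
    (.ansible-venv) ~/sandbox/test-roles/example-role $ molecule login

    # Then, inside the container, spot-check by hand, e.g.:
    #   id example
    #   dpkg -s nginx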
|
||||
|
||||
Writing Tests
|
||||
=============
|
||||
|
||||
It is always a good practice to write unit tests when you're writing code, and ansible roles should not be an exception. Molecule offers a way to run tests, which you can think of as unit tests, to make sure that what the role gives you is what you were expecting. This helps future development of the role and keeps you from falling into previously solved traps.
|
||||
|
||||
``molecule/default/tests/test_default.py``
|
||||
------------------------------------------
|
||||
|
||||
Molecule leverages the `testinfra <https://testinfra.readthedocs.io/en/latest/>`_ project to run its tests. You can use other tools if you so wish, and there are many. In this example we will be using *testinfra*.
|
||||
|
||||
.. code:: python
|
||||
|
||||
import os

import testinfra.utils.ansible_runner

testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
    os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('all')


def test_hosts_file(host):
    f = host.file('/etc/hosts')

    assert f.exists
    assert f.user == 'root'
    assert f.group == 'root'


def test_user_created(host):
    user = host.user("example")
    assert user.name == "example"
    assert user.home == "/home/example"


def test_user_home_exists(host):
    user_home = host.file("/home/example")
    assert user_home.exists
    assert user_home.is_directory


def test_nginx_is_installed(host):
    nginx = host.package("nginx")
    assert nginx.is_installed


def test_nginx_running_and_enabled(host):
    nginx = host.service("nginx")
    assert nginx.is_running
|
||||
|
||||
|
||||
|
||||
.. warning::
|
||||
|
||||
Uncomment ``truthy: disable`` in ``.yamllint`` found at the base of the role.
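For reference, the generated ``.yamllint`` looks roughly like the snippet below (the exact contents can differ between molecule versions); the point is simply to end up with the ``truthy: disable`` line active.

.. code:: yaml

    extends: default
    rules:
      braces:
        max-spaces-inside: 1
        level: error
      brackets:
        max-spaces-inside: 1
        level: error
      line-length: disable
      # this line ships commented out; uncomment it as per the warning above
      truthy: disable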
|
||||
|
||||
.. code:: text
|
||||
|
||||
(.ansible_venv) ~/sandbox/test-roles/example-role $ molecule test
|
||||
--> Validating schema /home/elijah/sandbox/test-roles/example-role/molecule/default/molecule.yml.
|
||||
Validation completed successfully.
|
||||
--> Test matrix
|
||||
|
||||
└── default
|
||||
├── lint
|
||||
├── destroy
|
||||
├── dependency
|
||||
├── syntax
|
||||
├── create
|
||||
├── prepare
|
||||
├── converge
|
||||
├── idempotence
|
||||
├── side_effect
|
||||
├── verify
|
||||
└── destroy
|
||||
|
||||
--> Scenario: 'default'
|
||||
--> Action: 'lint'
|
||||
--> Executing Yamllint on files found in /home/elijah/sandbox/test-roles/example-role/...
|
||||
Lint completed successfully.
|
||||
--> Executing Flake8 on files found in /home/elijah/sandbox/test-roles/example-role/molecule/default/tests/...
|
||||
/home/elijah/.virtualenvs/world/lib/python3.7/site-packages/pycodestyle.py:113: FutureWarning: Possible nested set at position 1
|
||||
EXTRANEOUS_WHITESPACE_REGEX = re.compile(r'[[({] | []}),;:]')
|
||||
Lint completed successfully.
|
||||
--> Executing Ansible Lint on /home/elijah/sandbox/test-roles/example-role/molecule/default/playbook.yml...
|
||||
Lint completed successfully.
|
||||
--> Scenario: 'default'
|
||||
--> Action: 'destroy'
|
||||
|
||||
PLAY [Destroy] *****************************************************************
|
||||
|
||||
TASK [Destroy molecule instance(s)] ********************************************
|
||||
changed: [localhost] => (item=None)
|
||||
changed: [localhost]
|
||||
|
||||
TASK [Wait for instance(s) deletion to complete] *******************************
|
||||
ok: [localhost] => (item=None)
|
||||
ok: [localhost]
|
||||
|
||||
TASK [Delete docker network(s)] ************************************************
|
||||
|
||||
PLAY RECAP *********************************************************************
|
||||
localhost : ok=2 changed=1 unreachable=0 failed=0
|
||||
|
||||
|
||||
--> Scenario: 'default'
|
||||
--> Action: 'dependency'
|
||||
Skipping, missing the requirements file.
|
||||
--> Scenario: 'default'
|
||||
--> Action: 'syntax'
|
||||
|
||||
playbook: /home/elijah/sandbox/test-roles/example-role/molecule/default/playbook.yml
|
||||
|
||||
--> Scenario: 'default'
|
||||
--> Action: 'create'
|
||||
|
||||
PLAY [Create] ******************************************************************
|
||||
|
||||
TASK [Log into a Docker registry] **********************************************
|
||||
skipping: [localhost] => (item=None)
|
||||
|
||||
TASK [Create Dockerfiles from image names] *************************************
|
||||
changed: [localhost] => (item=None)
|
||||
changed: [localhost]
|
||||
|
||||
TASK [Discover local Docker images] ********************************************
|
||||
ok: [localhost] => (item=None)
|
||||
ok: [localhost]
|
||||
|
||||
TASK [Build an Ansible compatible image] ***************************************
|
||||
changed: [localhost] => (item=None)
|
||||
changed: [localhost]
|
||||
|
||||
TASK [Create docker network(s)] ************************************************
|
||||
|
||||
TASK [Create molecule instance(s)] *********************************************
|
||||
changed: [localhost] => (item=None)
|
||||
changed: [localhost]
|
||||
|
||||
TASK [Wait for instance(s) creation to complete] *******************************
|
||||
changed: [localhost] => (item=None)
|
||||
changed: [localhost]
|
||||
|
||||
PLAY RECAP *********************************************************************
|
||||
localhost : ok=5 changed=4 unreachable=0 failed=0
|
||||
|
||||
|
||||
--> Scenario: 'default'
|
||||
--> Action: 'prepare'
|
||||
Skipping, prepare playbook not configured.
|
||||
--> Scenario: 'default'
|
||||
--> Action: 'converge'
|
||||
|
||||
PLAY [Converge] ****************************************************************
|
||||
|
||||
TASK [Gathering Facts] *********************************************************
|
||||
ok: [instance]
|
||||
|
||||
TASK [example-role : Create 'example' user] ************************************
|
||||
changed: [instance]
|
||||
|
||||
TASK [example-role : Install nginx] ********************************************
|
||||
changed: [instance]
|
||||
|
||||
RUNNING HANDLER [example-role : Restart nginx] *********************************
|
||||
changed: [instance]
|
||||
|
||||
PLAY RECAP *********************************************************************
|
||||
instance : ok=4 changed=3 unreachable=0 failed=0
|
||||
|
||||
|
||||
--> Scenario: 'default'
|
||||
--> Action: 'idempotence'
|
||||
Idempotence completed successfully.
|
||||
--> Scenario: 'default'
|
||||
--> Action: 'side_effect'
|
||||
Skipping, side effect playbook not configured.
|
||||
--> Scenario: 'default'
|
||||
--> Action: 'verify'
|
||||
--> Executing Testinfra tests found in /home/elijah/sandbox/test-roles/example-role/molecule/default/tests/...
|
||||
============================= test session starts ==============================
|
||||
platform linux -- Python 3.7.1, pytest-4.1.0, py-1.7.0, pluggy-0.8.1
|
||||
rootdir: /home/elijah/sandbox/test-roles/example-role/molecule/default, inifile:
|
||||
plugins: testinfra-1.16.0
|
||||
collected 5 items
|
||||
|
||||
tests/test_default.py ..... [100%]
|
||||
|
||||
=============================== warnings summary ===============================
|
||||
|
||||
...
|
||||
|
||||
==================== 5 passed, 7 warnings in 27.37 seconds =====================
|
||||
Verifier completed successfully.
|
||||
--> Scenario: 'default'
|
||||
--> Action: 'destroy'
|
||||
|
||||
PLAY [Destroy] *****************************************************************
|
||||
|
||||
TASK [Destroy molecule instance(s)] ********************************************
|
||||
changed: [localhost] => (item=None)
|
||||
changed: [localhost]
|
||||
|
||||
TASK [Wait for instance(s) deletion to complete] *******************************
|
||||
changed: [localhost] => (item=None)
|
||||
changed: [localhost]
|
||||
|
||||
TASK [Delete docker network(s)] ************************************************
|
||||
|
||||
PLAY RECAP *********************************************************************
|
||||
localhost : ok=2 changed=2 unreachable=0 failed=0
|
||||
|
||||
I have a few warning messages (that's likely because I am using python 3.7 and some of the libraries don't fully support its new standards yet), but all my tests passed.
|
||||
|
||||
Conclusion
|
||||
==========
|
||||
|
||||
Molecule is a great tool for testing ansible roles quickly while developing them. It also comes bundled with a bunch of features from different projects that will test all aspects of your ansible code. I suggest you start using it when writing new ansible roles.
|
99  posts/irc/weechat_ssh_and_notification.rst  Normal file
@@ -0,0 +1,99 @@
.. title: Weechat, SSH and Notification
|
||||
.. date: 2019-01-01
|
||||
.. updated: 2019-06-21
|
||||
.. status: published
|
||||
.. tags: irc, ssh, weechat, notification,
|
||||
.. category: irc
|
||||
.. slug: weechat-ssh-and-notification
|
||||
.. authors: Elijah Lazkani
|
||||
.. description: A way to patch weechat notifications through your system's libnotify over ssh.
|
||||
.. type: text
|
||||
|
||||
|
||||
I have been on IRC for as long as I have been using *Linux* and that is a long time. Throughout the years, I have moved between *terminal IRC* clients. In this current iteration, I am using `Weechat <https://weechat.org/>`_.
|
||||
|
||||
There are many ways one can use *weechat* and the one I chose is to run it in *tmux* on a *cloud server*. In other words, I have a *Linux* server running on one of the many cloud providers on which I have *tmux* and *weechat* installed and configured the way I like them. If you run a setup like mine, then you might face the same issue I have with IRC notifications.
|
||||
|
||||
.. TEASER_END
|
||||
|
||||
Why?
|
||||
====
|
||||
|
||||
*Weechat* can trigger a terminal bell, which some *terminals* and *window managers* surface as a notification, but that only tells you that *weechat* pinged. Furthermore, if this is happening on a server that you are *ssh*'ing to, depending on your shell configuration, this might not even work. I wanted something more useful than that, so I went hunting through the available plugins to see if any of them could offer a solution. I found many official plugins that did similar things, each in a different and interesting way, but none worked the way I wanted.
|
||||
|
||||
Solution
|
||||
========
|
||||
|
||||
After trying multiple solutions offered online, which included various plugins, I decided to write my own. That's when *weenotify* was born. If you know my background then you already know that I am big on open source, so *weenotify* was first released on `Gitlab <https://gitlab.com/elazkani/weenotify>`_. After a few changes requested by a weechat developer (**FlashCode** in **#weechat** on `Freenode <https://freenode.net/>`_), *weenotify* became an `official weechat plugin <https://weechat.org/scripts/source/weenotify.py.html/>`_.
|
||||
|
||||
Weenotify
|
||||
=========
|
||||
|
||||
Without getting into too many details, *weenotify* acts as both a weechat plugin and a server. Its main function is to intercept weechat notifications and patch them through the system's notification mechanism. In simple terms, if someone mentions your name, you get a pop-up notification on your system with information about it. The script can be configured to work locally, if you run weechat on your own machine, or to open a socket and send the notification to *weenotify* running as a server. In the latter configuration, *weenotify* will display the notification on the system the server is running on.
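To make that idea more concrete, here is a rough sketch of the server side of the flow. This is **not** the actual *weenotify* source; it only illustrates the concept of listening on a socket and handing whatever arrives to the desktop's ``notify-send`` (assumed to be installed).

.. code-block:: python

    import socket
    import subprocess

    HOST, PORT = "localhost", 5431

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.bind((HOST, PORT))
        server.listen()
        while True:
            connection, _ = server.accept()
            with connection:
                # Whatever the weechat plugin sends over the socket
                # becomes the body of a desktop notification.
                message = connection.recv(4096).decode("utf-8", "ignore")
                if message:
                    subprocess.run(["notify-send", "weechat", message])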
|
||||
|
||||
Configuration
|
||||
=============
|
||||
|
||||
Let's look at the configuration to accomplish this... As mentioned in the beginning of the post, I run weechat in *tmux* on a server. So I *ssh* to the server before attaching *tmux*. The safest way to do this is to **port forward over ssh** and this can be done easily by *ssh*'ing using the following example.
|
||||
|
||||
.. code-block::
|
||||
|
||||
$ ssh -R 5431:localhost:5431 server.example.com
|
||||
|
||||
At this point, you should have port **5431** forwarded between the server and your machine.
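If you don't want to remember the flag every time, the same forwarding can be baked into your ``~/.ssh/config``; something along these lines, where the host alias and user are placeholders:

.. code-block::

    Host weechat
        HostName server.example.com
        User <your-user>
        RemoteForward 5431 localhost:5431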
|
||||
|
||||
Once the previous step is done, you can test if it works by trying to run the *weenotify* script in server mode on your machine using the following command.
|
||||
|
||||
.. code-block::
|
||||
|
||||
$ python weenotify.py -s
|
||||
Starting server...
|
||||
Server listening locally on port 5431...
|
||||
|
||||
The server is now running, you can test port forwarding from the server to make sure everything is working as expected.
|
||||
|
||||
.. code-block::
|
||||
|
||||
$ telnet localhost 5431
|
||||
Trying ::1...
|
||||
Connected to localhost.
|
||||
Escape character is '^]'.
|
||||
|
||||
If the connection is successful then you know that port forwarding is working as expected. You can close the connection by hitting **Ctrl + ]**.
|
||||
|
||||
Now we are ready to install the plugin in weechat and configure it. In weechat, run the following command.
|
||||
|
||||
.. code-block::
|
||||
|
||||
/script search weenotify
|
||||
|
||||
At which point, you should be greeted with the buffer shown in the screenshot below.
|
||||
|
||||
.. thumbnail:: /images/weechat_ssh_and_notification/01-weechat_weenotify.png
|
||||
:align: center
|
||||
:alt: weenotify
|
||||
|
||||
You can install the plugin with **Alt + i** and make sure it autoloads with **Alt + A**. You can get more information about working with weechat scripts by reading the help menu. You can get the scripts help menu by running the following in weechat.
|
||||
|
||||
.. code-block::
|
||||
|
||||
/help script
|
||||
|
||||
The *weenotify* plugin is installed at this stage and only needs to be configured. The plugin has a list of values that can be configured. My configuration looks like the following.
|
||||
|
||||
.. code-block::
|
||||
|
||||
plugins.var.python.weenotify.enable string "on"
|
||||
plugins.var.python.weenotify.host string "localhost"
|
||||
plugins.var.python.weenotify.mode string "remote"
|
||||
plugins.var.python.weenotify.port string "5431"
|
||||
|
||||
Each one of those configuration options can be set as shown in the example below in weechat.
|
||||
|
||||
.. code-block::
|
||||
|
||||
/set plugins.var.python.weenotify.enable on
|
||||
|
||||
Make sure that the plugin **enable** value is **on** and that the **mode** is **remote** if you're following this post and using ssh with port forwarding. Otherwise, if you want the plugin to work locally, make sure you set the **mode** to **local**.
|
||||
|
||||
If you followed this post so far, then whenever someone highlights you on weechat you should get a pop-up on your system notifying you about it.
|
145  posts/kubernetes/deploying_helm_in_your_kubernetes_cluster.rst  Normal file
@@ -0,0 +1,145 @@
.. title: Deploying Helm in your Kubernetes Cluster
|
||||
.. date: 2019-03-16
|
||||
.. updated: 2019-06-21
|
||||
.. status: published
|
||||
.. tags: kubernetes, helm, tiller,
|
||||
.. category: kubernetes
|
||||
.. slug: deploying-helm-in-your-kubernetes-cluster
|
||||
.. authors: Elijah Lazkani
|
||||
.. description: Post explaining how to deploy helm in your kubernetes cluster.
|
||||
.. type: text
|
||||
|
||||
|
||||
In the previous post in the *kubernetes* series, we deployed a small *kubernetes* cluster locally on *KVM*. In future posts we will be deploying more things into the cluster. This will enable us to test different projects, ingresses, service meshes, and more from the open source community, built specifically for *kubernetes*. To help with this future quest, we will be leveraging a kubernetes package manager. You've read that right, helm is a kubernetes package manager. Let's get started, shall we ?
|
||||
|
||||
.. TEASER_END
|
||||
|
||||
Helm
|
||||
====
|
||||
|
||||
As mentioned above, helm is a kubernetes package manager. You can read more about the helm project on their `homepage <https://helm.sh/>`_. It offers a way to Go-template the deployments of services and bundle them into a portable package that can be installed using the helm command line.
|
||||
|
||||
Generally, you would install the helm binary on your machine and then install its server side into the cluster. In our case, the *RBACs* deployed in the kubernetes cluster by rancher prevent the default installation from working. Not a problem; we can work around that, and we will in this post. This is a win for us because it gives us the opportunity to learn more about helm and kubernetes.
|
||||
|
||||
.. note::
|
||||
|
||||
This is not a production recommended way to deploy helm. I would **NOT** deploy helm this way on a production cluster. I would restrict the permissions of any ``ServiceAccount`` deployed in the cluster to its bare minimum requirements.
|
||||
|
||||
What are we going to do ?
|
||||
=========================
|
||||
|
||||
We need to understand a bit of what's going on and what we are trying to do. To be able to do that, we need to understand how *helm* works. From a high level, the ``helm`` command line tool will deploy a service called *Tiller* as a ``Deployment``.
|
||||
|
||||
The *Tiller* service talks to the *kubernetes* *API* and manages the deployment process while the ``helm`` command line tool talks to *Tiller* from its end. So a proper deployment of *Tiller* in a *kubernetes* sense is to create a ``ServiceAccount``, give the ``ServiceAccount`` the proper permissions to be able to do what it needs to do and you got yourself a working *Tiller*.
|
||||
|
||||
Service Account
|
||||
===============
|
||||
|
||||
This is where we start by creating a ``ServiceAccount``. The ``ServiceAccount`` looks like this.
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
|
||||
|
||||
We then deploy the ``ServiceAccount`` to the cluster: save the snippet above to ``ServiceAccount.yaml`` and apply it.
|
||||
|
||||
.. code:: text
|
||||
|
||||
$ kubectl apply -f ServiceAccount.yaml
|
||||
serviceaccount/tiller created
|
||||
|
||||
.. note::
|
||||
|
||||
To read more about ``ServiceAccount`` and their uses please visit the *kubernetes* documentation page on the `topic <https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/>`_.
|
||||
|
||||
Cluster Role Binding
|
||||
====================
|
||||
|
||||
We now have a *Tiller* ``ServiceAccount`` deployed in the ``kube-system`` ``namespace``. We need to give it access.
|
||||
|
||||
Option 1
|
||||
--------
|
||||
|
||||
One option is to create a ``Role``, which would restrict *Tiller* to the current ``namespace``, and then tie the two together with a ``RoleBinding``.
|
||||
|
||||
This option will restrict *Tiller* to that ``namespace`` and that ``namespace`` only.
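A rough sketch of that namespace-scoped variant could look like the following; the names and the ``rules`` here are only illustrative and would need to be narrowed down to what *Tiller* actually requires.

.. code:: yaml

    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: tiller-manager
      namespace: kube-system
    rules:
      - apiGroups: ["", "apps", "extensions"]
        resources: ["*"]
        verbs: ["*"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: tiller-binding
      namespace: kube-system
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: tiller-manager
    subjects:
      - kind: ServiceAccount
        name: tiller
        namespace: kube-system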
|
||||
|
||||
Option 2
|
||||
--------
|
||||
|
||||
Another option is to create a ``ClusterRole`` and tie the ``ServiceAccount`` to that ``ClusterRole`` with a ``ClusterRoleBinding`` and this will give *Tiller* access across *namespaces*.
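The shape is the same, only the kinds change; a hypothetical custom ``ClusterRole`` is sketched below with made-up rules, and the binding would look exactly like the ``ClusterRoleBinding`` of option 3, with its ``roleRef`` pointing at this role instead of ``cluster-admin``.

.. code:: yaml

    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: tiller-cluster-manager
    rules:
      - apiGroups: ["", "apps", "extensions"]
        resources: ["*"]
        verbs: ["*"]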
|
||||
|
||||
Option 3
|
||||
--------
|
||||
|
||||
In our case, we already know that the ``ClusterRole`` ``cluster-admin`` exists in the cluster, so we are going to give *Tiller* ``cluster-admin`` access.
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
|
||||
|
||||
Save the above in ``ClusterRoleBinding.yaml`` and then
|
||||
|
||||
.. code:: text
|
||||
|
||||
$ kubectl apply -f ClusterRoleBinding.yaml
|
||||
clusterrolebinding.rbac.authorization.k8s.io/tiller created
|
||||
|
||||
|
||||
Deploying Tiller
|
||||
================
|
||||
|
||||
Now that we have all the basics deployed, we can finally deploy *Tiller* in the cluster.
|
||||
|
||||
.. code:: text
|
||||
|
||||
$ helm init --service-account tiller --tiller-namespace kube-system --history-max 10
|
||||
Creating ~/.helm
|
||||
Creating ~/.helm/repository
|
||||
Creating ~/.helm/repository/cache
|
||||
Creating ~/.helm/repository/local
|
||||
Creating ~/.helm/plugins
|
||||
Creating ~/.helm/starters
|
||||
Creating ~/.helm/cache/archive
|
||||
Creating ~/.helm/repository/repositories.yaml
|
||||
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
|
||||
Adding local repo with URL: http://127.0.0.1:8879/charts
|
||||
$HELM_HOME has been configured at ~/.helm.
|
||||
|
||||
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
|
||||
|
||||
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
|
||||
To prevent this, run `helm init` with the --tiller-tls-verify flag.
|
||||
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
|
||||
Happy Helming!
|
||||
|
||||
.. note::
|
||||
|
||||
Please make sure you read the helm installation documentation if you are deploying this in a production environment. You can find how you can make it more secure `there <https://helm.sh/docs/using_helm/#securing-your-helm-installation>`_.
|
||||
|
||||
After a few minutes, your *Tiller* deployment, or as it's commonly known your ``helm install``/``helm init``, should be complete. If you want to check that everything has been deployed properly you can run.
|
||||
|
||||
.. code:: text
|
||||
|
||||
$ helm version
|
||||
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
|
||||
Server: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
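You can also ask *kubernetes* itself whether *Tiller* landed where we expect it; the deployment name and labels below are the ones a default ``helm init`` creates, so adjust them if your setup differs.

.. code:: text

    $ kubectl get deployment tiller-deploy -n kube-system
    $ kubectl get pods -n kube-system -l app=helm,name=tiller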
|
||||
|
||||
Everything seems to be working properly. In future posts, we will be leveraging the power and convenience of helm to expand our cluster's capabilities and learn more about what we can do with kubernetes.
|
223  posts/kubernetes/local_kubernetes_cluster_on_kvm.rst  Normal file
@@ -0,0 +1,223 @@
.. title: Local Kubernetes Cluster on KVM
|
||||
.. date: 2019-02-17
|
||||
.. updated: 2019-06-21
|
||||
.. status: published
|
||||
.. tags: kubernetes, rancher, rancheros, kvm, libvirt,
|
||||
.. category: kubernetes
|
||||
.. slug: local-kubernetes-cluster-on-kvm
|
||||
.. authors: Elijah Lazkani
|
||||
.. description: Deploying a kubernetes cluster locally on KVM.
|
||||
.. type: text
|
||||
|
||||
|
||||
I wanted to explore *kubernetes* even more for myself and for this blog. I've worked on pieces of this at work, but not the totality of the work, which I would like to understand for myself. I also wanted to explore new tools and ways to leverage the power of *kubernetes*.
|
||||
|
||||
So far, I have been using *minikube* to do the deployments, but there is an inherent restriction that comes with using a single bundled node. Sure, it is easy to get up and running, but at some point I had to use ``nodePort`` to get around the IP restriction. This is a restriction that you will also have in an actual *kubernetes* cluster, but I will show you later how to work around it. For now, let's just get a local cluster up and running.
|
||||
|
||||
.. TEASER_END
|
||||
|
||||
Objective
|
||||
=========
|
||||
|
||||
I needed a local *kubernetes* cluster built from open source tools and easy to deploy. So I went with *KVM* as the hypervisor layer and installed ``virt-manager`` for shallow management. As an OS, I wanted something light and made for *kubernetes*. Since I already know of Rancher (an easy way to deploy *kubernetes*, and they have done a great job since the launch of Rancher 2.0), I decided to try *RancherOS*. So let's see how all of that works together.
|
||||
|
||||
Requirements
|
||||
============
|
||||
|
||||
Let's start by thinking about what we actually need. Rancher, the dashboard they offer, is going to need a VM by itself, and they `recommend <https://rancher.com/docs/rancher/v2.x/en/quick-start-guide/deployment/quickstart-vagrant/>`_ *4GB of RAM*. I only have *16GB of RAM* on my machine, so I have to do the math to see how much I can afford to give this *dashboard* and *manager*. Looking at the *RancherOS* hardware `requirements <https://rancher.com/docs/os/v1.x/en/>`_, I can tell that by giving each node *2GB* of RAM I should be able to host a *3 node cluster*, and with *2GB* more for the *dashboard* that puts me right at *8GB of RAM*. So we need to create *4 VMs* with *2GB of RAM* each.
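Spelled out, the budget above is simply:

.. code:: text

    3 nodes     x 2GB = 6GB
    1 dashboard x 2GB = 2GB
    -------------------------
    total             = 8GB out of the 16GB available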
|
||||
|
||||
Installing RancherOS
|
||||
====================
|
||||
|
||||
Once all 4 nodes have been created, when you boot into the *RancherOS* `ISO <https://rancher.com/docs/os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso/>`_ do the following.
|
||||
|
||||
.. note::
|
||||
|
||||
Because I was using *libvirt*, I was able to do ``virsh console <vm>`` and run these commands.
|
||||
|
||||
Virsh Console
|
||||
=============
|
||||
|
||||
If you are running these VMs on *libvirt*, then you can console into the box and run ``vi``.
|
||||
|
||||
.. code:: text
|
||||
|
||||
# virsh list
|
||||
Id Name State
|
||||
-------------------------
|
||||
21 kube01 running
|
||||
22 kube02 running
|
||||
23 kube03 running
|
||||
24 rancher running
|
||||
|
||||
# virsh console rancher
|
||||
|
||||
Configuration
|
||||
=============
|
||||
|
||||
If you read the *RancherOS* `documentation <https://rancher.com/docs/os/v1.x/en/>`_, you'll find out that you can configure the *OS* with a ``YAML`` configuration file so let's do that.
|
||||
|
||||
.. code:: text
|
||||
|
||||
$ vi cloud-config.yml
|
||||
|
||||
And that file should hold.
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
---
hostname: rancher.kube.loco
ssh_authorized_keys:
  - ssh-rsa AAA...
rancher:
  network:
    interfaces:
      eth0:
        address: 192.168.122.5/24
        dhcp: false
        gateway: 192.168.122.1
        mtu: 1500
|
||||
|
||||
Make sure that your **public** *ssh key* is replaced in the example before and if you have a different network configuration for your VMs, change the network configuration here.
|
||||
|
||||
After you save that file, install the *OS*.
|
||||
|
||||
.. code:: text
|
||||
|
||||
$ sudo ros install -c cloud-config.yml -d /dev/sda
|
||||
|
||||
Do the same for the rest of the servers and their names and IPs should be as follows (if you are following this tutorial):
|
||||
|
||||
.. code:: text
|
||||
|
||||
192.168.122.5 rancher.kube.loco
|
||||
192.168.122.10 kube01.kube.loco
|
||||
192.168.122.11 kube02.kube.loco
|
||||
192.168.122.12 kube03.kube.loco
|
||||
|
||||
Post Installation Configuration
|
||||
===============================
|
||||
|
||||
After *RancherOS* has been installed, one will need to configure ``/etc/hosts`` and it should look like the following if one is working off of the *Rancher* box.
|
||||
|
||||
.. code:: text
|
||||
|
||||
$ sudo vi /etc/hosts
|
||||
|
||||
.. code:: text
|
||||
|
||||
127.0.0.1 rancher.kube.loco
|
||||
192.168.122.5 rancher.kube.loco
|
||||
192.168.122.10 kube01.kube.loco
|
||||
192.168.122.11 kube02.kube.loco
|
||||
192.168.122.12 kube03.kube.loco
|
||||
|
||||
Do the same on the rest of the servers while changing the ``127.0.0.1`` hostname to the host of the server.
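For example, on *kube01* the file would look something like the following (assuming the names and IPs used above).

.. code:: text

    127.0.0.1         kube01.kube.loco
    192.168.122.5     rancher.kube.loco
    192.168.122.10    kube01.kube.loco
    192.168.122.11    kube02.kube.loco
    192.168.122.12    kube03.kube.loco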
|
||||
|
||||
Installing Rancher
|
||||
==================
|
||||
|
||||
At this point, I have to stress a few facts:
|
||||
|
||||
- This is not the Rancher recommended way to deploy *kubernetes*.
|
||||
|
||||
- The recommended way is of course `RKE <https://rancher.com/docs/rke/v0.1.x/en/>`_.
|
||||
|
||||
- This is for testing, so I did not take into consideration backup of anything.
|
||||
|
||||
- There are ways to backup Rancher configuration by mounting storage from the ``rancher`` docker container.
|
||||
|
||||
If those points are understood, let's go ahead and deploy Rancher.
|
||||
First, ``$ ssh rancher@192.168.122.5`` then:
|
||||
|
||||
.. code:: text
|
||||
|
||||
[rancher@rancher ~]$ docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher
|
||||
|
||||
Give it a few minutes for the container to come up and the application as well. Meanwhile configure your ``/etc/hosts`` file on your machine.
|
||||
|
||||
.. code:: text
|
||||
|
||||
192.168.122.5 rancher.kube.loco
|
||||
|
||||
Now that all that is out of the way, you can login to https://rancher.kube.loco and set your ``admin`` password and the ``url`` for Rancher.
|
||||
|
||||
Deploying Kubernetes
|
||||
====================
|
||||
|
||||
Now that everything is ready, let's deploy *kubernetes* the easy way.
|
||||
|
||||
At this point you should be greeted with a page that looks like the following.
|
||||
|
||||
.. thumbnail:: /images/local_kubernetes_cluster_on_kvm/01-add_cluster.png
|
||||
:alt: Add Cluster Page
|
||||
|
||||
|
||||
Click on **Add Cluster**.
|
||||
|
||||
|
||||
.. thumbnail:: /images/local_kubernetes_cluster_on_kvm/02-custom_cluster.png
|
||||
:align: center
|
||||
:alt: Custom Cluster Page
|
||||
|
||||
|
||||
Make sure you choose **Custom** as the *provider*, then fill in the **Cluster Name**; in our case we'll call it **kube**.
|
||||
|
||||
|
||||
.. thumbnail:: /images/local_kubernetes_cluster_on_kvm/03-calico_networkProvider.png
|
||||
:align: center
|
||||
:alt: Network Provider: Calico (Optional)
|
||||
|
||||
|
||||
Optionally, you can choose your **Network Provider**; in my case I chose **Calico**. Then I clicked on **show advanced** at the bottom right corner and expanded the *newly shown tab*, **Advanced Cluster Options**.
|
||||
|
||||
|
||||
.. thumbnail:: /images/local_kubernetes_cluster_on_kvm/04-nginx_ingressDisabled.png
|
||||
:align: center
|
||||
:alt: Nginx Ingress: Disabled
|
||||
|
||||
|
||||
We will disable the **Nginx Ingress** and the **Pod Security Policy Support** for the time being. Why will, hopefully, become more apparent in the future. Then hit **Next**.
|
||||
|
||||
|
||||
.. thumbnail:: /images/local_kubernetes_cluster_on_kvm/05-customer_nodes.png
|
||||
:align: center
|
||||
:alt: Customize Nodes
|
||||
|
||||
|
||||
Make sure that you select all **3 Node Roles**. Set the **Public Address** and the **Node Name** to the first node and then copy the command and paste it on the *first* node.
|
||||
|
||||
Do the same for *all the rest*. Once the first docker image gets downloaded and run, you should see a message pop up at the bottom.
|
||||
|
||||
|
||||
.. thumbnail:: /images/local_kubernetes_cluster_on_kvm/06-registered_nodes.png
|
||||
:align: center
|
||||
:alt: Registered Nodes
|
||||
|
||||
|
||||
.. warning::
|
||||
|
||||
Do **NOT** click *done* until you see all *3 nodes registered*.
|
||||
|
||||
|
||||
Finalizing
|
||||
==========
|
||||
|
||||
Now that you have *3 registered nodes*, click **Done** and go grab yourself a cup of coffee. Maybe take a long walk, this will take time. Or if you are curious like me, you'd be looking at the logs, checking the containers in a quad pane ``tmux`` session.
|
||||
|
||||
After a long time has passed, our story ends with a refresh and a welcome with this page.
|
||||
|
||||
.. thumbnail:: /images/local_kubernetes_cluster_on_kvm/07-kubernetes_cluster.png
|
||||
:align: center
|
||||
:alt: Kubernetes Cluster
|
||||
|
||||
|
||||
Welcome to your Kubernetes Cluster.
|
||||
|
||||
Conclusion
|
||||
==========
|
||||
|
||||
At this point, you can check that all the nodes are healthy and you got yourself a kubernetes cluster. In future blog posts we will explore an avenue to deploy *multiple ingress controllers* on the same cluster on the same ``port: 80`` by giving them each an IP external to the cluster.
|
||||
|
||||
But for now, you got yourself a kubernetes cluster to play with. Enjoy.
|
||||
|
179  posts/kubernetes/minikube_setup.rst  Normal file
@@ -0,0 +1,179 @@
.. title: Minikube Setup
|
||||
.. date: 2019-02-09
|
||||
.. updated: 2019-06-21
|
||||
.. status: published
|
||||
.. tags: minikube, kubernetes, ingress, ingress-controller,
|
||||
.. category: kubernetes
|
||||
.. slug: minikube-setup
|
||||
.. authors: Elijah Lazkani
|
||||
.. description: A quick and dirty minikube setup.
|
||||
.. type: text
|
||||
|
||||
|
||||
If you have ever worked with *kubernetes*, you'd know that minikube out of the box does not give you what you need for a quick setup. I'm sure you can go ``minikube start``, everything's up... Great... ``kubectl get pods -n kube-system``... It works, let's move on...
|
||||
|
||||
But what if it's not, and we don't want to move on to something else just yet ? We need to look at this as a local test environment and see what it is capable of. We can learn so much from it before applying that knowledge to the lab. But, as always, there are a few tweaks we need to perform to give it the magic it needs to be a real environment.
|
||||
|
||||
.. TEASER_END
|
||||
|
||||
Prerequisites
|
||||
=============
|
||||
|
||||
If you are looking into *kubernetes*, I would suppose that you know your linux's ABCs and you can install and configure *minikube* and its prerequisites prior to the beginning of this tutorial.
|
||||
|
||||
You can find the guide to install *minikube* and configure it on the *minikube* `webpage <https://kubernetes.io/docs/setup/minikube/>`_.
|
||||
|
||||
Anyway, make sure you have *minikube* installed, *kubectl* and whatever driver dependencies you need to run it under that driver. In my case, I am using kvm2 which will be reflected in the commands given to start *minikube*.
|
||||
|
||||
Starting *minikube*
|
||||
===================
|
||||
|
||||
Let's start minikube.
|
||||
|
||||
.. code:: text
|
||||
|
||||
$ minikube start --vm-driver=kvm2
|
||||
Starting local Kubernetes v1.13.2 cluster...
|
||||
Starting VM...
|
||||
Getting VM IP address...
|
||||
Moving files into cluster...
|
||||
Setting up certs...
|
||||
Connecting to cluster...
|
||||
Setting up kubeconfig...
|
||||
Stopping extra container runtimes...
|
||||
Starting cluster components...
|
||||
Verifying apiserver health ...
|
||||
Kubectl is now configured to use the cluster.
|
||||
Loading cached images from config file.
|
||||
|
||||
|
||||
Everything looks great. Please enjoy minikube!
|
||||
|
||||
Great... At this point we have a cluster that's running, let's verify.
|
||||
|
||||
.. code:: text
|
||||
|
||||
# virsh list
 Id    Name                           State
----------------------------------------------
 3     minikube                       running
|
||||
|
||||
In my case, I can check with ``virsh``. If you used *VirtualBox*, you can check with its own tooling.
|
||||
|
||||
We can also test with ``kubectl``.
|
||||
|
||||
.. code:: text
|
||||
|
||||
$ kubectl version
|
||||
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:08:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
|
||||
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:28:14Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
|
||||
|
||||
Now what ? Well, now we deploy a few addons that we need to deploy in production as well for a functioning *kubernetes* cluster.
|
||||
|
||||
Let's check the list of add-ons available out of the box.
|
||||
|
||||
.. code:: text
|
||||
|
||||
$ minikube addons list
|
||||
- addon-manager: enabled
|
||||
- dashboard: enabled
|
||||
- default-storageclass: enabled
|
||||
- efk: disabled
|
||||
- freshpod: disabled
|
||||
- gvisor: disabled
|
||||
- heapster: enabled
|
||||
- ingress: enabled
|
||||
- kube-dns: disabled
|
||||
- metrics-server: enabled
|
||||
- nvidia-driver-installer: disabled
|
||||
- nvidia-gpu-device-plugin: disabled
|
||||
- registry: disabled
|
||||
- registry-creds: disabled
|
||||
- storage-provisioner: enabled
|
||||
- storage-provisioner-gluster: disabled
|
||||
|
||||
Make sure you have *dashboard*, *heapster*, *ingress* and *metrics-server* **enabled**. You can enable add-ons with ``minikube addons enable``.
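For example, enabling the *ingress* add-on, if it happens to be disabled on your machine, is a one-liner.

.. code:: text

    $ minikube addons enable ingress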
|
||||
|
||||
What's the problem then ?
|
||||
=========================
|
||||
|
||||
Here's the problem that comes next. How do you access the dashboard or anything running in the cluster ? Everyone online suggests you proxy a port and access the dashboard that way. Is that really how it should work ? Is that how production systems do it ?
|
||||
|
||||
The answer is of course not. They use the different types of *ingresses* at their disposal. In this case, *minikube* was kind enough to provide one for us: the default *kubernetes ingress controller*. It's a great option for an ingress controller and solid enough for production use. Fine, a lot of babble, but this babble is important. So how do we access stuff on a cluster ?
|
||||
|
||||
To answer that question we need to understand a few things. Yes, you can use a ``NodePort`` on your service and access it that way. But do you really want to manage these ports ? What's in use and what's not ? Besides, wouldn't it be better if you can use one port for all of the services ? How you may ask ?
|
||||
|
||||
We've been doing it for years, and by *we* I mean *ops* and *devops* people. You have to understand that the default *kubernetes* ingress controller is simply *nginx* under the covers. We've always been able to configure *nginx* to listen for a specific *hostname* and route the traffic wherever we want. It shouldn't be that hard to do, right?
|
||||
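To make the analogy concrete, here is roughly the kind of *nginx* server block we used to write by hand; the upstream name is made up purely for illustration:

.. code:: text

   server {
       listen 80;
       server_name dashboard.kube.local;

       location / {
           proxy_pass http://kubernetes-dashboard;
       }
   }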
|
||||
Well, this is exactly what an ingress controller does. It listens on the default ports and routes traffic from the outside according to the hostname requested. Let's look at our cluster and see what we need.
|
||||
|
||||
.. code:: text
|
||||
|
||||
$ kubectl get services --all-namespaces
|
||||
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
default kubernetes ClusterIP 10.96.0.1 443/TCP 17m
|
||||
kube-system default-http-backend NodePort 10.96.77.15 80:30001/TCP 17m
|
||||
kube-system heapster ClusterIP 10.100.193.109 80/TCP 17m
|
||||
kube-system kube-dns ClusterIP 10.96.0.10 53/UDP,53/TCP 17m
|
||||
kube-system kubernetes-dashboard ClusterIP 10.106.156.91 80/TCP 17m
|
||||
kube-system metrics-server ClusterIP 10.103.137.86 443/TCP 17m
|
||||
kube-system monitoring-grafana NodePort 10.109.127.87 80:30002/TCP 17m
|
||||
kube-system monitoring-influxdb ClusterIP 10.106.174.177 8083/TCP,8086/TCP 17m
|
||||
|
||||
In my case, you can see that a few services are configured as ``NodePort`` and can be accessed on those ports. But the *kubernetes-dashboard* is a ``ClusterIP``, so we can't get to it. Let's change that by adding an ingress for the service.
|
||||
|
||||
Ingress
|
||||
=======
|
||||
|
||||
An ingress is an object of kind ``Ingress`` that configures the ingress controller of your choice.
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
---
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: kubernetes-dashboard
|
||||
namespace: kube-system
|
||||
annotations:
|
||||
nginx.ingress.kubernetes.io/rewrite-target: /
|
||||
spec:
|
||||
rules:
|
||||
- host: dashboard.kube.local
|
||||
http:
|
||||
paths:
|
||||
- path: /
|
||||
backend:
|
||||
serviceName: kubernetes-dashboard
|
||||
servicePort: 80
|
||||
|
||||
Save that to a file named ``kube-dashboard-ingress.yaml`` and then run:
|
||||
|
||||
.. code:: text
|
||||
|
||||
$ kubectl apply -f kube-dashboard-ingress.yaml
|
||||
ingress.extensions/kubernetes-dashboard created
|
||||
|
||||
And now we get this.
|
||||
|
||||
.. code:: text
|
||||
|
||||
$ kubectl get ingress --all-namespaces
|
||||
NAMESPACE NAME HOSTS ADDRESS PORTS AGE
|
||||
kube-system kubernetes-dashboard dashboard.kube.local 80 17s
|
||||
|
||||
Now all we need to know is the IP of our one-node *kubernetes* cluster. Don't worry, *minikube* makes that easy for us.
|
||||
|
||||
.. code:: text
|
||||
|
||||
$ minikube ip
|
||||
192.168.39.79
|
||||
|
||||
Now let's add that host to our ``/etc/hosts`` file.
|
||||
|
||||
.. code:: text
|
||||
|
||||
192.168.39.79 dashboard.kube.local
|
||||
|
||||
Now if you go to http://dashboard.kube.local in your browser, you will be welcomed by the dashboard. How is that so? Well, as I explained, point the proper hostname at the nodes of the cluster and the ingress controller routes you to the right service.
|
||||
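If you want to convince yourself that it really is the hostname doing the routing, you can skip ``/etc/hosts`` entirely and set the ``Host`` header by hand, using the IP we got from ``minikube ip``:

.. code:: text

   $ curl -H "Host: dashboard.kube.local" http://192.168.39.79/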
|
||||
You can deploy multiple services that are accessed this way. You can also integrate this with a service mesh or a service discovery system that keeps track of which nodes are up and points you to them at all times. Either way, this is the clean way to expose services outside the cluster.
|
389
posts/kubernetes/your_first_minikube_helm_deployment.rst
Normal file
|
@ -0,0 +1,389 @@
|
|||
.. title: Your First Minikube Helm Deployment
|
||||
.. date: 2019-02-10
|
||||
.. updated: 2019-06-21
|
||||
.. status: published
|
||||
.. tags: minikube, kubernetes, ingress, helm, prometheus, grafana,
|
||||
.. category: kubernetes
|
||||
.. slug: your-first-minikube-helm-deployment
|
||||
.. authors: Elijah Lazkani
|
||||
.. description: Deploying your first minikube helm charts.
|
||||
.. type: text
|
||||
|
||||
|
||||
In the last post, we configured a basic *minikube* cluster. In this post, we will deploy a few items we will need in a cluster and, maybe in the future, experiment with them a bit.
|
||||
|
||||
.. TEASER_END
|
||||
|
||||
Prerequisite
|
||||
============
|
||||
|
||||
During this post, and probably during future posts, we will be using *helm* to deploy charts to our *minikube* cluster; some are offered by the helm team, others by the community, and maybe some will be our own. We need to install ``helm`` on our machine. It should be as easy as downloading the binary, but if you can find it in your package manager, go that route.
|
||||
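As a sketch, this is what that looks like on my end; the package name can differ per distribution, and ``--client`` is the *helm* v2 way of checking only the local binary:

.. code:: text

   $ sudo pacman -S helm    # or your distribution's equivalent
   $ helm version --client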
|
||||
Deploying Tiller
|
||||
================
|
||||
|
||||
Before we can start deploying with ``helm``, we need to deploy *tiller*. It's the in-cluster service that the ``helm`` client talks to and that manages the deployments.
|
||||
|
||||
.. code:: text
|
||||
|
||||
$ helm init --history-max=10
|
||||
Creating ~/.helm
|
||||
Creating ~/.helm/repository
|
||||
Creating ~/.helm/repository/cache
|
||||
Creating ~/.helm/repository/local
|
||||
Creating ~/.helm/plugins
|
||||
Creating ~/.helm/starters
|
||||
Creating ~/.helm/cache/archive
|
||||
Creating ~/.helm/repository/repositories.yaml
|
||||
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
|
||||
Adding local repo with URL: http://127.0.0.1:8879/charts
|
||||
$HELM_HOME has been configured at ~/.helm.
|
||||
|
||||
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
|
||||
|
||||
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
|
||||
To prevent this, run ``helm init`` with the --tiller-tls-verify flag.
|
||||
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
|
||||
|
||||
*Tiller* is now deployed; give it a few minutes for the pods to come up.
|
||||
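You can keep an eye on it; the *tiller* pod lives in the ``kube-system`` namespace and, as far as I know, carries the ``name=tiller`` label:

.. code:: text

   $ kubectl get pods -n kube-system -l name=tiller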
|
||||
Deploy Prometheus
|
||||
=================
|
||||
|
||||
We often need to monitor multiple aspects of the cluster easily, and sometimes we even write our applications to (let's say) publish metrics to *prometheus*. I said 'let's say' because technically the application only offers a metrics endpoint, which the *prometheus* server scrapes regularly and stores. Anyway, let's deploy *prometheus*.
|
||||
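The chart comes from the *stable* repository that ``helm init`` configured for us. If ``helm`` can't find it, refreshing the repositories first usually does the trick:

.. code:: text

   $ helm repo update
   $ helm search prometheus-operator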
|
||||
.. code:: text
|
||||
|
||||
$ helm install stable/prometheus-operator --name prometheus-operator --namespace kube-prometheus
|
||||
NAME: prometheus-operator
|
||||
LAST DEPLOYED: Sat Feb 9 18:09:43 2019
|
||||
NAMESPACE: kube-prometheus
|
||||
STATUS: DEPLOYED
|
||||
|
||||
RESOURCES:
|
||||
==> v1/Secret
|
||||
NAME TYPE DATA AGE
|
||||
prometheus-operator-grafana Opaque 3 4s
|
||||
alertmanager-prometheus-operator-alertmanager Opaque 1 4s
|
||||
|
||||
==> v1beta1/ClusterRole
|
||||
NAME AGE
|
||||
prometheus-operator-kube-state-metrics 3s
|
||||
psp-prometheus-operator-kube-state-metrics 3s
|
||||
psp-prometheus-operator-prometheus-node-exporter 3s
|
||||
|
||||
==> v1/Service
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
prometheus-operator-grafana ClusterIP 10.107.125.114 80/TCP 3s
|
||||
prometheus-operator-kube-state-metrics ClusterIP 10.99.250.30 8080/TCP 3s
|
||||
prometheus-operator-prometheus-node-exporter ClusterIP 10.111.99.199 9100/TCP 3s
|
||||
prometheus-operator-alertmanager ClusterIP 10.96.49.73 9093/TCP 3s
|
||||
prometheus-operator-coredns ClusterIP None 9153/TCP 3s
|
||||
prometheus-operator-kube-controller-manager ClusterIP None 10252/TCP 3s
|
||||
prometheus-operator-kube-etcd ClusterIP None 4001/TCP 3s
|
||||
prometheus-operator-kube-scheduler ClusterIP None 10251/TCP 3s
|
||||
prometheus-operator-operator ClusterIP 10.101.253.101 8080/TCP 3s
|
||||
prometheus-operator-prometheus ClusterIP 10.107.117.120 9090/TCP 3s
|
||||
|
||||
==> v1beta1/DaemonSet
|
||||
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
|
||||
prometheus-operator-prometheus-node-exporter 1 1 0 1 0 3s
|
||||
|
||||
==> v1/Deployment
|
||||
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
|
||||
prometheus-operator-operator 1 1 1 0 3s
|
||||
|
||||
==> v1/ServiceMonitor
|
||||
NAME AGE
|
||||
prometheus-operator-alertmanager 2s
|
||||
prometheus-operator-coredns 2s
|
||||
prometheus-operator-apiserver 2s
|
||||
prometheus-operator-kube-controller-manager 2s
|
||||
prometheus-operator-kube-etcd 2s
|
||||
prometheus-operator-kube-scheduler 2s
|
||||
prometheus-operator-kube-state-metrics 2s
|
||||
prometheus-operator-kubelet 2s
|
||||
prometheus-operator-node-exporter 2s
|
||||
prometheus-operator-operator 2s
|
||||
prometheus-operator-prometheus 2s
|
||||
|
||||
==> v1/Pod(related)
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
prometheus-operator-prometheus-node-exporter-fntpx 0/1 ContainerCreating 0 3s
|
||||
prometheus-operator-grafana-8559d7df44-vrm8d 0/3 ContainerCreating 0 2s
|
||||
prometheus-operator-kube-state-metrics-7769f5bd54-6znvh 0/1 ContainerCreating 0 2s
|
||||
prometheus-operator-operator-7967865bf5-cbd6r 0/1 ContainerCreating 0 2s
|
||||
|
||||
==> v1beta1/PodSecurityPolicy
|
||||
NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES
|
||||
prometheus-operator-grafana false RunAsAny RunAsAny RunAsAny RunAsAny false configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
|
||||
prometheus-operator-kube-state-metrics false RunAsAny MustRunAsNonRoot MustRunAs MustRunAs false secret
|
||||
prometheus-operator-prometheus-node-exporter false RunAsAny RunAsAny MustRunAs MustRunAs false configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim,hostPath
|
||||
prometheus-operator-alertmanager false RunAsAny RunAsAny MustRunAs MustRunAs false configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
|
||||
prometheus-operator-operator false RunAsAny RunAsAny MustRunAs MustRunAs false configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
|
||||
prometheus-operator-prometheus false RunAsAny RunAsAny MustRunAs MustRunAs false configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
|
||||
|
||||
==> v1/ConfigMap
|
||||
NAME DATA AGE
|
||||
prometheus-operator-grafana-config-dashboards 1 4s
|
||||
prometheus-operator-grafana 1 4s
|
||||
prometheus-operator-grafana-datasource 1 4s
|
||||
prometheus-operator-etcd 1 4s
|
||||
prometheus-operator-grafana-coredns-k8s 1 4s
|
||||
prometheus-operator-k8s-cluster-rsrc-use 1 4s
|
||||
prometheus-operator-k8s-node-rsrc-use 1 4s
|
||||
prometheus-operator-k8s-resources-cluster 1 4s
|
||||
prometheus-operator-k8s-resources-namespace 1 4s
|
||||
prometheus-operator-k8s-resources-pod 1 4s
|
||||
prometheus-operator-nodes 1 4s
|
||||
prometheus-operator-persistentvolumesusage 1 4s
|
||||
prometheus-operator-pods 1 4s
|
||||
prometheus-operator-statefulset 1 4s
|
||||
|
||||
==> v1/ClusterRoleBinding
|
||||
NAME AGE
|
||||
prometheus-operator-grafana-clusterrolebinding 3s
|
||||
prometheus-operator-alertmanager 3s
|
||||
prometheus-operator-operator 3s
|
||||
prometheus-operator-operator-psp 3s
|
||||
prometheus-operator-prometheus 3s
|
||||
prometheus-operator-prometheus-psp 3s
|
||||
|
||||
==> v1beta1/Role
|
||||
NAME AGE
|
||||
prometheus-operator-grafana 3s
|
||||
|
||||
==> v1beta1/Deployment
|
||||
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
|
||||
prometheus-operator-kube-state-metrics 1 1 1 0 3s
|
||||
|
||||
==> v1/Alertmanager
|
||||
NAME AGE
|
||||
prometheus-operator-alertmanager 3s
|
||||
|
||||
==> v1/ServiceAccount
|
||||
NAME SECRETS AGE
|
||||
prometheus-operator-grafana 1 4s
|
||||
prometheus-operator-kube-state-metrics 1 4s
|
||||
prometheus-operator-prometheus-node-exporter 1 4s
|
||||
prometheus-operator-alertmanager 1 4s
|
||||
prometheus-operator-operator 1 4s
|
||||
prometheus-operator-prometheus 1 4s
|
||||
|
||||
==> v1/ClusterRole
|
||||
NAME AGE
|
||||
prometheus-operator-grafana-clusterrole 4s
|
||||
prometheus-operator-alertmanager 3s
|
||||
prometheus-operator-operator 3s
|
||||
prometheus-operator-operator-psp 3s
|
||||
prometheus-operator-prometheus 3s
|
||||
prometheus-operator-prometheus-psp 3s
|
||||
|
||||
==> v1/Role
|
||||
NAME AGE
|
||||
prometheus-operator-prometheus-config 3s
|
||||
prometheus-operator-prometheus 2s
|
||||
prometheus-operator-prometheus 2s
|
||||
|
||||
==> v1beta1/RoleBinding
|
||||
NAME AGE
|
||||
prometheus-operator-grafana 3s
|
||||
|
||||
==> v1beta2/Deployment
|
||||
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
|
||||
prometheus-operator-grafana 1 1 1 0 3s
|
||||
|
||||
==> v1/Prometheus
|
||||
NAME AGE
|
||||
prometheus-operator-prometheus 2s
|
||||
|
||||
==> v1beta1/ClusterRoleBinding
|
||||
NAME AGE
|
||||
prometheus-operator-kube-state-metrics 3s
|
||||
psp-prometheus-operator-kube-state-metrics 3s
|
||||
psp-prometheus-operator-prometheus-node-exporter 3s
|
||||
|
||||
==> v1/RoleBinding
|
||||
NAME AGE
|
||||
prometheus-operator-prometheus-config 3s
|
||||
prometheus-operator-prometheus 2s
|
||||
prometheus-operator-prometheus 2s
|
||||
|
||||
==> v1/PrometheusRule
|
||||
NAME AGE
|
||||
prometheus-operator-alertmanager.rules 2s
|
||||
prometheus-operator-etcd 2s
|
||||
prometheus-operator-general.rules 2s
|
||||
prometheus-operator-k8s.rules 2s
|
||||
prometheus-operator-kube-apiserver.rules 2s
|
||||
prometheus-operator-kube-prometheus-node-alerting.rules 2s
|
||||
prometheus-operator-kube-prometheus-node-recording.rules 2s
|
||||
prometheus-operator-kube-scheduler.rules 2s
|
||||
prometheus-operator-kubernetes-absent 2s
|
||||
prometheus-operator-kubernetes-apps 2s
|
||||
prometheus-operator-kubernetes-resources 2s
|
||||
prometheus-operator-kubernetes-storage 2s
|
||||
prometheus-operator-kubernetes-system 2s
|
||||
prometheus-operator-node.rules 2s
|
||||
prometheus-operator-prometheus-operator 2s
|
||||
prometheus-operator-prometheus.rules 2s
|
||||
|
||||
|
||||
NOTES:
|
||||
The Prometheus Operator has been installed. Check its status by running:
|
||||
kubectl --namespace kube-prometheus get pods -l "release=prometheus-operator"
|
||||
|
||||
Visit https://github.com/coreos/prometheus-operator for instructions on how
|
||||
to create & configure Alertmanager and Prometheus instances using the Operator.
|
||||
|
||||
At this point, *prometheus* has been deployed to the cluster. Give it a few minutes for all the pods to come up, then let's keep working to get access to the rest of the consoles offered by the *prometheus* deployment.
|
||||
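You can keep an eye on them until everything reports ``Running``:

.. code:: text

   $ kubectl get pods -n kube-prometheus --watch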
|
||||
Prometheus Console
|
||||
==================
|
||||
|
||||
Let's write an ingress configuration to expose the *prometheus* console. First off, we need to look at the services deployed for *prometheus*.
|
||||
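To see everything the chart created at a glance (same namespace we installed into):

.. code:: text

   $ kubectl get services -n kube-prometheus

The one we care about here is the *prometheus* web service itself, so let's dig into it.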
|
||||
.. code:: text
|
||||
|
||||
$ kubectl get service prometheus-operator-prometheus -o yaml -n kube-prometheus
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
creationTimestamp: "2019-02-09T23:09:55Z"
|
||||
labels:
|
||||
app: prometheus-operator-prometheus
|
||||
chart: prometheus-operator-2.1.6
|
||||
heritage: Tiller
|
||||
release: prometheus-operator
|
||||
name: prometheus-operator-prometheus
|
||||
namespace: kube-prometheus
|
||||
resourceVersion: "10996"
|
||||
selfLink: /api/v1/namespaces/kube-prometheus/services/prometheus-operator-prometheus
|
||||
uid: d038d6fa-2cbf-11e9-b74f-48ea5bb87c0b
|
||||
spec:
|
||||
clusterIP: 10.107.117.120
|
||||
ports:
|
||||
- name: web
|
||||
port: 9090
|
||||
protocol: TCP
|
||||
targetPort: web
|
||||
selector:
|
||||
app: prometheus
|
||||
prometheus: prometheus-operator-prometheus
|
||||
sessionAffinity: None
|
||||
type: ClusterIP
|
||||
status:
|
||||
loadBalancer: {}
|
||||
|
||||
As we can see from the service above, its name is ``prometheus-operator-prometheus`` and it's listening on port ``9090``. So let's write the ingress configuration for it.
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
---
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: prometheus-dashboard
|
||||
namespace: kube-prometheus
|
||||
annotations:
|
||||
nginx.ingress.kubernetes.io/rewrite-target: /
|
||||
spec:
|
||||
rules:
|
||||
- host: prometheus.kube.local
|
||||
http:
|
||||
paths:
|
||||
- path: /
|
||||
backend:
|
||||
serviceName: prometheus-operator-prometheus
|
||||
servicePort: 9090
|
||||
|
||||
Save the file as ``kube-prometheus-ingress.yaml`` or some such and deploy.
|
||||
|
||||
.. code:: text
|
||||
|
||||
$ kubectl apply -f kube-prometheus-ingress.yaml
|
||||
ingress.extensions/prometheus-dashboard created
|
||||
|
||||
And then add the service host to our ``/etc/hosts``.
|
||||
|
||||
.. code:: text
|
||||
|
||||
192.168.39.78 prometheus.kube.local
|
||||
|
||||
Now you can access http://prometheus.kube.local from your browser.
|
||||
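If the page doesn't come up right away, it's worth checking that the ingress was picked up:

.. code:: text

   $ kubectl get ingress -n kube-prometheus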
|
||||
Grafana Console
|
||||
===============
|
||||
|
||||
Much like what we did with the *prometheus* console previously, we need to do the same for the *grafana* dashboard.
|
||||
|
||||
First step, let's check the service.
|
||||
|
||||
.. code:: text
|
||||
|
||||
$ kubectl get service prometheus-operator-grafana -o yaml -n kube-prometheus
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
creationTimestamp: "2019-02-09T23:09:55Z"
|
||||
labels:
|
||||
app: grafana
|
||||
chart: grafana-1.25.0
|
||||
heritage: Tiller
|
||||
release: prometheus-operator
|
||||
name: prometheus-operator-grafana
|
||||
namespace: kube-prometheus
|
||||
resourceVersion: "10973"
|
||||
selfLink: /api/v1/namespaces/kube-prometheus/services/prometheus-operator-grafana
|
||||
uid: cffe169b-2cbf-11e9-b74f-48ea5bb87c0b
|
||||
spec:
|
||||
clusterIP: 10.107.125.114
|
||||
ports:
|
||||
- name: service
|
||||
port: 80
|
||||
protocol: TCP
|
||||
targetPort: 3000
|
||||
selector:
|
||||
app: grafana
|
||||
release: prometheus-operator
|
||||
sessionAffinity: None
|
||||
type: ClusterIP
|
||||
status:
|
||||
loadBalancer: {}
|
||||
|
||||
We get ``prometheus-operator-grafana`` and port ``80``. Next is the ingress configuration.
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
---
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: prometheus-grafana
|
||||
namespace: kube-prometheus
|
||||
annotations:
|
||||
nginx.ingress.kubernetes.io/rewrite-target: /
|
||||
spec:
|
||||
rules:
|
||||
- host: grafana.kube.local
|
||||
http:
|
||||
paths:
|
||||
- path: /
|
||||
backend:
|
||||
serviceName: prometheus-operator-grafana
|
||||
servicePort: 80
|
||||
|
||||
Then we deploy.
|
||||
|
||||
.. code:: text
|
||||
|
||||
$ kubectl apply -f kube-grafana-ingress.yaml
|
||||
ingress.extensions/prometheus-grafana created
|
||||
|
||||
And let's not forget ``/etc/hosts``.
|
||||
|
||||
.. code:: text
|
||||
|
||||
192.168.39.78 grafana.kube.local
|
||||
|
||||
And the grafana dashboard should appear if you visit http://grafana.kube.local.
|
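One note before you log in: the chart stores the *grafana* admin credentials in the ``prometheus-operator-grafana`` secret we saw in the install output. Assuming the usual ``admin-user`` and ``admin-password`` keys, you can pull the password out with something like:

.. code:: text

   $ kubectl get secret prometheus-operator-grafana -n kube-prometheus \
       -o jsonpath='{.data.admin-password}' | base64 --decode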
3
requirements.txt
Normal file
|
@ -0,0 +1,3 @@
|
|||
nikola
|
||||
aiohttp
|
||||
watchdog
|
10751
themes/custom/assets/css/bootstrap.css
vendored
Normal file
12
themes/custom/assets/css/bootstrap.min.css
vendored
Normal file
12
themes/custom/custom.theme
Normal file
|
@ -0,0 +1,12 @@
|
|||
[Theme]
|
||||
engine = mako
|
||||
parent = bootstrap4
|
||||
author = The Nikola Contributors
|
||||
author_url = https://getnikola.com/
|
||||
license = MIT
|
||||
based_on = Bootstrap 4 <http://getbootstrap.com/>
|
||||
tags = bootstrap
|
||||
|
||||
[Family]
|
||||
family = bootstrap4
|
||||
|