Fixing my name to my original name.

Elia el Lazkani 2019-08-31 11:40:11 +02:00 committed by Elia El Lazkani
parent 7371635e10
commit 6f0596c2e0
No known key found for this signature in database
GPG key ID: FBD81F2B1F488C2B
8 changed files with 177 additions and 177 deletions


@@ -16,7 +16,7 @@ import time
# Data about this site
-BLOG_AUTHOR = "Elijah Lazkani"  # (translatable)
+BLOG_AUTHOR = "Elia El Lazkani"  # (translatable)
BLOG_TITLE = "The DevOps Blog"  # (translatable)
# This is the main URL for your site. It will be used
# in a prominent link. Don't forget the protocol (http/https)!


@@ -1,7 +1,7 @@
.. title: About me
.. date: 2019-06-21
.. status: published
-.. authors: Elijah Lazkani
+.. authors: Elia El Lazkani
I am a DevOps engineer with a passion for technology, automation, Linux and OpenSource. I love learning new tricks and challenging myself with the new tools released on a monthly basis around *kubernetes* and/or *configuration management*. In my free time, I like to write automation tools and packages, which can be found on PyPI, or tinker with new things around *kubernetes*. I blog about all of that *here*. I think that if I can write a blog post about something, I understand it well enough to have an opinion about it. It all comes in handy when the business need arises. I play around with technologies all day long, deploying, configuring, managing and maintaining all parts of the infrastructure below the application layer. I have also dabbled in "architecting" parts of different infrastructures end to end, and I can say I have a knack for it and enjoy it when possible.
@@ -12,7 +12,7 @@ Here's a quick and dirty list of some of the technologies I've had my hands dirt
- **Networking**: Configuring routers and switches (Brocade, CISCO, Dell).
- **Infrastructure**: Finding, automating, deploying and managing infrastructure key services. Too many to mention.
- **Virtualization**: Building infrastructures for virtualization (HyperV, libvirt, proxmox, RHEV, VMWare).
- **Configuration Management**: Ansible, Chef, Puppet.
- **CI/CD**: Gitlab-CI, Jenkins.
- **Cloud**: AWS.


@@ -5,7 +5,7 @@
.. status: published
.. tags: configuration management, ansible, molecule,
.. category: configuration management
-.. authors: Elijah Lazkani
+.. authors: Elia El Lazkani
.. description: A fast way to create a testable ansible role using molecule.
.. type: text
@@ -40,7 +40,7 @@ Because in this case I have *python3* installed, I can create a *virtualenv* eas
# Navigate to the directory
$ cd sandbox/test-roles/
# Create the virtualenv
~/sandbox/test-roles $ python -m venv .ansible-venv
# Activate the virtualenv
~/sandbox/test-roles $ source .ansible-venv/bin/activate
# Check that your virtualenv activated properly
@@ -58,12 +58,12 @@ At this point, we can install the required dependencies.
Collecting molecule
  Downloading https://files.pythonhosted.org/packages/84/97/e5764079cb7942d0fa68b832cb9948274abb42b72d9b7fe4a214e7943786/molecule-2.19.0-py3-none-any.whl (180kB)
    100% |████████████████████████████████| 184kB 2.2MB/s
...
Successfully built ansible ansible-lint anyconfig cerberus psutil click-completion tabulate tree-format pathspec future pycparser arrow
Installing collected packages: MarkupSafe, jinja2, PyYAML, six, pycparser, cffi, pynacl, idna, asn1crypto, cryptography, bcrypt, paramiko, ansible, pbr, git-url-parse, monotonic, fasteners, click, colorama, sh, python-gilt, ansible-lint, pathspec, yamllint, anyconfig, cerberus, psutil, more-itertools, py, attrs, pluggy, atomicwrites, pytest, testinfra, ptyprocess, pexpect, click-completion, tabulate, future, chardet, binaryornot, poyo, urllib3, certifi, requests, python-dateutil, arrow, jinja2-time, whichcraft, cookiecutter, tree-format, molecule, docker-pycreds, websocket-client, docker
Successfully installed MarkupSafe-1.1.0 PyYAML-3.13 ansible-2.7.5 ansible-lint-3.4.23 anyconfig-0.9.7 arrow-0.13.0 asn1crypto-0.24.0 atomicwrites-1.2.1 attrs-18.2.0 bcrypt-3.1.5 binaryornot-0.4.4 cerberus-1.2 certifi-2018.11.29 cffi-1.11.5 chardet-3.0.4 click-6.7 click-completion-0.3.1 colorama-0.3.9 cookiecutter-1.6.0 cryptography-2.4.2 docker-3.7.0 docker-pycreds-0.4.0 fasteners-0.14.1 future-0.17.1 git-url-parse-1.1.0 idna-2.8 jinja2-2.10 jinja2-time-0.2.0 molecule-2.19.0 monotonic-1.5 more-itertools-5.0.0 paramiko-2.4.2 pathspec-0.5.9 pbr-4.1.0 pexpect-4.6.0 pluggy-0.8.1 poyo-0.4.2 psutil-5.4.6 ptyprocess-0.6.0 py-1.7.0 pycparser-2.19 pynacl-1.3.0 pytest-4.1.0 python-dateutil-2.7.5 python-gilt-1.2.1 requests-2.21.0 sh-1.12.14 six-1.11.0 tabulate-0.8.2 testinfra-1.16.0 tree-format-0.1.2 urllib3-1.24.1 websocket-client-0.54.0 whichcraft-0.5.2 yamllint-1.11.1
Creating your first ansible role
================================
@@ -75,7 +75,7 @@ Once all the steps above are complete, we can start by creating our first ansibl
$ molecule init role -r example-role
--> Initializing new role example-role...
Initialized role in /home/elijah/sandbox/test-roles/example-role successfully.
$ tree example-role/
example-role/
├── defaults
@@ -99,7 +99,7 @@ Once all the steps above are complete, we can start by creating our first ansibl
│   └── main.yml
└── vars
    └── main.yml

9 directories, 12 files

You can find what each directory is for and how ansible works by visiting docs.ansible.com.
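For a rough feel of that layout without molecule installed, part of the skeleton above can be approximated by hand; this is a hypothetical sketch, not what ``molecule init`` actually runs, and it omits the ``molecule/`` and ``tests`` directories:

```shell
# Hand-rolled approximation of part of the generated role skeleton.
# Each standard role directory gets an empty YAML document as its main.yml.
for d in defaults handlers meta tasks vars; do
    mkdir -p "example-role/$d"
    printf -- '---\n' > "example-role/$d/main.yml"
done
find example-role -name main.yml
```

The real generator also seeds molecule scenario files and testinfra stubs, which is why the post's ``tree`` output shows 9 directories rather than 5.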
@@ -113,7 +113,7 @@ The meta file needs to be modified and filled with information about the role. This
---
galaxy_info:
-  author: Elijah Lazkani
+  author: Elia El Lazkani
  description: This is an example ansible role to showcase molecule at work
  license: license (BSD-2)
  min_ansible_version: 2.7
@@ -137,7 +137,7 @@ This is where the magic is set in motion. Tasks are the smallest entities in a r
    state: present
    create_home: yes
    home: /home/example

# Install nginx
- name: Install nginx
  apt:
@@ -206,76 +206,76 @@ First Role Pass
It is time to test our role and see what's going on.

.. code:: text
(.ansible-role) ~/sandbox/test-roles/example-role/ $ molecule converge
--> Validating schema /home/elijah/sandbox/test-roles/example-role/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── dependency
    ├── create
    ├── prepare
    └── converge

--> Scenario: 'default'
--> Action: 'dependency'
Skipping, missing the requirements file.
--> Scenario: 'default'
--> Action: 'create'

PLAY [Create] ******************************************************************

TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)

TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Build an Ansible compatible image] ***************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Create docker network(s)] ************************************************

TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=5    changed=4    unreachable=0    failed=0

--> Scenario: 'default'
--> Action: 'prepare'
Skipping, prepare playbook not configured.
--> Scenario: 'default'
--> Action: 'converge'

PLAY [Converge] ****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [example-role : Create 'example' user] ************************************
changed: [instance]

TASK [example-role : Install nginx] ********************************************
changed: [instance]

RUNNING HANDLER [example-role : Restart nginx] *********************************
changed: [instance]

PLAY RECAP *********************************************************************
instance                   : ok=4    changed=3    unreachable=0    failed=0
@@ -294,43 +294,43 @@ Molecule leverages the `testinfra <https://testinfra.readthedocs.io/en/latest/>`
.. code:: python
import os
import testinfra.utils.ansible_runner

testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
    os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('all')


def test_hosts_file(host):
    f = host.file('/etc/hosts')

    assert f.exists
    assert f.user == 'root'
    assert f.group == 'root'


def test_user_created(host):
    user = host.user("example")
    assert user.name == "example"
    assert user.home == "/home/example"


def test_user_home_exists(host):
    user_home = host.file("/home/example")
    assert user_home.exists
    assert user_home.is_directory


def test_nginx_is_installed(host):
    nginx = host.package("nginx")
    assert nginx.is_installed


def test_nginx_running_and_enabled(host):
    nginx = host.service("nginx")
    assert nginx.is_running
.. warning::

@@ -342,7 +342,7 @@ Molecule leverages the `testinfra <https://testinfra.readthedocs.io/en/latest/>`
--> Validating schema /home/elijah/sandbox/test-roles/example-role/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── destroy
@@ -355,140 +355,140 @@ Molecule leverages the `testinfra <https://testinfra.readthedocs.io/en/latest/>`
    ├── side_effect
    ├── verify
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /home/elijah/sandbox/test-roles/example-role/...
Lint completed successfully.
--> Executing Flake8 on files found in /home/elijah/sandbox/test-roles/example-role/molecule/default/tests/...
/home/elijah/.virtualenvs/world/lib/python3.7/site-packages/pycodestyle.py:113: FutureWarning: Possible nested set at position 1
  EXTRANEOUS_WHITESPACE_REGEX = re.compile(r'[[({] | []}),;:]')
Lint completed successfully.
--> Executing Ansible Lint on /home/elijah/sandbox/test-roles/example-role/molecule/default/playbook.yml...
Lint completed successfully.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=1    unreachable=0    failed=0

--> Scenario: 'default'
--> Action: 'dependency'
Skipping, missing the requirements file.
--> Scenario: 'default'
--> Action: 'syntax'

playbook: /home/elijah/sandbox/test-roles/example-role/molecule/default/playbook.yml

--> Scenario: 'default'
--> Action: 'create'

PLAY [Create] ******************************************************************

TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)

TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Build an Ansible compatible image] ***************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Create docker network(s)] ************************************************

TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=5    changed=4    unreachable=0    failed=0

--> Scenario: 'default'
--> Action: 'prepare'
Skipping, prepare playbook not configured.
--> Scenario: 'default'
--> Action: 'converge'

PLAY [Converge] ****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [example-role : Create 'example' user] ************************************
changed: [instance]

TASK [example-role : Install nginx] ********************************************
changed: [instance]

RUNNING HANDLER [example-role : Restart nginx] *********************************
changed: [instance]

PLAY RECAP *********************************************************************
instance                   : ok=4    changed=3    unreachable=0    failed=0

--> Scenario: 'default'
--> Action: 'idempotence'
Idempotence completed successfully.
--> Scenario: 'default'
--> Action: 'side_effect'
Skipping, side effect playbook not configured.
--> Scenario: 'default'
--> Action: 'verify'
--> Executing Testinfra tests found in /home/elijah/sandbox/test-roles/example-role/molecule/default/tests/...

============================= test session starts ==============================
platform linux -- Python 3.7.1, pytest-4.1.0, py-1.7.0, pluggy-0.8.1
rootdir: /home/elijah/sandbox/test-roles/example-role/molecule/default, inifile:
plugins: testinfra-1.16.0
collected 5 items

tests/test_default.py .....                                              [100%]

=============================== warnings summary ===============================
...
==================== 5 passed, 7 warnings in 27.37 seconds =====================
Verifier completed successfully.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=2    unreachable=0    failed=0


@@ -5,7 +5,7 @@
.. tags: irc, ssh, weechat, notification,
.. category: irc
.. slug: weechat-ssh-and-notification
-.. authors: Elijah Lazkani
+.. authors: Elia El Lazkani
.. description: A way to patch weechat notifications through your system's libnotify over ssh.
.. type: text


@@ -5,7 +5,7 @@
.. tags: kubernetes, helm, tiller,
.. category: kubernetes
.. slug: deploying-helm-in-your-kubernetes-cluster
-.. authors: Elijah Lazkani
+.. authors: Elia El Lazkani
.. description: Post explaining how to deploy helm in your kubernetes cluster.
.. type: text
@@ -102,7 +102,7 @@ Save the following in ``ClusterRoleBinding.yaml`` and then
$ kubectl apply -f ClusterRoleBinding.yaml
clusterrolebinding.rbac.authorization.k8s.io/tiller created
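The manifest itself sits outside this hunk; for context, a typical Helm v2 tiller binding (an assumed shape, not necessarily the post's exact file) grants the ``tiller`` ServiceAccount cluster-admin:

```yaml
# Assumed ClusterRoleBinding.yaml; the actual file content is not part of
# this diff. Binds the tiller ServiceAccount in kube-system to cluster-admin.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```

The ``kubectl apply`` output above (``clusterrolebinding.rbac.authorization.k8s.io/tiller created``) is consistent with a binding named ``tiller``.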
Deploying Tiller
================
@@ -122,12 +122,12 @@ Now that we have all the basics deployed, we can finally deploy *Tiller* in the
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at ~/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
.. note:: .. note::


@@ -5,7 +5,7 @@
.. tags: kubernetes, rancher, rancheros, kvm, libvirt,
.. category: kubernetes
.. slug: local-kubernetes-cluster-on-kvm
-.. authors: Elijah Lazkani
+.. authors: Elia El Lazkani
.. description: Deploying a kubernetes cluster locally on KVM.
.. type: text
@@ -31,7 +31,7 @@ Installing RancherOS
Once all 4 nodes have been created, boot each into the *RancherOS* `ISO <https://rancher.com/docs/os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso/>`_ and do the following.

.. note::

    Because I was using *libvirt*, I was able to do ``virsh console <vm>`` and run these commands.
@@ -42,13 +42,13 @@ If you are running these VMs on *libvirt*, then you can console into the box and
.. code:: text

# virsh list
 Id    Name       State
-------------------------
 21    kube01     running
 22    kube02     running
 23    kube03     running
 24    rancher    running

# virsh console rancher
@@ -58,7 +58,7 @@ Configuration
If you read the *RancherOS* `documentation <https://rancher.com/docs/os/v1.x/en/>`_, you'll find out that you can configure the *OS* with a ``YAML`` configuration file, so let's do that.

.. code:: text

$ vi cloud-config.yml

That file should hold the following.
@@ -66,17 +66,17 @@ And that file should hold.
.. code:: yaml

---
hostname: rancher.kube.loco
ssh_authorized_keys:
  - ssh-rsa AAA...
rancher:
  network:
    interfaces:
      eth0:
        address: 192.168.122.5/24
        dhcp: false
        gateway: 192.168.122.1
        mtu: 1500
Make sure that your **public** *ssh key* replaces the one in the example above and, if you have a different network configuration for your VMs, change the network configuration here.
After you save that file, install the *OS*.
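As a sketch based on the *RancherOS* documentation (the target disk ``/dev/sda`` is an assumption here, verify yours with ``fdisk -l`` first):

```shell
# Install RancherOS to disk using the cloud-config written above.
# /dev/sda is an assumed target disk -- double-check before running,
# as this wipes the device.
sudo ros install -c cloud-config.yml -d /dev/sda
```

After the node reboots into the installed system, the ``ssh_authorized_keys`` entry from the config should let you in over *ssh*.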
Do the same for the rest of the servers; their names and IPs should be as follows (if you are following this tutorial):
.. code:: text

   192.168.122.5    rancher.kube.loco
   192.168.122.10   kube01.kube.loco
   192.168.122.11   kube02.kube.loco
   192.168.122.12   kube03.kube.loco
Post Installation Configuration
-------------------------------
After *RancherOS* has been installed, one will need to configure ``/etc/hosts``. On the *rancher* box it should look like the following:
.. code:: text

   127.0.0.1        rancher.kube.loco
   192.168.122.5    rancher.kube.loco
   192.168.122.10   kube01.kube.loco
   192.168.122.11   kube02.kube.loco
   192.168.122.12   kube03.kube.loco
Do the same on the rest of the servers, changing the ``127.0.0.1`` entry to each server's own hostname.
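Scripting these per-node files keeps them consistent; a minimal sketch using the names and IPs above (the ``hosts.<name>`` output files are an assumption, copy each onto its node as ``/etc/hosts``):

```shell
#!/bin/sh
# Generate the /etc/hosts body for every node in one go.
set -e
nodes="rancher:192.168.122.5 kube01:192.168.122.10 kube02:192.168.122.11 kube03:192.168.122.12"
for entry in $nodes; do
    name=${entry%%:*}
    out="hosts.${name}"                 # assumed output file, one per node
    # The loopback line points at the node's own hostname...
    printf '127.0.0.1        %s.kube.loco\n' "$name" > "$out"
    # ...followed by one line per node in the cluster.
    for peer in $nodes; do
        printf '%s   %s.kube.loco\n' "${peer##*:}" "${peer%%:*}" >> "$out"
    done
done
cat hosts.kube01
```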
Now, let's go ahead and deploy Rancher.
First, ``$ ssh rancher@192.168.122.5`` then:
.. code:: text

   [rancher@rancher ~]$ docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher
Give the container, and the application inside it, a few minutes to come up. Meanwhile, configure the ``/etc/hosts`` file on your machine.
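A quick sketch of that edit (``hosts.local`` is a stand-in for ``/etc/hosts``; editing the real file needs root, e.g. via ``sudo tee -a /etc/hosts``):

```shell
# Add the Rancher UI hostname on your workstation.
hosts_file="hosts.local"    # stand-in for /etc/hosts
printf '192.168.122.5    rancher.kube.loco\n' >> "$hosts_file"
grep 'rancher.kube.loco' "$hosts_file"
```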
Make sure you choose **Custom** as a *provider*, then fill in the **Cluster Name**.
Optionally, you can choose your **Network Provider**; in my case I chose **Calico**. Then I clicked on **Show advanced** at the bottom right corner and expanded the newly shown **Advanced Cluster Options** tab.
.. thumbnail:: /images/local_kubernetes_cluster_on_kvm/04-nginx_ingressDisabled.png
   :align: center
We will disable the **Nginx Ingress** and the **Pod Security Policy Support** for the time being; why will, hopefully, become apparent later. Then hit **Next**.
.. thumbnail:: /images/local_kubernetes_cluster_on_kvm/05-customer_nodes.png
   :align: center
   :alt: Customize Nodes
Do the same for *all the rest*.
.. thumbnail:: /images/local_kubernetes_cluster_on_kvm/06-registered_nodes.png
   :align: center
   :alt: Registered Nodes
.. warning::
Conclusion
----------
At this point, you can check that all the nodes are healthy and you got yourself a kubernetes cluster. In future blog posts we will explore an avenue to deploy *multiple ingress controllers* on the same cluster on the same ``port: 80`` by giving them each an IP external to the cluster.
But for now, you got yourself a kubernetes cluster to play with. Enjoy.
.. title: Minikube Setup
.. date: 2019-02-09
.. updated: 2019-07-02
.. status: published
.. tags: minikube, kubernetes, ingress, ingress-controller,
.. category: kubernetes
.. slug: minikube-setup
.. authors: Elia El Lazkani
.. description: A quick and dirty minikube setup.
.. type: text
Let's start minikube.

.. code:: text

   Verifying apiserver health ...
   Kubectl is now configured to use the cluster.
   Loading cached images from config file.

   Everything looks great. Please enjoy minikube!
Great... At this point we have a cluster that's running; let's verify.
.. code:: text

   # virsh list
    Id    Name        State
   --------------------------
    3     minikube    running
.. tags: minikube, kubernetes, ingress, helm, prometheus, grafana,
.. category: kubernetes
.. slug: your-first-minikube-helm-deployment
.. authors: Elia El Lazkani
.. description: Deploying your first minikube helm charts.
.. type: text
Deploying Tiller
----------------
Before we can start with the deployments using ``helm``, we need to deploy *tiller*. It's a service that manages communication between the client and the deployments.
.. code:: text

   $ helm init --history-max=10
   Creating ~/.helm
   Creating ~/.helm/repository
   Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
   Adding local repo with URL: http://127.0.0.1:8879/charts
   $HELM_HOME has been configured at ~/.helm.

   Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

   Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
   To prevent this, run ``helm init`` with the --tiller-tls-verify flag.
   For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
We often need to monitor multiple aspects of the cluster easily.
.. code:: text

   LAST DEPLOYED: Sat Feb  9 18:09:43 2019
   NAMESPACE: kube-prometheus
   STATUS: DEPLOYED

   RESOURCES:
   ==> v1/Secret
   NAME                                           TYPE    DATA  AGE
   prometheus-operator-grafana                    Opaque  3     4s
   alertmanager-prometheus-operator-alertmanager  Opaque  1     4s

   ==> v1beta1/ClusterRole
   NAME                                              AGE
   prometheus-operator-kube-state-metrics            3s
   psp-prometheus-operator-kube-state-metrics        3s
   psp-prometheus-operator-prometheus-node-exporter  3s

   ==> v1/Service
   NAME                                TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)    AGE
   prometheus-operator-grafana         ClusterIP  10.107.125.114  <none>       80/TCP     3s
   prometheus-operator-kube-scheduler  ClusterIP  None            <none>       10251/TCP  3s
   prometheus-operator-operator        ClusterIP  10.101.253.101  <none>       8080/TCP   3s
   prometheus-operator-prometheus      ClusterIP  10.107.117.120  <none>       9090/TCP   3s

   ==> v1beta1/DaemonSet
   NAME                                          DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR  AGE
   prometheus-operator-prometheus-node-exporter  1        1        0      1           0          <none>         3s

   ==> v1/Deployment
   NAME                          DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
   prometheus-operator-operator  1        1        1           0          3s

   ==> v1/ServiceMonitor
   NAME                               AGE
   prometheus-operator-alertmanager   2s
   prometheus-operator-node-exporter  2s
   prometheus-operator-operator       2s
   prometheus-operator-prometheus     2s

   ==> v1/Pod(related)
   NAME                                                     READY  STATUS             RESTARTS  AGE
   prometheus-operator-prometheus-node-exporter-fntpx       0/1    ContainerCreating  0         3s
   prometheus-operator-grafana-8559d7df44-vrm8d             0/3    ContainerCreating  0         2s
   prometheus-operator-kube-state-metrics-7769f5bd54-6znvh  0/1    ContainerCreating  0         2s
   prometheus-operator-operator-7967865bf5-cbd6r            0/1    ContainerCreating  0         2s

   ==> v1beta1/PodSecurityPolicy
   NAME                              PRIV   CAPS  SELINUX   RUNASUSER  FSGROUP    SUPGROUP   READONLYROOTFS  VOLUMES
   prometheus-operator-grafana       false        RunAsAny  RunAsAny   RunAsAny   RunAsAny   false           configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
   prometheus-operator-alertmanager  false        RunAsAny  RunAsAny   MustRunAs  MustRunAs  false           configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
   prometheus-operator-operator      false        RunAsAny  RunAsAny   MustRunAs  MustRunAs  false           configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
   prometheus-operator-prometheus    false        RunAsAny  RunAsAny   MustRunAs  MustRunAs  false           configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim

   ==> v1/ConfigMap
   NAME                                           DATA  AGE
   prometheus-operator-grafana-config-dashboards  1     4s
   prometheus-operator-persistentvolumesusage     1     4s
   prometheus-operator-pods                       1     4s
   prometheus-operator-statefulset                1     4s

   ==> v1/ClusterRoleBinding
   NAME                                            AGE
   prometheus-operator-grafana-clusterrolebinding  3s
   prometheus-operator-operator-psp                3s
   prometheus-operator-prometheus                  3s
   prometheus-operator-prometheus-psp              3s

   ==> v1beta1/Role
   NAME                         AGE
   prometheus-operator-grafana  3s

   ==> v1beta1/Deployment
   NAME                                    DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
   prometheus-operator-kube-state-metrics  1        1        1           0          3s

   ==> v1/Alertmanager
   NAME                              AGE
   prometheus-operator-alertmanager  3s

   ==> v1/ServiceAccount
   NAME                              SECRETS  AGE
   prometheus-operator-grafana       1        4s
   prometheus-operator-alertmanager  1        4s
   prometheus-operator-operator      1        4s
   prometheus-operator-prometheus    1        4s

   ==> v1/ClusterRole
   NAME                                     AGE
   prometheus-operator-grafana-clusterrole  4s
   prometheus-operator-operator-psp         3s
   prometheus-operator-prometheus           3s
   prometheus-operator-prometheus-psp       3s

   ==> v1/Role
   NAME                                   AGE
   prometheus-operator-prometheus-config  3s
   prometheus-operator-prometheus         2s
   prometheus-operator-prometheus         2s

   ==> v1beta1/RoleBinding
   NAME                         AGE
   prometheus-operator-grafana  3s

   ==> v1beta2/Deployment
   NAME                         DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
   prometheus-operator-grafana  1        1        1           0          3s

   ==> v1/Prometheus
   NAME                            AGE
   prometheus-operator-prometheus  2s

   ==> v1beta1/ClusterRoleBinding
   NAME                                              AGE
   prometheus-operator-kube-state-metrics            3s
   psp-prometheus-operator-kube-state-metrics        3s
   psp-prometheus-operator-prometheus-node-exporter  3s

   ==> v1/RoleBinding
   NAME                                   AGE
   prometheus-operator-prometheus-config  3s
   prometheus-operator-prometheus         2s
   prometheus-operator-prometheus         2s

   ==> v1/PrometheusRule
   NAME                                    AGE
   prometheus-operator-alertmanager.rules  2s

   NOTES:
   The Prometheus Operator has been installed. Check its status by running:
     kubectl --namespace kube-prometheus get pods -l "release=prometheus-operator"

   Visit https://github.com/coreos/prometheus-operator for instructions on how
   to create & configure Alertmanager and Prometheus instances using the Operator.