Fixing my name to my original name.
parent 7371635e10
commit 6f0596c2e0

8 changed files with 177 additions and 177 deletions

conf.py

@@ -16,7 +16,7 @@ import time

# Data about this site
-BLOG_AUTHOR = "Elijah Lazkani" # (translatable)
+BLOG_AUTHOR = "Elia El Lazkani" # (translatable)
BLOG_TITLE = "The DevOps Blog" # (translatable)
# This is the main URL for your site. It will be used
# in a prominent link. Don't forget the protocol (http/https)!

@@ -1,7 +1,7 @@
.. title: About me
.. date: 2019-06-21
.. status: published
-.. authors: Elijah Lazkani
+.. authors: Elia El Lazkani

I am a DevOps engineer with a passion for technology, automation, Linux and OpenSource. I love learning new tricks and challenging myself with the new tools released on a monthly basis around *kubernetes* and/or *configuration management*. In my free time, I like to write automation tools and packages, which can be found on PyPI, or tinker with new things around *kubernetes*. I blog about all of that *here*; I figure that if I can write a blog post about something, I understand it well enough to have an opinion about it. It all comes in handy when the business need arises. I play around with technologies all day long by deploying, configuring, managing and maintaining all parts of the infrastructure below the application layer. I have also dabbled in "architecting" parts of different infrastructures end to end, and I can say I have a knack for it and enjoy it whenever possible.

@@ -12,7 +12,7 @@ Here's a quick and dirty list of some of the technologies I've had my hands dirt

- **Networking**: Configuring routers and switches (Brocade, CISCO, Dell).
- **Infrastructure**: Finding, automating, deploying and managing key infrastructure services. Too many to mention.
- **Virtualization**: Building infrastructures for virtualization (HyperV, libvirt, proxmox, RHEV, VMWare).
- **Configuration Management**: Ansible, Chef, Puppet.
- **CI/CD**: Gitlab-CI, Jenkins.
- **Cloud**: AWS.

@@ -5,7 +5,7 @@
.. status: published
.. tags: configuration management, ansible, molecule,
.. category: configuration management
-.. authors: Elijah Lazkani
+.. authors: Elia El Lazkani
.. description: A fast way to create a testable ansible role using molecule.
.. type: text

@@ -40,7 +40,7 @@ Because in this case I have *python3* installed, I can create a *virtualenv* eas

# Navigate to the directory
$ cd sandbox/test-roles/
# Create the virtualenv
~/sandbox/test-roles $ python -m venv .ansible-venv
# Activate the virtualenv
~/sandbox/test-roles $ source .ansible-venv/bin/activate
# Check that your virtualenv activated properly

@@ -58,12 +58,12 @@ At this point, we can install the required dependencies.

Collecting molecule
Downloading https://files.pythonhosted.org/packages/84/97/e5764079cb7942d0fa68b832cb9948274abb42b72d9b7fe4a214e7943786/molecule-2.19.0-py3-none-any.whl (180kB)
100% |████████████████████████████████| 184kB 2.2MB/s

...

Successfully built ansible ansible-lint anyconfig cerberus psutil click-completion tabulate tree-format pathspec future pycparser arrow
Installing collected packages: MarkupSafe, jinja2, PyYAML, six, pycparser, cffi, pynacl, idna, asn1crypto, cryptography, bcrypt, paramiko, ansible, pbr, git-url-parse, monotonic, fasteners, click, colorama, sh, python-gilt, ansible-lint, pathspec, yamllint, anyconfig, cerberus, psutil, more-itertools, py, attrs, pluggy, atomicwrites, pytest, testinfra, ptyprocess, pexpect, click-completion, tabulate, future, chardet, binaryornot, poyo, urllib3, certifi, requests, python-dateutil, arrow, jinja2-time, whichcraft, cookiecutter, tree-format, molecule, docker-pycreds, websocket-client, docker
Successfully installed MarkupSafe-1.1.0 PyYAML-3.13 ansible-2.7.5 ansible-lint-3.4.23 anyconfig-0.9.7 arrow-0.13.0 asn1crypto-0.24.0 atomicwrites-1.2.1 attrs-18.2.0 bcrypt-3.1.5 binaryornot-0.4.4 cerberus-1.2 certifi-2018.11.29 cffi-1.11.5 chardet-3.0.4 click-6.7 click-completion-0.3.1 colorama-0.3.9 cookiecutter-1.6.0 cryptography-2.4.2 docker-3.7.0 docker-pycreds-0.4.0 fasteners-0.14.1 future-0.17.1 git-url-parse-1.1.0 idna-2.8 jinja2-2.10 jinja2-time-0.2.0 molecule-2.19.0 monotonic-1.5 more-itertools-5.0.0 paramiko-2.4.2 pathspec-0.5.9 pbr-4.1.0 pexpect-4.6.0 pluggy-0.8.1 poyo-0.4.2 psutil-5.4.6 ptyprocess-0.6.0 py-1.7.0 pycparser-2.19 pynacl-1.3.0 pytest-4.1.0 python-dateutil-2.7.5 python-gilt-1.2.1 requests-2.21.0 sh-1.12.14 six-1.11.0 tabulate-0.8.2 testinfra-1.16.0 tree-format-0.1.2 urllib3-1.24.1 websocket-client-0.54.0 whichcraft-0.5.2 yamllint-1.11.1

Creating your first ansible role
================================

@@ -75,7 +75,7 @@ Once all the steps above are complete, we can start by creating our first ansibl

$ molecule init role -r example-role
--> Initializing new role example-role...
Initialized role in /home/elijah/sandbox/test-roles/example-role successfully.

$ tree example-role/
example-role/
├── defaults

@@ -99,7 +99,7 @@ Once all the steps above are complete, we can start by creating our first ansibl

│   └── main.yml
└── vars
    └── main.yml

9 directories, 12 files

You can find what each directory is for and how ansible works by visiting docs.ansible.com.

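The scaffolded ``molecule/default/molecule.yml`` that the converge and test runs below keep validating is not quoted in this diff. Purely as an illustrative sketch of a molecule 2.19-era scenario file, it would look roughly like the following; the Docker driver and the Ubuntu image are assumptions inferred from the ``apt`` tasks and the ``instance`` platform name visible later in the output, not the post's verbatim file.

.. code:: yaml

   # molecule/default/molecule.yml (illustrative sketch, not the file from the post)
   ---
   dependency:
     name: galaxy
   driver:
     name: docker
   lint:
     name: yamllint
   platforms:
     - name: instance        # matches the host name seen in the converge output
       image: ubuntu:18.04   # assumption: an apt-based image, since the role uses the apt module
   provisioner:
     name: ansible
     lint:
       name: ansible-lint
   scenario:
     name: default
   verifier:
     name: testinfra
     lint:
       name: flake8
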
@@ -113,7 +113,7 @@ The meta file needs to be modified and filled with information about the role. This

---
galaxy_info:
-  author: Elijah Lazkani
+  author: Elia El Lazkani
  description: This is an example ansible role to showcase molecule at work
  license: license (BDS-2)
  min_ansible_version: 2.7

@@ -137,7 +137,7 @@ This is where the magic is set in motion. Tasks are the smallest entities in a r

    state: present
    create_home: yes
    home: /home/example

# Install nginx
- name: Install nginx
  apt:

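Only a slice of the role's ``tasks/main.yml`` survives inside this hunk. As an illustrative sketch only, reconstructed from the task and handler names that appear later in the molecule output ("Create 'example' user", "Install nginx", "Restart nginx"), the full file plausibly looks like this; the exact module arguments are assumptions, not the post's verbatim content.

.. code:: yaml

   # tasks/main.yml (illustrative sketch)
   ---
   # Create the example user with a home directory
   - name: Create 'example' user
     user:
       name: example
       state: present
       create_home: yes
       home: /home/example

   # Install nginx and trigger a restart once it changes
   - name: Install nginx
     apt:
       name: nginx
       state: present
       update_cache: yes
     notify: Restart nginx

The handler it notifies would live in ``handlers/main.yml``, along these lines:

.. code:: yaml

   # handlers/main.yml (illustrative sketch)
   ---
   - name: Restart nginx
     service:
       name: nginx
       state: restarted
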
@@ -206,76 +206,76 @@ First Role Pass

It is time to test our role and see what's going on.

.. code:: text

(.ansible-role) ~/sandbox/test-roles/example-role/ $ molecule converge
--> Validating schema /home/elijah/sandbox/test-roles/example-role/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── dependency
    ├── create
    ├── prepare
    └── converge

--> Scenario: 'default'
--> Action: 'dependency'
Skipping, missing the requirements file.
--> Scenario: 'default'
--> Action: 'create'

PLAY [Create] ******************************************************************

TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)

skipping: [localhost] => (item=None)

TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Build an Ansible compatible image] ***************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Create docker network(s)] ************************************************

TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=5 changed=4 unreachable=0 failed=0

--> Scenario: 'default'
--> Action: 'prepare'
Skipping, prepare playbook not configured.
--> Scenario: 'default'
--> Action: 'converge'

PLAY [Converge] ****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [example-role : Create 'example' user] ************************************
changed: [instance]

TASK [example-role : Install nginx] ********************************************
changed: [instance]

RUNNING HANDLER [example-role : Restart nginx] *********************************
changed: [instance]

PLAY RECAP *********************************************************************
instance : ok=4 changed=3 unreachable=0 failed=0

@@ -294,43 +294,43 @@ Molecule leverages the `testinfra <https://testinfra.readthedocs.io/en/latest/>`

.. code:: python

import os

import testinfra.utils.ansible_runner

testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
    os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('all')


def test_hosts_file(host):
    f = host.file('/etc/hosts')

    assert f.exists
    assert f.user == 'root'
    assert f.group == 'root'


def test_user_created(host):
    user = host.user("example")
    assert user.name == "example"
    assert user.home == "/home/example"


def test_user_home_exists(host):
    user_home = host.file("/home/example")
    assert user_home.exists
    assert user_home.is_directory


def test_nginx_is_installed(host):
    nginx = host.package("nginx")
    assert nginx.is_installed


def test_nginx_running_and_enabled(host):
    nginx = host.service("nginx")
    assert nginx.is_running


.. warning::

@@ -342,7 +342,7 @@ Molecule leverages the `testinfra <https://testinfra.readthedocs.io/en/latest/>`

--> Validating schema /home/elijah/sandbox/test-roles/example-role/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── destroy

@@ -355,140 +355,140 @@ Molecule leverages the `testinfra <https://testinfra.readthedocs.io/en/latest/>`

    ├── side_effect
    ├── verify
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /home/elijah/sandbox/test-roles/example-role/...
Lint completed successfully.
--> Executing Flake8 on files found in /home/elijah/sandbox/test-roles/example-role/molecule/default/tests/...
/home/elijah/.virtualenvs/world/lib/python3.7/site-packages/pycodestyle.py:113: FutureWarning: Possible nested set at position 1
  EXTRANEOUS_WHITESPACE_REGEX = re.compile(r'[[({] | []}),;:]')
Lint completed successfully.
--> Executing Ansible Lint on /home/elijah/sandbox/test-roles/example-role/molecule/default/playbook.yml...
Lint completed successfully.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0

--> Scenario: 'default'
--> Action: 'dependency'
Skipping, missing the requirements file.
--> Scenario: 'default'
--> Action: 'syntax'

playbook: /home/elijah/sandbox/test-roles/example-role/molecule/default/playbook.yml

--> Scenario: 'default'
--> Action: 'create'

PLAY [Create] ******************************************************************

TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)

skipping: [localhost] => (item=None)

TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Build an Ansible compatible image] ***************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Create docker network(s)] ************************************************

TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=5 changed=4 unreachable=0 failed=0

--> Scenario: 'default'
--> Action: 'prepare'
Skipping, prepare playbook not configured.
--> Scenario: 'default'
--> Action: 'converge'

PLAY [Converge] ****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [example-role : Create 'example' user] ************************************
changed: [instance]

TASK [example-role : Install nginx] ********************************************
changed: [instance]

RUNNING HANDLER [example-role : Restart nginx] *********************************
changed: [instance]

PLAY RECAP *********************************************************************
instance : ok=4 changed=3 unreachable=0 failed=0

--> Scenario: 'default'
--> Action: 'idempotence'
Idempotence completed successfully.
--> Scenario: 'default'
--> Action: 'side_effect'
Skipping, side effect playbook not configured.
--> Scenario: 'default'
--> Action: 'verify'
--> Executing Testinfra tests found in /home/elijah/sandbox/test-roles/example-role/molecule/default/tests/...
============================= test session starts ==============================
platform linux -- Python 3.7.1, pytest-4.1.0, py-1.7.0, pluggy-0.8.1
rootdir: /home/elijah/sandbox/test-roles/example-role/molecule/default, inifile:
plugins: testinfra-1.16.0
collected 5 items

tests/test_default.py ..... [100%]

=============================== warnings summary ===============================

...

==================== 5 passed, 7 warnings in 27.37 seconds =====================
Verifier completed successfully.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0

@@ -5,7 +5,7 @@
.. tags: irc, ssh, weechat, notification,
.. category: irc
.. slug: weechat-ssh-and-notification
-.. authors: Elijah Lazkani
+.. authors: Elia El Lazkani
.. description: A way to patch weechat notifications through your system's libnotify over ssh.
.. type: text

@@ -5,7 +5,7 @@
.. tags: kubernetes, helm, tiller,
.. category: kubernetes
.. slug: deploying-helm-in-your-kubernetes-cluster
-.. authors: Elijah Lazkani
+.. authors: Elia El Lazkani
.. description: Post explaining how to deploy helm in your kubernetes cluster.
.. type: text

@@ -102,7 +102,7 @@ Save the following in ``ClusterRoleBinding.yaml`` and then

$ kubectl apply -f ClusterRoleBinding.yaml
clusterrolebinding.rbac.authorization.k8s.io/tiller created

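The ``ClusterRoleBinding.yaml`` that the hunk header refers to sits outside the diff context. For orientation only, the conventional Tiller binding, assuming a ``tiller`` service account in ``kube-system`` bound to ``cluster-admin``, looks roughly like this:

.. code:: yaml

   # ClusterRoleBinding.yaml (illustrative sketch of the usual Tiller RBAC binding)
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRoleBinding
   metadata:
     name: tiller
   roleRef:
     apiGroup: rbac.authorization.k8s.io
     kind: ClusterRole
     name: cluster-admin
   subjects:
     - kind: ServiceAccount
       name: tiller
       namespace: kube-system
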
Deploying Tiller
================

@@ -122,12 +122,12 @@ Now that we have all the basics deployed, we can finally deploy *Tiller* in the

Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at ~/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

.. note::

@@ -5,7 +5,7 @@
.. tags: kubernetes, rancher, rancheros, kvm, libvirt,
.. category: kubernetes
.. slug: local-kubernetes-cluster-on-kvm
-.. authors: Elijah Lazkani
+.. authors: Elia El Lazkani
.. description: Deploying a kubernetes cluster locally on KVM.
.. type: text

@@ -31,7 +31,7 @@ Installing RancherOS

Once all 4 nodes have been created, boot into the *RancherOS* `ISO <https://rancher.com/docs/os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso/>`_ and do the following.

.. note::

   Because I was using *libvirt*, I was able to do ``virsh console <vm>`` and run these commands.

@@ -42,13 +42,13 @@ If you are running these VMs on *libvirt*, then you can console into the box and

.. code:: text

# virsh list
 Id    Name      State
-------------------------
 21    kube01    running
 22    kube02    running
 23    kube03    running
 24    rancher   running

# virsh console rancher

@@ -58,7 +58,7 @@ Configuration

If you read the *RancherOS* `documentation <https://rancher.com/docs/os/v1.x/en/>`_, you'll find out that you can configure the *OS* with a ``YAML`` configuration file, so let's do that.

.. code:: text

$ vi cloud-config.yml

And that file should hold.

@@ -66,17 +66,17 @@ And that file should hold.

.. code:: yaml

---
hostname: rancher.kube.loco
ssh_authorized_keys:
  - ssh-rsa AAA...
rancher:
  network:
    interfaces:
      eth0:
        address: 192.168.122.5/24
        dhcp: false
        gateway: 192.168.122.1
        mtu: 1500

Make sure that your **public** *ssh key* is replaced in the example above, and if you have a different network configuration for your VMs, change the network configuration here.

@@ -89,10 +89,10 @@ After you save that file, install the *OS*.

Do the same for the rest of the servers. Their names and IPs should be as follows (if you are following this tutorial):

.. code:: text

192.168.122.5  rancher.kube.loco
192.168.122.10 kube01.kube.loco
192.168.122.11 kube02.kube.loco
192.168.122.12 kube03.kube.loco

Post Installation Configuration

@@ -106,10 +106,10 @@ After *RancherOS* has been installed, one will need to configure ``/etc/hosts``

.. code:: text

127.0.0.1      rancher.kube.loco
192.168.122.5  rancher.kube.loco
192.168.122.10 kube01.kube.loco
192.168.122.11 kube02.kube.loco
192.168.122.12 kube03.kube.loco

Do the same on the rest of the servers, changing the ``127.0.0.1`` hostname to the hostname of that server.

@@ -131,7 +131,7 @@ If those points are understood, let's go ahead and deploy Rancher.

First, ``$ ssh rancher@192.168.122.5`` then:

.. code:: text

[rancher@rancher ~]$ docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher

Give it a few minutes for the container and the application to come up. Meanwhile, configure your ``/etc/hosts`` file on your machine.

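The workstation-side ``/etc/hosts`` entry itself is not shown in the diff; assuming the addressing used throughout the post, it would presumably be a single line such as:

.. code:: text

   192.168.122.5  rancher.kube.loco
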
@@ -171,7 +171,7 @@ Make sure you choose **Custom** as a *provider*. Then fill in the **Cluster Name*

Optionally, you can choose your **Network Provider**; in my case I chose **Calico**. Then I clicked on **Show advanced** at the bottom right corner and expanded the *newly shown tab* **Advanced Cluster Options**.

.. thumbnail:: /images/local_kubernetes_cluster_on_kvm/04-nginx_ingressDisabled.png
   :align: center

@@ -180,7 +180,7 @@ Optionally, you can choose your **Network Provider**, in my case I chose **Calic

We will disable the **Nginx Ingress** and the **Pod Security Policy Support** for the time being; the reason will hopefully become apparent later. Then hit **Next**.

.. thumbnail:: /images/local_kubernetes_cluster_on_kvm/05-customer_nodes.png
   :align: center
   :alt: Customize Nodes

@@ -194,7 +194,7 @@ Do the same for *all the rest*. Once the first docker image gets downloaded and

.. thumbnail:: /images/local_kubernetes_cluster_on_kvm/06-registered_nodes.png
   :align: center
   :alt: Registered Nodes

.. warning::

@@ -221,4 +221,4 @@ Conclusion

At this point, you can check that all the nodes are healthy and that you've got yourself a kubernetes cluster. In future blog posts we will explore an avenue to deploy *multiple ingress controllers* on the same cluster on the same ``port: 80`` by giving each of them an IP external to the cluster.

But for now, you've got yourself a kubernetes cluster to play with. Enjoy.

@@ -1,11 +1,11 @@
.. title: Minikube Setup
.. date: 2019-02-09
.. updated: 2019-07-02
.. status: published
.. tags: minikube, kubernetes, ingress, ingress-controller,
.. category: kubernetes
.. slug: minikube-setup
-.. authors: Elijah Lazkani
+.. authors: Elia El Lazkani
.. description: A quick and dirty minikube setup.
.. type: text

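The hunk below quotes only the tail of the ``minikube start`` output; the command that produced it falls outside the diff context. Since ``virsh`` later lists a ``minikube`` VM, it was presumably started with the KVM driver, along the lines of this sketch (the exact flags are an assumption):

.. code:: text

   $ minikube start --vm-driver kvm2
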
@@ -45,15 +45,15 @@ Let's start minikube.

Verifying apiserver health ...
Kubectl is now configured to use the cluster.
Loading cached images from config file.


Everything looks great. Please enjoy minikube!

Great... At this point, we have a cluster that's running; let's verify.

.. code:: text

# Id    Name        State
--------------------------
  3     minikube    running

@@ -5,7 +5,7 @@
.. tags: minikube, kubernetes, ingress, helm, prometheus, grafana,
.. category: kubernetes
.. slug: your-first-minikube-helm-deployment
-.. authors: Elijah Lazkani
+.. authors: Elia El Lazkani
.. description: Deploying your first minikube helm charts.
.. type: text

@@ -25,7 +25,7 @@ Deploying Tiller

Before we can start with the deployments using ``helm``, we need to deploy *tiller*. It's a service that manages communications with the client and deployments.

.. code:: text

$ helm init --history-max=10
Creating ~/.helm
Creating ~/.helm/repository

@@ -38,9 +38,9 @@ Before we can start with the deployments using ``helm``, we need to deploy *till

Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at ~/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run ``helm init`` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation

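The hunks below quote the output of installing the Prometheus Operator chart; the ``helm install`` invocation itself is not part of the diff. Given the release name and namespace visible in the output, it was presumably something like this Helm 2-style command (a sketch, not the post's verbatim line):

.. code:: text

   $ helm install stable/prometheus-operator --name prometheus-operator --namespace kube-prometheus
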
@@ -59,19 +59,19 @@ We often need to monitor multiple aspects of the cluster easily. Sometimes maybe

LAST DEPLOYED: Sat Feb 9 18:09:43 2019
NAMESPACE: kube-prometheus
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME TYPE DATA AGE
prometheus-operator-grafana Opaque 3 4s
alertmanager-prometheus-operator-alertmanager Opaque 1 4s

==> v1beta1/ClusterRole
NAME AGE
prometheus-operator-kube-state-metrics 3s
psp-prometheus-operator-kube-state-metrics 3s
psp-prometheus-operator-prometheus-node-exporter 3s

==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
prometheus-operator-grafana ClusterIP 10.107.125.114 80/TCP 3s

@@ -84,15 +84,15 @@ We often need to monitor multiple aspects of the cluster easily. Sometimes maybe

prometheus-operator-kube-scheduler ClusterIP None 10251/TCP 3s
prometheus-operator-operator ClusterIP 10.101.253.101 8080/TCP 3s
prometheus-operator-prometheus ClusterIP 10.107.117.120 9090/TCP 3s

==> v1beta1/DaemonSet
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
prometheus-operator-prometheus-node-exporter 1 1 0 1 0 3s

==> v1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
prometheus-operator-operator 1 1 1 0 3s

==> v1/ServiceMonitor
NAME AGE
prometheus-operator-alertmanager 2s

|
|||
prometheus-operator-node-exporter 2s
|
||||
prometheus-operator-operator 2s
|
||||
prometheus-operator-prometheus 2s
|
||||
|
||||
|
||||
==> v1/Pod(related)
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
prometheus-operator-prometheus-node-exporter-fntpx 0/1 ContainerCreating 0 3s
|
||||
prometheus-operator-grafana-8559d7df44-vrm8d 0/3 ContainerCreating 0 2s
|
||||
prometheus-operator-kube-state-metrics-7769f5bd54-6znvh 0/1 ContainerCreating 0 2s
|
||||
prometheus-operator-operator-7967865bf5-cbd6r 0/1 ContainerCreating 0 2s
|
||||
|
||||
|
||||
==> v1beta1/PodSecurityPolicy
|
||||
NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES
|
||||
prometheus-operator-grafana false RunAsAny RunAsAny RunAsAny RunAsAny false configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
|
||||
|
@ -122,7 +122,7 @@ We often need to monitor multiple aspects of the cluster easily. Sometimes maybe
|
|||
prometheus-operator-alertmanager false RunAsAny RunAsAny MustRunAs MustRunAs false configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
|
||||
prometheus-operator-operator false RunAsAny RunAsAny MustRunAs MustRunAs false configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
|
||||
prometheus-operator-prometheus false RunAsAny RunAsAny MustRunAs MustRunAs false configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
|
||||
|
||||
|
||||
==> v1/ConfigMap
|
||||
NAME DATA AGE
|
||||
prometheus-operator-grafana-config-dashboards 1 4s
|
||||
|
@@ -139,7 +139,7 @@ We often need to monitor multiple aspects of the cluster easily. Sometimes maybe

prometheus-operator-persistentvolumesusage 1 4s
prometheus-operator-pods 1 4s
prometheus-operator-statefulset 1 4s

==> v1/ClusterRoleBinding
NAME AGE
prometheus-operator-grafana-clusterrolebinding 3s

@@ -148,19 +148,19 @@ We often need to monitor multiple aspects of the cluster easily. Sometimes maybe

prometheus-operator-operator-psp 3s
prometheus-operator-prometheus 3s
prometheus-operator-prometheus-psp 3s

==> v1beta1/Role
NAME AGE
prometheus-operator-grafana 3s

==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
prometheus-operator-kube-state-metrics 1 1 1 0 3s

==> v1/Alertmanager
NAME AGE
prometheus-operator-alertmanager 3s

==> v1/ServiceAccount
NAME SECRETS AGE
prometheus-operator-grafana 1 4s

@@ -169,7 +169,7 @@ We often need to monitor multiple aspects of the cluster easily. Sometimes maybe

prometheus-operator-alertmanager 1 4s
prometheus-operator-operator 1 4s
prometheus-operator-prometheus 1 4s

==> v1/ClusterRole
NAME AGE
prometheus-operator-grafana-clusterrole 4s

@@ -178,37 +178,37 @@ We often need to monitor multiple aspects of the cluster easily. Sometimes maybe

prometheus-operator-operator-psp 3s
prometheus-operator-prometheus 3s
prometheus-operator-prometheus-psp 3s

==> v1/Role
NAME AGE
prometheus-operator-prometheus-config 3s
prometheus-operator-prometheus 2s

==> v1beta1/RoleBinding
NAME AGE
prometheus-operator-grafana 3s

==> v1beta2/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
prometheus-operator-grafana 1 1 1 0 3s

==> v1/Prometheus
NAME AGE
prometheus-operator-prometheus 2s

==> v1beta1/ClusterRoleBinding
NAME AGE
prometheus-operator-kube-state-metrics 3s
psp-prometheus-operator-kube-state-metrics 3s
psp-prometheus-operator-prometheus-node-exporter 3s

==> v1/RoleBinding
NAME AGE
prometheus-operator-prometheus-config 3s
prometheus-operator-prometheus 2s

==> v1/PrometheusRule
NAME AGE
prometheus-operator-alertmanager.rules 2s

@@ -232,7 +232,7 @@ We often need to monitor multiple aspects of the cluster easily. Sometimes maybe

NOTES:
The Prometheus Operator has been installed. Check its status by running:
  kubectl --namespace kube-prometheus get pods -l "release=prometheus-operator"

Visit https://github.com/coreos/prometheus-operator for instructions on how
to create & configure Alertmanager and Prometheus instances using the Operator.