initial checkin

Radar231 2023-11-10 20:29:26 -05:00
commit 41bb8b7bd5
90 changed files with 3365 additions and 0 deletions

1
.gitignore vendored Normal file

@ -0,0 +1 @@
site/*

17
README.md Normal file

@ -0,0 +1,17 @@
# Markdown source for radar231.com
* This is the markdown source for the radar231.com website. It uses [MkDocs](https://www.mkdocs.org/) and [Material for MkDocs](https://squidfunk.github.io/mkdocs-material/).
* MkDocs is installed into a Python virtual environment (venv). The mkdocs-material and mkdocs-rss-plugin packages are then installed into the same venv using its pip binary.
```
$ cd $HOME/bin/venv
$ python -m venv mkdocs
$ cd mkdocs
$ ./bin/pip install mkdocs
$ ./bin/pip install mkdocs-material
$ ./bin/pip install mkdocs-rss-plugin
$ cd $HOME/bin
$ ln -s venv/mkdocs/bin/mkdocs .
```
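* Once installed, the site can be previewed and built with the usual MkDocs commands. A minimal sketch (the repository path is a placeholder; the `site/` output directory matches the `.gitignore` entry above, and `mkdocs` is the symlink created above):
```
$ cd /path/to/this/repo      # wherever mkdocs.yml lives
$ mkdocs serve               # live preview at http://127.0.0.1:8000/
$ mkdocs build               # render the static site into ./site/
```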

BIN docs/Matrix_Tux.png Executable file (38 KiB)

25
docs/about.md Normal file

@ -0,0 +1,25 @@
---
hide:
- navigation
created: 2021-06-05 20:13
updated: 2022-11-10 15:20
---
# About radar231.com
This is simply a brain-dump site where I drop random documentation, both for my own future use and potentially for the use of others.
These pages are written in [Markdown](https://daringfireball.net/projects/markdown/){: target="_blank"}, more specifically
[Python-Markdown](https://python-markdown.github.io/){: target="_blank"}. The site is created using the [MkDocs](https://www.mkdocs.org/){: target="_blank"}
SSG (static site generator), along with the [Material for MkDocs](https://squidfunk.github.io/mkdocs-material/){: target="_blank"} theme.
---
![](imgs/email_icon.png "radar231 (at) gmail (dot) com"){: style="height:10%;width:10%"}
![](imgs/gitea.png "git (dot) radar231 (dot) com"){: style="height:10%;width:10%"}
![](imgs/fluidicon.png "github (dot) com (slash) radar231"){: style="height:10%;width:10%"}
![](imgs/reddit.png "reddit (dot) com (slash) user (slash) radar231"){: style="height:10%;width:10%"}
![](imgs/libera-color.png "radar231 (at) irc (dot) libera (dot) chat"){: style="height:10%;width:10%"}
![](imgs/discord.png "radar231 (at) discord (dot) com" title="radar231 (at) discord (dot) com"){: style="height:10%;width:10%"}
![](imgs/rss-logo.png "radar231 (dot) com (slash) feed_rss_created.xml"){: style="height:10%;width:10%"}
![](imgs/rss-logo.png "radar231 (dot) com (slash) feed_rss_updated.xml"){: style="height:10%;width:10%"}
![](imgs/mypoppy.png "https://mypoppy.ca/"){: style="height:10%;width:10%"}

BIN docs/imgs/atom-logo.png Normal file (12 KiB)
BIN docs/imgs/discord.png Normal file (105 KiB)
BIN docs/imgs/discoverfeed.png Normal file (95 KiB)
BIN docs/imgs/email_icon.png Normal file (3.2 KiB)
BIN docs/imgs/fluidicon.png Normal file (32 KiB)
BIN docs/imgs/gitea.png Normal file (26 KiB)
BIN docs/imgs/haproxy_stats.png Normal file (153 KiB)
BIN docs/imgs/journal_entry.png Normal file (89 KiB)
BIN docs/imgs/lens_20230129.png Normal file (155 KiB)
BIN docs/imgs/libera-color.png Normal file (4.0 KiB)
BIN docs/imgs/monthly_index.png Normal file (69 KiB)
BIN docs/imgs/mypoppy.png Normal file (113 KiB)
BIN docs/imgs/reddit.png Normal file (4.3 KiB)
BIN docs/imgs/rss-logo.png Normal file (12 KiB)
BIN docs/imgs/tag_cloud_1.png Normal file (47 KiB)
BIN docs/imgs/tag_cloud_2.png Normal file (85 KiB)
BIN docs/imgs/tag_page.png Normal file (50 KiB)

11
docs/index.md Normal file

@ -0,0 +1,11 @@
---
hide:
- navigation
created: 2023-11-10 13:00
updated: 2023-11-10 13:00
---
# Home
[TAGS]

BIN docs/jimmy.png Normal file (105 KiB)


@ -0,0 +1,181 @@
---
hide:
- navigation
created: 2021-08-25 22:14
updated: 2021-09-04 16:22
tags:
- Tiddlywiki
---
# A TiddlyWiki Based Journal
## Links
* <https://tiddlywiki.com/>
* [https://tiddlywiki.com/static/Table-of-Contents Macros.html](https://tiddlywiki.com/static/Table-of-Contents%2520Macros.html)
* <https://tiddlywiki.com/static/ListWidget.html>
* [https://tiddlywiki.com/static/Transclusion in WikiText.html](https://tiddlywiki.com/static/Transclusion%2520in%2520WikiText.html)
## Introduction
As a big fan of TiddlyWiki, I'm always looking for new ways to leverage its capabilities to improve my information management. I've long used a TiddlyWiki as a daily log for work, but I wanted to try something a little different for tracking personal hobby activities as well as tasks around the house. The journal feature built into TiddlyWiki seemed a good fit for this, but I wanted a better way to organize my entries and to improve my ability to retrieve information from the wiki at a later date.
## Overview of Structure
The first thing to note is that I use the journal creation command as it is, and have the "Create a new journal tiddler" button enabled to appear at the top of the toolbar by selecting the "new tiddler" check box in the "Tools" tab.
The tiddler name format that I've selected for the journal tiddlers is "YYYYMMDD_(topic)". For the most part, the "topic" will be the same as the tag applied to the tiddler, but the topic name doesn't have to track the tags.
Next, all of the pages in the list that appear under the "Radar231 Journal" tab are dynamic content index pages. This means that as new journal tiddlers are added, these index pages are automatically updated.
The index pages contain links to the applicable (based on the filtering selection) individual journal tiddler pages. They also contain the text of each individual journal tiddler page, through a process called "Transclusion" (see [applicable link](https://tiddlywiki.com/static/Transclusion%2520in%2520WikiText.html)).
* [Journal Index List](../imgs/journal_expanded.png){: target="_blank"}
## Control Panel Settings
Most of the TiddlyWiki settings (themes, colours, fonts, etc) can be set as desired. The following settings differ from default and are needed for this journaling set up.
### Info / Basics
* set "Title of new journal tiddlers" to "YYYY0MM0DD_"
### Settings
* set "Default Sidebar Tab" to "Radar231 Journal"
## Root Tiddler Configuration
### "Radar231 Journal" Tiddler
Create a new tiddler named "Radar231 Journal". This tiddler will be the tab in the side bar that will anchor the list of all of our dynamic index pages.
Configure this tiddler as follows;
* add a tag named "$:/tags/SideBar"
* add the following new fields (name, value)
* "caption", "Radar231 Journal"
* "list-before", "$:/core/ui/SideBar/Open"
* add the following text to the tiddler body;
```
<div class="tc-table-of-contents">
<<toc-selective-expandable "Radar231 Journal" "sort[ind]">>
</div>
```
* ["Radar231 Journal" tiddler](../imgs/journal_radar231_journal_root.png){: target="_blank"}
## Tag Index Tiddler Configuration
### "Journal Indexes" Root Tiddler
This is the root page of a list of tag filtered index pages. This just provides an anchor point in the list for the individual journal index tiddlers.
This tiddler is configured as follows;
* add a tag named "Radar231 Journal"
* add the following text to the tiddler body;
```
<div class="tc-table-of-contents">
<<toc-selective-expandable "Journal Indexes" "sort[ind]">>
</div>
```
* ["Journal Indexes" tiddler](../imgs/journal_indexes.png){: target="_blank"}
### Individual "Journal Index" Tag Tiddlers
Under the "Journal Indexes" page are a number of individual index tiddlers, each one associated with a single metadata tag value. As new journal tiddlers are created, add the applicable tags to the journal tiddler, and that tiddler will then automatically be added to the appropriate index tiddler(s).
These tiddlers are configured as follows;
* add a tag named "Journal Indexes"
* add the following text to the tiddler body, setting the tag as appropriate (leave the "Journal" tag as is);
```
---
<$list filter="[tag[Journal]tag[Woodworking]!sort[title]]">
<br/>
<h2><$link><$transclude field="title" mode="block"/></$link></h2>
<$transclude field="text" mode="block"/>
<br/>
<hr/>
</$list>
```
* ["Woodworking Tag Index" tiddler](../imgs/journal_woodworking_config.png){: target="_blank"}
* ["Woodworking Tag Index" example tiddler](../imgs/journal_woodworking.png){: target="_blank"}
## Date-Based Index Tiddler Configuration
Next are a number of hierarchically organized date-based pages, drilling down to an index page for each month.
### "Journals" Root Tiddler
This is the root page for the date-based index pages. Under this page is a page for each year.
This tiddler is configured as follows;
* add a tag named "Radar231 Journal"
* add the following text to the tiddler body;
```
<div class="tc-table-of-contents">
<<toc-selective-expandable "Journals" "sort[ind]">>
</div>
```
* ["Journals" tiddler](../imgs/journal_journals.png){: target="_blank"}
### "Journals - (YYYY)" Tiddler
Next is another root page, this time for the monthly index pages in a particular year.
This tiddler is configured as follows;
* add a tag named "Journals"
* add the following text to the tiddler body;
```
<div class="tc-table-of-contents">
<<toc-selective-expandable "Journals - 2021" "sort[ind]">>
</div>
```
* ["Journals - YYYY" tiddler](../imgs/journal_journals_2021.png){: target="_blank"}
### "Journals - (YYYYMM)" Tiddlers
Finally, there are the monthly index tiddlers. There will be, of course, one of these index pages for each month of the year.
This tiddler is configured as follows;
* add a tag named "Journals - YYYY"
* add the following text to the tiddler body;
```
---
<$list filter="[prefix[202010]!tag[Exercise]!sort[title]]">
<br/>
<h2><$link><$transclude field="title" mode="block"/></$link></h2>
<$transclude field="text" mode="block"/>
<br/>
<hr/>
</$list>
```
* ["Journals - YYYYMM" tiddler](../imgs/journal_202010_config.png){: target="_blank"}
* ["Journals - YYYYMM" example tiddler](../imgs/journal_202010.png){: target="_blank"}
## Day to Day Usage
Entering new journal entries into this system is quite easy. Click on the "Create a new journal tiddler" button in the top toolbar. This will open a new tiddler with a title of "YYYYMMDD_" and a single tag named "Journal" already applied. Add an applicable suffix to the title. Usually this will be the same as the default tag that will be added (ie, _RV). Next, add applicable tag(s) to the tiddler, based on what the entry is related to.
Finally, enter the desired text in the body of the tiddler, in as much or as little detail as desired.
Once the tiddler is saved it will show up in the applicable dynamic index pages, both date based as well as tag based.


@ -0,0 +1,252 @@
---
hide:
- navigation
created: 2021-11-12 16:03
updated: 2021-11-12 20:15
tags:
- Ansible
---
# Ansible Deployment of Kubernetes Workloads - Refactored
## References
* [Ansible Deployment of Kubernetes Workloads](ansible-k8s-deployments.md)
* <https://git.radar231.com/radar231/k8s_website-wiki>
* <https://git.radar231.com/radar231/role_k8s_website-wiki_deploy>
* <https://git.radar231.com/radar231/playbook_k8s-deployment>
* <https://docs.ansible.com/ansible/latest/user_guide/collections_using.html>
* <https://galaxy.ansible.com/docs/using/installing.html>
* <https://git.radar231.com/radar231/ansible_dev_env>
## Introduction
This post is a follow-up to my [previous post](ansible-k8s-deployments.md) on using Ansible to deploy Kubernetes workloads.
I've since refactored the way I'm using Ansible to deploy to my Kubernetes cluster, so a second post seemed in order.
## Refactoring Ansible Code
Until a few months ago, I had been keeping all of my ansible code in a single monolithic repository. While this simplified
the usage of the playbooks, management of the code was starting to become a challenge. Taking a page from the recent restructuring
that took place with the [ansible base and collections split](https://www.ansible.com/blog/getting-started-with-ansible-collections),
I decided to refactor all of my ansible code and break all of the roles and playbooks out into [separate repositories](repositories.md).
## My Development Environment (Collections and Roles)
There are a lot of ways to organize Ansible projects, but in my case I've decided to go the Ansible requirements.yml route for my roles.
While I keep all of my roles in my own [git server](https://git.radar231.com), I can still use requirements.yml files to install both
the modules I require from Ansible Galaxy collections and my own roles from my git server.
* clip from [ansible_dev_env/roles/requirements.yml](https://git.radar231.com/radar231/ansible_dev_env/src/branch/master/roles/requirements.yml)
```
---
(...)
- src: https://git.radar231.com/radar231/role_k8s_website-wiki_deploy
  name: website-wiki_deploy
  scm: git
(...)
# EOF
```
There are also a lot of ways to use collections and roles. They can be specified and included on a project-by-project basis, but I've chosen to
install the Galaxy collection modules and all of my roles centrally into my Ansible development environment. The default locations are
`~/.ansible/collections/` and `~/.ansible/roles/`, but this can be changed in `~/.ansible.cfg`. I have a
[shell script](https://git.radar231.com/radar231/ansible_dev_env/src/branch/master/mk_dev_env_links) that sets up my development environment
by symlinking in all of my playbooks and requirements files, as well as specific shell scripts into my development directory. Once done, my ansible
development environment directory ends up like this;
```
$ tree ansidev
ansidev
├── ansible.yml -> /home/rmorrow/dev/git.radar231.com/playbook_ansible/ansible.yml
├── base_pkgs.yml -> /home/rmorrow/dev/git.radar231.com/playbook_misc-utils/base_pkgs.yml
├── bash_mods.yml -> /home/rmorrow/dev/git.radar231.com/playbook_misc-utils/bash_mods.yml
├── chk_upgrades.yml -> /home/rmorrow/dev/git.radar231.com/playbook_misc-utils/chk_upgrades.yml
├── collections -> /home/rmorrow/dev/git.radar231.com/ansible_dev_env/collections
├── create_user.yml -> /home/rmorrow/dev/git.radar231.com/playbook_misc-utils/create_user.yml
├── del_inventory.yml -> /home/rmorrow/dev/git.radar231.com/playbook_del-updates/del_inventory.yml
├── docker.yml -> /home/rmorrow/dev/git.radar231.com/playbook_docker/docker.yml
├── dotfiles.yml -> /home/rmorrow/dev/git.radar231.com/playbook_dotfiles/dotfiles.yml
├── do_updates.sh -> /home/rmorrow/dev/git.radar231.com/playbook_del-updates/do_updates.sh
├── du_backups.yml -> /home/rmorrow/dev/git.radar231.com/playbook_du_backups/du_backups.yml
├── k3s_inventory.yml -> /home/rmorrow/dev/git.radar231.com/playbook_k3s-cluster/k3s_inventory.yml
├── k3s.yml -> /home/rmorrow/dev/git.radar231.com/playbook_k3s-cluster/k3s.yml
├── k8s-deployment.yml -> /home/rmorrow/dev/git.radar231.com/playbook_k8s-deployment/k8s-deployment.yml
├── lxdhost_inventory.yml -> /home/rmorrow/dev/git.radar231.com/playbook_lxdhost/lxdhost_inventory.yml
├── lxdhost.yml -> /home/rmorrow/dev/git.radar231.com/playbook_lxdhost/lxdhost.yml
├── microk8s_inventory.yml -> /home/rmorrow/dev/git.radar231.com/playbook_microk8s-cluster/microk8s_inventory.yml
├── microk8s.yml -> /home/rmorrow/dev/git.radar231.com/playbook_microk8s-cluster/microk8s.yml
├── mk_dev_env_links -> ../git.radar231.com/ansible_dev_env/mk_dev_env_links
├── monitorix.yml -> /home/rmorrow/dev/git.radar231.com/playbook_monitorix/monitorix.yml
├── nagios_agent.yml -> /home/rmorrow/dev/git.radar231.com/playbook_nagios_agent/nagios_agent.yml
├── pfetch.yml -> /home/rmorrow/dev/git.radar231.com/playbook_pfetch/pfetch.yml
├── rem_base_pkgs.yml -> /home/rmorrow/dev/git.radar231.com/playbook_misc-utils/rem_base_pkgs.yml
├── roles -> /home/rmorrow/dev/git.radar231.com/ansible_dev_env/roles
├── run_role.yml -> /home/rmorrow/dev/git.radar231.com/playbook_misc-utils/run_role.yml
├── setup-host.yml -> /home/rmorrow/dev/git.radar231.com/playbook_setup-host/setup-host.yml
├── update_roles.sh -> /home/rmorrow/dev/git.radar231.com/ansible_dev_env/update_roles.sh
├── updates.yml -> /home/rmorrow/dev/git.radar231.com/playbook_del-updates/updates.yml
└── vim_setup.yml -> /home/rmorrow/dev/git.radar231.com/playbook_vim_setup/vim_setup.yml
```
I use the following shell script to refresh my roles after a collection has been updated, or I've made changes to a role or additions to a requirements.yml file.
```
$ cat update_roles.sh
#!/bin/bash
ansible-galaxy install -r roles/requirements.yml --force
ansible-galaxy install -r collections/requirements.yml --force
```
## Application Deployment Role
Now, on to the topic of this post. This is the deployment role for my website-wiki application, the same application that I highlighted in my previous post.
The tasks file for the role is pretty much the same as the playbook from the previous post.
* Role Directory Structure
```
$ tree role_k8s_website-wiki_deploy
role_k8s_website-wiki_deploy
├── meta
│   └── main.yml
├── README.md
└── tasks
└── main.yml
```
* Role Tasks File
```
$ cat role_k8s_website-wiki_deploy/tasks/main.yml
---
#####################################################################
#
# website-wiki_deploy role
#
# - requires that the 'devpath' variable be set
#
#####################################################################
# tasks file for website-wiki_deploy role
- debug: msg="Deploying website-wiki app."
- name: Create the tiddlywiki namespace
  community.kubernetes.k8s:
    name: tiddlywiki
    api_version: v1
    kind: Namespace
    state: present
- name: Create the PV object
  community.kubernetes.k8s:
    state: present
    src: "{{ devpath }}/k8s_website-wiki/website-wiki_pv.yml"
- name: Create the PVC object
  community.kubernetes.k8s:
    state: present
    namespace: tiddlywiki
    src: "{{ devpath }}/k8s_website-wiki/website-wiki_pvc.yml"
- name: Create the secrets object
  community.kubernetes.k8s:
    state: present
    namespace: tiddlywiki
    src: "{{ devpath }}/k8s_website-wiki/website-wiki_secret.yml"
- name: Create the deployment object
  community.kubernetes.k8s:
    state: present
    namespace: tiddlywiki
    src: "{{ devpath }}/k8s_website-wiki/website-wiki_deployment.yml"
- name: Create the service object
  community.kubernetes.k8s:
    state: present
    namespace: tiddlywiki
    src: "{{ devpath }}/k8s_website-wiki/website-wiki_service.yml"
- name: Create the ingress object
  community.kubernetes.k8s:
    state: present
    namespace: tiddlywiki
    src: "{{ devpath }}/k8s_website-wiki/website-wiki_ingress.yml"
# EOF
```
## Top-Level Deployment Playbook
The top-level deployment playbook pulls it all together, and sequentially calls all of the application deployment roles.
```
$ cat playbook_k8s-deployment/k8s-deployment.yml
---
#####################################################################
#
# k8s-deployment playbook
#
# - requires that the 'devpath' variable be set to the path of the
# kubernetes application manifests.
#
# - requires that the 'haproxy_ingress_ver' and 'metallb_ver' variables
# be set to the desired version of each to install
#
#####################################################################
- hosts: localhost
  roles:
    # haproxy ingress controller
    - role: haproxy_deploy
    # metallb load-balancer
    - role: metallb_deploy
    # delfax namespace
    - role: ddclient_deploy
    - role: delinit_deploy
    - role: website_deploy
    # guacamole namespace
    - role: maxwaldorf-guacamole_deploy
    # home-automation namespace
    - role: home-assistant_deploy
    - role: mosquitto_deploy
    - role: motioneye_deploy
    # homer namespace
    - role: homer_deploy
    # k8stv namespace
    - role: flexget_deploy
    - role: transmission-openvpn_deploy
    # nagios namespace
    - role: nagios_deploy
    # pihole namespace
    - role: pihole_deploy
    # tiddlywiki namespace
    - role: journal-wiki_deploy
    - role: notes-wiki_deploy
    - role: website-wiki_deploy
    - role: wfh-wiki_deploy
  vars:
    devpath: "/home/rmorrow/dev/git.radar231.com"
    haproxy_ingress_ver: 0.13.4
    metallb_ver: v0.10.3
# EOF
```
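Since the playbook targets localhost and carries its variables in the `vars:` block, running it is a one-liner. A hedged sketch, mirroring the invocation style used in the earlier post:
```
$ ansible-playbook -i localhost, k8s-deployment.yml           # deploy/refresh everything
$ ansible-playbook -i localhost, k8s-deployment.yml --check   # optional dry run first
```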
## Conclusion
While it was a fair bit of work to refactor my ansible code, in the end it was well worth the effort. The code is much clearer and more
manageable, and it is simpler to grab a specific role or playbook from the repository.


@ -0,0 +1,127 @@
---
hide:
- navigation
created: 2021-06-09 01:27
updated: 2021-09-01 02:36
tags:
- Ansible
---
# Ansible Deployment of Kubernetes Workloads
## References
* <https://galaxy.ansible.com/community/kubernetes>
## Introduction
Ansible is well known as a great automation tool, useful for configuration management, state management, application deployment and upgrades. It can also be used to effectively manage Kubernetes workloads.
## Prerequisites
In order to work with a Kubernetes cluster, the community.kubernetes Ansible Galaxy collection needs to be installed on the management workstation. It is also presumed that there is a working cluster administrative configuration file located at ~/.kube/config.
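A minimal sketch of installing those prerequisites on the management workstation (at the time this collection relied on the `openshift`/`kubernetes` Python client libraries; exact requirements may differ for your versions):
```
$ ansible-galaxy collection install community.kubernetes   # provides the k8s module used below
$ pip install --user openshift kubernetes                  # Python client libraries the k8s module calls into
```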
## Sample Playbook
There are a number of modules within the community.kubernetes collection that can be used to directly manage Kubernetes objects, but the way that I've decided to use it is to have Ansible apply pre-existing Kubernetes YAML manifest files. The reason for this is that the manifest files probably already exist as a result of creating an application deployment, so, without having to recreate the entire deployment within an Ansible playbook, we can manage Kubernetes objects from either kubectl or Ansible.
Note that this presumes that the filestore backing the persistent volumes (PVs) has already been created, and probably contains either the application's initial state, or the current state for an existing application.
```
$ cat website-wiki.yml
---
#####################################################################
#
# website-wiki tiddlywiki playbook
#
# - requires that the 'devpath' variable be set
#
#####################################################################
- hosts: localhost
  tasks:
    - debug: msg="Deploying website-wiki app."
    - name: Create the tiddlywiki namespace
      community.kubernetes.k8s:
        name: tiddlywiki
        api_version: v1
        kind: Namespace
        state: present
    - name: Create the PV object
      community.kubernetes.k8s:
        state: present
        src: "{{ devpath }}/k8s/tiddlywiki/website-wiki/website-wiki_pv.yml"
    - name: Create the PVC object
      community.kubernetes.k8s:
        state: present
        namespace: tiddlywiki
        src: "{{ devpath }}/k8s/tiddlywiki/website-wiki/website-wiki_pvc.yml"
    - name: Create the Secrets object
      community.kubernetes.k8s:
        state: present
        namespace: tiddlywiki
        src: "{{ devpath }}/k8s/tiddlywiki/website-wiki/website-wiki_secret.yml"
    - name: Create the deployment object
      community.kubernetes.k8s:
        state: present
        namespace: tiddlywiki
        src: "{{ devpath }}/k8s/tiddlywiki/website-wiki/website-wiki_deployment.yml"
    - name: Create the service object
      community.kubernetes.k8s:
        state: present
        namespace: tiddlywiki
        src: "{{ devpath }}/k8s/tiddlywiki/website-wiki/website-wiki_service.yml"
    - name: Create the ingress object
      community.kubernetes.k8s:
        state: present
        namespace: tiddlywiki
        src: "{{ devpath }}/k8s/tiddlywiki/website-wiki/website-wiki_ingress.yml"
# EOF
```
## Sample Shell Deployment Script
* This shell script simply calls an Ansible playbook for each Kubernetes application to deploy.
```
$ cat k8s_deployment.sh
#!/bin/bash
#####################################################################
devpath='/home/rdr231/dev'
ansible-playbook -i localhost, -e "devpath=${devpath}" heimdall.yml
ansible-playbook -i localhost, -e "devpath=${devpath}" gitea-mysql.yml
ansible-playbook -i localhost, -e "devpath=${devpath}" gitea-app.yml
ansible-playbook -i localhost, -e "devpath=${devpath}" transmission.yml
ansible-playbook -i localhost, -e "devpath=${devpath}" flexget.yml
ansible-playbook -i localhost, -e "devpath=${devpath}" mosquitto.yml
ansible-playbook -i localhost, -e "devpath=${devpath}" motioneye.yml
ansible-playbook -i localhost, -e "devpath=${devpath}" home-assistant.yml
ansible-playbook -i localhost, -e "devpath=${devpath}" notes-wiki.yml
ansible-playbook -i localhost, -e "devpath=${devpath}" wfh-wiki.yml
ansible-playbook -i localhost, -e "devpath=${devpath}" website-wiki.yml
ansible-playbook -i localhost, -e "devpath=${devpath}" delinit.yml
ansible-playbook -i localhost, -e "devpath=${devpath}" website.yml
# EOF
```
## Conclusion
Using this method the deployment script completes in roughly two minutes. Depending on the current container image cache, the applications are all up and running within 30 seconds to a few minutes later.


@ -0,0 +1,33 @@
---
hide:
- navigation
created: 2022-09-27 04:18
updated: 2022-09-27 04:25
tags:
- HamRadio
---
# End Fed Long Wire (EFLW) Antenna Links
## Introduction
This page is simply a collection of links to pages with information about EFLW antennas, Un-Uns, and common mode chokes.
## Links
### EFLW Antennas
* <https://www.hamuniverse.com/randomwireantennalengths.html>
* <https://udel.edu/~mm/ham/randomWire/>
* <https://sprott.physics.wisc.edu/technote/randwire.htm>
* <https://ve7sar.blogspot.com/2019/01/the-best-random-wire-antenna-lengths.html>
* <http://www.earchi.org/92011endfedfiles/Endfed6_40.pdf>
* <https://www.balundesigns.com/content/Wire%20Lengths%20for%204%20and%209-1%20ununs.pdf>
* <http://www.dxsupply.com/produktfiler/Wire%20Lengths%20for%204%20and%209-1%20ununs.pdf>
### Un-Uns and Common Mode Chokes
* <https://vk6ysf.com/unun_9-1.htm>
* <https://g8jnj.webs.com/balunsandtuners.htm>
* <http://www.karinya.net/g3txq/chokes/>
* <https://palomar-engineers.com/antenna-products/1-1-balun-kits>


@ -0,0 +1,100 @@
---
hide:
- navigation
created: 2022-08-12 12:45
updated: 2022-08-26 14:48
tags:
- HamRadio
---
# FT891 Digital Config
## References
* TBD
## Introduction
This page captures the current working configuration used to get my ham transceiver up and running on digital modes. This is what currently works for me, but it should be considered a work in progress.
## FT891 Menu Settings
| Menu Number | Menu Name | Setting | Comment |
|-------------|-----------------|---------|---------|
| 05-06 | CAT Rate | 38400 | |
| 05-07 | CAT Tot | 1000 | |
| 05-08 | CAT RTS | Disable | |
| — | — | — | — |
| 08-01 | Data Mode | Others | |
| 08-03 | Other Disp | 1500 | |
| 08-04 | Other Shift | 1500 | |
| 08-05 | Data LCut Freq | Off | |
| 08-07 | Data LCut Slope | Off | |
| 08-09 | Data in Select | Rear | |
| 08-10 | Data PTT Select | RTS | |
| 08-11 | Data Out Level | 100 | |
| 08-12 | Data BFO | USB | |
| — | — | — | — |
| 11-08 | SSB PTT Select | DAKY | |
| — | — | — | — |
| 16-03 | HF Power | 100 | |
## FLRig XCVR Conf
* XCVR
    * /dev/ttyUSB0
    * 38400
    * 1 stop bit
* PTT
    * CAT Port PTT options
        * PTT via CAT
## FLDigi Rig Conf
* Rig Control
    * FLRig
        * Enable flrig xcvr control with fldigi as client
    * Hardware PTT
        * Use separate serial port PTT
        * /dev/ttyUSB1
        * Use RTS
* Soundcard
    * Devices
        * PulseAudio
## WSJTX Radio Conf
* Radio
    * CAT Control
        * /dev/ttyUSB0
        * 38400
        * Eight
        * One
        * None
    * PTT Method
        * RTS
    * Port
        * /dev/ttyUSB1
    * Mode
        * Data
    * Split
        * None
## JS8Call Conf
* Radio
    * CAT Control
        * /dev/ttyUSB0
        * 38400
        * Eight
        * One
        * None
    * Rig Options
        * PTT Method
            * RTS
        * Port
            * /dev/ttyUSB1
        * Mode
            * Data/Pkt
        * Split
            * None

96
docs/posts/gluster-dfs.md Normal file

@ -0,0 +1,96 @@
---
hide:
- navigation
created: 2021-06-04 02:38
updated: 2021-09-01 02:36
tags:
- FileServer
---
# Gluster DFS
## Introduction
Gluster (<https://www.gluster.org/>) is a distributed parallel fault-tolerant file system, which can
also be referred to as a clustered file system. I'll refer to it simply as a Distributed File System (DFS).
Gluster, currently owned by Red Hat, is an open source enterprise grade DFS with an active public community.
There is also a commercially supported variant known as "Red Hat Gluster Storage"
(<https://www.redhat.com/en/technologies/storage/gluster>).
Online documentation for Gluster can be found at <https://docs.gluster.org/en/latest/>.
## Implementation
Gluster is implemented as a cluster of storage nodes. Each node is an equal peer in the cluster, so administrative
commands can be run from any of the nodes.
Each node supplies one or more data partitions, known as 'bricks', that are used to build network accessible
cluster volumes. There are a number of different configurations that the bricks can be arranged into to build
the volumes. More information about volume types can be found in the documentation at
<https://docs.gluster.org/en/latest/Administrator Guide/Setting Up Volumes/>.
A common implementation uses a configuration known as 'dispersed', where you have data storage and redundancy
storage striped across multiple bricks in the volume. This configuration provides a redundant, highly available
network file system that is protected against node failure.
The layout is referred to in terms of brick capacity (which should be equal and balanced throughout the nodes),
where you have x capacities of data plus y capacities of redundancy. Examples are 3+1 (3 capacities of data plus
1 capacity of redundancy) or 4+2 (4 data plus 2 redundancy). The number of redundancy capacities also indicates
the number of nodes that can be lost without impacting on volume availability. For a 3+1 volume, one node could
be lost without impact, and for a 4+2 volume, two nodes.
## Administrative Commands
As stated above, all nodes in a cluster are equal peers, so administrative commands can be run from any node. All
administrative commands are implemented as subcommands of the 'gluster' executable. Information about the commands
can be viewed from either the man pages ('man gluster'), or via command line help ('gluster help'). Subcommands
also have their own help pages, accessible as 'gluster <subcommand> help' (ie, 'gluster volume help').
## Status
Gluster has extensive logging, located in /var/log/glusterfs. Much of the information in the log directory is
specific to the node the logs are located on.
Status information at a cluster scope can be obtained using the gluster command. For example, to see the nodes
that are currently members of the "Trusted Storage Pool" (TSP), use 'gluster pool list'. To see the status of
each of the peer nodes in the cluster, use 'gluster peer status'.
To see what volumes have been defined, use 'gluster volume list'. To see basic info about one or more volumes, use
'gluster volume info' to see all volumes (assuming more than one volume has been defined), or 'gluster volume info
< volume >' for information about a specific volume.
Similarly, to see more detailed information about volumes, you can use either 'gluster volume status' or 'gluster volume status < volume >'.
One command of note is 'gluster volume status < volume > clients', which shows all of the client connections to the volume.
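A quick sketch of those status commands, run from any node (the volume name 'myvol' is just a placeholder):
```
$ gluster pool list                     # nodes in the Trusted Storage Pool
$ gluster peer status                   # state of each peer node
$ gluster volume list                   # volumes defined on the cluster
$ gluster volume info myvol             # basic info for a specific volume
$ gluster volume status myvol           # detailed status for a specific volume
$ gluster volume status myvol clients   # client connections to the volume
```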
## Expansion
There are two ways to expand a volume in a Gluster cluster. One way would be to add an additional node to the cluster.
This can be a bit complicated, as will be explained below.
Another way is to add more bricks to the volume. There is one caveat that must be considered when expanding a Gluster
volume though. Due to the way the volumes are built, bricks must be added in the same quantity as were used to initially
setup the volume. For example, in a 3+1 volume, the volume must be expanded in multiples of 4 bricks, and for a 4+2 volume, 6 bricks.
This caveat is why expanding a Gluster volume by adding a new node can be problematic. In order to satisfy the brick count requirement,
the new node will likely end up unbalanced with respect to the other node brick counts. Future volume expansion will also be confusing,
as new bricks will be added in a different count than the current number of nodes.
A simpler method would be to plan out the cluster size with future expansion in mind, and only expand the volume by adding an equal number of bricks to each node.
Example: A six node cluster has a 4+2 volume made up of 1 TB bricks. The initial volume capacity would be 4 TB (4 data brick capacities).
To expand this volume, add a 1 TB partition to each node. After formatting and mounting, the new bricks are added to the volume using the
'gluster volume add-brick' command. After adding the new set of bricks to the volume, the volume capacity will now be 8 TB, and in
'gluster volume info' the bricks will be listed as '2 x (4 + 2) = 12'. Future expansion of the volume is accomplished in the same way.
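As a sketch of that example (hostnames, brick paths and the volume name are placeholders), expanding a six node 4+2 volume by one full brick set might look like this:
```
# a new 1 TB partition has been formatted and mounted on each node at /data/brick2
$ gluster volume add-brick myvol \
    node1:/data/brick2 node2:/data/brick2 node3:/data/brick2 \
    node4:/data/brick2 node5:/data/brick2 node6:/data/brick2
$ gluster volume info myvol   # bricks now listed as '2 x (4 + 2) = 12'
```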
## Network Access
Gluster supports two methods to directly access the volume. The first is the GlusterFS native client. This is the best method, and provides
for high concurrency, performance and transparent failover.
Gluster also supports NFS v3 to access Gluster volumes.
In the case of sharing the Gluster volume via CIFS (Samba), a samba file server can be stood up with the volume mounted locally via the
GlusterFS native client, and then re-shared across the network via CIFS.
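A sketch of the two direct access methods (server, volume and mount point names are placeholders):
```
# GlusterFS native client (glusterfs-client package on Debian/Ubuntu)
$ sudo mount -t glusterfs node1:/myvol /mnt/myvol
# NFS v3 access to the same volume
$ sudo mount -t nfs -o vers=3 node1:/myvol /mnt/myvol
```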

148
docs/posts/homelab.md Normal file

@ -0,0 +1,148 @@
---
hide:
- navigation
created: 2021-06-10 12:00
updated: 2023-11-10 15:22
tags:
- HomeLab
---
# Homelab
## Introduction
[![Server Rack](../imgs/server-rack_20230501_sm.png)](../imgs/server-rack_20230501.png){: target="_blank"}
I've a relatively simple homelab, set up to primarily run virtualized workloads.
I have four Raspberry Pi4-8GB's set up as the core of a Kubernetes cluster. The cluster also includes two virtual machine nodes, running
on the virtualization servers. This gives me a six node [multi-architecture cluster](multi-architecture.md), maximizing the flexibility
for running workloads of either arm64 or amd64 architecture.
There are also five physical servers and a Synology NAS. The servers are used as virtualization servers, while the NAS is used as a file server.
## Kubernetes Cluster
* Kubernetes cluster (24 cores, 40GB RAM)
    * Multi-architecture
    * Raspberry Pi4-8GB (x4)
        * 4 cores (16 cores arm64 node total)
        * 8GB RAM (32GB RAM arm64 node total)
    * amd64 VM containers (x2) (running on the virtualization servers)
        * 4 cores (8 cores amd64 node total)
        * 4GB RAM (8GB RAM amd64 node total)
The Kubernetes cluster is the primary location for running workloads on the homelab. The core of the cluster are four Raspberry Pi4-8GB's, and
this is where most of the cluster workloads are intended to be run. However, as it is [multi-architecture](multi-architecture.md) I can also run
amd64 workloads on the virtual machine nodes.
[![Raspberry Pi4 Cluster](../imgs/pi4-cluster_20230501_sm.png)](../imgs/pi4-cluster_20230501.png){: target="_blank"}
[![Homelab K8s Cluster](../imgs/homelab-k8s_20231109.png){: style="height:25%;width:25%"}](../imgs/homelab-k8s_20231109.png){: target="_blank"}
## Proxmox Nodes
The five virtualization servers are running Proxmox, configured as a five node cluster, with HA provided by shared storage backed on an NFS
share from the file server. There are a number of applications running on the virtualization servers, either in LXC containers or virtual machines,
usually under nested Docker instances. These are applications that might not run correctly under Kubernetes, applications under test or development, or
simply standalone services.
[![Proxmox Nodes](../imgs/proxmox_20231109_sm.png)](../imgs/proxmox_20231109.png){: target="_blank"}
## Virtualization Servers
There are five virtualization servers, set up to run workloads on;
* LXC containers or VM's, or
* Docker containers (nested within an LXC container or a VM).
### Reddwarf
* Dell T610
* Dual Xeon CPU (24 cores)
* 64GB RAM
* Primary virtualization server
### Starbug
* Retired gaming PC
* i7 CPU (8 cores)
* 16GB RAM
[![Servers](../imgs/servers_20231109_sm.png)](../imgs/servers_20231109.png){: target="_blank"}
There are also two mini PCs and an old laptop being used as additional virtualization servers.
[![Skutter Servers](../imgs/Skutters_20220808_sm.png)](../imgs/Skutters_20220808.png){: target="_blank"}
[![Laptop Servers](../imgs/laptops_20231109_sm.png)](../imgs/laptops_20231109.png){: target="_blank"}
### Skutter01
* Beelink GK35
* J4105 CPU (4 cores)
* 8GB RAM
### Skutter02
* Beelink GK35
* J4105 CPU (4 cores)
* 8GB RAM
### Hollister
* HP Pavilion dv6 Notebook
* Intel i5 CPU (4 cores)
* 6GB RAM
## NAS
* Synology DS420J
* Seagate ST3000DM NAS HDDs (x4)
The NAS houses four 3TB HDD's, which are set up in a Synology SHR configuration, providing approximately 9TB of usable space. The NAS
provides file shares and media storage for the network. There are also NFS shares, one which provides persistent storage to the
applications running on the Kubernetes cluster and the other as shared storage for the Proxmox cluster.
There is a 2TB external USB HDD attached to the NAS that is used as a destination for both the NAS backups, as well as server
[duplicity](https://duplicity.gitlab.io/duplicity-web/) rsync backups from across the LAN.
[![NAS](../imgs/DS420J_20230501_sm.png)](../imgs/DS420J_20230501.png){: target="_blank"}
## Homelab Services
This diagram shows the applications and services currently running on the homelab, and how they are structured.
[![Services](../imgs/del-services_landscape_20231109_sm.png)](../imgs/del-services_landscape_20231109.png){: target="_blank"}
## Monitoring
I use a number of tools for monitoring the applications and services running on the homelab, including [Prometheus](https://prometheus.io/) and
[Grafana](https://grafana.com/), [Monitorix](https://www.monitorix.org), [Nagios Core](https://www.nagios.org/projects/nagios-core/),
[Uptime Kuma](https://uptime.kuma.pet/), as well as a number of command line management utilities for Kubernetes, LXD, KVM and Docker.
### Prometheus & Grafana
[![Prometheus and Grafana](../imgs/prometheus-grafana_20230129_sm.png)](../imgs/prometheus-grafana_20230129.png){: target="_blank"}
### Monitorix
[![Monitorix](../imgs/monitorix_20230129_sm.png)](../imgs/monitorix_20230129.png){: target="_blank"}
### Nagios Core
[![Nagios](../imgs/nagios_20231108_sm.png)](../imgs/nagios_20231108.png){: target="_blank"}
### Uptime Kuma
I'm currently in the process of getting [Uptime Kuma](https://uptime.kuma.pet/) set up to provide additional monitoring as well as limited notifications.
[![Uptime Kuma](../imgs/uptime-kuma_20230129_sm.png)](../imgs/uptime-kuma_20230129.png){: target="_blank"}
### Lens
I am also using [Lens](https://k8slens.dev/) to provide monitoring of the health and state of the Kubernetes cluster. This is only temporary; I will soon be
moving Kubernetes monitoring to Prometheus and Grafana.
[![Lens](../imgs/lens_20230129_sm.png)](../imgs/lens_20230129.png){: target="_blank"}


@ -0,0 +1,17 @@
---
hide:
- navigation
created: 2021-06-09 02:05
updated: 2021-09-01 02:37
tags:
- Kubernetes
---
# How to Restart a Kubernetes Application
* While one could simply delete a pod to have Kubernetes redeploy it, the correct way to restart an application is to perform a 'rollout restart'.
```
$ kubectl rollout restart deployment (deployment name) -n (namespace name)
```
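As a concrete (hypothetical) example, using the deployment and namespace names from the website-wiki application described elsewhere on this site:
```
$ kubectl rollout restart deployment website-wiki -n tiddlywiki
$ kubectl rollout status deployment website-wiki -n tiddlywiki   # wait for the new pods to become ready
```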


@ -0,0 +1,170 @@
---
hide:
- navigation
created: 2023-01-10 19:54
updated: 2023-01-29 01:32
tags:
- LXD
---
# IaC: LXD as the Vehicle and Ansible as the Engine
## References
* IaC (Infrastructure as Code) Definition
* <https://en.wikipedia.org/wiki/Infrastructure_as_code>
* <https://www.redhat.com/en/topics/automation/what-is-infrastructure-as-code-iac>
* [Ansible Community.General Collection](https://docs.ansible.com/ansible/latest/collections/community/general/)
* Ansible Community.General Support Modules for LXD
* [lxd_inventory](https://docs.ansible.com/ansible/latest/collections/community/general/lxd_inventory.html)
* [lxd_connection](https://docs.ansible.com/ansible/latest/collections/community/general/lxd_connection.html)
* [lxd_container](https://docs.ansible.com/ansible/latest/collections/community/general/lxd_container_module.html)
* [lxd_profile](https://docs.ansible.com/ansible/latest/collections/community/general/lxd_profile_module.html)
* [lxd_project](https://docs.ansible.com/ansible/latest/collections/community/general/lxd_project_module.html)
* Ansible LXD Deployment Roles and Playbook
* [role_lxc_deploy](https://git.radar231.com/radar231/role_lxc_deploy)
* [role_lxdhost](https://git.radar231.com/radar231/role_lxdhost)
* [playbook_deploy-host](https://git.radar231.com/radar231/playbook_deploy-host)
## Introduction
Adopting "DevOps" practices for a homelab may seem like overkill, but it could be appropriate if the homelab is being used as a training or educational tool for current or future employment. At the very least, there are aspects of the DevOps methodology that could be beneficial in any homelab. One of these is the concept of "Infrastructure as Code" (IaC), which dictates that the configuration of any virtualized infrastructure host (either a system container or virtual machine, in the case of LXD) be specified in a configuration file. This configuration file is then fed into a deployment engine to create the virtualized host. The benefit of this practice is that it is easy to ensure that all virtualized hosts are deployed in a predictable and consistent manner.
A common tool used for IaC deployment is Terraform. Terraform can be integrated into many virtualization systems, including LXD.
Fortunately this isn't required for LXD, as containers and virtual machines are typically deployed from images stored on one of the default image sources (which can be listed using `lxc remote list`). This means that containers and virtual machines deployed from stock images will be consistent from deployment to deployment, short of configurations made via cloud init.
## Ansible as an IaC Engine
Ansible has long had a prominent role in performing an IaC function. Playbooks that perform system configurations or install software to a freshly deployed host are performing an aspect of IaC. This is a bit of a grey area though, as using the same playbooks to install software to an existing host is often also looked at as "Configuration Management". Either way, having a consistent method to perform configurations or install software to a host is a key IaC role.
What normally is missing for Ansible is the initial deployment of the virtualized host itself. There are many Ansible modules to interface with a wide variety of virtualization systems. In the case of LXD there are Ansible modules that integrate into LXD container and virtual machine deployment and management.
## Ansible LXD Modules: lxd_container and lxd_connection
The two modules we'll look at here for the purpose of IaC are the "lxd_container" and "lxd_connection" modules. Links to the documentation for each are listed in the References section.
### Ansible lxd_container Module
The lxd_container module is the first module that we will use for LXD virtualized host deployment. This module provides a means to define key values to control deployment of a desired container or virtual machine from a specified image source. The image source can either be on one of the standard remote image servers, a custom remote server, or an image existing locally on the local LXD server.
### Ansible lxd_connection Module
The lxd_connection module allows for running commands within an LXD container via the [LXD REST API](https://linuxcontainers.org/lxd/rest-api/). This is functionally equivalent to using `lxc exec (container) -- (command and arguments)`. While you could perform pretty much all software installation or configuration using the lxd_connection module, I use it only for getting the networking and SSH control configured for the new virtualized host and then switch to a standard Ansible connection via SSH for further configuration. Where this could be useful would be for a situation where you have access to the LXD server but do not have direct network access to the deployed virtualized host.
### A Caveat for the lxd_container Module
The lxd_container module works well for virtualized hosts deployed directly to the local LXD server. There is also a capability to deploy to a specific target LXD server if operating within a cluster environment.
Unfortunately, while the module appears to have the capability to operate on remote LXD servers in a non-cluster environment, I haven't been successful in getting that to work. As my homelab runs multiple LXD servers that are not clustered, this is an issue for me. Until I manage to figure out the "magic sauce" to get this to work, I use a workaround for initial deployment. My Ansible development system also has LXD installed on it, with all of my remote LXD servers defined as "remotes". I then use a simple Ansible shell command to run an `lxc launch ...` command to perform the actual deployment of the virtualized host.
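As a rough sketch of that workaround (the host name is a placeholder; the remote, profile and limits correspond to the inventory template below), the shell task boils down to something like:
```
# launch an Ubuntu 22.04 container named "myhost" on the non-clustered remote "hollister",
# applying the bridged profile and the CPU/memory limits from the inventory definition
$ lxc launch images:ubuntu/22.04 hollister:myhost -p bridged \
    -c limits.cpu=4 -c limits.memory=8GiB
```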
## Deploying and Configuring an LXD Container
This is a walk-through of a deployment using my Ansible code. The links to the roles and playbook are located in the References.
Please note that most of my Ansible code is only configured to run on either Debian or Ubuntu distributions. Extension of most of the code to run on other distributions should be fairly easy, but isn't included here.
### The Inventory Host Definition
The first step is to define the parameters of the virtualized host to be deployed. I do that within the inventory host definition file, and have a template file ("inventory/host_vars/(hostname).yml") as an example.
```
$ cat inventory/host_vars/_template.yml
---
#######################################
# Host inventory definition
# template
#######################################
# host network configuration
ansible_host: 192.168.20.233
ip_gw: 192.168.20.1
ip_ns1: 192.168.20.21
ip_ns2: 192.168.20.22
#######################################
# VM/Container LXD configuration
# LXD Container or VM
host_type: Container
# LXD profile to apply
profile: bridged
# LXD image selection
image_name: "ubuntu"
image_vers: "22.04"
image_location: "images"
# where to deploy container
remote_name: hollister
#######################################
# Host virtual hardware configuration
# CPU cores, Memory, Root disk size
cpu: 4
mem: 8
root: 100
#######################################
# Ansible roles to apply to host
# - uncomment to select
# - create_user includes create_user, sudoers, vim_setup, bash_mods and gitconfig roles
# - use "nil" for no ansible configuration management
host_config:
- nil
# - base_pkgs
# - create_user
# - du_backups
# - monitorix
# - nagios_agent
# - docker
# - k3s
#######################################
# user definition for "create_user" role
user: rmorrow
pw: resetthispasswd
home: /home/rmorrow
# EOF
```
Most of the fields of this file should be pretty much self explanatory, but in general I define aspects of the target host, such as CPU, memory and root disk size, network parameters, source image, system configuration groups, user account to create, as well as what remote server to deploy to. The "host_config" list controls playbook execution of specific system configuration and software installation roles after the virtualized host has been deployed and configured for SSH access, and can be selected or deselected, as required. For a bare host with no configuration to be performed, all groups can be commented out with just the "nil" entry left uncommented.
All of these values are imported into variables that will be used in the roles or playbooks to control the deployment and configuration of the virtualized host.
### The deploy-host Playbook
The deploy-host playbook controls the deployment and configuration of the virtualized host. It first calls the lxc_deploy role to perform the actual deployment of the host, and then calls the lxdhost role which will perform the networking and SSH configuration to allow the deployed virtualized host to be managed via Ansible using the SSH connection.
Next the playbook will call the setup-host playbook in order to perform a number of package configuration roles.
### The setup-host Playbook
The actual configuration of the host after deployment happens in the setup-host playbook. The "host_config" section is used to control which configuration roles are run, and any other variables required by these roles are also contained in the inventory host definition file.
This also means that for an existing container or VM, or for a physical machine, the same inventory host definition file and the same host configuration playbook can be used to perform the exact same system configuration. The only step that will need to be completed before using this playbook is to perform the root user SSH key setup to allow for Ansible management. An example of how to do this is the "debinit" file from the [delinit_files](https://git.radar231.com/radar231/delinit_files) repository.
### The lxc_deploy Role
The lxc_deploy role is responsible for the initial deployment of the virtualized host. Normally this would be performed via the "lxd_container" module, but for now I simply use an Ansible shell task to run an `lxc launch ...` command, using the values defined in the inventory file to control the initial configuration of the deployed host.
The role differentiates between a container and a virtual machine deployment. For containers the only post deployment action is to confirm that python3 is installed on the virtualized host.
For virtual machines, in addition to python3, installation of the "cloud-guest-utils" and "fdisk" packages are confirmed. These packages are required to perform the post deployment resizing of the root disk.
### The lxdhost Role
The lxdhost role uses the "lxd_connection" module to communicate directly with the deployed virtualized host using the LXD REST API. This way we are able to modify the network configuration of the deployed host without losing connectivity.
There are templates for both Ubuntu and Debian network configuration files. The first thing the role does is to replace the existing network configuration with one containing the network configuration from the inventory host definition file. The virtualized host is then rebooted to allow the new network configuration to become active.
After the host has finished booting, the role then configures the virtualized host SSH server to allow SSH key-only logins for the root user and copies the Ansible user's SSH public key into the authorized_keys file for the root user.
There is a point to note here. Ansible has the capability to run as a non-root user, and use privilege escalation (ie, sudo) to become root. You can have Ansible prompt you for the privilege escalation password at playbook run time. This would be the most secure way of accomplishing privilege escalation. However, if you need to perform unattended non-interactive execution, and use passwordless sudo, then there is effectively no difference between that and running directly as the root user. The key point to keep in mind is that whichever method you use, make sure that the user you use for Ansible execution cannot be logged into from the network using a password.
As these playbooks and roles are running on my isolated homelab network, I'm not too concerned with the way I have it configured. If however I was running this in a production environment over a widely distributed corporate network then I would probably change to using a password protected privilege escalation for most of the code execution.
## Conclusion
To tie it all together: to use these playbooks and roles, you first define your desired destination host(s) in your Ansible inventory. Once you have the inventory host files created and the hostname entries added to the main inventory file, you can run the deploy-host playbook to perform the deployment;
`$ ansible-playbook -l host1,host2,host3 -i inventory/inventory_file.yml deploy-host.yml`
If you are performing a configuration on an existing host, you can call the setup-host playbook instead;
`$ ansible-playbook -l host1,host2,host3 -i inventory/inventory_file.yml setup-host.yml`
This is only one way to perform IaC on a homelab that uses LXD servers. It works well for my purposes and helps to keep all of my homelab LXD host deployments consistent.


@ -0,0 +1,86 @@
---
hide:
- navigation
created: 2021-10-07 17:21
updated: 2023-03-11 11:01
tags:
- LXD
---
# K3S Nodes in LXD Containers
## References
* <https://github.com/ruanbekker/k3s-on-lxd>
* <https://github.com/corneliusweig/kubernetes-lxd>
## Introduction
This page describes the process to enable running k3s nodes in an LXD container. The benefits of running an application in an LXD container instead of a virtual machine should be clear: no virtualization overhead, better deployment density, and configuration flexibility are only a few examples.
## Container storage can't be on a ZFS or BTRFS storage pool
It seems that k3s has issues allocating storage when the backing storage for an LXD container is a ZFS or BTRFS storage pool. The simplest way to solve this is to make a new pool of type LVM, and use that for k3s LXD containers. You could also use a DIR type storage pool, but be aware that there are performance issues and limitations with DIR based storage pools.
```
$ lxc storage create k3s lvm size=50GiB
```
## New profile for k3s containers
A number of configuration parameters need to be added to the k3s LXD container. The easiest way to do this is by using a profile. As we're using a custom pool, and a bridged profile, we'll create a profile that encapsulates both the custom settings required for a k3s node, as well as the bridged network configuration.
```
$ cat k3s.cnf
config:
security.nesting: "true"
security.privileged: "true"
limits.cpu: "2"
limits.memory: 4GB
limits.memory.swap: "false"
linux.kernel_modules: overlay,nf_nat,ip_tables,ip6_tables,netlink_diag,br_netfilter,xt_conntrack,nf_conntrack,ip_vs,vxlan
raw.lxc: |
lxc.apparmor.profile = unconfined
lxc.cgroup.devices.allow = a
lxc.mount.auto=proc:rw sys:rw
lxc.cap.drop =
description: Profile settings for a bridged k3s container
devices:
eth0:
name: eth0
nictype: bridged
parent: br0
type: nic
kmsg:
path: /dev/kmsg
source: /dev/kmsg
type: unix-char
root:
path: /
pool: k3s
type: disk
name: k3s
used_by:
```
To create the new profile, execute the following commands;
```
$ lxc profile create k3s
$ lxc profile edit k3s <k3s.cnf
```
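Once the profile exists, it can be applied when launching a k3s node container; a minimal sketch, where the container name and image are only examples;
```
# container name and image are examples
$ lxc launch images:ubuntu/focal --profile k3s k3s-node-1
```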
## Packages to add to container after launch
This step is partly specific to my setup. If your k3s node containers are running Ubuntu, you'll probably also require the apparmor-utils package; a sample install command follows the list.
* The following packages need to be added to the container to get k3s running:
* curl (to install k3s)
* nfs-common (to access nfs storage)
* apparmor-utils (if running Ubuntu on the k3s node container)
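A minimal sketch of installing these into a running container, assuming an Ubuntu container named k3s-node-1;
```
# assumes an Ubuntu container named k3s-node-1
$ lxc exec k3s-node-1 -- apt-get update
$ lxc exec k3s-node-1 -- apt-get install -y curl nfs-common apparmor-utils
```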
## Conclusion
At this point the LXD container should be ready for deploying a k3s node using the [standard procedure](https://rancher.com/docs/k3s/latest/en/quick-start/).
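In practice the quick-start boils down to running the installer inside the container (container name continues the example from above); a minimal sketch for a single-server node, with additional options covered in the linked procedure;
```
$ lxc exec k3s-node-1 -- bash
# curl -sfL https://get.k3s.io | sh -
```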


@ -0,0 +1,149 @@
---
hide:
- navigation
created: 2021-10-14 14:56
updated: 2021-10-22 13:10
tags:
- Kubernetes
---
# Kubernetes API Load-Balancer using HAProxy
## References
* <https://hub.docker.com/_/haproxy>
* <https://hub.docker.com/r/haproxytech/haproxy-alpine>
* <https://cbonte.github.io/haproxy-dconv/>
* <https://www.haproxy.com/blog/how-to-run-haproxy-with-docker/>
* <https://www.ibm.com/docs/en/api-connect/2018.x?topic=environment-load-balancer-configuration-in-kubernetes-deployment>
* <https://docs.kublr.com/articles/onprem-multimaster/>
* <https://githubmemory.com/repo/k3s-io/k3s/issues/3369>
## Introduction
For a simple Kubernetes cluster, with perhaps just a single master node, pointing your kubectl configuration directly at the master node is not a problem, and is in fact the usual configuration. When you have an HA (High Availability) cluster though, pointing your kubectl configuration at one node can be problematic if that particular master node has failed or is down for maintenance. While it is relatively simple to change the IP in the kubectl configuration to that of a serviceable master node, a better approach is to put a load-balancer in front of the Kubernetes cluster and have it control access to all of the master nodes.
In the case of my homelab, I've set up HAProxy under docker to act as my Kubernetes API load-balancer for my k3s based cluster.
## Docker-Compose File
Nothing too special about this docker-compose file. As I often do, I chose to explicitly set the image version being used. I've been bitten by using the ':latest' tag in the past when upstream image updates mysteriously break existing deployments.
```
$ cat docker-compose.yml
---
version: '3'
services:
haproxy:
container_name: haproxy
image: haproxytech/haproxy-alpine:2.4.7
volumes:
- ./config:/usr/local/etc/haproxy:ro
environment:
- PUID=1000
- PGID=1000
- TZ=America/Toronto
restart: unless-stopped
ports:
- "6443:6443"
- "8404:8404"
# EOF
```
## HAProxy Config
What follows is a fairly straightforward haproxy.cfg file, although I did have to dig through a few sources and consolidate them into a configuration that worked for me.
```
$ cat config/haproxy.cfg
global
stats socket /var/run/api.sock user haproxy group haproxy mode 660 level admin expose-fd listeners
log stdout format raw local0 info
defaults
log global
mode http
option httplog
option dontlognull
timeout client 10s
timeout connect 5s
timeout server 10s
timeout http-request 10s
frontend stats
bind *:8404
stats enable
stats uri /
stats refresh 10s
frontend k8s-api
bind *:6443
mode tcp
option tcplog
option forwardfor
default_backend k8s-api
backend k8s-api
mode tcp
option ssl-hello-chk
option log-health-checks
default-server inter 10s fall 2
server node-1-rpi4 192.168.7.51:6443 check
server node-2-lxc 192.168.7.52:6443 check
server node-3-lxc 192.168.7.53:6443 check
```
## Adding HAProxy IP to k3s SAN
After creating the haproxy.cfg and starting the haproxy container, the next step is to change the endpoint IP to that of the haproxy server in your kubeconfig file. When you do this, you're likely to receive the following error;
```
$ kubectl get nodes
Unable to connect to the server: x509: certificate is valid for 10.43.0.1, 127.0.0.1, 192.168.7.51, 192.168.7.52, 192.168.7.53, not 192.168.7.32
```
You have to add the haproxy IP to the k3s.service file, in the ExecStart line. This will need to be done on all master nodes. The ExecStart line should end up looking like this;
```
- /etc/systemd/system/k3s.service clip
------------------
ExecStart=/usr/local/bin/k3s \
server \
'--disable=traefik' \
'--disable=servicelb' \
'--tls-san=192.168.7.32' \
------------------
```
After making this change, run the following to get the change to take effect;
```
# systemctl daemon-reload
# systemctl restart k3s
# curl -vk --resolve 192.168.7.32:6443:127.0.0.1 https://192.168.7.32:6443/ping
```
After this, run this check on a workstation (use the original endpoint IP in the kubeconfig);
```
$ kubectl -n kube-system get secret k3s-serving -o yaml
```
If everything went well, the haproxy IP should now show up in the list of "listener.cattle.io" entries. At this point, the endpoint IP in the kubeconfig file can be changed to that of the haproxy server.
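If you prefer not to edit the kubeconfig by hand, the endpoint can also be switched with kubectl; a minimal sketch, assuming the cluster entry in your kubeconfig is named "default" (the k3s default);
```
# "default" is the k3s kubeconfig cluster name; adjust to match yours
$ kubectl config set-cluster default --server=https://192.168.7.32:6443
```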
Alternatively, this option can be specified at k3s installation time by passing "--tls-san 192.168.7.32" on the installation command line.
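A hedged sketch of what that installation command might look like, relying on the get.k3s.io installer passing these arguments through to the k3s server;
```
$ curl -sfL https://get.k3s.io | sh -s - server --tls-san 192.168.7.32
```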
## Statistics Web Page
HAProxy has a very nice statistics page that can also be enabled in the haproxy.cfg.
[![](../imgs/haproxy_stats.png){: style="height:25%;width:25%"}](../imgs/haproxy_stats.png){: target="_blank"}


@ -0,0 +1,142 @@
---
hide:
- navigation
created: 2021-10-22 12:54
updated: 2021-10-22 13:29
tags:
- Kubernetes
---
# Kubernetes Ingress Load-Balancer using HAProxy
## References
* <https://kubernetes.io/docs/concepts/services-networking/ingress/>
* <https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/>
* <https://haproxy-ingress.github.io/docs/getting-started/>
* <https://www.haproxy.com/documentation/kubernetes/latest/usage/ingress/>
* <https://www.haproxy.com/blog/use-helm-to-install-the-haproxy-kubernetes-ingress-controller/>
## Introduction
This page describes how to extend the HAProxy configuration from the "Kubernetes API Load-Balancer using HAProxy" post so that it also acts as a load-balancer for the cluster's ingress controller.
I won't go into any detail on Kubernetes ingress or ingress-controllers. The first two links in the references provide ample detail for these topics.
I will describe how I have ingress set up on my k3s based cluster, and how I use HAProxy to act as a load-balancer for accessing all web applications on the cluster.
## Ingress-Controller Selection
I've disabled the default traefik based ingress controller on my k3s cluster using the "`--disable traefik`" option during installation.
To replace traefik I've installed the haproxy-ingress ingress controller on my k3s cluster. I used the 'daemonset' installation, which brings up an haproxy ingress pod on each node. This means that any web application with an ingress configuration can be accessed on any of the nodes. While a local DNS could be set up to point a cname for each application at an arbitrary node, this is difficult to maintain and is prone to problems due to node failure or maintenance, as previously described for the API load-balancer.
A better way is to set up a load-balancer and point the application cnames at the load-balancer.
Rather than set up a new HAProxy load-balancer, I've simply extended the one that I was using for Kubernetes API load-balancing.
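As an aside, a DaemonSet-based install of haproxy-ingress can be done with the project's Helm chart; a hedged sketch only, since the chart repository URL, chart name, and the controller.kind value are assumptions that should be checked against the current haproxy-ingress documentation;
```
$ helm repo add haproxy-ingress https://haproxy-ingress.github.io/charts
$ helm install haproxy-ingress haproxy-ingress/haproxy-ingress \
    --create-namespace --namespace ingress-controller \
    --set controller.kind=DaemonSet
```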
## Docker-Compose file
This is the extended docker-compose.yml file.
```
$ cat docker-compose.yml
---
version: '3'
services:
haproxy:
container_name: haproxy
image: haproxytech/haproxy-alpine:2.4.7
volumes:
- ./config:/usr/local/etc/haproxy:ro
environment:
- PUID=1000
- PGID=1000
- TZ=America/Toronto
restart: unless-stopped
ports:
- "80:80"
- "443:443"
- "6443:6443"
- "8404:8404"
# EOF
```
## HAProxy Config
This is the extended haproxy.cfg file.
```
$ cat config/haproxy.cfg
global
stats socket /var/run/api.sock user haproxy group haproxy mode 660 level admin expose-fd listeners
log stdout format raw local0 info
defaults
log global
mode http
option httplog
option dontlognull
timeout client 10s
timeout connect 5s
timeout server 10s
timeout http-request 10s
frontend stats
bind *:8404
stats enable
stats uri /
stats refresh 10s
frontend k8s-api
bind *:6443
mode tcp
option tcplog
option forwardfor
default_backend k8s-api
frontend ingress-80
bind *:80
default_backend ingress-80
frontend ingress-443
bind *:443
default_backend ingress-443
backend k8s-api
mode tcp
option ssl-hello-chk
option log-health-checks
default-server inter 10s fall 2
server node-1-rpi4 192.168.7.51:6443 check
server node-2-lxc 192.168.7.52:6443 check
server node-3-lxc 192.168.7.53:6443 check
backend ingress-80
option log-health-checks
server node-1-rpi4 192.168.7.51:80 check
server node-2-lxc 192.168.7.52:80 check
server node-3-lxc 192.168.7.53:80 check
server node-4-lxc 192.168.7.54:80 check
server node-5-rpi4 192.168.7.55:80 check
server node-6-rpi4 192.168.7.56:80 check
server node-7-rpi4 192.168.7.57:80 check
backend ingress-443
option log-health-checks
server node-1-rpi4 192.168.7.51:443 check
server node-2-lxc 192.168.7.52:443 check
server node-3-lxc 192.168.7.53:443 check
server node-4-lxc 192.168.7.54:443 check
server node-5-rpi4 192.168.7.55:443 check
server node-6-rpi4 192.168.7.56:443 check
server node-7-rpi4 192.168.7.57:443 check
```
## Conclusion
Using this configuration, HAProxy will now act as a load-balancer for both the Kubernetes API access, as well as any HTTP or HTTPS ingress configurations set up on the cluster.
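As an illustration, this is a minimal sketch of the kind of Ingress resource an application on the cluster might use behind this load-balancer; the hostname, service name, and port are placeholders, and the corresponding DNS cname would point at the HAProxy host;
```
$ cat whoami_ingress.yml
---
# hostname, service name and port are placeholders
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami
spec:
  rules:
    - host: whoami.lan
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  number: 80
# EOF
```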


@ -0,0 +1,89 @@
---
hide:
- navigation
created: 2021-10-12 11:36
updated: 2021-10-12 13:03
tags:
- Kubernetes
---
# Kured - Kubernetes Reboot Daemon
## References
* <https://www.weave.works/blog/announcing-kured-a-kubernetes-reboot-daemon>
* <https://www.weave.works/blog/one-year-kured-kubernetes-reboot-daemon>
* <https://github.com/weaveworks/kured>
* <https://hub.docker.com/r/weaveworks/kured>
* <https://github.com/raspbernetes/multi-arch-images>
* <https://hub.docker.com/r/raspbernetes/kured>
## Introduction
This page will describe how I use kured (Kubernetes Reboot Daemon) on my homelab kubernetes cluster. I won't go into the details of kured itself though. For more information about kured, please refer to the links provided in the references section.
## Requirement
Keeping your servers up to date via the package management system should be de facto SOP for anyone running a homelab. For my homelab servers, most of which are running Ubuntu, that means using the apt package manager to find and apply package updates on a regular basis. I use [an ansible playbook](https://git.radar231.com/radar231/playbook_del-updates) to provide daily automated package updates to all of my local and remote homelab servers.
An issue arises for kubernetes cluster nodes though when kernel or core system updates have been applied, and the server requires a reboot in order to complete the update. The normal way of doing node maintenance of any sort with Kubernetes is to first drain and cordon the node (`kubectl drain node_name --ignore-daemonsets`). Once all of the workloads have been moved to other nodes then the cordoned node can be rebooted. After reboot it is made available to accept workloads by uncordoning the node (`kubectl uncordon node_name`).
Having to do this manually, whether for a large production cluster or a more modest homelab cluster, is time consuming, error prone, and easily missed if you aren't paying close attention to the updates that have been applied.
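Done by hand, the per-node sequence looks roughly like this, with the node name being just an example;
```
# node name is an example
$ kubectl drain node-5-rpi4 --ignore-daemonsets
$ ssh node-5-rpi4 sudo reboot
$ kubectl uncordon node-5-rpi4
```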
This is where kured comes in. The description from the [kured github page](https://github.com/weaveworks/kured#introduction) describes it well;
> Kured (KUbernetes REboot Daemon) is a Kubernetes daemonset that performs safe automatic node reboots when the need to do so is indicated by the package management system of the underlying OS.
## Multi-Architecture Modification
There is one issue though, that will affect those running Raspberry Pi based Kubernetes clusters. Currently weaveworks only provides images built against the amd64 architecture. As I mentioned in the multi-architecture post, one solution would be to manually build an arm64 image for kured.
Luckily, this has already been done for us, at the [raspbernetes multi-arch-images github page](https://github.com/raspbernetes/multi-arch-images). They track a number of images, including kured, and provide up to date multi-architecture builds for each. The raspbernetes image built for kured (and likely all the others; I've only used their kured image so far) is a drop-in replacement for the weaveworks image, so using it is a simple case of substituting it into the weaveworks kured manifest file.
Starting with the [installation instructions](https://github.com/weaveworks/kured#installation) on the weaveworks/kured github page, download the latest manifest file using wget or curl;
```
$ latest=$(curl -s https://api.github.com/repos/weaveworks/kured/releases | jq -r .[0].tag_name)
$ wget https://github.com/weaveworks/kured/releases/download/$latest/kured-$latest-dockerhub.yaml
```
Next, substitute the kured multi-architecture image from raspbernetes;
```
$ diff kured-1.8.0-dockerhub.yaml-dist kured-1.8.0-dockerhub_raspbernetes.yaml
95c95,96
< image: docker.io/weaveworks/kured:1.8.0
---
> #image: docker.io/weaveworks/kured:1.8.0
> image: raspbernetes/kured:1.8.0
```
## Deploying kured
Once you have the modifications to the manifest file completed, kured is installed to the cluster in the standard manner using kubectl;
```
$ kubectl apply -f kured-1.8.0-dockerhub_raspbernetes.yaml
```
This will start a kured pod on each node, which will then manage automated node reboots as required.
```
$ kubectl get pods -l name=kured -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kured-b6xr9 1/1 Running 0 108m 10.42.0.16 node-1-rpi4 <none> <none>
kured-brwrn 1/1 Running 0 109m 10.42.1.10 node-2-lxc <none> <none>
kured-n5xsl 1/1 Running 0 111m 10.42.2.14 node-3-lxc <none> <none>
kured-8ql4r 1/1 Running 0 109m 10.42.3.12 node-4-lxc <none> <none>
kured-jbvm7 1/1 Running 0 109m 10.42.5.13 node-5-rpi4 <none> <none>
kured-zfcj8 1/1 Running 0 110m 10.42.4.14 node-6-rpi4 <none> <none>
kured-6jzmz 1/1 Running 0 110m 10.42.6.14 node-7-rpi4 <none> <none>
```
## Conclusion
I've been using kured for a few months now (at the time of writing this post) and it has performed flawlessly for me. Usually the only way I know it has been running is when I happen to notice the uptime change on my kubernetes nodes.
It is pretty cool to watch it at work though. Seeing a node drain and cordon itself, reboot, and then uncordon itself, all autonomously, is more than a little surreal.


@ -0,0 +1,183 @@
---
hide:
- navigation
created: 2021-09-26 20:04
updated: 2023-05-03 16:46
tags:
- LXD
---
# LXD Bridged Profile
## References
* <https://developers.redhat.com/blog/2018/10/22/introduction-to-linux-interfaces-for-virtual-networking#bridge>
* <https://wiki.debian.org/BridgeNetworkConnections>
* <https://linux.die.net/man/8/brctl>
* <https://linux.die.net/man/1/nmcli>
* <https://major.io/2015/03/26/creating-a-bridge-for-virtual-machines-using-systemd-networkd/>
* <https://netplan.io/>
* <https://netplan.io/examples/#configuring-network-bridges>
## Introduction
When you start using LXD containers, eventually you'll want to have your container appear directly on your main network. By default, LXD sets up a bridge, usually named lxdbr0, that it connects all containers to. This bridge has a DHCP server, and is set up to use NAT for network addressing of containers. This works fine when using containers for testing or development, but when you want to set up a container for production use you'll probably want to set up a bridged profile for your production containers.
I won't go over the many ways of creating network bridges on Linux servers. I've included a few links in the References section with some alternatives. Likely you'll require the bridge-utils package and will have to perform the initial bridge interface creation using brctl.
## Bridged Network Configuration
### Ubuntu
Here's an example of adding a bridge to an Ubuntu server via a netplan configuration. Chances are that if you've been around netplan for a while, you've probably taken the default dhcp netplan configuration and set it up for a static IP.
```
$ cat /etc/netplan/server.yaml
network:
version: 2
renderer: networkd
ethernets:
enp3s0:
dhcp4: no
dhcp6: no
bridges:
br0:
dhcp4: no
dhcp6: no
interfaces: [enp3s0]
addresses: [192.168.7.10/24]
gateway4: 192.168.7.1
nameservers:
addresses:
- 192.168.7.83
- 192.168.7.84
parameters:
stp: true
forward-delay: 4
```
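After saving the netplan file, the new bridge configuration can be tested and then applied with;
```
$ sudo netplan try
$ sudo netplan apply
```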
### Debian
This is an example configuration as used on a Debian based server.
```
$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
auto lo enp3s0 br0
iface lo inet loopback
iface enp3s0 inet manual
iface br0 inet static
dhcp4 no
dhcp6 no
bridge_ports enp3s0
address 192.168.7.10/24
gateway 192.168.7.1
dns-nameservers 192.168.7.83 192.168.7.84
dns-search lan
```
### systemd-network
This is an example configuration for a system using the systemd-networkd networking configuration.
```
$ ls -1 /etc/systemd/network
br0.netdev
br0.network
enp3s0.network
$ cat /etc/systemd/network/enp3s0.network
[Match]
Name=enp3s0
[Network]
Bridge=br0
$ cat /etc/systemd/network/br0.netdev
[NetDev]
Name=br0
Kind=bridge
$ cat /etc/systemd/network/br0.network
[Match]
Name=br0
[Network]
DHCP=false
Address=192.168.20.90/24
Gateway=192.168.20.1
DNS=192.168.20.21
DNS=192.168.20.22
Domains=lan
```
## LXD bridge profile
Regardless of how you set up a bridge, once you've created it you can then use it in an LXD profile to allow your containers to be directly connected to your main network, rather than the default NAT network.
The way I do this is to create a bridged configuration file first, and then apply it to a newly created profile.
```
$ cat bridged.cnf
config: {}
description: Profile settings for a bridged container
devices:
eth0:
name: eth0
nictype: bridged
parent: br0
type: nic
root:
path: /
pool: default
type: disk
name: bridged
used_by:
$ lxc profile create bridged
$ lxc profile edit bridged <bridged.cnf
$ lxc profile show bridged
config: {}
description: Profile settings for a bridged container
devices:
eth0:
name: eth0
nictype: bridged
parent: br0
type: nic
root:
path: /
pool: default
type: disk
name: bridged
used_by:
```
Once you have a bridged profile created, the next step is to apply it to newly created containers.
```
$ lxc launch images:ubuntu/focal --profile bridged u2004
Creating u2004
Starting u2004
$ lxc list
+---------+---------+------------------------------+------+-----------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+---------+---------+------------------------------+------+-----------------+-----------+
| u2004 | RUNNING | 192.168.7.126 (eth0) | | CONTAINER | 0 |
+---------+---------+------------------------------+------+-----------------+-----------+
```
Once the container starts up it will have an IP on your main network, supplied by your DHCP server. If required, you can configure a static IP in whatever manner is normal for the distribution running in the container.

83
docs/posts/lxd-metrics.md Normal file

@ -0,0 +1,83 @@
---
hide:
- navigation
created: 2023-01-17 14:19
updated: 2023-09-11 20:08
tags:
- LXD
---
# LXD Metrics
## References
* <https://linuxcontainers.org/lxd/docs/master/metrics/>
* <https://grafana.com/grafana/dashboards/15726-lxd/>
* <https://prometheus.io/docs/prometheus/latest/installation/#using-docker>
* <https://hub.docker.com/r/prom/prometheus>
* <https://grafana.com/docs/grafana/latest/setup-grafana/installation/docker/>
* <https://hub.docker.com/r/grafana/grafana>
* <https://git.radar231.com/radar231/docker_prometheus-grafana.git>
## Introduction
LXD servers have a metrics endpoint as part of the REST API. Arguably the best way to take advantage of this is to use [Prometheus](https://prometheus.io/) to collect the metrics data, and [Grafana](https://grafana.com/) to graph the collected data.
This page describes configuring LXD servers to enable metrics gathering, as well as setting up docker containers of Prometheus and Grafana to make use of the metrics data.
## Metrics Certificate
We will need a certificate to provide authentication in order to gather metrics from LXD servers. The resultant files (metrics.key, metrics.crt) need to be copied to the metrics server. In addition, the metrics.crt file will need to be copied to each LXD server for which metrics data will be gathered.
```
$ openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:secp384r1 -sha384 -keyout metrics.key -nodes -out metrics.crt -days 3650 -subj "/CN=metrics.local"
```
## LXD Server Configuration
Gathering metrics data from an LXD server requires three configuration steps. First, the metrics data port needs to be configured. Next, a trust needs to be added using the metrics public certificate. Finally, we need to obtain the LXD server certificate, which will be copied to the metrics server. These steps need to be performed on each LXD server that metrics will be gathered from.
```
$ lxc config set core.metrics_address :8444
$ lxc config trust add --type metrics --name prometheus metrics.crt
$ sudo cat /var/snap/lxd/common/lxd/server.crt > (server)-server.crt
```
## Setting up the Docker Directory
The following sub-directories need to be created in the docker-compose run directory. The permissions and directory ownership need to be set as well.
```
$ mkdir prometheus_data prometheus_etc prometheus_etc/tls grafana_data grafana_etc
$ chmod 777 prometheus_data
$ chmod 777 grafana_data
$ sudo chown 472:0 grafana_data
```
Next you need to generate the grafana.ini file. For some reason the grafana docker image doesn't create this on the initial run in this setup.
```
$ docker run --rm --entrypoint /bin/bash grafana/grafana -c 'cat $GF_PATHS_CONFIG' > grafana_etc/grafana.ini
```
Finally, the prometheus.yml file, suitably modified to match the current environment, is copied into prometheus_etc. All of the server cert files are copied into prometheus_etc/tls, as well as the metrics.key and metrics.crt files.
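As a rough illustration only, a scrape job for a single LXD server might look something like the following; the job name, target address, and certificate file names are placeholders, the tls paths depend on how the tls directory is mounted into the Prometheus container, and the LXD metrics documentation remains the authoritative reference;
```
$ cat prometheus_etc/prometheus.yml
global:
  scrape_interval: 15s

scrape_configs:
  # one job per LXD server; names, target and cert files are placeholders
  - job_name: "lxd-server1"
    metrics_path: "/1.0/metrics"
    scheme: "https"
    static_configs:
      - targets: ["192.168.7.10:8444"]
    tls_config:
      ca_file: "tls/server1-server.crt"
      cert_file: "tls/metrics.crt"
      key_file: "tls/metrics.key"
      # must match the CN of the LXD server certificate
      server_name: "server1"
```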
Once all of this is in place, copy the docker-compose.yml file into the run directory and start the servers with `docker-compose up -d`. Confirm everything is running correctly by checking the docker container logs.
## Check Prometheus Data Sources
Confirm that Prometheus is gathering metrics data from the LXD servers by opening "<http://(metrics-server):9090>" in a web browser and selecting "Status -> Targets". All of the LXD server data sources should be marked as "up", with no errors.
## Setup Grafana Data Source and Dashboard
Finally, open the Grafana server in a web browser at "<http://(grafana-server):3000>". Select "Configuration -> Data Sources". Select "Add data source", and "Prometheus". In the "HTTP -> URL" field enter "<http://prometheus:9090>". Scroll down to the bottom of the page and select "Save & test".
Next select "Dashboards -> Import", and in the "Import via grafana.com" field, enter "19131", then click "Load" to the left of that. At the bottom of the next page, set "Data Source" to "Prometheus", then select "Import".
This will import the LXD dashboard created by "stgraber". This dashboard is configured to display the LXD instance metrics for the selected LXD server.
## Conclusion
This is a very simple deployment of Prometheus and Grafana. There are many other Prometheus features, such as alerting, etc, that haven't been touched upon. There are also many features of Grafana that could be expanded upon as well.
However, this does provide a very functional metrics gathering and display system for LXD servers, which was exactly what I was looking for when I set this up.


@ -0,0 +1,166 @@
---
hide:
- navigation
created: 2021-09-26 16:50
updated: 2023-05-01 17:19
tags:
- LXD
---
# LXD Virtual Machines
## References
* LXD
* <https://linuxcontainers.org/>
* <https://linuxcontainers.org/lxd/docs/master/>
* <https://linuxcontainers.org/lxd/advanced-guide/>
* <https://wiki.archlinux.org/title/LXD>
* QEMU
* <https://www.qemu.org/>
* <https://www.qemu.org/documentation/>
* <https://wiki.qemu.org/Main_Page>
* KVM
* <https://www.linux-kvm.org/page/Main_Page>
* <https://www.linux-kvm.org/page/Documents>
* libvirt
* <https://libvirt.org/>
## Introduction
LXD is a great hypervisor for managing system containers, and pretty much anything you can do with a virtual machine you can do with a system container instead. However, LXD is also able to manage virtual machines, and has been able to do so [for more than a year now](https://discuss.linuxcontainers.org/t/running-virtual-machines-with-lxd-4-0/7519).
While virtual machines have been a part of LXD for quite a while, using them used to be a bit challenging. Much of that has since been sorted out, and using virtual machines under LXD is now as easy as using system containers.
This page is pretty much my 'cheat sheet' for LXD virtual machines, where I can keep information and notes for future reference.
## Note for remote LXD hosts
* all of the below can be performed on a remote LXD host by prefixing the instance name with the remote name
* e.g., "starbug:u2004v"
## Basic launch of vm
```
$ lxc launch images:ubuntu/focal --vm u2004v
```
## Launch vm and connect to console
* console in shell
```
$ lxc launch images:ubuntu/focal --vm u2004v --console
```
* console in remote-viewer
```
$ lxc launch images:ubuntu/focal --vm u2004v --console=vga
```
## Connect to vm using lxd-agent
* LXD containers have the lxd-agent built in, but for VM's it is an additional package named 'lxd-agent-loader'
* It seems that the 'lxd-agent-loader' package is included in most of the LXD VM images in both the images: and ubuntu: remote image repositories
```
$ lxc exec u2004v bash
```
## Launch vm with bridged profile
```
$ lxc launch images:ubuntu/focal --vm --profile bridged u2004v
```
## Launch vm with specified cpu and memory
```
$ lxc launch images:ubuntu/focal --vm -c limits.cpu=2 -c limits.memory=4GiB u2004v
```
## Connecting to console of running vm
* connect to console in shell
```
$ lxc console u2004v
```
* connect to console using remote-viewer
```
$ lxc console u2004v --type=vga
```
## Create blank vm to install via ISO
* NOTE: The ISO needs to be on a local filesystem. You'll likely receive a permission issue if the ISO is on a network filesystem.
```
$ lxc init --vm --empty -c limits.cpu=2 -c limits.memory=4GiB -c security.secureboot=false u2004v
$ lxc config device override u2004v root size=20GiB
$ lxc config device add u2004v iso disk source=/usr/local/ISOs/ubuntu-20.04-legacy-server-amd64.iso boot.priority=10
$ lxc start u2004v --console=vga
- perform the vm installation, as per normal
- after the install, when prompted to remove cd;
$ lxc stop -f u2004v
$ lxc config device remove u2004v iso
$ lxc start u2004v
```
## Change size of disk for stock vm images
Ref: <https://discuss.linuxcontainers.org/t/cannot-change-vm-root-disk-size/8727/5>
* changing the disk size on a stock image vm is a two part process
* first, init the vm and set the desired disk size
```
$ lxc init --vm images:ubuntu/focal u2004v
$ lxc config device override u2004v root size=20GiB
```
* next, start the vm, and connect to it using 'lxc exec u2004v bash'
* from within the vm, run the following (as root if entering as a non-root user)
* (assumes the '/' root filesystem is on /dev/sda2)
* *(Note: some vm images automatically detect new disk size and resize '/' appropriately)*
```
# growpart /dev/sda 2
# resize2fs /dev/sda2
```
If the VM has used LVM for the disk partitions, you'll likely need to perform an lvextend instead of the growpart;
```
(assuming lv is at /dev/mapper/ubuntu--vg-ubuntu--lv)
# lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
# resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
```
## Add second nic to LXD VM
```
$ lxc config device add u2004v eth1 nic name=eth1 nictype=bridged parent=lxdbr0
```
## Add second disk to LXD VM
* create 100G disk in default storage pool and add it to vm named 'u2004v'
```
$ lxc storage volume create default u2004v-sdb size=100GiB --type=block
$ lxc config device add u2004v u2004v-sdb disk pool=default source=u2004v-sdb
```
Please note that the new disk won't be deleted if you delete the VM. In this case you need to delete the disk manually after deletion of the VM.
```
$ lxc storage volume delete default u2004v-sdb
```


@ -0,0 +1,225 @@
---
hide:
- navigation
created: 2021-12-22 19:47
updated: 2021-12-27 20:21
tags:
- Kubernetes
---
# Managing Kubernetes Secrets Using Sops and Age
## References
* <https://kubernetes.io/docs/concepts/configuration/secret/>
* <https://github.com/mozilla/sops>
* <https://github.com/FiloSottile/age>
* <https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/>
* <https://www.vaultproject.io/>
* <https://www.conjur.org/>
* <https://github.com/bitnami-labs/sealed-secrets>
* <https://oteemo.com/hashicorp-vault-is-overhyped-and-mozilla-sops-with-kms-and-git-is-massively-underrated/>
* <https://www.thorsten-hans.com/encrypt-your-kubernetes-secrets-with-mozilla-sops/>
* <https://asciinema.org/a/431605>
## Introduction
Many kubernetes deployments require some kind of key, password or other sensitive data, known collectively as 'secrets'. Managing the files containing secrets in a source control system like git can be difficult, as you likely want to avoid storing clear text passwords and the like in a public repository. The normal solution is to exclude the secrets files from source control through the use of a .gitignore file. This prevents the unintentional release of the secrets files in the repository, but they still need to be managed somehow.
The obvious solution is to encrypt the secrets files, but this then adds another level of complexity when it comes time to deploy the secrets file to a kubernetes cluster.
## A Warning About Secrets in a Kubernetes Cluster
It is probably prudent to bring up an important point about kubernetes secrets before going any further. By default, secrets stored in a kubernetes cluster (they are actually stored within the etcd database) are not encrypted; they are merely base64 encoded. This can be seen by retrieving any existing secret from the system;
```
$ cat secret.yaml
---
#############################################
# - creds for website
# - generate value using;
# echo -n '<text>' | base64
#############################################
apiVersion: v1
kind: Secret
metadata:
name: some-pass
data:
SOME_PASSWD: bm90IHJlYWxseSBhIHBhc3N3b3Jk
# EOF
```
```
$ kubectl apply -f secret.yaml
secret/some-pass created
```
```
$ kubectl get secrets some-pass -o yaml
apiVersion: v1
data:
SOME_PASSWD: bm90IHJlYWxseSBhIHBhc3N3b3Jk
kind: Secret
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","data":{"SOME_PASSWD":"bm90IHJlYWxseSBhIHBhc3N3b3Jk"},"kind":"Secret","metadata":{"annotations":{},"name":"some-pass","namespace":"default"}}
creationTimestamp: "2021-12-22T21:25:14Z"
name: some-pass
namespace: default
resourceVersion: "19798883"
uid: 5694ba95-39df-466f-a03c-93aaeafb9681
type: Opaque
```
As can be seen, the secret is stored essentially in the clear.
This can be partially remedied through careful access control configuration using RBAC, but a good fix is available using the information at [Encrypting Secret Data at Rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/). One thing to note about this though, which is highlighted in the article, is that while this will encrypt the secrets values in the system, the encryption key used in the EncryptionConfig object is still potentially visible for anyone that has access to this object, depending on the key management system selected. Proper access control configuration must be applied to the EncryptionConfig object.
All that being said, this post is about managing the secrets files themselves, and being able to store them securely in a source management system.
## The 'Enterprise' Solution
The normal 'go-to' solution chosen to manage secrets in an enterprise environment is [HashiCorp Vault](https://www.vaultproject.io/). This is a well regarded enterprise level solution that can manage and protect all manners of sensitive data. It is also open source so it could be a deployed as a free self-managed solution, but it is a bit big (IMHO) for HomeLab use.
Another enterprise level system that is similar to HashiCorp Vault is [CyberArk Conjur](https://www.conjur.org/). Conjur is also open source.
A third alternative with an interesting twist on managing the secrets is [Bitnami Sealed-Secrets](https://github.com/bitnami-labs/sealed-secrets). This solution is tightly integrated into kubernetes, and deploys a controller on the cluster that manages decryption of the secrets prior to deployment.
## A Solid HomeLab Solution: Mozilla Sops + Age
The system I've chosen for my HomeLab is [Mozilla sops](https://github.com/mozilla/sops). Sops is a utility that can encrypt specified key data values within a number of file formats, including yaml, json and others. You can encrypt just the required key data fields in a yaml secrets manifest file, and then that file can be safely stored in source control. When it comes time to deploy that manifest to kubernetes, sops can decrypt to standard output which can then be piped to kubectl.
Sops hands the encryption off to an external application, of which a number of options are available. Sops can use cloud providers such as Azure or GCP for encryption, but can also utilize local encryption utilities as well. PGP is one option, but the one I've chosen is [age](https://github.com/FiloSottile/age).
## Environment Set-up and Configuration
Installation of both sops and age is relatively simple, as there are binaries available within the releases section of each repository.
Once both applications are installed the next step is creation of the age key file. I've chosen to store my key file in my $HOME/bin directory. If multiple users will be managing kubernetes secrets manifests, or if an automation system such as ansible will be used, the key file can be placed in a file system location appropriate to the requirement. Alternatively, multiple recipients could be specified during encryption (each with their own age key file), which would allow multiple users to each decrypt the file using their own key.
Please note that by default the age key file is not password protected, although it can be password protected, if desired (see the following example). If you choose to leave your key file without a password be sure to protect the destination directory and file appropriately.
```
$ age-keygen -o $HOME/bin/age-key.txt
Public key: age1x7aazmg26qf5vm7hnvxjqy77yvv5lc7jez7untjfnwrg8pa6aqysxlaa42
- to password protect your age key file, use the following;
$ age-keygen | age -p > $HOME/bin/age-key.age
```
After key creation, create two bash environment variables for sops. These aren't required, but do simplify sops usage by eliminating the need to specify the age recipient and age key file location as command line arguments to sops. The following should be added to the .bashrc for the user that manages kubernetes secrets manifest files;
```
export SOPS_AGE_RECIPIENTS="age1x7aazmg26qf5vm7hnvxjqy77yvv5lc7jez7untjfnwrg8pa6aqysxlaa42"
export SOPS_AGE_KEY_FILE="${HOME}/bin/age-key.txt"
```
## Encrypting a Secrets Manifest File
Once sops and age are setup and configured, it is relatively simple to encrypt a secrets manifest file.
```
$ cat secret.yml
---
#############################################
# - creds for website
# - generate value using;
# echo -n '<text>' | base64
#############################################
apiVersion: v1
kind: Secret
metadata:
name: some-pass
data:
SOME_PASSWD: bm90IHJlYWxseSBhIHBhc3N3b3Jk
# EOF
```
```
$ sops --encrypt --encrypted-regex '^data' secret.yml >secret.enc.yml
```
```
$ cat secret.enc.yml
#############################################
# - creds for website
# - generate value using;
# echo -n '<text>' | base64
#############################################
apiVersion: v1
kind: Secret
metadata:
name: some-pass
data:
SOME_PASSWD: ENC[AES256_GCM,data://OIS0cajpG3mI6c832Hauy+R/voNPw4M1q3/Q==,iv:jGi0FIwI/ZqPFmb8Re68VC/m/QzB3WtlAQG88OCzlO4=,tag:gMajKzNRrcwCkFLhoMo4TA==,type:str]
# EOF
sops:
kms: []
gcp_kms: []
azure_kv: []
hc_vault: []
age:
- recipient: age1x7aazmg26qf5vm7hnvxjqy77yvv5lc7jez7untjfnwrg8pa6aqysxlaa42
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBZRFFvS3FMU1g5Q3ZTUjRm
WVBJcEQ4TkhQb0dBbEZ6Rm9VallEdmtZZXo4Ck13aEw4Q043SGdtWXlzVWhCNk9u
c2ZiK1VvNHpUV3lxaGxzR3craHB5aW8KLS0tIGxsa044dGtjeEZSeTBYQ2lzS2E4
bFlxL0dNSjlESmtlcFdFc0FYTzBwS0EKvksaFFkx1PEw9ULPVWNOtqcRobV9VdFm
ZpydHNaF9EQrhtTR+dvJZp8BZMQEaJwZQN8F3gQ71z955Ryd7TYYUQ==
-----END AGE ENCRYPTED FILE-----
lastmodified: "2021-12-21T18:04:54Z"
mac: ENC[AES256_GCM,data:QRyBSBD2JdAaXc1Xm9rut+c8aiWBtG8MZt2H7WpH+vyw3UUFtNOwbtwm4n+TCadDwY8Exg+8+k3M6hRIF6+wBpWIKtXd53TRbDs09aZhxY4v6q8ak5yoIcOgF3KSKGlL+tHYBLYSoPbqNGGgCNJlEvWou1UH2MRmyEMEBy6NGNE=,iv:FScvzwzAczwq5vWsVtvbnjoIcyUK0g1MrdiYrnR8nTg=,tag:pzmeeR7G+I5Nds06FxvG4w==,type:str]
pgp: []
encrypted_regex: ^(data|stringData)$
version: 3.7.1
```
The `--encrypted-regex '^data'` option directs sops to only encrypt data values under the 'data:' key. If you don't specify what key data values to encrypt sops will encrypt the data values for all keys it finds in the target file.
Once you have the secrets manifest file encrypted you can remove the unencrypted file.
## Decryption and Deployment of an Encrypted Secrets Manifest File
Using the encrypted secrets manifest file is relatively easy, with only a slightly more complicated command line usage;
```
$ sops --decrypt secret.enc.yml | kubectl apply -f -
secret/some-pass created
```
## Docker ENV Files
While this post has been primarily targeted at using sops + age for kubernetes secrets manifest files, it can also be used for docker env files. This then allows storage of the env files in source control as well.
```
$ cat docker-app.env
USER=some-user
PASS=some-password
```
```
$ sops -e docker-app.env >docker-app.enc.env
```
```
$ cat docker-app.enc.env
USER=ENC[AES256_GCM,data:XsA4Vmcqgs9s,iv:lFnUUSogZ6ijiMgQsjCxJxpTzN/PoK4c+DJTH71ah/w=,tag:tXRoCH7mdUmWvvpSP4/A6A==,type:str]
PASS=ENC[AES256_GCM,data:YkK3gFxOz9GMeKGP0g==,iv:s4XxwBoRNUfK+PMbwE7QsJhEw+bD5NWSz5Sm73FiBoA=,tag:XWS6Cr+AIOFSA9rc5QV/jw==,type:str]
sops_age__list_0__map_enc=-----BEGIN AGE ENCRYPTED FILE-----\nYWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBIdWNTSUxpMmVnWjhCbHFi\nZC9BYllWZHVCVWdibUFTMGZsZ0UyamI0dFZ3CmpTNHBPK09WSldPbyswSDlQVFNx\ncklxRUJTcG01dHJPckVWL2pUdHJWSnMKLS0tICtVRWVvV2s5STFJbDlLeXNobm5z\naHhnU1BlYTdwM1REdjBaekYwcDljZjgKfeE0kL2ScHXzDBL0j1tWPRte/FpeikQ0\nhmhDi7mWPII12RMp34MryN72RmFi79ET5VphYEYPSXwr5IyE+0g4Gg==\n-----END AGE ENCRYPTED FILE-----\n
sops_lastmodified=2021-12-22T22:04:36Z
sops_unencrypted_suffix=_unencrypted
sops_version=3.7.1
sops_age__list_0__map_recipient=age1x7aazmg26qf5vm7hnvxjqy77yvv5lc7jez7untjfnwrg8pa6aqysxlaa42
sops_mac=ENC[AES256_GCM,data:Slr4iwrZJ2iHymCWWnq4jJ1iWfkRWu3iyEZTsyeZuvJ1vg9CLG+JijIA8prNp2E3Ts7P/k278QPe0pVZ8rc/oRisFyF1nRl2GoWrm2RxLxQ/wFihYDnSYkSXAHGM43Ml7gFr2FgmLskCggkaI+P6oudmnn+WVRqrpBe1VJZfzgA=,iv:8dSlp8BgFZrPXA312mnaehuWIesvvgfIo5tMuqmrOp8=,tag:vEhIHfLnIQcBurOMWCtb3w==,type:str]
```
The unencrypted env file can then be deleted.
To use the encrypted env file, simply decrypt it in place once the repository has been deployed to the destination docker host. Alternatively it can be decrypted on the management host and then copied to the destination docker host. The source env filename (ie, docker-app.env) should be added to a .gitignore file, so that the decrypted env file won't inadvertently be added back into the source control.
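For example, decrypting in place on the destination host uses the same sops command in decrypt mode, with the file names taken from the example above;
```
$ sops --decrypt docker-app.enc.env > docker-app.env
```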


@ -0,0 +1,136 @@
---
hide:
- navigation
created: 2021-06-08 02:58
updated: 2022-03-21 12:01
tags:
- Kubernetes
---
# Multi-Architecture Kubernetes Cluster and nodeAffinity
## References
* <https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity>{: target="_blank"}
## Introduction
One of the issues with running a Kubernetes cluster on a group of Raspberry Pi single board computers is that you are limited to container images that are built for the arm64 architecture. As anyone that has spent any time with Docker or Kubernetes knows, a large percentage of the available images on hub.docker.com are built for the amd64 architecture only, with no arm64 version of the image. While you could always rebuild the image from the dockerfile (if available), this isn't always possible, the image sometimes doesn't build properly, and you can end up spending more time debugging the image build than using it.
## Multi-Architecture Cluster
One way to solve this problem is to create a multi-architecture cluster. The way I accomplished this was to create three amd64 LXD containers on my virtualization servers and add them to the cluster.
```
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node-1-rpi4 Ready control-plane,etcd,master 75d v1.22.5+k3s1 192.168.7.51 <none> Debian GNU/Linux 11 (bullseye) 5.10.0-10-arm64 containerd://1.5.8-k3s1
node-2-lxc Ready control-plane,etcd,master 63d v1.22.5+k3s1 192.168.7.52 <none> Debian GNU/Linux 11 (bullseye) 5.10.0-11-amd64 containerd://1.5.8-k3s1
node-3-lxc Ready control-plane,etcd,master 63d v1.22.5+k3s1 192.168.7.53 <none> Debian GNU/Linux 11 (bullseye) 5.10.0-11-amd64 containerd://1.5.8-k3s1
node-4-lxc Ready <none> 75d v1.22.5+k3s1 192.168.7.54 <none> Debian GNU/Linux 11 (bullseye) 5.10.0-10-amd64 containerd://1.5.8-k3s1
node-5-rpi4 Ready <none> 75d v1.22.5+k3s1 192.168.7.55 <none> Debian GNU/Linux 11 (bullseye) 5.10.0-10-arm64 containerd://1.5.8-k3s1
node-6-rpi4 Ready <none> 75d v1.22.5+k3s1 192.168.7.56 <none> Debian GNU/Linux 11 (bullseye) 5.10.0-10-arm64 containerd://1.5.8-k3s1
node-7-rpi4 Ready <none> 75d v1.22.5+k3s1 192.168.7.57 <none> Debian GNU/Linux 11 (bullseye) 5.10.0-10-arm64 containerd://1.5.8-k3s1
```
## Identifying Node Architecture
Having a multi-architecture cluster is only half of the solution though. We need a way to ensure that arm64 images run on an arm64 node, and amd64 images on an amd64 node. First we have to be able to identify which architecture each node is running. Luckily, the system adds the architecture as a label on each node.
```
$ kubectl get node node-1-rpi4 --show-labels
NAME STATUS ROLES AGE VERSION LABELS
node-1-rpi4 Ready control-plane,etcd,master 23h v1.21.5+k3s2 beta.kubernetes.io/arch=arm64,beta.kubernetes.io/instance-type=k3s,beta.kubernetes.io/os=linux,kubernetes.io/arch=arm64,kubernetes.io/hostname=node-1-rpi4,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=true,node-role.kubernetes.io/etcd=true,node-role.kubernetes.io/master=true,node.kubernetes.io/instance-type=k3s
-------------------------------------------
$ kubectl get node node-2-lxc --show-labels
NAME STATUS ROLES AGE VERSION LABELS
node-2-lxc Ready control-plane,etcd,master 23h v1.21.5+k3s2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=k3s,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-2-lxc,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=true,node-role.kubernetes.io/etcd=true,node-role.kubernetes.io/master=true,node.kubernetes.io/instance-type=k3s
```
## Node Affinity
The final piece of the puzzle is having a way of using the architecture label to direct deployment of images. This is done with a configuration option known as 'nodeAffinity'. In the deployment manifest, we can add a section to identify the type of node that we want to deploy the container images to.
```
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- amd64
```
Here's an example of a complete deployment manifest, showing where the 'nodeAffinity' section is placed.
```
$ cat website-wiki_deployment.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: website-wiki
spec:
selector:
matchLabels:
app: website-wiki
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
app: website-wiki
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- amd64
containers:
- name: website-wiki
image: m0wer/tiddlywiki
env:
- name: PUID
value: "1000"
- name: PGID
value: "1000"
- name: TZ
value: "America/Toronto"
- name: USERNAME
value: "radar231"
- name: PASSWORD
valueFrom:
secretKeyRef:
name: website-wiki-pass
key: WIKI_PASSWD
ports:
- containerPort: 8080
name: "website-wiki"
volumeMounts:
- name: website-wiki
mountPath: "/var/lib/tiddlywiki"
volumes:
- name: website-wiki
persistentVolumeClaim:
claimName: website-wiki-pvc
# EOF
```
## Conclusion
While I only make use of the architecture label, 'nodeAffinity' can be used against any label. Custom labels can be created as well, and these can also be used.
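For example, a custom label can be applied to a node with kubectl and then referenced in the same kind of matchExpressions block; the label key and value here are arbitrary examples;
```
# label key and value are arbitrary examples
$ kubectl label node node-4-lxc storage=ssd
$ kubectl get nodes -l storage=ssd
```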


@ -0,0 +1,153 @@
---
hide:
- navigation
created: 2023-04-18 17:42
updated: 2023-04-19 20:43
tags:
- Kubernetes
---
# NFS Persistent Storage for Kubernetes
## References
* Manual NFS PV Provisioning
* <https://kubernetes.io/docs/concepts/storage/volumes/#nfs>
* <https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs>
* Automated NFS PV Provisioning using nfs-subdir-external-provisioner
* <https://kubernetes.io/docs/concepts/storage/storage-classes/#nfs>
* <https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner>
* <https://git.radar231.com/radar231/k8s_nfs-provisioner>
## Introduction
Kubernetes is an incredible vehicle for automated deployment and management of containerized workloads. While the workloads are ephemeral, persistent storage is required to gain any real value from the applications deployed.
Kubernetes doesn't include any persistent storage other than a very basic local node storage called 'hostpath'. This is fine for a single node test or development environment, but for multi-node clusters or for production environments, an external persistent storage system is required.
## Manual NFS PV Provisioning
An NFS server provides a good solution for persistent storage for a homelab kubernetes cluster. Aside from an available NFS server (such as a local network NAS, for example), the only other requirement is to have NFS client support available on the cluster node hosts. Once these requirements are satisfied, it is a relatively simple task to create NFS backed persistent storage for use by a deployed application.
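On Debian or Ubuntu based nodes, for example, the NFS client support comes from the nfs-common package;
```
$ sudo apt install nfs-common
```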
### Manual NFS PV
The simplest way to make use of a local NFS server for persistent storage is the manual method. First you create the storage directory in an appropriate location in the NFS share. Then you create a PV (Persistent Volume) object that references your NFS server and the specified storage directory.
```
$ cat delinit_pv.yml
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: delinit-pv
labels:
name: delinit-pv
spec:
storageClassName: manual
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
mountOptions:
- hard
- nfsvers=4.0
nfs:
server: 192.168.20.11
path: "/volume1/k8s-storage/delfax/delinit"
# EOF
```
### Manual NFS PVC
The next step is to create a PVC (Persistent Volume Claim) object that binds to the previously created PV object. The PVC object provides the link to the persistent storage for the application.
```
$ cat delinit_pvc.yml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: delinit-pvc
labels:
app: delinit
spec:
accessModes:
- ReadWriteOnce
storageClassName: "manual"
resources:
requests:
storage: 1Gi
selector:
matchLabels:
name: delinit-pv
# EOF
```
The PVC object is then referenced in the application deployment manifest.
```
$ cat delinit_deployment.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: delinit
spec:
selector:
matchLabels:
app: delinit
replicas: 1
strategy:
type: RollingUpdate
template:
metadata:
labels:
app: delinit
spec:
containers:
- name: delinit
image: nginx
ports:
- containerPort: 80
name: "delinit"
volumeMounts:
- name: delinit
mountPath: "/usr/share/nginx/html"
volumes:
- name: delinit
persistentVolumeClaim:
claimName: delinit-pvc
# EOF
```
## Automatic NFS PV Provisioning
There are currently two external NFS provisioners. The one I use is the NFS Subdir External Provisioner; both the project and my homelab deployment of it are linked in the references section.
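For reference, deployment is typically done with the project's Helm chart; a hedged sketch, where the chart repository and value names should be checked against the project README, and the NFS server address and export path are placeholders based on the manual example above;
```
# NFS server and export path are placeholders
$ helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
$ helm install nfs-subdir-external-provisioner \
    nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=192.168.20.11 \
    --set nfs.path=/volume1/k8s-storage
```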
### Automatic NFS PVC
Once the NFS provisioner has been successfully deployed to the cluster, it is quite easy to make use of it for creation of persistent storage for deployed applications. The following is an example of a PVC manifest that will create a storage object which can then be referenced in an application deployment manifest.
```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-claim
# annotations:
# nfs.io/storage-path: "test-path" # not required, depending on whether this annotation was shown in the storage class description
spec:
storageClassName: nfs-client
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Mi
# EOF
```


@ -0,0 +1,63 @@
---
hide:
- navigation
created: 2021-06-10 14:43
updated: 2021-09-01 02:37
tags:
- LXD
---
# Remote LXD Management
## References
* <https://linuxcontainers.org/lxd/advanced-guide/#remote-servers>
* <https://linuxcontainers.org/lxd/advanced-guide/#add-remote-servers>
## Introduction
To allow for easier management of LXD containers on remote hosts, those remote hosts can be added as remotes on the local workstation.
## Set-up of Remote Host
```
$ lxc config set core.https_address "[::]"
$ lxc config set core.trust_password (some-password)
```
## Adding Remote Host
```
$ lxc remote add (some-name) <IP>
```
* This will prompt you to confirm the remote server fingerprint and then ask you for the password.
## Usage Examples
* listing LXD containers on remote host
```
$ lxc list starbug:
+-----------+---------+----------------------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-----------+---------+----------------------+------+-----------+-----------+
| delans | RUNNING | 192.168.x.xx (eth0) | | CONTAINER | 0 |
+-----------+---------+----------------------+------+-----------+-----------+
| lxdmosaic | RUNNING | 192.168.x.xx (eth0) | | CONTAINER | 0 |
+-----------+---------+----------------------+------+-----------+-----------+
| nbtwiki | RUNNING | 192.168.x.xxx (eth0) | | CONTAINER | 0 |
+-----------+---------+----------------------+------+-----------+-----------+
| pihole1 | RUNNING | 192.168.x.xx (eth0) | | CONTAINER | 0 |
+-----------+---------+----------------------+------+-----------+-----------+
```
* attaching to container on remote host
```
$ lxc exec starbug:delans bash
root@delans:~#
```


@ -0,0 +1,32 @@
---
hide:
- navigation
created: 2021-06-10 14:43
updated: 2021-09-01 02:36
tags:
- HomeLab
---
# Remote VM Management with Virt-Manager
## References
* <https://www.linux-kvm.org/page/Main_Page>
* <https://www.linux-kvm.org/page/Management_Tools>
* <https://libvirt.org/>
* <https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/virtualization/chap-virtualization-managing_guests_with_virsh>
* <https://libvirt.org/manpages/virsh.html>
* <https://virt-manager.org/>
## Introduction
My preferred VM virtualization system is [KVM (Kernel Virtual Machine)](https://www.linux-kvm.org/page/Main_Page), the virtualization system built into the Linux kernel. There are a large number of [utilities for managing KVM](https://www.linux-kvm.org/page/Management_Tools), but most of the time KVM is managed using the [libvirt library](https://libvirt.org/). This provides support for the command line utility named [virsh](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/virtualization/chap-virtualization-managing_guests_with_virsh), as well as the GUI utility named [virt-manager](https://virt-manager.org/).
## Using Virt-Manager to Manage Remote VM's
Most people that have used virt-manager are probably familiar with using it to manage VMs on the local host, but it can also be used to manage VMs on a remote host. This is accomplished by selecting the **"File -> Add Connection..."** menu within virt-manager. The resultant dialog provides a number of options. Usually leaving the **"Hypervisor"** value set to **"QEMU/KVM"** will be most appropriate. Select the **"Connect to remote host over SSH"** check box, and fill in the appropriate remote user and the applicable remote server.
There are a couple of points to note with respect to the user account selected. First, setting up ssh key authentication will allow for connection without a password prompt. Second, the user account on the remote host will likely need to be a member of at least the libvirt group on the remote host. Depending on how the remote host is set up, the remote user may also require membership in some of the other libvirt or kvm groups on the remote host as well.
Leaving the **"Autoconnect"** check box selected will mean that virt-manager will automatically connect to the remote host at launch.
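The same remote connection can also be exercised from the command line with virsh, which is a quick way to confirm that the SSH access and group memberships are correct; the user and host names below are placeholders;
```
# user and host names are placeholders
$ virsh -c qemu+ssh://user@remotehost/system list --all
```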

183
docs/posts/repositories.md Normal file

@ -0,0 +1,183 @@
---
hide:
- navigation
created: 2021-09-13 23:07
updated: 2023-09-08 11:42
tags:
- Uncategorized
---
# Repositories
## Introduction
This page contains a (hopefully mostly up to date :-)) list of my repositories at https://git.radar231.com, organized by repository type.
## Ansible Roles
1. [role_age](https://git.radar231.com/radar231/role_age){: target="_blank"}
1. [role_ansible](https://git.radar231.com/radar231/role_ansible){: target="_blank"}
1. [role_aur_builder](https://git.radar231.com/radar231/role_aur_builder){: target="_blank"}
1. [role_base_pkgs](https://git.radar231.com/radar231/role_base_pkgs){: target="_blank"}
1. [role_bash_mods](https://git.radar231.com/radar231/role_bash_mods){: target="_blank"}
1. [role_borg_backups](https://git.radar231.com/radar231/role_borg_backups_mods){: target="_blank"}
1. [role_chk_upgrades](https://git.radar231.com/radar231/role_chk_upgrades){: target="_blank"}
1. [role_create_reboot-required](https://git.radar231.com/radar231/role_create_reboot-required){: target="_blank"}
1. [role_create_user](https://git.radar231.com/radar231/role_create_user){: target="_blank"}
1. [role_docker](https://git.radar231.com/radar231/role_docker){: target="_blank"}
1. [role_du_backups](https://git.radar231.com/radar231/role_du_backups){: target="_blank"}
1. [role_gitconfig](https://git.radar231.com/radar231/role_gitconfig){: target="_blank"}
1. [role_i3_cfg](https://git.radar231.com/radar231/role_i3_cfg){: target="_blank"}
1. [role_k3s](https://git.radar231.com/radar231/role_k3s){: target="_blank"}
1. [role_k8s_dashy_deploy](https://git.radar231.com/radar231/role_k8s_dashy_deploy){: target="_blank"}
1. [role_k8s_ddclient_deploy](https://git.radar231.com/radar231/role_k8s_ddclient_deploy){: target="_blank"}
1. [role_k8s_delinit_deploy](https://git.radar231.com/radar231/role_k8s_delinit_deploy){: target="_blank"}
1. [role_k8s_flexget_deploy](https://git.radar231.com/radar231/role_k8s_flexget_deploy){: target="_blank"}
1. [role_k8s_freshrss_deploy](https://git.radar231.com/radar231/role_k8s_freshrss_deploy){: target="_blank"}
1. [role_k8s_haproxy_deploy](https://git.radar231.com/radar231/role_k8s_haproxy_deploy){: target="_blank"}
1. [role_k8s_heimdall_deploy](https://git.radar231.com/radar231/role_k8s_heimdall_deploy){: target="_blank"}
1. [role_k8s_homer_deploy](https://git.radar231.com/radar231/role_k8s_homer_deploy){: target="_blank"}
1. [role_k8s_home-assistant_deploy](https://git.radar231.com/radar231/role_k8s_home-assistant_deploy){: target="_blank"}
1. [role_k8s_journal-wiki_deploy](https://git.radar231.com/radar231/role_k8s_journal-wiki_deploy){: target="_blank"}
1. [role_k8s_linkding_deploy](https://git.radar231.com/radar231/role_k8s_linkding_deploy){: target="_blank"}
1. [role_k8s_maxwaldorf-guacamole_deploy](https://git.radar231.com/radar231/role_k8s_maxwaldorf-guacamole_deploy){: target="_blank"}
1. [role_k8s_metallb_deploy](https://git.radar231.com/radar231/role_k8s_metallb_deploy){: target="_blank"}
1. [role_k8s_mosquitto_deploy](https://git.radar231.com/radar231/role_k8s_mosquitto_deploy){: target="_blank"}
1. [role_k8s_motioneye_deploy](https://git.radar231.com/radar231/role_k8s_motioneye_deploy){: target="_blank"}
1. [role_k8s_nagios_deploy](https://git.radar231.com/radar231/role_k8s_nagios_deploy){: target="_blank"}
1. [role_k8s_navidrome_deploy](https://git.radar231.com/radar231/role_k8s_navidrome_deploy){: target="_blank"}
1. [role_k8s_nfs-provisioner_deploy](https://git.radar231.com/radar231/role_k8s_nfs-provisioner_deploy){: target="_blank"}
1. [role_k8s_notes-wiki_deploy](https://git.radar231.com/radar231/role_k8s_notes-wiki_deploy){: target="_blank"}
1. [role_k8s_pihole_deploy](https://git.radar231.com/radar231/role_k8s_pihole_deploy){: target="_blank"}
1. [role_k8s_signal-api_deploy](https://git.radar231.com/radar231/role_k8s_signal-api_deploy){: target="_blank"}
1. [role_k8s_transmission-openvpn_deploy](https://git.radar231.com/radar231/role_k8s_transmission-openvpn_deploy){: target="_blank"}
1. [role_k8s_uptime-kuma_deploy](https://git.radar231.com/radar231/role_k8s_uptime-kuma_deploy){: target="_blank"}
1. [role_k8s_vaultwarden_deploy](https://git.radar231.com/radar231/role_k8s_vaultwarden_deploy){: target="_blank"}
1. [role_k8s_website_deploy](https://git.radar231.com/radar231/role_k8s_website_deploy){: target="_blank"}
1. [role_k8s_website-wiki_deploy](https://git.radar231.com/radar231/role_k8s_website-wiki_deploy){: target="_blank"}
1. [role_k8s_wfh-wiki_deploy](https://git.radar231.com/radar231/role_k8s_wfh-wiki_deploy){: target="_blank"}
1. [role_kubectl](https://git.radar231.com/radar231/role_kubectl){: target="_blank"}
1. [role_lxc_deploy](https://git.radar231.com/radar231/role_lxc_deploy){: target="_blank"}
1. [role_lxdhost](https://git.radar231.com/radar231/role_lxdhost){: target="_blank"}
1. [role_microk8s](https://git.radar231.com/radar231/role_microk8s){: target="_blank"}
1. [role_monitorix](https://git.radar231.com/radar231/role_monitorix){: target="_blank"}
1. [role_nagios_agent](https://git.radar231.com/radar231/role_nagios_agent){: target="_blank"}
1. [role_pfetch](https://git.radar231.com/radar231/role_pfetch){: target="_blank"}
1. [role_reboot](https://git.radar231.com/radar231/role_reboot){: target="_blank"}
1. [role_rem_base_pkgs](https://git.radar231.com/radar231/role_rem_base_pkgs){: target="_blank"}
1. [role_sops](https://git.radar231.com/radar231/role_sops){: target="_blank"}
1. [role_sudoers](https://git.radar231.com/radar231/role_sudoers){: target="_blank"}
1. [role_update_ansi_auth](https://git.radar231.com/radar231/role_update_ansi_auth){: target="_blank"}
1. [role_update_cache](https://git.radar231.com/radar231/role_update_cache){: target="_blank"}
1. [role_upgrade_pkgs](https://git.radar231.com/radar231/role_upgrade_pkgs){: target="_blank"}
1. [role_vim_setup](https://git.radar231.com/radar231/role_vim_setup){: target="_blank"}
## Ansible Playbooks
1. [playbook_ansible](https://git.radar231.com/radar231/playbook_ansible){: target="_blank"}
1. [playbook_del-updates](https://git.radar231.com/radar231/playbook_del-updates){: target="_blank"}
1. [playbook_deploy-host](https://git.radar231.com/radar231/playbook_deploy-host){: target="_blank"}
1. [playbook_docker](https://git.radar231.com/radar231/playbook_docker){: target="_blank"}
1. [playbook_dotfiles](https://git.radar231.com/radar231/playbook_dotfiles){: target="_blank"}
1. [playbook_du_backups](https://git.radar231.com/radar231/playbook_du_backups){: target="_blank"}
1. [playbook_k3s-cluster](https://git.radar231.com/radar231/playbook_k3s-cluster){: target="_blank"}
1. [playbook_k8s-deployment](https://git.radar231.com/radar231/playbook_k8s-deployment){: target="_blank"}
1. [playbook_kubectl](https://git.radar231.com/radar231/playbook_kubectl){: target="_blank"}
1. [playbook_microk8s-cluster](https://git.radar231.com/radar231/playbook_microk8s-cluster){: target="_blank"}
1. [playbook_misc-utils](https://git.radar231.com/radar231/playbook_misc-utils){: target="_blank"}
1. [playbook_monitorix](https://git.radar231.com/radar231/playbook_monitorix){: target="_blank"}
1. [playbook_nagios_agent](https://git.radar231.com/radar231/playbook_nagios_agent){: target="_blank"}
1. [playbook_pfetch](https://git.radar231.com/radar231/playbook_pfetch){: target="_blank"}
1. [playbook_sops-age](https://git.radar231.com/radar231/playbook_sops-age){: target="_blank"}
1. [playbook_vim_setup](https://git.radar231.com/radar231/playbook_vim_setup){: target="_blank"}
## Ansible Misc
1. [ansible_dev_env](https://git.radar231.com/radar231/ansible_dev_env){: target="_blank"}
## Docker Deployments
1. [docker_dhcpd](https://git.radar231.com/radar231/docker_dhcpd){: target="_blank"}
1. [docker_gitea](https://git.radar231.com/radar231/docker_gitea){: target="_blank"}
1. [docker_haproxy](https://git.radar231.com/radar231/docker_haproxy){: target="_blank"}
1. [docker_jellyfin](https://git.radar231.com/radar231/docker_jellyfin){: target="_blank"}
1. [docker_linkding](https://git.radar231.com/radar231/docker_linkding){: target="_blank"}
1. [docker_lms](https://git.radar231.com/radar231/docker_lms){: target="_blank"}
1. [docker_lxdware](https://git.radar231.com/radar231/docker_lxdware){: target="_blank"}
1. [docker_mariadb](https://git.radar231.com/radar231/docker_mariadb){: target="_blank"}
1. [docker_nextcloud](https://git.radar231.com/radar231/docker_nextcloud){: target="_blank"}
1. [docker_nfs-server](https://git.radar231.com/radar231/docker_nfs-server){: target="_blank"}
1. [docker_nginx-delfax.net](https://git.radar231.com/radar231/docker_nginx-delfax.net){: target="_blank"}
1. [docker_nginx-go.delfax.net](https://git.radar231.com/radar231/docker_nginx-go.delfax.net){: target="_blank"}
1. [docker_nginx-jmc-delfax-net](https://git.radar231.com/radar231/docker_nginx-jmc-delfax-net){: target="_blank"}
1. [docker_nginx-proxy-manager](https://git.radar231.com/radar231/docker_nginx-proxy-manager){: target="_blank"}
1. [docker_nginx-radar231.com](https://git.radar231.com/radar231/docker_nginx-radar231.com){: target="_blank"}
1. [docker_npm-lan_nginx-proxy-manager](https://git.radar231.com/radar231/docker_npm-lan_nginx-proxy-manager){: target="_blank"}
1. [docker_ntopng](https://git.radar231.com/radar231/docker_ntopng){: target="_blank"}
1. [docker_pihole1](https://git.radar231.com/radar231/docker_pihole1){: target="_blank"}
1. [docker_pihole2](https://git.radar231.com/radar231/docker_pihole2){: target="_blank"}
1. [docker_plex](https://git.radar231.com/radar231/docker_plex){: target="_blank"}
1. [docker_portainer](https://git.radar231.com/radar231/docker_portainer){: target="_blank"}
1. [docker_portainer_agent](https://git.radar231.com/radar231/docker_portainer_agent){: target="_blank"}
1. [docker_portainer_agent_npm](https://git.radar231.com/radar231/docker_portainer_agent_npm){: target="_blank"}
1. [docker_prometheus-grafana](https://git.radar231.com/radar231/docker_prometheus-grafana){: target="_blank"}
1. [docker_rpi_monitor](https://git.radar231.com/radar231/docker_rpi_monitor){: target="_blank"}
1. [docker_vaultwarden](https://git.radar231.com/radar231/docker_vaultwarden){: target="_blank"}
1. [docker_zap2xml](https://git.radar231.com/radar231/docker_zap2xml){: target="_blank"}
## Kubernetes Deployments
1. [k8s_cleanup-replicasets](https://git.radar231.com/radar231/k8s_cleanup-replicasets){: target="_blank"}
1. [k8s_dashy](https://git.radar231.com/radar231/k8s_dashy){: target="_blank"}
1. [k8s_ddclient](https://git.radar231.com/radar231/k8s_ddclient){: target="_blank"}
1. [k8s_delinit](https://git.radar231.com/radar231/k8s_delinit){: target="_blank"}
1. [k8s_deployment_restart_utility](https://git.radar231.com/radar231/k8s_deployment_restart_utility){: target="_blank"}
1. [k8s_flexget](https://git.radar231.com/radar231/k8s_flexget){: target="_blank"}
1. [k8s_freshrss](https://git.radar231.com/radar231/k8s_freshrss){: target="_blank"}
1. [k8s_heimdall](https://git.radar231.com/radar231/k8s_heimdall){: target="_blank"}
1. [k8s_home-assistant](https://git.radar231.com/radar231/k8s_home-assistant){: target="_blank"}
1. [k8s_homer](https://git.radar231.com/radar231/k8s_homer){: target="_blank"}
1. [k8s_journal-wiki](https://git.radar231.com/radar231/k8s_journal-wiki){: target="_blank"}
1. [k8s_kured](https://git.radar231.com/radar231/k8s_kured){: target="_blank"}
1. [k8s_linkding](https://git.radar231.com/radar231/k8s_linkding){: target="_blank"}
1. [k8s_maxwaldorf-guacamole](https://git.radar231.com/radar231/k8s_maxwaldorf-guacamole){: target="_blank"}
1. [k8s_metallb](https://git.radar231.com/radar231/k8s_metallb){: target="_blank"}
1. [k8s_mosquitto](https://git.radar231.com/radar231/k8s_mosquitto){: target="_blank"}
1. [k8s_motioneye](https://git.radar231.com/radar231/k8s_motioneye){: target="_blank"}
1. [k8s_nagios](https://git.radar231.com/radar231/k8s_nagios){: target="_blank"}
1. [k8s_navidrome](https://git.radar231.com/radar231/k8s_navidrome){: target="_blank"}
1. [k8s_nbtwiki](https://git.radar231.com/radar231/nbtwiki){: target="_blank"}
1. [k8s_nfs-provisioner](https://git.radar231.com/radar231/k8s_nfs-provisioner){: target="_blank"}
1. [k8s_notes-wiki](https://git.radar231.com/radar231/k8s_notes-wiki){: target="_blank"}
1. [k8s_pihole-1](https://git.radar231.com/radar231/k8s_pihole-1){: target="_blank"}
1. [k8s_pihole-2](https://git.radar231.com/radar231/k8s_pihole-2){: target="_blank"}
1. [k8s_signal-api](https://git.radar231.com/radar231/k8s_signal-api){: target="_blank"}
1. [k8s_transmission-openvpn](https://git.radar231.com/radar231/k8s_transmission-openvpn){: target="_blank"}
1. [k8s_uptime-kuma](https://git.radar231.com/radar231/k8s_uptime-kuma){: target="_blank"}
1. [k8s_vaultwarden](https://git.radar231.com/radar231/k8s_vaultwarden){: target="_blank"}
1. [k8s_website](https://git.radar231.com/radar231/k8s_website){: target="_blank"}
1. [k8s_website-wiki](https://git.radar231.com/radar231/k8s_website-wiki){: target="_blank"}
1. [k8s_webtop](https://git.radar231.com/radar231/k8s_webtop){: target="_blank"}
1. [k8s_wfh-wiki](https://git.radar231.com/radar231/k8s_wfh-wiki){: target="_blank"}
## Scripts & Utilities
1. [borg_backups](https://git.radar231.com/radar231/borg_backups){: target="_blank"}
1. [du_backups](https://git.radar231.com/radar231/du_backups){: target="_blank"}
1. [laptop_display_ctrl](https://git.radar231.com/radar231/laptop_display_ctrl){: target="_blank"}
1. [mk_nb_pages](https://git.radar231.com/radar231/mk_nb_pages){: target="_blank"}
1. [mkwebstats](https://git.radar231.com/radar231/mkwebstats){: target="_blank"}
1. [pihole-dns-sync](https://git.radar231.com/radar231/pihole-dns-sync){: target="_blank"}
1. [remote_backups](https://git.radar231.com/radar231/remote_backups){: target="_blank"}
1. [rpi_led_control](https://git.radar231.com/radar231/rpi_led_control){: target="_blank"}
1. [static_website](https://git.radar231.com/radar231/static_website){: target="_blank"}
1. [zn](https://git.radar231.com/radar231/zn){: target="_blank"}
## Configuration Backups
1. [delinit_files](https://git.radar231.com/radar231/delinit_files){: target="_blank"}
1. [homer_configs](https://git.radar231.com/radar231/homer_configs){: target="_blank"}
1. [lxd_profiles](https://git.radar231.com/radar231/lxd_profiles){: target="_blank"}
1. [nagios_files](https://git.radar231.com/radar231/nagios_files){: target="_blank"}
1. [nut_files](https://git.radar231.com/radar231/nut_files){: target="_blank"}

View File

@ -0,0 +1,64 @@
---
hide:
- navigation
created: 2021-11-15 13:35
updated: 2021-11-15 15:42
tags:
- Tiddlywiki
---
# Revisiting the TiddlyWiki Journal
## References
* A TiddlyWiki Based Journal
## Introduction
In a previous post I described how I used the journalling feature of TiddlyWiki to create a journal for tracking personal hobby activities as well as tasks around the house. Since that post I've made a number of changes to my journalling system to improve the entry and retrieval of journal entries. I'll outline the changes I've made and the thought process behind these changes. Please refer to the original post for the parts of the journalling system that I didn't change.
## Journal Entry Changes
The first change I made was to make the journal completely "tag based", meaning that all organization of the journal entries, and the subsequent retrieval of entries, is based on the tags applied to each journal entry. The title for the journal entry is now just a date-time string (YYYYMMDD-HHMMSS), and the actual title of the journal entry is entered into the "caption" metadata field. Tags are applied as required to categorize the post (all journal posts get the "Journal" tag automatically).
[![](../imgs/journal_entry.png)](../imgs/journal_entry.png){: target="_blank"}
In the "ControlPanel" page, I've set the "Title of new journal tiddlers" field to "YYYY0MM0DD-0hh0mm0ss".
I've also set the "Default focus field for new tiddlers" to "fields". This acts as a reminder of sorts to enter the journal title into the "caption" field.
In addition, I've added the following to the "Text for new journal tiddlers" field:
```
!! <$link><$view field="caption"></$view></$link>
```
This adds the "caption" field as a header at the top of the journal entry. It also makes it a link, which will be required for the tag or month based dynamic index pages.
[![](../imgs/journal_entry_2.png)](../imgs/journal_entry_2.png){: target="_blank"}
## Tag Index Pages
The main entry point to the journal is a 'tag cloud' page. I've entered this page name in the "Default tiddlers" field in the ControlPanel page, which makes the page open any time I start up the journal wiki or refresh the wiki page.
[![](../imgs/tag_cloud_1.png)](../imgs/tag_cloud_1.png){: target="_blank"}
[![](../imgs/tag_cloud_2.png)](../imgs/tag_cloud_2.png){: target="_blank"}
Each tag has a clickable menu which lists all of the journal entries with that particular tag applied, as well as a link to a 'tag' page, which provides a similar list. The tags also scale based on the number of journal entries with that tag applied to them.
The only issue with the tag menu is that it uses the title field for the list, which is in the 'date-time' format. The 'tag' page, however, has a list of all of the journal entries with that tag applied, listed by the 'caption' field.
[![](../imgs/tag_page.png)](../imgs/tag_page.png){: target="_blank"}
## Monthly Index Pages
I've kept the concept of the monthly index pages from the previous journal iteration. The contents of the monthly index page have changed to accommodate the changes to the journal, though.
[![](../imgs/monthly_index.png)](../imgs/monthly_index.png){: target="_blank"}
This will create a list of journal pages that have the specified prefix in the title (ie, "202111" for November 2021), and will include the content of each journal entry via transclusion. Both the title and caption for each journal entry are links that will open that specific journal entry page.
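As an illustrative sketch only (not the exact contents of my index tiddler), a monthly index along these lines can be built with a $list filter on the title prefix, transcluding each matching journal entry; the "202111" prefix is just the November 2021 example from above:
```
<$list filter="[tag[Journal]prefix[202111]sort[title]]">
<!-- one block per journal entry for the month; title and caption both link to the entry -->
!! <$link><$view field="title"/></$link> - <$link><$view field="caption"/></$link>
<$transclude mode="block"/>
</$list>
```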
## Work Flow
Using the journal is pretty much the same as for the previous iteration. The only differences are that the tiddler title field is now formatted as a date-time string, and the actual title of the entry is entered in the "caption" field. Retrieval of information is accomplished in the same manner, either by following a tag to a specific entry, or via the search box on the sidebar.

View File

@ -0,0 +1,124 @@
---
hide:
- navigation
created: 2021-09-04 14:42
updated: 2021-11-15 14:16
tags:
- Tiddlywiki
---
# RSS Feed for Tiddlywiki SSG Website
## Links
* <https://techlifeweb.com/tiddlywiki/tw5tribalknowledge.html#RSS%20and%20Atom%20Feeds%20for%20your%20TiddlyWiki>
* <https://techlifeweb.com/tiddlywiki/tw5tribalknowledge.html#Atom%20Feed>
* <https://techlifeweb.com/tiddlywiki/tw5tribalknowledge.html#RSS%20Feed>
* <https://techlifeweb.com/tiddlywiki/tw5tribalknowledge.html#%24%3A%2Fdiscoverfeed>
## Introduction
This post describes a feature extension to a Tiddlywiki generated static web site.
If your web site will have periodic posts (such as a blog or similar style of site), then an RSS feed would be a handy addition. This would allow readers to subscribe to the RSS feed using an RSS reader, and be notified when new posts are put up on the website, or existing posts are updated.
## New Tiddlers
The following new tiddlers need to be created to support RSS and Atom feeds.
* RSS Feed
* Atom Feed
* $:/discoverfeed
The RSS and Atom Feed tiddlers are dynamic pages, and handle creation of the RSS and Atom feeds. The discoverfeed tiddler allows addition of the RSS and Atom feed URLs to the site page headers. This allows RSS readers to auto-discover the correct URL to retrieve the RSS or Atom XML files. Unfortunately this content isn't currently making it through the static web page generation, so this information isn't showing up in the static web site pages. This isn't too big a deal though, as we can specify the correct RSS and Atom URLs somewhere on the website, such as an 'About' page, for example.
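For reference, feed auto-discovery is just a pair of link elements in the page head, so something like the following (with the placeholder domain adjusted to suit) is what would need to end up in the generated pages, or be published somewhere like an 'About' page instead:
```
<link rel="alternate" type="application/rss+xml"  title="RSS Feed"  href="https://example.com/rss.xml">
<link rel="alternate" type="application/atom+xml" title="Atom Feed" href="https://example.com/atom.xml">
```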
### RSS Feed tiddler contents
```
\define MyFilter(MyTag,domain)
[tag[$(MyTag)$]!sort[created]limit[100]]
\end
&#60;?xml version="1.0" encoding="UTF-8" ?&#62;<br />
&#60;rss version="2.0"&#62;<br />
&#60;channel&#62;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&#60;title&#62;
{{$:/SiteTitle}}
&#60;/title&#62;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&#60;link&#62;{{$:/discoverfeed!!serverdomain}}&#60;/link&#62;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&#60;description&#62;
{{$:/SiteSubtitle}}
&#60;/description&#62;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&#60;lastBuildDate&#62;<$list filter="[!is[system]get[modified]!prefix[NaN]!sort[]limit[1]]" variable=modified><$list filter="[!is[system]modified<modified>]"><$view field="modified" format=date template="[UTC]ddd, 0DD mmm YYYY 0hh:0mm:0ss GMT"/></$list></$list>&#60;/lastBuildDate&#62;<br />
<$set name="MyTag" value=Feed>
<$set name="domain" value={{$:/discoverfeed!!serverdomain}}>
<$list filter=<<MyFilter>>>
&#60;item&#62;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&#60;title&#62;
<$view field="title"/>
&#60;/title&#62;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&#60;link&#62;<<domain>><$view field="title" format="doubleurlencoded"/>.html&#60;/link&#62;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&#60;pubDate&#62;<$view field="modified" format=date template="[UTC]ddd, 0DD mmm YYYY 0hh:0mm:0ss GMT"/>&#60;/pubDate&#62;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&#60;description&#62;&#60;![CDATA[<$view field="text" format=htmlwikified/>]]&#62;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&#60;/description&#62;<br />&#60;/item&#62;<br />
</$list></$set></$set>
&#60;/channel&#62;<br />&#60;/rss&#62;<br />
```
### Atom Feed tiddler contents
```
\define MyFilter(MyTag,domain)
[tag[$(MyTag)$]!sort[created]limit[100]]
\end
&#60;?xml version="1.0" encoding="UTF-8"?&#62;<br />
&#60;feed xmlns="http://www.w3.org/2005/Atom"&#62;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&#60;title&#62;
{{$:/SiteTitle}}
&#60;/title&#62;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&#60;link href="{{$:/discoverfeed!!serverdomain}}" /&#62;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&#60;updated&#62;<$list filter="[!is[system]get[modified]!prefix[NaN]!sort[]limit[1]]" variable=modified><$list filter="[!is[system]modified<modified>]"><$view field="modified" format=date template="[UTC]YYYY-0MM-0DDT0hh:0mm:0ssZ"/></$list></$list>&#60;/updated&#62;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&#60;author&#62;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&#60;name&#62;
{{$:/status/UserName}}
&#60;/name&#62;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&#60;/author&#62;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&#60;id&#62;{{$:/discoverfeed!!serverdomain}}&#60;/id&#62;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&#60;link rel="self" type="application/atom+xml" href="{{$:/discoverfeed!!atomfile}}" /&#62;<br />
<$set name="MyTag" value=Feed>
<$set name="domain" value={{$:/discoverfeed!!serverdomain}}>
<$list filter=<<MyFilter>>>
&#60;entry&#62;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&#60;title&#62;
<$view field="title"/>
&#60;/title&#62;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&#60;link href="<<domain>>#<$view field="title" format="urlencoded"/>"/&#62;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&#60;id&#62;<<domain>>#<$view field="title" format="urlencoded"/>&#60;/id&#62;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&#60;updated&#62;<$view field="modified" format=date template="[UTC]YYYY-0MM-0DDT0hh:0mm:0ssZ"/>&#60;/updated&#62;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&#60;published&#62;<$view field="created" format=date template="[UTC]YYYY-0MM-0DDT0hh:0mm:0ssZ"/>&#60;/published&#62;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&#60;content&#62;&#60;![CDATA[<$view field="text" format=htmlwikified/>]]&#62;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&#60;/content&#62;<br />&#60;/entry&#62;<br />
</$list></$set></$set>
&#60;/feed&#62;<br />
```
### discoverfeed tiddler contents
[![](../imgs/discoverfeed.png)](../imgs/discoverfeed.png){: target="_blank"}
## Exporting RSS and Atom Feeds to Static Site
The RSS and Atom Feed tiddlers are exported to the rss.xml and atom.xml files using the "rendertiddler" tiddlywiki command, which is run once for each file. This means our "build_static_website.sh" (as detailed in Tiddlywiki Static Site Generation) now looks like this:
```
$ cat build_static_website.sh
#!/bin/bash
wikiPath='/mnt/k8s-storage/wikis/website-wiki/mywiki'
devWebsite='/mnt/k8s-storage/delfax/website'
cd ${wikiPath}
sudo tiddlywiki --rendertiddlers [!is[system]!tag[Draft]] $:/rdr231/templates/static.tiddler.html static text/plain --rendertiddler $:/rdr231/templates/static.template.css static/static.css text/plain
sudo tiddlywiki --rendertiddler "RSS Feed" static/rss.xml text/plain ""
sudo tiddlywiki --rendertiddler "Atom Feed" static/atom.xml text/plain ""
sudo rsync -avv --delete ${wikiPath}/output/static/ ${devWebsite}/
sudo chmod -R o+r ${devWebsite}/*
```

View File

@ -0,0 +1,70 @@
---
hide:
- navigation
created: 2021-06-05 12:24
updated: 2021-09-01 02:37
tags:
- HomeLab
---
# Simple Git Server for a Home Lab
## References
* <https://www.git-scm.com>
* <https://www.git-scm.com/docs/git-daemon>
* <https://gitea.io/>
* <https://gogs.io/>
## Introduction
There are a number of different git servers available, including a built-in daemon (git-daemon). While implementations like gitea or gogs are quite suitable for a home lab environment, sometimes even those are more complex than what is really required. The simplest git server is nothing more than an account on an ssh-reachable server hosting the git repos.
## Server Account Setup
* Create an account named 'git' on a suitable server
* Set up SSH for the 'git' account, and add the SSH public keys of all users that will be accessing the hosted repos to the "~/.ssh/authorized_keys" file of the 'git' account
* Create a directory (ie, named 'repos') in the 'git' user home directory to hold the hosted git repos (see the command sketch below)
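As a rough sketch only (assuming a Debian/Ubuntu style server; the account name, key file name and paths are just the examples used above), the account setup might look something like this:
```
# create the 'git' account with no password (ssh keys only)
$ sudo adduser --disabled-password --gecos "" git

# create the .ssh and repos directories
$ sudo -u git mkdir -p /home/git/.ssh /home/git/repos
$ sudo -u git chmod 700 /home/git/.ssh

# add each user's ssh public key to the 'git' account
$ cat alice_id_ed25519.pub | sudo -u git tee -a /home/git/.ssh/authorized_keys
$ sudo -u git chmod 600 /home/git/.ssh/authorized_keys
```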
## Initial Git Repo Setup
* As the 'git' user, create a directory named for the new repo in the 'repos' directory, and initialize it as an empty git repo
```
$ mkdir repos/<new-repo>
$ cd repos/<new-repo>
$ git init --bare
```
* On the dev workstation, initialize the code directory as a git repo. Add the existing files and do the initial checkin
```
$ git init
$ git add .
$ git commit -m "initial checkin"
```
* Add the location on the git server as the remote repo for the new git repo. Perform the initial push of the files to the git server
```
$ git remote add origin ssh://git@git.lan/home/git/repos/<new-repo>
$ git push --set-upstream origin master
```
## Using the new Git Repo
* Further usage of the git repo will only require a 'git push'
```
(make edits, add files, etc)
$ git add .
$ git commit -m "<commit message>"
$ git push
```
* To perform a checkout of the repo to a new server/location
```
$ git clone ssh://git@git.lan/home/git/repos/<new-repo>
```

View File

@ -0,0 +1,59 @@
---
hide:
- navigation
created: 2021-06-06 16:12
updated: 2021-09-04 16:28
tags:
- Tiddlywiki
---
# Tiddlywiki Static Site Generation
## References
* <https://tiddlywiki.com/>
* <https://tiddlywiki.com/static/Generating%2520Static%2520Sites%2520with%2520TiddlyWiki.html>
* <https://nesslabs.com/tiddlywiki-static-website-generator>
* <https://www.didaxy.com/exporting-static-sites-from-tiddlywiki>
## Introduction
For a lightweight, information-holding website, a [Static Site Generator (SSG)](https://en.wikipedia.org/wiki/Static_web_page) is the way to go (IMHO). There are a lot of great SSGs, such as [Hugo](https://gohugo.io/) and [Jekyll](https://jekyllrb.com/). However, as I'm a heavy user of [Tiddlywiki](https://tiddlywiki.com/) for information management, it only made sense to use the [SSG that is built into Tiddlywiki](https://tiddlywiki.com/static/Generating%2520Static%2520Sites%2520with%2520TiddlyWiki.html).
## Implementation
I won't go into the details of using the Tiddlywiki SSG here; that is well covered in the reference links provided. However, I will provide a bit of information about how I implemented my workflow, and provide the shell script I use to build the static site web pages and deploy the pages to my internal development web server. I'll also provide the shell script that I use for deploying the web pages from the dev web server to the production web server.
I use Kubernetes in my home lab, so both the source tiddlywiki server and the development web server are hosted in containers on my cluster.
## Static page build and deployment to dev server
```
$ cat build_static_website.sh
#!/bin/bash
wikiPath='/mnt/k8s-storage/wikis/website-wiki/mywiki'
devWebsite='/mnt/k8s-storage/delfax/website'
cd ${wikiPath}
sudo tiddlywiki --rendertiddlers [!is[system]!tag[Draft]] $:/rdr231/templates/static.tiddler.html static text/plain --rendertiddler $:/rdr231/templates/static.template.css static/static.css text/plain
sudo rsync -avv --delete ${wikiPath}/output/static/ ${devWebsite}/
sudo chmod -R o+r ${devWebsite}/*
```
## Deployment to prod server
```
$ cat deploy_static_website.sh
#!/bin/bash
devWebsite='/mnt/k8s-storage/delfax/website'
sitePath='/home/rmorrow/docker/docker_nginx-radar231.com/radar231.com'
siteFQDN='radar231.com'
rsync -avv --delete -e "ssh -p 50022" ${devWebsite}/ ${siteFQDN}:${sitePath}/html/
ssh -p 50022 ${siteFQDN} cp -r ${sitePath}/images ${sitePath}/html/
```

View File

@ -0,0 +1,47 @@
---
hide:
- navigation
created: 2022-05-28 01:45
updated: 2022-05-28 04:06
tags:
- LXD
---
# Use Cases for LXD
## References
* <https://discuss.linuxcontainers.org/t/community-spotlight-were-looking-for-lxd-examples-to-share/14124>
* <https://radar231.com/Homelab.html>
## Introduction
In response to the request for LXD usage examples posted to the Linux Containers forum, I've decided to describe a couple of my personal uses of LXD.
First, a bit of background information. I've recently retired from a 41-year career in the Canadian Public Service, where I worked in a variety of Electronics Engineering and Information Technology positions. My last position was as a systems analyst, designing and implementing systems integration solutions. My last project was to develop a system to act as a communications bridge between two very different legacy systems.
Since retirement I've filled my time by setting up what I consider a fairly modest but quite effective home lab environment. A post describing my home lab can be found in the references section of this post.
Both of these examples leverage LXD extensively.
## Communications Bridge
While the production deployment of the final system didn't utilize LXD, the design and development of the solution were performed exclusively on LXD containers. I had been using LXD containers for many years, both as development systems and for testing code and configurations on a variety of Linux distributions. In this particular case, the final system was going to end up as a multi-channel meshed system consisting of many physical servers. Each channel consisted of a number of servers and firewalls, each performing a function of protocol translation, data routing or security isolation. The channels were configured in a mesh interconnection, providing redundancy and fault tolerance, with the capability to detect node failures and modify the routing to bypass the failures.
As can be seen from my description, this was going to be a fairly complex system, and would be a challenge both to develop and to test, initially and on an ongoing basis. LXD containers provided a perfect development platform for this. I had a custom deployment script that would spin up more than 15 containers from a custom local image, set up the isolated networks that provided the interconnection at various levels within the system, deploy the applicable code to each node, and finally reboot all the containers to bring the development system into operation.
Once done with a development session, another script would shut down and remove all of the containers and tear down the virtual networks. Deployment of the development environment would only take a few minutes, and the cleanup and removal of the environment was complete within another minute or so.
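As a very rough sketch of the general shape of those two scripts (the image alias, network settings and node names here are made up purely for illustration, and the real scripts did considerably more):
```
## deploy.sh - spin up an isolated development mesh from a custom local image
lxc network create devnet0 ipv4.address=10.90.0.1/24 ipv4.nat=true ipv6.address=none
for i in $(seq -w 1 15); do
    lxc launch local:devimage "node${i}" --network devnet0   # custom local image alias
    lxc file push -r ./build "node${i}/opt/"                 # push this node's code
done
for i in $(seq -w 1 15); do
    lxc restart "node${i}"                                   # bring the system up clean
done

## teardown.sh - shut down and remove the whole environment
for i in $(seq -w 1 15); do
    lxc delete --force "node${i}"
done
lxc network delete devnet0
```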
While a similar environment could have been created using something like terraform to manage virtual machines, it would have been many orders of magnitude slower to deploy than a set of LXD containers, and would have ended up using a much larger amount of system resources. In fact, the whole LXD development environment was actually deployed within a VM running LXD (albeit, a rather beefy VM :-)).
## Home Labs and LXD
After having used LXD for so many years at work, it only made sense to go all in with LXD for my home lab. While I have a number of virtualization servers in my home lab, all production workloads are containerized, and run either on my kubernetes cluster or on docker encapsulated within an LXD container. I do have KVM/QEMU via libvirt running, but this is mainly used for desktop VMs that require sound, since the lack of virtio sound card emulation makes sound an issue for LXD VMs. I do try to use LXD VMs for any server or test environment that doesn't require sound though.
As can be seen from the following screen capture, I have a number of LXD containers running. Some are acting as X86 nodes for my kubernetes cluster. There are currently two running as docker aggregation servers, where I run containerized applications that won't run correctly on the k8s cluster. I have a development system where I do my coding as well as ansible-based management of the home lab servers. I also have one running Nginx Proxy Manager, which manages internal access to applications running either in the kubernetes cluster or in docker containers within the home lab.
[![LXD and Libvirt](../imgs/LXD-Libvirt.png){: style="height:25%;width:25%"}](../imgs/LXD-Libvirt.png){: target="_blank"}
## Conclusion
These two examples are fairly small in scope, and barely scratch the surface of what LXD is capable of. They do illustrate both development/testing and production usage of LXD, though. Moving forward I see no reason to stop using LXD for both containers and VMs. I have a few LXD related postings on my blog now, and there will probably be more added as I continue to work with LXD.

77
mkdocs.yml Normal file
View File

@ -0,0 +1,77 @@
site_name: radar231
site_description: radar231's odd n sods
site_url: https://radar231.com
theme:
name: material
custom_dir: overrides
logo: Matrix_Tux.png
favicon: Matrix_Tux.png
features:
- navigation.instant
- navigation.tabs
- navigation.tabs.sticky
- navigation.top
language: en
palette:
- scheme: default
toggle:
icon: material/weather-night
name: Switch to dark mode
primary: blue
accent: purple
- scheme: slate
toggle:
icon: material/weather-sunny
name: Switch to light mode
primary: blue
accent: lime
icon:
tag:
ansible: simple/ansible
files: simple/files
radio: material/radio-handheld
network: material/network
k8s: simple/kubernetes
lxd: material/server
wiki: simple/tiddlywiki
uncat: material/folder-search
plugins:
- search
- tags:
tags_file: index.md
- rss:
use_git: false
abstract_chars_count: 160
pretty_print: true
categories:
- tags
date_from_meta:
as_creation: created
as_update: updated
default_timezone: America/Toronto
date_format: "%Y-%m-%d %H:%M"
default_time: 09:30
nav:
- "Home": index.md
- "About": about.md
markdown_extensions:
- attr_list
extra:
tags:
Ansible: ansible
FileServer: files
HamRadio: radio
HomeLab: network
Kubernetes: k8s
LXD: lxd
Tiddlywiki: wiki
Uncategorized: uncat
# my custom config for adding dates to pages
page_dates_head: false
page_dates_foot: true

View File

@ -0,0 +1,56 @@
{#-
This file was automatically generated - do not edit
-#}
{% if "material/tags" in config.plugins and tags %}
{% include "partials/tags.html" %}
{% endif %}
{% include "partials/actions.html" %}
{% if "\x3ch1" not in page.content %}
<h1>{{ page.title | d(config.site_name, true)}}</h1>
{% endif %}
{% if page.meta and config.extra.page_dates_head %}
<div class="page-dates-head">
<small>
{% if page.meta.created %}
{{ lang.t("source.file.date.created") }}:
{{ page.meta.created }}
{% endif %}
<br>
{% if page.meta.updated %}
{{ lang.t("source.file.date.updated") }}:
{{ page.meta.updated }}
{% endif %}
</small>
<hr>
</div>
{% endif %}
{{ page.content }}
{% if page.meta and config.extra.page_dates_foot %}
<div class="page-dates-foot">
<hr>
<small>
{% if page.meta.created %}
{{ lang.t("source.file.date.created") }}:
{{ page.meta.created }}
{% endif %}
<br>
{% if page.meta.updated %}
{{ lang.t("source.file.date.updated") }}:
{{ page.meta.updated }}
{% endif %}
</small>
</div>
{% endif %}
{% if page.meta and (
page.meta.git_revision_date_localized or
page.meta.revision_date
) %}
{% include "partials/source-file.html" %}
{% endif %}
{% include "partials/feedback.html" %}
{% include "partials/comments.html" %}