Fedora Linux LUKS Encryption with TPM Unlock



Windows has BitLocker; Linux has LUKS for full-disk encryption, but by default LUKS doesn’t unlock via the TPM and requires a password at every boot.

There are many guides out there showing a very complex setup, but for the basic case of encrypting the root partition and unlocking it with the TPM, it’s actually fairly simple.

The following commands will set up your Fedora Linux LUKS boot volume (tested with Fedora 32) to unlock automatically with the TPM; replace /dev/sda3 with your LUKS partition.

# Install Clevis with its LUKS and dracut integration
dnf install clevis clevis-dracut clevis-luks
# Bind the LUKS volume to the TPM, sealed against PCR 7 (the Secure Boot state)
clevis luks bind -d /dev/sda3 tpm2 '{"pcr_ids":"7"}'
# Rebuild the initramfs so Clevis can unlock the volume at boot
dracut -f
reboot
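
Before running that final reboot, it’s worth confirming the binding took; recent Clevis versions can list the bound slots, or you can just inspect the LUKS header:

# List Clevis bindings on the volume (recent Clevis versions)
clevis luks list -d /dev/sda3
# Or check the LUKS header for the extra keyslot
cryptsetup luksDump /dev/sda3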

This was tested on EFI with Secure Boot disabled. Note that the binding above is sealed against PCR 7, which measures the Secure Boot state, so toggling Secure Boot later will invalidate the TPM unlock and drop you back to the passphrase prompt.


How to Re-Address a Docker Swarm Master



Docker Swarm: the simple and quick way to build a Docker cluster. That is, until you need to re-address the cluster, especially the master.

According to the Docker website, it is not really possible (warning: my vague interpretation of their documentation) and you should be running your masters with static IP addresses, although workers are perfectly fine with dynamic addressing. This is great, but what happens when your “test” environment is suddenly your “production” environment?

Option 1:

Create a second master on the new address. This is what I ended up doing – learn from my mistakes.
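
A rough outline of that approach (the node names, addresses and token below are placeholders):

# On the existing master: print a join token for a new manager
docker swarm join-token manager
# On the new host, at the new address: join the swarm as a manager
docker swarm join --token <token> <old-master-ip>:2377
# Once the new manager is healthy, demote and remove the old one
docker node demote <old-node-name>
docker node rm <old-node-name>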

Option 2:

If for some reason you can’t create a second master, then you can really hack up Docker and force it to do what you want. Technically, once the swarm is initialised you can’t change any of these addresses via the Docker command line, but this will force the change. Be warned: this was highly unstable for me.

Within the directory /var/lib/docker/swarm there are two JSON files that you need to edit: state.json and docker-state.json.

On the master node you need to edit “LocalAddr”, “ListenAddr” and “AdvertiseAddr” in docker-state.json, and “addr” in state.json.

On any worker nodes, you need to edit “RemoteAddr” in docker-state.json and “addr” in state.json.

Then restart Docker; you may have to remove nodes and rejoin them.
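
For instance, with Docker stopped, a search-and-replace over both files does the job (the two addresses below are placeholders for your old and new ones):

systemctl stop docker
# Swap the old address for the new one, keeping .bak backups
sed -i.bak 's/10\.0\.0\.5/192.168.1.5/g' /var/lib/docker/swarm/docker-state.json
sed -i.bak 's/10\.0\.0\.5/192.168.1.5/g' /var/lib/docker/swarm/state.json
systemctl start docker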

Puppet Duplicate Resources with PuppetDB



If you’re using Puppet with exported resources and get an error while running the Puppet agent showing

Error: Could not retrieve catalog from remote server: Error 400 on SERVER: A duplicate resource was found while collecting exported resources, with the type and title CMD[Title] on node HOSTNAME
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run

then you’ve got a duplicate resource in PuppetDB. This typically happens when you export a resource that has already been exported elsewhere, usually because you’ve rebuilt the host without first deactivating it:

puppet node deactivate HOSTNAME

To resolve this, you can remove the previous exports from the database. I’ve used a SELECT statement first to check what will be returned.

Postgres:

su - postgres
psql puppetdb
-- Check what will be removed first
SELECT * FROM catalog_resources WHERE title LIKE '%Title%';
-- Then delete the duplicates
DELETE FROM catalog_resources WHERE title LIKE '%Title%';

Built-in PuppetDB (HSQLDB):

systemctl stop puppetdb
cd /var/lib/puppetdb/db/
# Find the line number of the offending export
cat -n db.script | grep <Title>
# Delete that line in place, keeping a .bak backup
sed -i.bak -e '<n>d' db.script
systemctl start puppetdb

Please replace <Title> with the title of your export and <n> with the line number returned by the cat command.

Some of this information is from Chris Cowley, and he’s done a better job of writing about it.

Linux ZFS Quotas and my hacked solution



ZFS on Linux doesn’t support the standard quota tools, which is a pain in the arse.
After deploying ZFS on our various NFS servers, we hit issues where users couldn’t check their ZFS quotas, specifically on their home space directories.

After a frustrating amount of time researching, I found that there is a GitHub PR awaiting merge that adds the support; not wanting to compile my own version of ZFS for production gear, I wrote a little set of scripts to hack around the issue for the time being.

You can find a version of the scripts on my github grid-scripts and the latest at github zfs-quota.

Just pop the server script on your ZFS server and have cron run it every x minutes, piping the stdout to a shared location (we’re using NFS, mounted on /mnt/home). The client script then needs to be on all client machines (we used Puppet to distribute it). Then you should have ZFS quotas for your end users.
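
The idea in miniature (a sketch only; the dataset name tank/home, the report path and the grep pattern are assumptions, not what the real scripts use):

# Server side, run from cron: dump name, usage and quota for every home
# dataset to a report file on the shared NFS mount
zfs list -H -o name,used,quota -r tank/home > /mnt/home/.quota-report

# Client side: show the invoking user their own line from the report
grep "home/${USER}" /mnt/home/.quota-report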

Screenshots: quota report and ZFS quota output.

It’s a very quick and dirty script hacked together to solve the problem quickly, so your mileage may vary. If you have any improvements or suggestions, please don’t hesitate to contact me.

ARGUS Pool User to Certificate WebUI



As previously mentioned, I have written a nasty bash script to get the user certificate DN for a given username in ARGUS.

This is a step further: a WebUI seeded by data updated via a script in a cronjob on ARGUS. It requires either a shared filesystem (in this case, /mnt/admin) or for ARGUS itself to run PHP and a web server.

The latest files are on github.

It depends on the following:

HTTPD (Apache/Nginx), PHP, PHP-LDAP, ARGUS

The bash/shell script is run on ARGUS to produce an output file; running it every 30 minutes should suffice.
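
For example, a crontab entry along these lines (the script name and output path are placeholders, not the actual filenames from the repo):

# Regenerate the pool user mapping every 30 minutes
*/30 * * * * /usr/local/bin/argus-pool-users.sh > /mnt/admin/argus-pool-users.txt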

The web files only need access to the output file created by the shell script, along with a basic web server running PHP and php-ldap.

The search box at the top, provided all the required JavaScript loads correctly, allows users to search the table beneath, including partial matches.

Screenshot: a table displaying ARGUS pool users and certificates.