List NFS Clients on Server

I was looking for a simple and easy way to view the NFS clients connected to my NFS server. There are many guides on how to view the mounts available, and people suggest running various client-side commands, but none of that helps when you want the view from the server's side.

This is a little Linux bash script to show the clients currently connected to your NFS server; it uses a dig lookup to resolve each hostname, which may not work if you don't have reverse DNS (rDNS) for internal addressing.

# A little bash script to show currently connected NFS clients.
# Adam Boutcher - Jul 2020
# IPPP, Durham University

# Function to check that a binary exists
function check_bin() {
  which $1 1>/dev/null 2>&1
  if [[ $? -ne 0 ]]; then
    echo "$1 cannot be found. Please install it or add it to the path. Exiting."
    exit 1
  fi
}

check_bin which
check_bin netstat
check_bin grep
check_bin awk
check_bin echo
check_bin dig

NCLIENTS=$(netstat -plna | grep 2049 | awk '{print $5}' | grep -v "*" | awk -F ":" '{print $1}')
echo ""
echo "NFS clients currently connected:"
for CLIENT in ${NCLIENTS}; do
  CNAME=$(dig +short -x $CLIENT)
  echo "- $CLIENT ($CNAME)"
done
echo ""
exit 0

There are other tools available, like nfsstat, that show other NFS information; for server statistics specifically, use:

nfsstat -s
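On newer distributions netstat is deprecated in favour of ss, which can produce the same list. A minimal sketch of the same idea; the parsing is factored into a function (parse_clients is my name, not a standard tool):

```shell
# Extract unique client IPs from the peer-address column of ss output.
# ss prints a header line, so NR > 1 skips it.
parse_clients() {
  awk 'NR > 1 {print $4}' | awk -F ':' '{print $1}' | sort -u
}

# Live usage on the NFS server:
#   ss -tn state established '( sport = :2049 )' | parse_clients
```

Unlike the grep-based pipeline above, the ss state filter only matches established connections on port 2049, so stray matches on PIDs or other ports are avoided.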

Fedora Linux LUKS Encryption with TPM Unlock

Windows has BitLocker; Linux has LUKS for full-disk encryption, but by default LUKS doesn't unlock via the TPM and requires a password at boot.

There are many guides out there showing very complex setups, but for the basic case of encrypting the root partition and unlocking it with a TPM, it's actually fairly simple.

The following commands will set up your Fedora Linux (tested with Fedora 32) LUKS boot volume to unlock automatically with the TPM.

dnf install clevis clevis-dracut clevis-luks
clevis luks bind -d /dev/sda3 tpm2 '{"pcr_ids":"7"}'
dracut -f
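After binding, it's worth confirming the TPM2 pin is attached to the volume before rebooting. A quick check (assuming the same /dev/sda3 device as above; the slot number in the unbind example is whatever `list` reports):

```shell
# Show the bound key slots, pins and their configs for the LUKS device.
clevis luks list -d /dev/sda3

# If the binding ever needs redoing (e.g. after toggling Secure Boot,
# which changes PCR 7), unbind the old slot and bind again:
#   clevis luks unbind -d /dev/sda3 -s 1
#   clevis luks bind -d /dev/sda3 tpm2 '{"pcr_ids":"7"}'
```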

This was tested with EFI booting and Secure Boot disabled.


How to Re-Address a Docker Swarm Master

Docker Swarm: the simple and quick way to build a Docker cluster. That is, until you need to re-address the cluster, especially the master.

According to the Docker website it is not really possible (warning: my vague interpretation of their documentation), and you should be running your masters with static IP addresses, although workers are perfectly fine with dynamic addressing. This is great, but what happens when your “test” environment is suddenly your “production” environment?

Option 1:

Create a second master on the new address. This is what I ended up doing – Learn from my mistakes.

Option 2:

If for some reason you can't create a second master, then you can really hack up Docker and force it to do what you want. Technically, once the swarm is initialised you can't change any of these settings via the Docker command line, but this will force some changes. Be warned: this was highly unstable for me.

Within the directory /var/lib/docker/swarm there are two JSON files that you need to edit: state.json and docker-state.json.

On the master node you need to edit the “LocalAddr”, the “ListenAddr” and “AdvertiseAddr” in the docker-state.json, you also need to edit the “addr” in state.json.

On any worker nodes, you need to edit the “RemoteAddr” in docker-state.json and the “addr” in state.json.

Then simply restart docker; you may have to remove nodes and rejoin them.
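The steps above can be sketched as a blunt search-and-replace over the swarm state files, assuming GNU sed and the example addresses below (the function name is mine). Stop Docker and back up /var/lib/docker/swarm before touching anything:

```shell
# Rewrite every occurrence of the old address with the new one in the
# given swarm state files. sed -i.bak keeps a pre-edit copy of each file.
# Note: dots in the address act as regex wildcards here; good enough
# for a one-off swap.
readdress_swarm_state() {
  old="$1"; new="$2"; shift 2
  sed -i.bak "s/${old}/${new}/g" "$@"
}

# On the master:
#   systemctl stop docker
#   readdress_swarm_state 10.0.0.10 192.168.1.10 \
#     /var/lib/docker/swarm/docker-state.json /var/lib/docker/swarm/state.json
#   systemctl start docker
```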

Puppet Duplicate Resources with PuppetDB

If you're using Puppet with exported resources and the puppet agent run fails with an error like:

Error: Could not retrieve catalog from remote server: Error 400 on SERVER: A duplicate resource was found while collecting exported resources, with the type and title CMD[Title] on node HOSTNAME
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run

Then you've got a duplicate resource in PuppetDB. This typically happens when you export a resource that has already been exported elsewhere, usually because you've rebuilt the host and not deactivated it first with:

puppet node deactivate HOSTNAME

To resolve this you can remove the previous exports from the database. I've used a SELECT statement first to check what will be returned before deleting.

PostgreSQL-backed PuppetDB:

su - postgres
psql puppetdb
> SELECT * FROM catalog_resources WHERE title LIKE '%Title%';
# DELETE FROM catalog_resources WHERE title LIKE '%Title%';

Built in PuppetDB (HSQLDB):

systemctl stop puppetdb
cd /var/lib/puppetdb/db/
cat -n db.script | grep <Title>
sed -i.bak -e '<n>d' db.script
systemctl start puppetdb

Please replace <Title> with the title of your export and replace <n> with the line number that is returned from the cat command.

Some of this information is from Chris Cowley, and he's done a better job of writing about it.

Linux ZFS Quotas and my hacked solution

ZFS on Linux doesn't support the standard quota reporting tools, which is a pain in the arse.
After deploying ZFS on our various NFS servers, we hit issues where users couldn't check their quotas, specifically on their home space directories.

After a frustrating amount of time researching, I found that there is a GitHub pull request awaiting merge that adds the support. Not wanting to compile my own version of ZFS for production gear, I wrote a little set of scripts to hack around the issue for the time being.

You can find the latest version of the scripts in my grid-scripts repository on GitHub.

Just pop the server script on your ZFS server and have cron run it every few minutes, piping the stdout to a shared location (we're using NFS mounted on /mnt/home). The client script then needs to be on all client machines (we used Puppet to distribute it). Then you should have ZFS quotas for your end users.
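The real scripts are in the grid-scripts repository; as a rough sketch of the idea only (the dataset layout tank/home, the report path, and the function name are all assumptions of mine, not the actual scripts):

```shell
# Turn "dataset<TAB>property<TAB>value" lines, as produced by
#   zfs get -rHp -o name,property,value used,quota tank/home
# into one "dataset used quota" line per dataset.
format_quota_report() {
  awk -F '\t' '
    $2 == "used"  { used[$1] = $3 }
    $2 == "quota" { quota[$1] = $3 }
    END { for (d in used) print d, used[d], quota[d] }
  '
}

# Server cron sketch (writes the report to the shared NFS export):
#   zfs get -rHp -o name,property,value used,quota tank/home \
#     | format_quota_report > /mnt/home/.zfs-quota-report
# Client sketch (shows the calling user their own line):
#   grep "tank/home/$USER" /mnt/home/.zfs-quota-report
```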

[Screenshot: ZFS quota report output]

It's a very quick and dirty script, hacked together to solve the problem quickly, so your mileage may vary. If you have any improvements or suggestions, please don't hesitate to contact me.