Posts

Debugging Kubernetes PVCs

While attempting to create a Tekton Pipeline consisting of multiple tasks recently, I was having a terrible time trying to work out how the various directory definitions lined up between tasks.  These are pre-defined tasks, not ones that I had written myself, so it can be a bit tricky to pick your way through all the various bits of YAML involved.  What I really wanted to do (and what helped me debug the issue with my pipeline) was take a look at the content of the PVC (Persistent Volume Claim) that the pipeline was using.
 
Thanks to a really helpful post, it's quite simple to spin up a little pod that mounts your PVC. You can then exec a shell into the pod and take a look around your PVC. 
 
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pvc-inspector
spec:
  containers:
  - image: busybox
    name: pvc-inspector
    command: ["tail"]
    args: ["-f", "/dev/null"]
    volumeMounts:
    - mountPath: /pvc
      name: pvc-mount
  volumes:
  - name: pvc-mount
    persistentVolumeClaim:
      claimName: pvc-name
EOF
 
The above will set up a pod that stays running (thanks to the tail command) and mounts your PVC "pvc-name" at /pvc.  Then all you need to do is start a shell:
kubectl exec -it pvc-inspector -- sh
When you're done, you can exit and then delete the pod with
kubectl delete pod pvc-inspector
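If you're not sure which claim name to put in the pod spec above, you can list the PVCs in the relevant namespace first and pick the right one from there (the namespace below is just an example):

kubectl get pvc -n my-tekton-namespace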
 

Managing NVidia on Fedora

Linus Torvalds has previously had some very choice words to say about NVidia with regard to their interaction with open source and Linux.  I've spent much of my career using (and battling with) the NVidia drivers on my Linux laptop and these are my notes that make life a little easier and better.  For me, NVidia drivers at home on a PC work pretty well.  They work less well on a laptop where things are a bit more complicated with external screens, hot plugging of screens, suspend, etc.  I've tried the open source alternatives and they really just don't work as well in terms of stability (not that I find the NVidia proprietary Linux drivers especially stable, especially on a modern desktop under Wayland), but also in terms of power management and graphical performance.  Hence, I've always concluded I'm better off running the proprietary drivers.  If it weren't for the fact that the Lenovo laptops I use at work have their external monitor connections hard wired to the NVidia card, I would probably just turn off the GPU entirely and rely on the embedded Intel GPU.

Installation

Thankfully, this is pretty simple on Fedora due to the packaging at RPM Fusion.  I think the best way to install the NVidia driver on Fedora is to use the RPM Fusion repository, as follows:

  1. Follow the guide at https://rpmfusion.org/Configuration to install both the free and non-free RPM Fusion repositories
  2. Install the NVidia drivers (see https://rpmfusion.org/Howto/NVIDIA)
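For reference, at the time of writing those two guides boil down to something like the following; the package names come from the RPM Fusion howto, but check the linked pages for the current instructions before copying these:

$ sudo dnf install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
$ sudo dnf install akmod-nvidia
$ sudo dnf install xorg-x11-drv-nvidia-cuda   # optional, only if you want CUDA support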

It is also best to set your GPU to "discrete only" mode in your UEFI/BIOS.

Note: there is a highly viable alternative repository at Negativo17 that I've also run very successfully, and you can, of course, grab the drivers directly from NVidia via their Unix downloads page.

Re-Build the NVidia Kernel Module

There are instances where manually kicking off the re-building of the NVidia kernel module can be useful. This can be done very easily as follows:

$ akmodsbuild --kernels $(uname -r) /usr/src/akmods/nvidia-kmod.latest 
$ sudo dnf reinstall <name-of-the-output-rpm-from-above>

Note that the above will rebuild the NVidia kernel module for the currently running kernel. You can swap out the $(uname -r) piece of the command for the version string of any of your installed kernels to build for a different kernel that you have.
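If you want to target a different kernel, you can list the version strings of your installed kernels and pass one of those to akmodsbuild instead (the version shown below is only an example):

$ rpm -qa kernel --qf '%{VERSION}-%{RELEASE}.%{ARCH}\n'
$ akmodsbuild --kernels 6.8.9-300.fc40.x86_64 /usr/src/akmods/nvidia-kmod.latest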

Suspend/Resume Stability

Create a file /etc/modprobe.d/nvidia.conf with the following content:

options nvidia NVreg_PreserveVideoMemoryAllocations=1
options nvidia NVreg_TemporaryFilePath=/var/tmp
options nvidia_drm modeset=1

The relevant packages should already be on your system after installation but, just in case, make sure you have the following RPMs installed by running:

$ sudo dnf install xorg-x11-drv-nvidia-power nvidia-modprobe

Ensure the nvidia suspend/resume services are enabled:

$ sudo systemctl enable nvidia-suspend.service nvidia-hibernate.service nvidia-resume.service

Note: do not enable the nvidia-persistenced.service as this can cause issues on a single GPU machine such as a laptop.

Fixing graphical LUKS password prompt

Often, the installation of the NVidia driver can cause the LUKS password prompt to drop back to text mode. This isn't a problem from a functional point of view but it's not so nice to look at as the graphical prompt. If you want to restore the graphical prompt you need to ensure that the NVidia driver is built into your ram disk image. This can be achieved fairly simply, as follows:

Create a file /etc/dracut.conf.d/nvidia.conf and add these lines (note the spaces are required in the quoted strings so that's not a mistake):

add_drivers+=" nvidia nvidia_modeset nvidia_uvm nvidia_drm "
install_items+=" /etc/modprobe.d/nvidia.conf "

With that file in place, you can re-generate your ram disk image by running:

$ sudo dracut -f
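If you want to check that the driver modules really did make it into the new image, lsinitrd (part of dracut) will list the contents of the initramfs for the running kernel:

$ sudo lsinitrd | grep -i nvidia
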
Larger console font

Not strictly an NVidia related thing but while I'm on the topic of displays, on HiDPI/4k laptop screens, the console font (seen during the boot process if things go wrong) can be nearly impossible to read as it's far too small. You can resolve this issue by switching your console font to a larger font size.

First, make sure you have the relevant console font(s) installed:

$ sudo dnf install kbd-misc

Next, edit the file /etc/vconsole.conf and change the "FONT=" line to a larger font e.g.

FONT="latarcyrheb-sun32"


Gnome Extensions

With the current release of Fedora Linux being updated to version 40 this week, I find myself upgrading to it on the first day of release (rare for me, I usually wait a couple of months) and getting up-to-speed with the changes in Gnome 46.  My previous post about Gnome focused on my migration to Gnome 3 and the extensions I was using at the time.  During the course of the previous (nearly) 5 years, these have changed quite significantly as the Gnome desktop has grown and as my usage of it has moved on.  Hence, rather than update my old post as I have done in previous years, I thought it time to write a whole new post focusing on how I set up my Gnome desktop today.

So without further ado, this is the list of extensions I'm using right now as I write this post (in alphabetical order):

AppIndicator and KStatusNotifierItem Support

This is one of the few extensions that has stood the test of time with my particular usage of Gnome.  While the freedesktop standard continues to specify the classic "icon tray" that was supported by extensions such as TopIcons, in reality few of the modern desktops (I'm referring to Gnome and KDE) support it.  The modern take on the tray icon is an AppIndicator icon and many modern applications are written to use this standard (and perhaps fall back to a tray icon). 

Dash to Panel

This is a more recent discovery for me.  I've previously evaluated Dash to Dock several times and never liked the user experience.  However, the similarly named Dash to Panel, with somewhat similar functionality, has replaced my use of the Window List extension.  It provides a more modern alternative for showing which windows I have open, with window previews and suchlike.  It can do considerably more than the way I've configured it, but I have it set just how I like to work, with my Gnome top bar still intact and a minimal bottom bar used for navigating between my open applications.  If you want to try it out with my configuration, I've exported my settings into this GitHub Gist that you can import.
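As an aside, if you want to capture or restore your own Dash to Panel settings from the command line rather than via a Gist, dconf can dump and load them; the schema path below is my assumption of where the extension keeps its settings, so double check it on your own system:

$ dconf dump /org/gnome/shell/extensions/dash-to-panel/ > dash-to-panel.conf
$ dconf load /org/gnome/shell/extensions/dash-to-panel/ < dash-to-panel.conf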

Frippery Panel Favourites

This extension takes your favourited applications and adds them as a set of icons to the Gnome top panel, making for extra quick access to your commonly used apps.  I tend to flip between using this and just searching for apps via the Super key (Windows key).

GTile

This great little extension allows you to easily resize your windows in order to tile them across your display.  I love the side-snapping in Gnome 3 that allows you to size a window to half the screen size.  However, GTile adds an icon to your Gnome Panel that, when clicked, allows you to size to any area of your screen across a pre-defined grid - you can even change the grid size.  Brilliant for usability with lots of on-screen windows at the same time.  It strikes a great balance for me as someone that generally prefers to tile windows but doesn't like a tiling window manager.

Hide Activities Button

This is almost a little bit superfluous for my usage but I found myself never using the activities button (top left) in the Gnome Panel.  The Dash to Panel configuration I have created maintains an activities button (bottom right) which is the place I've grown familiar with in order to use my GUI to switch between desktops (although I generally switch between desktops using keyboard shortcuts).

Pano Clipboard Manager

This is a really great modern take on the clipboard.  Press Shift+Windows+V and you get a pop up at the bottom of the screen with a graphical representation of your clipboard history.  The extension is clever enough to be aware of various types of clipboard content such as text, images or hyperlinks.  You also get a button on the top panel that allows access to the clipboard, incognito mode (which stops content being copied to the extension) and settings.

System Monitor Next

Adds little graphs to the Gnome Panel that show resource usage.  The extension is pretty configurable but I have it showing CPU, memory and network utilisation.  This allows me to keep an easy eye on my machine and how loaded it is at the current time.  Extremely useful for spotting those occasional rogue apps that start eating an entire core of my CPU.

Scouting

A few years back, I decided it would be a good idea to register my little lad on the waiting list for Beaver Scouts and didn't realise at that point in time quite what a journey would be unfolding in front of us.

Needless to say, I ended up volunteering.  I started on a fairly typical journey by offering to help out occasionally and getting DBS cleared.  I found I was attending pretty much every week, so from there it was a relatively small step to go into uniform and make my role official.  Four years after I started helping out, I'm now an "Assistant Beaver Section Leader", known to the children as "Merlin".

While on the topic of scout names, everyone in my section adopts the name of a creature found in British wildlife.  It's customary for the children to use an adult's scouting name when referring to us.  In fact, they have no idea what our real names are.  I'm not sure why, that's just the way things have evolved.  I wanted to use the name "Goshawk" but at the time our section already had someone named "Hawk" which seemed a bit too similar so I considered other small birds of prey such as "Sparrowhawk" (still too similar), "Hobby" (didn't seem right) and eventually settled on "Merlin".  Now everyone seems to think I'm a wizard and I have to explain the name all the time.  Stupid choice!

Scouting is a great organisation and charity, run at the top level by paid professionals, but the majority of what happens is the result of volunteers such as myself!  The volunteers often go to great lengths, putting in hours of preparation and hard work to make sure all the kids in their section have fun and experience things they wouldn't otherwise do, while remaining safe.  Hence, I thought I would recount some of the stuff I've been doing since signing up.  Most of this is a total surprise to me as I was never involved in Scouting as a kid, so I'm learning and going through the whole thing at the same time as my son.

Safety, DBS and Training

Our primary goal for each meeting and overall plan each term is to have fun while remaining safe.  The safety aspect isn't too onerous, mainly consisting of an initial DBS check that is trivial enough and then risk assessing activities that basically involves common sense to think things through in advance.  There is a reasonable amount of red tape involved in all this that the parents never really see.  There's also training to be done. To be in uniform, one must have a first aid qualification and undertake a bunch of formal training modules, providing evidence to an assessor that you've qualified to pass each module.  This must all be done within 3 years.  Once complete, you're awarded your wood beads - I'm currently most of the way there and have 1 year left to complete the remaining few modules.

Activities

Having never been involved in Scouts previously, pretty much everything that goes on comes as something new (if not a surprise) to me.  Camp fire, songs, cooking, crafting, knot tying and a raft of other outdoor and adventurous activities are all very cool things to get involved with and organise.

Visits and Visitors

I've not been scouting for particularly long, especially when compared to some of our volunteers that have been in the organisation 30+ years.  However, I've already racked up quite a few different venues and experiences:

  • Our Scout Hut (obviously), we call it "The Den"
  • Other Scout Huts
    • 8th Alton
    • Bentley
    • Four Marks
  • RAF Odiham
  • Boots Opticians
  • Local Library
  • Marks and Spencer
  • War Memorial
  • Chawton House
  • Uppark House (national trust)
  • Scout Camp sites
    • Garners Field
    • Lyons Copse
    • Bentley Copse
  • Various locations for walks and hikes e.g. Butser Hill
  • Local Care Homes
  • Our Visitors are varied, e.g.
    • Disabled people/groups
    • Dance and music groups
    • People with specialist skills/knowledge

New Thinkpad P15


This post continues a long running tradition and series of posts written whenever I'm issued a new laptop at work.  I generally get quite a powerful and interesting machine as I'm a member of the IBM Hursley development laboratory and thus am issued a fairly beefy specification, intended mostly for desktop use rather than as a more mobile laptop.  I'm issued a new machine approximately every four years so my previous posts are about my:

It's interesting to see how the specification of machine has changed over time.  With the slowing (or disappearance) of Moore's Law, the speed advantage of more recent machines has come from other innovations (such as an SSD and an increased number of cores) rather than raw clock speed.  The highlight specifications for the P15 Gen 1 I have are...

  • Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz (5199.98 bogomips in Linux)
  • 32GB DDR4 2933MHz
  • Toshiba 512GB SSD XG6 M.2 2280
  • 15.6" 3840 x 2160 IPS (non touch)
  • Integrated Li-Po 94Wh battery
  • Wi-Fi 6
  • NVidia Quadro T1000M 4GB
  • Front Facing Web Cam, HDMI Out, Headphone, 2x USB3.2, 2x USB-C3.2 Gen 2, GBit Ethernet, Fingerprint Reader, SD card reader

There we have it, the top level specs aren't all that different to the 4 year old P50 machine I had previously.  In fact the CPU speeds have dropped slightly although the P15 does have 12 cores to the P50's 8. RAM and GPU memory have both stayed the same and I still have a 512GB SSD.  Interestingly, the battery is now integrated which has moved away from the long standing removable battery on these top line Thinkpad machines.  There's a huge increase in the screen resolution and I dare say the screen would also have been improved in areas such as peak brightness (600 nits for the P15) and support for Dolby Vision HDR (there's also support for Dolby Atmos sound which will be a bit lost on me for a business machine).  While sounding good, if you put a 4k resolution onto a 15" laptop screen you pretty much need a magnifying glass to see anything so it's more or less useless unless you're consuming 4k video content.  No wonder then that the Gnome desktop defaulted to running in 4k mode but at 200% scale (which I think takes it back down to HD size unless I'm mistaken).

The day-to-day running of the new machine has been pretty good.  Not noticeably different to that of the old machine. This goes to show the lack of improvement in specifications of these new machines in general.  It's something I've noticed with my ageing home machine as well (which is nearly 10 years old) where the processor benchmarks are very similar to today's processors on a core-for-core comparison and I still have things like a decent PCI 3 bus.  It's always nice to have a bit of a refresh though and the thing I'm liking most about the new machine is the addition of the built-in fingerprint reader.  This particular piece of hardware is now fully supported on Linux and very easy to configure using the Gnome settings tool.  It makes logging in with a massive password much less painful.  I hope more apps (such as 1password) will eventually find ways of integrating biometric security on Linux as well.  It's worth noting that this functionality hasn't come about by accident: it has taken a lot of hard work and a long road for both Red Hat and Lenovo to ensure that all new Lenovo laptop machines are fully certified with a hardware configuration whose drivers and firmware are compatible with Linux.

There are, of course, teething troubles with the new machine.  These are mostly related to graphical issues and NVidia.  More recently, I'd taken for granted my old machine just working in these respects.  My old machine had similar teething issues when it was new, of course, and these were gradually ironed out with driver updates as time progressed.  So right now it's weird to be back in the dark days of having to use the NVidia settings panel to configure the screen resolutions I want, as for some reason the binary driver is only exposing the full 4k resolution to xrandr under Linux (yes, I'm still using Xorg, not Wayland, yet).  It's also a bit fragile going into sleep mode and resuming from sleep: it all works, but there can (sometimes) be graphical glitches which I may need to restart the Gnome shell to cure (Alt+F2, then type r and hit Enter).  While this is frustrating for now, I'm fully expecting driver updates to catch up and this machine will gradually settle down into the same level of graphical performance I was used to on my old machine i.e. no problems at all and no need to open up NVidia settings.  Perhaps the thing that surprises me most about all this is the very fact that it has regressed at all.  I'm no expert in the graphical stack on Linux but it's rather unfortunate that I seem to experience the same pains and teething problems upon the issue of every new laptop.  It'll all get there.  One day!
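For what it's worth, one workaround I've used while only the 4k mode is exposed is to add a lower resolution mode to xrandr by hand.  Treat this as a sketch rather than a recipe: the modeline comes from cvt and the output name (DP-2 here) will almost certainly be different on your machine, so check xrandr's output for yours:

$ cvt 1920 1080 60
$ xrandr --newmode "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
$ xrandr --addmode DP-2 1920x1080_60.00
$ xrandr --output DP-2 --mode 1920x1080_60.00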

Solving a Rubik's Cube

We bought a Rubik's Cube while away on our summer holiday this year and spent some time playing around with it and then learning to solve it.  We've returned to it over the Christmas break.  The solve we're learning is based on a very detailed video (below) from Wired Magazine.  I'm at the stage where I can remember most of the solve shown in the video but haven't memorised the final few steps involving the algorithms to position the final corner pieces in the top layer.



Since the video is so long and detailed, when you're at the stage of being able to remember most of the solve, you just need a quick memory jogger rather than wading through the video (even though it has some nice chapter markings).  Hence, this post is my quick reference guide for the future.  I'll walk through the whole solve very briefly (use the video for the detail)...

Introduction

Learn the right trigger and left trigger moves (involves holding the cube correctly)

Learn cube notation (rotate the appropriate face clockwise from the point of view of you looking straight onto that face)

  • F = Front Face

  • B = Back

  • L = Left

  • R = Right Face

  • D = Downward Face

  • U = Upward Face

  • ‘ = Spin anti clockwise e.g. F’ = Front Face anti clockwise

  • 2 = Spin twice e.g. F2 = Front Face rotates by 180 degrees (spun twice)

 

Solve the bottom layer

Video Link
  1. Make the daisy
    1. Position white edge pieces around the yellow centre
  2. Make a white cross on the bottom face
    1. Line up white edge stickers on the top face with their middle colour on the side face, then rotate that face twice to move the white sticker from the top face to the bottom face
    2. The white face should remain pointing down for the rest of the solve
  3. Position the bottom row corners
    1. Search for corner pieces in the top layer that have a white piece
      1. If found, match the non-white colour of the corner piece in the top layer (ignore the top face) with the coloured center.
      2. With the non-white coloured center facing you observe whether the white sticker is on the left or right side of the cube. If left, perform the left trigger; if right, perform the right trigger.
    2. If there are no white pieces in the top layer but you have one in the bottom layer, determine whether the white piece in the bottom layer is on the left or right and, with the white piece facing you, perform the left or right trigger accordingly. The white piece will now be on the top of the cube.
    3. If there are no white pieces in the top layer but you have one on the top face, rotate the top face such that the white sticker is not opposite a white sticker on the bottom face i.e. the white sticker on the top face should be opposite a coloured sticker on the bottom face. Then reposition the sticker so it's in the top layer.
      1. Perform the left or right trigger twice in succession depending on whether the white sticker on the top face is on the left or right side of the cube.  Now refer to 3.1 to place the corner correctly onto the bottom face.

 

Solve the middle layer

  1. Search the top layer for edge pieces that do not have any yellow stickers (either on the edge face or the top face).  Once found, move that piece to the second layer.
    1. Rotate the top layer such that the sticker on the edge face is colour matched with its centre sticker.
    2. Examine the colour of the sticker on the top face of the edge piece you're moving, it should match the left or right centre stickers.
      1. If it matches the left then rotate the top face anti clockwise by 90 degrees (a U' in notation form) then perform a left trigger
      2. If it matches the right then rotate the top face clockwise by 90 degrees (a U in notation form) then perform a right trigger
    3. You've now displaced one of the white stickers.  It will be in the top row so simply place it back to the bottom face using the same method as when solving the bottom layer.
  2. Repeat the above until there are no more non-yellow stickers on the top face.
  3. You may occasionally find a situation where there are no edge pieces in the top layer without yellow stickers but the second layer is not complete.  If this happens:
    1. Examine the middle layer for the piece that is not right.  With that piece facing you, determine whether it's on the left or right of the cube.  Perform either the left or right trigger according to which side it's on.
    2. Fix the displaced white piece.
    3. There will now be an edge piece in the top layer that doesn't have a yellow sticker so continue solving as per the first step of the middle layer.
 

Create a Yellow Cross (top face)

Required Algorithm: F U R U' R' F'

  1. Repeat these steps depending on the number of yellow edge pieces (on the top face):
    1. If there are no yellow edge pieces (on the top face), perform the algorithm.
    2. If there are two yellow edge pieces in a line, orient the line so it faces up/down, perform the algorithm.
    3. If there are two yellow edge pieces next to each other, place them in the 12 o'clock and 9 o'clock positions on the top face, perform the algorithm.
 

Solve the Yellow Face


Required Algorithm: R U R' U R U2 R'
  1. Look at the top (yellow) face and repeat these steps depending on the number of yellow corner pieces (on the top face):
    1. If there are zero or two yellow corner stickers, rotate the top face until there is a yellow sticker in the top right position of the left face, perform the algorithm.
    2. If there is one yellow corner sticker, rotate the top face until the corner sticker is in the bottom right position, perform the algorithm.
 

Position the Top Layer Corners

 
Required Algorithm: L' U R U' L U R' R U R' U R U2 R'

  1. Look at the top corner pieces of the top layer, repeat these steps:
    1. If there are no matching corner pieces, perform the algorithm.
    2. If one of the faces has matching corner pieces, place that face in the left hand so it's pointing left, perform the algorithm.
  2. When all of the corner pieces match, rotate the upper face so the corner pieces on each face match up with their centre colour.

Position the Top Layer Edges


Required Algorithm (Clockwise) i: F2 U R' L F2 L' R U F2
 
Required Algorithm (Anticlockwise) ii: F2 U' R' L F2 L' R U' F2

  1. If one of the sides is solved, face it away from you, then look at the edge pieces on the remaining 3 unsolved sides
    1. If the edges need to move round clockwise, perform algorithm i
    2. If the edges need to move round anticlockwise, perform algorithm ii
  2. If none of the sides are solved, perform algorithm ii, reposition the cube to face the solved face away from you and then perform algorithm ii again.

IoT Christmas Tree Tech

In my previous post showing off my IoT Christmas Trees, I described the project and what the trees are intended to do.  This post features the inside track on how I put the project together.  The first post is intended to be non-technical, if you will, while this post details the information of more interest to those curious about the technical make-up of the project.  Since this is effectively a recipe for how to create one of my IoT Christmas Trees, we'll start with the list of things needed.

Ingredients

For the Tree

Bits you need to make one tree:
  • Wood
  • Paint
  • Arduino ESP32
  • Micro USB Cable
  • Veroboard
  • LED String
  • Buttons

Server and Connection Requirements

These are Internet connected of course so needed here are:
  • An MQTT Broker
  • An HTTP Server
  • WiFi with Internet Connection
  • Another WiFi capable device

Other Useful Bits

Probably staple provisions of any maker's tool box:
  • Glue Gun
  • Soldering Iron and Solder
  • Heat Shrink Tubing
  • Wire (22AWG Solid Core)

 

Putting it all together

 


The wooden bits

You need enough wood in whatever design you prefer.  I made a point of making each of my trees in a different style.  It caused more work that way of course but there's something nice about each tree being unique.  My designs are all between about 25cm and 40cm tall without much thought put into why.  However, this sort of size seems to work well with the light string I chose.  Given the light string is 5 metres long and I used half a string for each tree I needed to hang 2.5m of string around each tree (containing 25 lights per tree).  Therefore, you need to size the wooden structure appropriately for the length of string or number of lights you intend to hang.

Painting

I decided to use spray paint to try and achieve a really smooth and high quality finish.  I selected Montana paints as they provided all the types of paint I wanted to use (primer, metallic top coat, glitter and varnish).  I went with 4 of their colours, Aztec Gold, Avocado Green, Titanium, Red; and glitter effects in Silver and Dusty Gold.  Everything came from Graff City.

A couple of coats of primer were needed, followed by a couple of coats of metallic.  The glitter paint goes on as a varnish layer with particles of coloured glitter in it, and they recommend a top coat of varnish is used to seal the glitter coat and prevent any loose glitter falling off.  The result is a really nice layered effect where the fairly reflective metallic coating is set off against a subtle glitter effect and everything is finished off with the sheen of varnish.

Electronics

The main driver of the project, electronically, is an ESP32 board.  The board I chose is a knock off of the Lolin D32, simply because it has everything I needed (including having the pins not soldered) and is much cheaper than locally available boards in the UK.  While the board was cheaper, I did discover some of its drawbacks, in as much as it doesn't have built-in pull up resistors on its GPIO pins, so I had a lot more soldering to do to work a 10kΩ resistor into a little veroboard circuit to wire in the buttons I chose.

The string of LED lights looks very cool, with some quite presentable wiring between the lights when compared to an LED strip.  They're completely WS2812/NeoPixel compatible so they're easy to programme and there are some nifty libraries already available.  They are, however, fairly hellish to solder since the wires are coated with some very thin plastic insulation which is either difficult to remove or otherwise has to be burnt through while soldering.

Finally, the whole thing is powered via Micro USB, so I ordered a pile of 3m long USB-A to Micro-USB cables so the trees can be sited a reasonable distance from a power socket and I didn't have to worry about batteries or charging, although battery powering these units would definitely be possible.

Firmware

The firmware is a fairly standard Arduino implementation for the main loop, using WifiManager to configure the ESP32 WiFi to connect to an SSID.  It won't come as any surprise to find the messaging component I'm using is based on MQTT, and so I'm using Nick's pubsubclient library on the client side.

One of the more interesting things I've done with the firmware is to attempt to make it as remotely configurable as possible without the need to rely on over-the-air updates for the firmware.  To this end, I'm using the inih library and the firmware downloads its configuration as an ini formatted file from a remote location that allows me to configure as much as possible, currently: mqtt hostname:port, username, password, SSL settings, publish and subscribe topics, device name; and then configurations for the lights for things like which colours to cycle around, how long the "Merry Christmas" setting is maintained before reverting to the previous setting.  The ini file format also allows an easy "global" configuration to apply across each tree while also allowing a per-tree customisation.  Should the WiFi connection not be available or the configuration file not be available then the tree reverts to a sensible set of defaults.

Once running, WiFi connected, MQTT connected and the configuration downloaded and applied, there's some basic logic to cycle between the different light configurations when the left button on the tree is pressed.  This is all done locally on each tree.  The right button, when pressed, sends a (configurable) message to the other trees that tells them what to do.  So it would be possible for each tree to have its own specific "Merry Christmas" pattern so you could, for example, work out who had sent you the message by the pattern/colour of the light flashes on the receiving tree.

I have also built in a simple command protocol to further take advantage of the MQTT connectivity.  This allows me to send a "ping" to each tree to see which are currently alive, connected and working properly.  The second command I have is a "reload" command that will cause the tree to download and re-apply the configuration from its remote location, noting of course that the configuration could have changed and so I can cause a tree to remotely update its configuration.  Finally, there is a "reconnect" command that will cause the tree to disconnect and connect to its configured MQTT broker.  This is useful in the rare circumstance where the IP address of the broker may have changed in which case I can update the configuration, have the tree read a new configuration, then have the tree disconnect from the current broker and connect to a new broker.
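As a rough illustration of what driving that command protocol looks like, something along these lines works with the standard Mosquitto client tools; the hostname, credentials, topic and payload below are placeholders rather than my real configuration:

$ mosquitto_pub -h broker.example.com -p 8883 --capath /etc/ssl/certs -u treeuser -P 'secret' -t trees/command -m 'ping'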

MQTT Broker

The MQTT broker is provided courtesy of my still relatively new Pi4 home server and the rather excellent Mosquitto MQTT broker.  Since I'm running this all myself and not using a cloud based MQTT service I've screwed it all down as far as I can from a security perspective but there's nothing like running your own services to make you feel vulnerable!

IoT Christmas Trees

My latest maker project has been running for a good chunk of the year and has been a really cool thing to do to keep me occupied during lockdown since I'm not really one to binge watch box sets.  I've been making Internet connected Christmas trees as gifts for close family.  They're designed to be ambient ornamental decorative pieces with a funky twist of interactivity.  The video I produced probably explains them best...


As you can see from the video, each tree is deliberately different.  I thought it would be more interesting to have a completely different wooden design for each.  My dad did the woodwork which saved me quite a lot of time and allowed me to concentrate on writing the software and doing the electronics not to mention building and painting each tree.  There's a good few hours work in each of these things.

Operation of the trees is pretty simple with the most complicated bit, like with most of these things, being the initial set up to get them onto your WiFi network.  For that, and to explain the basics of how they work along with a bit of troubleshooting information, I wrote a little user manual to go with them.  After all, it will be our closest family members that receive one of these and so I can always help out remotely (and potentially fix any issues that arise).  The left button on each tree cycles round a set of pre-defined colour schemes.  However, these can be changed on a per tree basis should someone want a different colour or configuration.  Similarly, the trees can be operated remotely but I've yet to write a decent interface to do so.  The right button on each tree is where the fun is, causing all the other trees to change light pattern for some period of time.  Again, this is all configurable per tree but by default they all cause the same green and red frenzied light pattern which should be very noticeable in the corner of your room should it occur.

Finally, big thanks to James Sutton, whose original iotree project was in no small part an inspiration for my work here.  Although the implementations are different, both physically and in software, there's still a huge amount of overlap.  James was also responsible for running an IoT hackathon at work (we're colleagues), introducing a lot of the technology I would need in order to perform this build.  Thanks again, James!

I've also written more details on the technical implementation.  But, I'll sign this post off with a closer look at each tree in pictures...








 

Yet Another New Home Server

This year has seen me doing more in the way of little tech projects at home than I have done for a while, perhaps due to the covid lockdowns; if that's the case then I'll take this small positive from an otherwise rubbish situation.  Typically for me, these projects have focused around open source and some IoT.  More on those in separate blog posts when I get around to writing them up.  But for now, I wanted to make some notes on my new home server set up.

I've had an array of different low powered home servers over the years that I've previously written about, namely the NSLU2, TinyTuxBox, Joggler and, for the past many years, a simple ReadyNAS box that I specifically bought for the Intel processor as it made compiling different bits and pieces a whole lot easier back in the day.  However, I have recently relegated the ReadyNAS box from home serving duties, keeping it only for its native NAS services, because using it for other things has become increasingly difficult without updating the entire base OS (which is possible but I'm reluctant to do) due to down level software libraries like an ancient version of openssl.

In with the new then: I moved away from Intel architecture, as it's now so much easier to compile for Arm chips, and went with the, wait for it, drum roll, rather obvious choice of a Raspberry Pi 4.  Specifically, a Pi 4 Model B, 4GB.  I've paired it with the official Pi case, power supply and micro HDMI cable, and shoved in an A2 SanDisk Extreme 64GB SDXC card.

And so to the notes, my initial target for this new box would be as follows:

The Lounge

IRC might be a bit old hat but tons of open source projects still use it for their more synchronous communications.  ZNC is the choice of old for staying connected to your IRC channels.  For those not familiar, it acts as a relay to the IRC servers you want to connect to.  Effectively, it connects as your IRC client to the servers and presents your local IRC client with an endpoint through which you can connect.  This allows you never to miss any messages and to see the IRC conversation even when you're not actually online.  Matrix seems to be taking some of the old IRC community's attention, with various projects setting up bridges between Matrix and IRC.  However, the relative newcomer project called The Lounge shows just how far web technologies and web sockets have come.  It's a darned site (pun intended) easier to install, configure and use than ZNC, so I'm a massive convert and big fan of the project.

The project is relatively stable in the master branch and doesn't release particularly often, so I've opted for the run-from-source approach to take advantage of all the latest development.  Other than that, I've only made 3 changes to the default configuration prior to starting up my The Lounge server:
  1. host: "127.0.0.1"
  2. reverseProxy: true
  3. theme: "morning"
As you can see, these are all pretty simple and somewhat trivial changes.  The host setting binds the listener to the localhost interface, thus making it suitable for use with a reverse proxy and not exposing the service outside of the Pi 4.  The reverseProxy setting tells the server it's expecting to run behind a reverse proxy (the clue is in the name I guess).  Finally, I've switched to using a dark mode theme rather than the default light mode.  That's it, the remainder of the configuration is all about which IRC servers and channels to connect to along with the usual IRC bits of registering your nick and logging into the nick server.
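For reference, a minimal sketch of how those three settings look in The Lounge's config.js; the path and surrounding structure here are my recollection of the generated default config, so treat them as an assumption and check your own file:

// ~/.thelounge/config.js (excerpt, everything else left at its defaults)
module.exports = {
    host: "127.0.0.1",   // bind to localhost only, for use behind the reverse proxy
    reverseProxy: true,  // tell the server it's running behind a reverse proxy
    theme: "morning",    // dark theme
};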

Mosquitto

This is even simpler to get going than The Lounge due to the fact it's bundled with Raspbian so you can just apt-get install it.  I've created a configuration based on the bundled example config file but changing:
  1. pid_file (probably just because I'm old fashioned like that)
  2. user (to drop privileges)
  3. listener (to specify a port number)
  4. certfile and keyfile (for SSL)
  5. log_dest (create a specific log file for the broker)
  6. clientid_prefixes (a bit of added security to only allow certain client IDs to connect to the broker)
  7. allow_anonymous (quite an important one!)
  8. password_file (so that connections are authenticated)
Hopefully, that gives me something secure as well as providing me with the broker functionality that I need.
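Pulled together, a sketch of what that configuration might look like; every path, port and prefix below is a placeholder rather than my real value:

# /etc/mosquitto/conf.d/local.conf - sketch only
pid_file /var/run/mosquitto.pid
user mosquitto
listener 8883
certfile /etc/mosquitto/certs/server.crt
keyfile /etc/mosquitto/certs/server.key
log_dest file /var/log/mosquitto/mosquitto.log
clientid_prefixes tree-
allow_anonymous false
password_file /etc/mosquitto/passwd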

Node Red

Again, simple to install as it's bundled with Raspbian.  It does like to run under the default "pi" user though, which is a bit of a shame security wise.  All I've done to the configuration is ensure it's listening only on the local interface and enable the adminAuth section such that I'm required to enter a user name and password to access the user interface.
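For illustration, the relevant parts of Node Red's settings.js end up looking something like this; the bcrypt hash is a placeholder (the Node Red documentation describes generating one with the node-red admin hash-pw command):

// ~/.node-red/settings.js (excerpt)
uiHost: "127.0.0.1",  // listen on the local interface only, behind the NGINX reverse proxy
adminAuth: {
    type: "credentials",
    users: [{
        username: "admin",
        password: "$2b$08$REPLACE_WITH_A_BCRYPT_HASH",  // placeholder hash
        permissions: "*"
    }]
},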

NGINX
 
Another simple install due to using the bundled version that comes with Raspbian.  However, this time around there's a lot more configuration to do since I'm using it as a reverse proxy in front of The Lounge and Node Red.  This gives me a few advantages, such as being able to restart NGINX in order to load new SSL certificates without interrupting the underlying services i.e. something like IRC can stay connected even though new certs are loaded.  Both The Lounge and Node Red support SSL in their configuration, but this way I only need to configure certificates in one place and have a single route through which I can access all my home services.  The idea and bulk of the configuration for doing this comes directly from one of the guides available for The Lounge.

server {
    # redirect HTTP traffic to HTTPS
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}

server {
    # SSL configuration
    #
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    ssl_certificate /path/to/your/server.crt;
    ssl_certificate_key /path/to/your/server.key;

    server_name your.server.name.com;
 
    # Add this if you want to do web serving as well
    root /var/www/html;
    index index.html index.htm;
 
    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }
 
    # Configure reverse proxy for The Lounge
    location ^~ /YOUR_PREFERRED_IRC_URL_GOES_HERE/ {
        proxy_pass http://127.0.0.1:9000/;
        proxy_http_version 1.1;
        proxy_set_header Connection "upgrade";
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;

        # by default nginx times out connections in one minute
        proxy_read_timeout 1d;
    }

    # Configure reverse proxy for Node Red
    location ^~ /YOUR_PREFERRED_NODERED_URL_GOES_HERE/ {
        proxy_pass http://127.0.0.1:1880/;
        proxy_http_version 1.1;
        proxy_set_header Connection "upgrade";
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;

        # by default nginx times out connections in one minute
        proxy_read_timeout 1d;
    }
}
 
Let's Encrypt

From Wikipedia: "Let's Encrypt is a non-profit certificate authority run by Internet Security Research Group that provides X.509 certificates for Transport Layer Security encryption at no charge."

The model for using letsencrypt is pretty simple.  They sign your SSL certificates, free of charge, but the certificates expire within 90 days.  Hence, they're encouraging a high turnover of certificates through regular renewals.  This means that realistically you need to automate the process of certificate signing.  To do this I'm using the getssl script, which makes life extremely easy when coupled with a cron job to kick off the script on a regular basis.  I'm running it every day and the script decides whether to replace my existing certificates.  It all sits there quite nicely running in the background and doesn't get in the way at all, restarting NGINX only when a new certificate is put in place.  Because NGINX is decoupled from the services it is proxying, those services aren't interrupted.
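For illustration, the daily run is just a cron entry along these lines; the install path and flags are from memory of the getssl documentation, so treat them as an assumption and check getssl --help on your own install:

# /etc/cron.d/getssl - getssl only renews certificates that are close to expiry
23 5 * * * root /home/pi/getssl/getssl -a -q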

Open Sourcing a NetworkManager VPN Plugin

It's not every day I find myself publishing a new project to open source and even less so when that requires release approval at work.  I hope, over the years, I've written some useful bits and pieces and this time around I was keen to publish my work on the Internet rather than internally within the company.  This requires following due process of course and seeking the relevant approval for the publication to take place.

Fortunately, in the right circumstances, IBM are very amenable to releasing code to open source.  I was convinced that a NetworkManager plugin, added to the existing list of VPN plugins, would not conflict with the business, and that open source approval would therefore be fairly trivial.  Happily, I was correct, and going through the process wasn't too arduous, with a few forms to fill in.  These were, of course, designed much more for bigger releases than the one I planned, so were vastly over-engineered for this particular release, but at least due diligence was applied.

On to the project and the code.  It's not a world-changer but a small VPN plugin for NetworkManager to drive Cisco AnyConnect and made available as NetworkManager-anyconnect on GitHub.  So I now know more than I'd care to mention about the inner workings of NetworkManager VPN plugins.  They're not very well documented (hardly documented at all in fact) so they're quite hard work to produce by looking over existing code in available plugins.  I started off from the OpenVPN plugin which turned out to be a mistake as the code base is vastly bigger than that required for a plugin as simple as the one I wanted to write.  Were I to start again, I would recommend starting from the SSH VPN plugin instead as this is actually very nicely set out and doesn't include a lot of the shared bloat that comes with other plugins that are formally a part of NetworkManager.