
Yet Another New Home Server

This year has seen me doing more in the way of little tech projects at home than I have for a while, perhaps due to the covid lockdowns; if that's the case then I'll take this small positive from an otherwise rubbish situation.  Typically for me, these projects have focused on open source and some IoT.  More on those in separate blog posts when I get around to writing them up.  But for now, I wanted to make some notes on my new home server set-up.

I've had an array of different low-powered home servers over the years that I've previously written about, namely the NSLU2, TinyTuxBox, Joggler and, for the past many years, a simple ReadyNAS box that I specifically bought for its Intel processor as it made compiling different bits and pieces a whole lot easier back in the day.  However, I have recently relegated the ReadyNAS box from home serving duties, keeping it only for its native NAS services, because using it for anything else has become increasingly difficult without updating the entire base OS (which is possible but I'm reluctant to do) due to down-level software libraries like an ancient version of openssl.

In with the new then: I moved away from the Intel architecture, as it's now so much easier to compile for Arm chips, and went with the, wait for it, drum roll, rather obvious choice of a Raspberry Pi 4.  Specifically, a Pi 4 Model B, 4GB.  I've paired it with the official Pi case, power supply and micro HDMI cable, and shoved in an A2 SanDisk Extreme 64GB SDXC card.

And so to the notes; my initial targets for this new box are as follows:

The Lounge

IRC might be a bit old hat but tons of open source projects still use it for their more synchronous communications.  ZNC is the choice of old for staying connected to your IRC channels.  For those not familiar, it acts as a relay to the IRC servers you want to connect to.  Effectively, it connects to those servers as your IRC client and presents your local IRC client with an endpoint through which you can connect.  This allows you to never miss any messages and to see the IRC conversation even when you're not actually online.  Matrix seems to be taking some of the old IRC community's attention, with various projects setting up bridges between Matrix and IRC.  However, the relative newcomer The Lounge shows just how far web technologies and web sockets have come.  It's a darned site (pun intended) easier to install, configure and use than ZNC, so I'm a massive convert and a big fan of the project.

The project is relatively stable in the master branch and doesn't release particularly often, so I've opted for the run-from-source approach to take advantage of all the latest development.  Other than that, I've only made three changes to the default configuration prior to starting up my The Lounge server:
  1. host: "127.0.0.1"
  2. reverseProxy: true
  3. theme: "morning"
As you can see, these are all pretty simple and somewhat trivial changes.  The host setting binds the listener to the localhost interface, making it suitable for use with a reverse proxy and not exposing the service outside of the Pi 4.  The reverseProxy setting tells the server it's expecting to run behind a reverse proxy (the clue is in the name, I guess).  Finally, I've switched to using a dark mode theme rather than the default light mode.  That's it; the remainder of the configuration is all about which IRC servers and channels to connect to, along with the usual IRC bits of registering your nick and logging into the nick server.
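
For reference, the relevant fragment of The Lounge's config.js ends up looking something like the sketch below (everything not shown stays at its default):

module.exports = {
    // bind to localhost only so nothing is exposed beyond the Pi itself
    host: "127.0.0.1",
    // we'll be sitting behind NGINX
    reverseProxy: true,
    // dark mode, please
    theme: "morning",
    // ...all other settings left at their defaults
};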

Mosquitto

This is even simpler to get going than The Lounge because it's bundled with Raspbian, so you can just apt-get install it.  I've created a configuration based on the bundled example config file, changing only:
  1. pid_file (probably just because I'm old fashioned like that)
  2. user (to drop privileges)
  3. listener (to specify a port number)
  4. certfile and keyfile (for SSL)
  5. log_dest (create a specific log file for the broker)
  6. clientid_prefixes (a bit of added security to only allow certain client IDs to connect to the broker)
  7. allow_anonymous (quite an important one!)
  8. password_file (so that connections are authenticated)
Hopefully, that gives me something secure as well as providing me with the broker functionality that I need.
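
Pulling those together, my mosquitto.conf ends up along these lines (the paths, port and client ID prefix below are illustrative placeholders rather than my real values):

# basics
pid_file /var/run/mosquitto/mosquitto.pid
user mosquitto

# TLS listener (certificate paths are placeholders)
listener 8883
certfile /path/to/your/server.crt
keyfile /path/to/your/server.key

# log to a dedicated file for the broker
log_dest file /var/log/mosquitto/mosquitto.log

# only clients whose IDs start with this prefix may connect
clientid_prefixes myhome-

# authenticated connections only
allow_anonymous false
password_file /etc/mosquitto/passwd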

Node Red

Again, simple to install as it's bundled with Raspbian.  It does like to run under the default "pi" user though, which is a bit of a shame security-wise.  All I've done to the configuration is ensure it's listening only on the local interface and enable the adminAuth section so that I'm required to enter a username and password to access the user interface.
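
In settings.js terms that amounts to something like the snippet below (the username and bcrypt hash are placeholders; the hash can be generated with Node-RED's admin hash-pw tool):

// inside module.exports in ~/.node-red/settings.js

// listen on the local interface only, NGINX handles external access
uiHost: "127.0.0.1",

// require a login for the editor UI
adminAuth: {
    type: "credentials",
    users: [{
        username: "admin",
        password: "$2b$08$replace.with.your.own.bcrypt.hash",
        permissions: "*"
    }]
},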

NGINX
 
Another simple install, using the bundled version that comes with Raspbian.  However, this time around there's a lot more configuration to do since I'm using it as a reverse proxy in front of The Lounge and Node Red.  This gives me a few advantages, such as being able to restart NGINX to load new SSL certificates without interrupting the underlying services, i.e. something like IRC can stay connected even while new certs are loaded.  Although both The Lounge and Node Red support SSL in their own configuration, doing it this way means I only need to configure certificates in one place and have a single route through which I can access all my home services.  The idea and bulk of the configuration for doing this comes directly from one of the guides available for The Lounge.

server {
    # redirect HTTP traffic to HTTPS
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}

server {
    # SSL configuration
    #
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    ssl_certificate /path/to/your/server.crt;
    ssl_certificate_key /path/to/your/server.key;

    server_name your.server.name.com;
 
    # Add this if you want to do web serving as well
    root /var/www/html;
    index index.html index.htm;
 
    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }
 
    # Configure reverse proxy for The Lounge
    location ^~ /YOUR_PREFERRED_IRC_URL_GOES_HERE/ {
        proxy_pass http://127.0.0.1:9000/;
        proxy_http_version 1.1;
        proxy_set_header Connection "upgrade";
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;

        # by default nginx times out connections in one minute
        proxy_read_timeout 1d;
    }

    # Configure reverse proxy for Node Red
    location ^~ /YOUR_PREFERRED_NODERED_URL_GOES_HERE/ {
        proxy_pass http://127.0.0.1:1880/;
        proxy_http_version 1.1;
        proxy_set_header Connection "upgrade";
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;

        # by default nginx times out connections in one minute
        proxy_read_timeout 1d;
    }
}
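
One small tip: after changing any of this it's worth checking the configuration parses cleanly before reloading, something along the lines of:

sudo nginx -t
sudo systemctl reload nginx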
 
Let's Encrypt
From Wikipedia: "Let's Encrypt is a non-profit certificate authority run by Internet Security Research Group that provides X.509 certificates for Transport Layer Security encryption at no charge."

The model for using Let's Encrypt is pretty simple.  They sign your SSL certificates free of charge, but the certificates expire after 90 days, so they encourage a high turnover of certificates through regular renewals.  This means that realistically you need to automate the process of certificate signing.  To do this I'm using the getssl script, which makes life extremely easy when coupled with a cron job to kick it off on a regular basis.  I'm running it every day and the script decides whether my existing certificates need replacing.  It all sits there quite nicely running in the background and doesn't get in the way at all, restarting NGINX only when a new certificate is put in place.  Because NGINX is decoupled from the services it is proxying, those services aren't interrupted.
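
For completeness, the cron entry is about as simple as it gets, something along these lines (the path and timing here are illustrative rather than my exact setup):

# check all configured certificates once a day and renew only those that are due
05 03 * * * /home/pi/scripts/getssl -a -q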

Open Sourcing a NetworkManager VPN Plugin

It's not every day I find myself publishing a new project to open source and even less so when that requires release approval at work.  I hope, over the years, I've written some useful bits and pieces and this time around I was keen to publish my work on the Internet rather than internally within the company.  This requires following due process of course and seeking the relevant approval for the publication to take place.

Fortunately, in the right circumstances, IBM are very amenable to releasing code to open source.  I was convinced enough that a NetworkManager plugin to add to the existing list of VPN plugins would not conflict with the business that an open source approval would be fairly trivial.  Happily, I was correct, and going through the process wasn't too arduous, with a few forms to fill in.  These were, of course, designed much more for bigger releases than I planned, so vastly over-engineered for this particular release, but at least due diligence was applied.

On to the project and the code.  It's not a world-changer, just a small VPN plugin for NetworkManager to drive Cisco AnyConnect, made available as NetworkManager-anyconnect on GitHub.  So I now know more than I'd care to mention about the inner workings of NetworkManager VPN plugins.  They're not very well documented (hardly documented at all in fact), so producing one is quite hard work, done mostly by looking over the existing code of available plugins.  I started off from the OpenVPN plugin, which turned out to be a mistake as its code base is vastly bigger than that required for a plugin as simple as the one I wanted to write.  Were I to start again, I would recommend starting from the SSH VPN plugin instead, as this is actually very nicely set out and doesn't include a lot of the shared bloat that comes with other plugins that are formally a part of NetworkManager.


Installing Tensorflow GPU on Fedora Linux

Following on from my previous notes on building Tensorflow for a GPU on Fedora, I find myself back at it again.  I recently upgraded my GPU at home and time has moved on too so this is my current set of notes for what I'm doing with Tensorflow on Fedora.  This method, however, differs from my previous notes in as much as I'm using the pre-built Tensorflow rather than building my own.  I've found that Tensorflow is so brittle during the build process it's much easier to work with pre-built binaries and set up my system to match their build.

In my previous blog post I benchmarked the CPU versus the GPU using the Keras MNIST CNN example, so I thought it would be interesting to offer the same for this new install on my home machine.  The results are:
  • 12 minutes and 14 seconds on my CPU
  • 1 minute and 14 seconds on my GPU
That's about 9.9 times as fast on my GPU as on my CPU!

Some info on my machine and config:
  • Custom Built Home PC
  • Intel Core i5-3570K CPU @ 3.40GHz (4 cores)
  • 16GB RAM
  • NVidia GeForce GTX 1660 (CUDA Compute Capability 7.5)
  • Fedora 30 Workstation running kernel 5.2.9-200.fc30.x86_64
Background Information for NVidia Drivers
Previously, I've always used the Negativo17 repository for all my NVidia driver and CUDA needs.  However, the software versions available there are too up-to-date to allow Tensorflow GPU to be installed in a way that works.  That repository provides CUDA 10.1, whereas Tensorflow, currently at version 1.14, only supports CUDA 10.0.  So we must use another source for the NVidia software that provides back-level versions.  Fortunately, there is an official NVidia repository providing drivers and CUDA for Linux, so let's use that, since it also works quite nicely with the RPM Fusion repositories.  Hence, this method relies purely on RPM Fusion and the official NVidia repository and does not require or use the Negativo17 repository (although it would be possible to do so).

Install Required NVidia Driver
The RPM Fusion NVidia instructions can be used here for more detail, but in brief simply install the display drivers:
  • dnf install xorg-x11-drv-nvidia akmod-nvidia xorg-x11-drv-nvidia-cuda
There are some other bits you might want from this repository as well, such as:
  • dnf install vdpauinfo libva-vdpau-driver libva-utils nvidia-modprobe
Wait for the driver to build, then reboot to get things up and running.

    Install Required NVidia CUDA and Machine Learning Libraries
    This step relies on using the official nvidia repositories with a little more information available in the RPM Fusion CUDA instructions.

    First of all, add a new yum configuration file.  Copy the following to /etc/yum.repos.d/nvidia.repo:

    [nvidia-cuda]
    name=nvidia-cuda
    baseurl=http://developer.download.nvidia.com/compute/cuda/repos/fedora27/x86_64/
    enabled=1
    gpgcheck=1
    gpgkey=http://developer.download.nvidia.com/compute/cuda/repos/fedora27/x86_64/7fa2af80.pub
    exclude=akmod-nvidia*,kmod-nvidia*,*nvidia*,nvidia-*,cuda-nvidia-kmod-common,dkms-nvidia,nvidia-libXNVCtrl

    [nvidia-machine-learning]
    name=nvidia-machine-learning
    baseurl=http://developer.download.nvidia.com/compute/machine-learning/repos/rhel7/x86_64/
    enabled=1
    gpgcheck=1
    gpgkey=http://developer.download.nvidia.com/compute/machine-learning/repos/rhel7/x86_64/7fa2af80.pub
    exclude=libcudnn7*.cuda10.1,libnccl*.cuda10.1



    Note that the configuration above deliberately targets the fedora27 repository from NVidia.  This is because that is where we can find CUDA 10.0 compatible libraries, rather than the CUDA 10.1 libraries found in later repositories.  So the configuration above is likely to need to change over time, but essentially the message here is that we can match the version of CUDA required by targeting the appropriate repository from NVidia.  These libraries will be binary compatible with future versions of Fedora, so this approach should be safe for some time yet.



    With the above configuration in place we can now install CUDA 10.0 and the machine learning libraries required for Tensorflow GPU support, and all of the libraries get installed in the places that Tensorflow expects.

    To install, run:
    • dnf install cuda libcudnn7 libnccl

    Install Tensorflow GPU
    The final piece of the puzzle is to install Tensorflow GPU which is now as easy as:
    • pip3 install tensorflow-gpu
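
    A quick sanity check that the GPU is actually being picked up can be done with a one-liner along these lines (TensorFlow 1.x API):
    • python3 -c 'import tensorflow as tf; print(tf.test.is_gpu_available())'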

    Migrating to Gnome 3

    I'm a massive laggard in the move to a Gnome 3 desktop.  Colleagues and friends have been using it for years and to be honest, I've never been comfortable using it.  But, that changed recently and I've actually grown to quite like the new desktop environment I find myself working in on a daily basis.  So I've made a full-blooded leap to a modern desktop.

    Way back when I started using Linux as a serious desktop alternative to Windows (in about 2000-2001 ish) I was running Gnome.  I migrated away from that to KDE 3, and switched to Gnome 2 when KDE 4 was released as I didn't like the changes they had made and the new KDE 4 desktop was horribly buggy and unstable in my experience.  (Maybe there's something about brand new desktops and my not taking a liking to them?)  When Gnome released Gnome 3 I absolutely hated the user experience and used XFCE for a while before settling on the MATE desktop, which I've been using for quite a few years now.

    Trying out Gnome 3 again recently, I was pleasantly surprised to find the desktop has progressed significantly since those first few releases I couldn't get along with.  But it's the addition of extensions that finally sealed my move, as I've found that with just the right mix I can craft a desktop that gives me a nice balance between the new world and the old, much more familiar, world.

    (final update 24th April 2024) Note: there is an updated list of the extensions I use in my newer Gnome Extensions blog post.

    So, the real purpose of this post is to share the extensions I've discovered.  I'll document these below in brief but would also be interested to find others that are useful:

    Applications Menu (updated 5th Feb 2021 - no longer in use, see below)
    This was right at the very top of my list of requirements for Gnome 3 usability.  It simply puts an old school applications menu in the top bar, a bit like your old fashioned Windows start menu or similar from other desktops.  I am, however, finding I use this very little now as the search hot-key in Gnome 3 does seem to be a quicker way of finding and starting programs.

    Frippery Bottom Panel
    This is another of my top requirements for Gnome 3 usability.  It gives you a panel at the bottom of the screen (D'uh) that allows you to switch easily between applications you have running.  It also has a small workspace switcher which is why I like the Frippery version of this type of extension versus some of the others that don't have a workspace switcher capability.

    Top Icons Plus (update 5th Feb 2021 - no longer in use, replaced with AppIndicator, see below)
    Either the Top Icons or the Top Icons Plus extension that I'm using here seem so ubiquitous for Gnome 3 users I wonder why on earth they're not a default option, aside from the fact the Gnome 3 developers do seem to retain their keen vision on what a modern desktop should look like and "old" system tray icons are not part of that outlook.  This extension, if you're not already using it, allows you to see system tray icons such as the ones used by Virt Manager or Slack, for example.

    GPaste (update 4th Nov 2021 - no longer in use, too difficult to configure and doesn't always work the way I expected, replaced with Clipboard Indicator, updated again 8th Mar 2023, replaced with Pano Clipboard Manager)
    A clipboard management system that has a nice integration with the Gnome 3 panel.  I was previously using apps like ClipIt or Parcelite that do pretty much the same job.

    Lock Screen (update 5th Feb 2021 - no longer in use, I wasn't using this as I just hit Win+L to lock)
    This adds a button to the gnome panel that, when clicked, locks your desktop.  This would be the same as pressing Win+L on the keyboard.  I was in the habit of using a graphical button on MATE so having this back in Gnome 3 gives me the experience I'm used to.

    No TopLeft Hot Corner (update 5th Feb 2021 - no longer in use, Gnome Tweaks has a toggle for "Activities Overview Hot Corner" in the Top Bar options)
    I find the Gnome 3 facility to show activities when you mouse to the top left corner really annoying and it detracts from my productivity when it happens automatically.  Fortunately, this extension disables that feature.  It does make it more awkward to reach activities with the mouse (I'd have to click the applications menu first then select "Activities Overview") but I more or less always use the Windows key anyway.

    Places Status Indicator (update 8th Mar 2023 - no longer in use, I wasn't using it so have stopped installing it)
    This adds the old Gnome 2 style places menu to the Gnome 3 panel.  I find I flip between using this menu to start navigating directories and just starting Gnome Files and going from there.  Any which way, having this menu back on my desktop just makes it feel a bit more familiar and comfortable.

    Remove Dropdown Arrows (update 5th Feb 2021 - no longer in use, Gnome seems to have gotten rid of most of these by default)
    The Gnome 3 panel insists on having an arrow indicator to show items that pull down a menu when clicked.  These menus seem obvious to me and the arrows look rubbish and take up space, so this extension gets rid of them completely.  Happy days.

    Suspend Button (update 5th Feb 2021 - no longer in use, Gnome now has a built-in suspend button)
    I run from a laptop most of the time and use the suspend feature every time I "shut down" my laptop.  Bizarrely, there's no graphical facility (that I can find) in Gnome to suspend my machine.  This extension adds a nice button to the status menu that immediately suspends my machine.  Perfect.

    System Monitor
    Adds little graphs to the Gnome panel that show resource usage.  The extension is pretty configurable but I have it showing CPU, memory and network utilisation.  This allows me to keep an easy eye on my machine and how loaded it is at the current time.  Extremely useful for spotting those occasional rogue apps that start eating an entire core of my CPU.

    Media Keys (update 5th Feb 2021 - no longer in use, Gnome has media controls built-in)
    I haven't decided how useful this one is going to be yet and it's currently turned off.  However, when listening to music through services like Amazon Music from a web browser it's nice to be able to control the audio without having to revert back to the browser every time.  This extension simply adds a few buttons to the Gnome panel to control your media.  Handy if you haven't got the physical buttons on your keyboard too.

    Do Not Disturb Button (update 5th Feb 2021 - no longer in use, Gnome has the button built-in)
    I generally leave this extension disabled but it's useful to have installed and running when presenting or screen sharing.  It saves any embarrassing situations of people being able to read your notifications while they're looking at your screen.  Basically, it simply stops notifications being displayed; they're still received, so you can go and read them later.


    Blog edited with more extensions added on 28th August 2019:
    Frippery Panel Favourites
    I'm not quite sure how I missed this from my original list as it's an extension I've been using more or less since day one in Gnome 3.  It takes your favourites menu and adds it as a set of icons to the top of the Gnome Panel.  Makes for extra quick access to your commonly used apps.

    Some more extensions have been brought to my attention since writing the list above.  I've tried out all of the ones mentioned to me but these additions (below) are the ones that seem to have stuck.

    Caffeine (update 5th Feb 2021 - no longer in use, using Gnome's built-in do not disturb button instead)
    This extension sits fairly well alongside the Do Not Disturb Button extension in my original list.  This one simply disables the screen saver and auto suspend.  Hence, in conjunction with Do Not Disturb, it makes a good presentation or screen sharing environment.

    GTile
    This is a genius little extension that allows you to easily resize your windows in order to tile them across your display.  I love the side-snapping in Gnome 3 that allows you to size a window to half the screen size.  In my older desktops I also had corner snapping to size a window to a quarter of the screen; Gnome 3 doesn't have this by default.  However, GTile adds an icon to your Gnome Panel that, when clicked, allows you to size to any area of your screen across a pre-defined grid - you can even change the grid size.  Brilliant for usability with lots of on-screen windows at the same time.


    Blog edited to update the list of extensions I'm using on 5th February 2021:

    Applications Menu
    Just a note to say that after using Gnome 3 for quite some time now, I rarely (if ever) use the Applications Menu any longer.  I tend to start applications either by pressing the Gnome hot key (Windows Key by default) and type in the search box, or by clicking on one of the favourites in the panel via the "Panel Favourites" extension.

    While the free desktop standard continues to specify the classic "icon tray" that was supported by extensions such as TopIcons, in reality few of the modern desktops (I'm referring to Gnome and KDE) support them.  The modern take on the tray icon is an AppIndicator icon and many modern applications are written to use this standard (and perhaps fall back to a tray icon).

    Blog edited to update the list of extensions I'm using on 8th March 2023:
    It's interesting to see how much use of extensions has changed over time.  I'm using considerably fewer now than I was when I first started using Gnome 3.  I put this down to two things: (1) Gnome is better at operating the way users expect, removing the need for extensions such as the Suspend Button in the list above; and (2) I've become more institutionalised to the way that Gnome works: I'm much more familiar with it and have grown to like much of the way it works.

    Helping Disabled Passengers Travel with Confidence

    This is a reproduction of a blog post originally made on the IBM Emerging Technology blog at https://www.ibm.com/blogs/emerging-technology/helping-disabled-passengers-travel-with-confidence/ (link now dead).  Original article published on 3rd April 2019, re-publication to this blog was on 11th June 2021.

    Introduction

    Disabled or disadvantaged passengers have every right to fear travelling, with multiple high-profile cases of unfair and appalling levels of service evident in the media, particularly for rail passengers. Our team have been tackling this problem recently in collaboration with the IBM Travel and Transport team, with 50% funding through Innovate UK’s call for Accelerating Innovation in Rail (round 4). The project was the brainchild of Sam Hopkins and won the internal IBM Hybrid Hackathon event in 2016.

    The Problem

     The rail industry in the UK is a complex one and travelling on our railways can be a difficult experience for anybody. If you’re disabled then the experience can be difficult at best and demeaning or impossible at worst. Depending on the type and severity of their disability, disabled travellers need to plan for eventualities such as whether a station is fully accessible, whether and when staff are available to help, when and where they might be able to use a toilet, how and where to board and alight the train, the list is as long as the wide range of issues people may have. Today, assistance is on offer, but this is complex due to the nature of our fragmented system and passengers (or their carers) may need to contact multiple companies 24 hours in advance of travel when planning their journey. This process of organising their journey is extremely time consuming and can be as much of a frustration as the journey itself.

    The Challenges 

    The UK rail industry is highly fragmented, with different parts of the operation split among different companies. For example, Network Rail oversee the infrastructure such as maintaining tracks and signalling; Rolling Stock Operating Companies (ROSCOs) own the trains; and Train Operating Companies (TOCs) operate the trains and run passenger services in different regions around the UK. In order for a disabled passenger to make a journey, the Train Operating Companies are legally obliged to provide support both off the train at the station and on board the train during the journey. However, a passenger may be travelling through a large number of regions and across service providers, and hence needs to understand which company will be responsible for their journey at any given stage. Understanding this takes quite a bit of research and can be difficult; it also requires that everything they need on their journey lines up across the network they're travelling on, and this is certainly not guaranteed today.

    Our Solution

    We have created a mobile application for disabled passengers, supported by an enterprise grade back end architecture. During the project we established a principle that the disabled passenger should always be connected to a member of staff. This connection provides continuous support available whenever required by the passenger and is designed to help alleviate the anxiety felt by this group of passengers when travelling. The connection comes in the form of a chat application similar to how passengers may already use common applications such as WhatsApp or Facebook Messenger. The passenger can type anything they want to the member of staff they're connected to and get a human-level response to their query from staff local to their situation. During their journey, the passenger is handed over between staff members such that they're always connected with a local staff member who can offer physical assistance if required.

    A typical journey story starts with the user programming their journey in a similar way to how they might on existing rail applications. They search for their departure and destination stations and select a time and route they wish to travel. Some time later, they embark upon their journey. When they arrive at the station, they either press a button to start the process or the app uses geolocation to detect their arrival at the station. They are connected to a member of the station staff who can answer questions and provide physical assistance to board their first train. Once boarded, the station staff hand the passenger over to a member of staff on board the train. This hand-over process is transparent to the user and they will simply see a new greeting from the next member of staff in the chain once they have been connected. The process continues with the on train staff handing over to station staff at the destination station in order to provide assistance alighting the train. This process can repeat as many times as necessary until the final destination is reached.

    Technical Implementation 

    Our solution has a strong server-side implementation with an MVP level front end to exercise the APIs provided.

    We base the solution entirely within the IBM Cloud, using a Cloudant instance for database hosting, App ID for authentication services, Message Sight for highly reliable messaging built on top of the MQTT protocol, and NodeJS for writing our APIs on top of the Express framework and where necessary calling out to other APIs such as the Transport API. Security and privacy have been key concerns in the design to ensure chat messages are securely delivered only to their intended recipient. The application itself is authenticated at each of its routes and all of the server-side APIs also use the same App ID authentication. Novel to this solution is the capability of third-party authentication from Message Sight to App ID, a first-of-a-kind implementation.

    The front end application is currently written in VueJS. This has been designed with accessibility in mind given the intended audience. It is a simple interface conforming to web accessibility standards that is compatible with screen readers. This component was never intended to be the final solution and we see another interface being developed in the future that is likely to build upon the current one. For example, a fully native solution or perhaps one written to be more native-like using technologies such as Native Script.

    Moving Forward 

    The solution is complete and ready, subject to customisation, to be adopted by the UK train operating companies. However, we do see the need for an improved user interface to be developed before the solution is truly ready to be used in the wild.

    Further to the current solution for the rail industry, we recognise the issues faced by disabled travellers don’t stop with the UK rail system. Similar issues are faced when using our road network, taxi services, ferries and of course our airports. The principle established within this project of alleviating anxiety through the connection to a supporting human member of staff is likely to remain. We hope this is extended into the other areas we’re considering with the next most obvious choice to tackle being the airport use case.

    Self-Signing SSL/TLS Certificates

    Things have changed a bit since I last looked into setting up a Certificate Authority (CA) and using that to self-sign my own certificates, not least that the use of the Common Name (CN) field appears to have changed. Chrome in particular seems to insist on the use of the Subject Alternative Names (SAN) extension rather than (or in addition to) using the CN field. So these are my notes on how to set up your own CA and use that to sign certificates. I'm conscious this is bound to go out of date so at the time of writing I'm working with Firefox 64, Chrome 71 and OpenSSL 1.1.1.

    Setup
    First of all, create a config file along the lines of the following and call it anything you like but for these notes I'm going to call it ssl.conf.  Note, if you want to you can start with a different template or look at your own openssl.cnf file which on Linux is commonly found at /etc/pki/tls/openssl.cnf.

    [ req ]
    default_bits       = 4096
    distinguished_name = req_distinguished_name
    req_extensions     = req_ext
    
    [ req_distinguished_name ]
    countryName                 = Country Name (2 letter code)
    countryName_default         = GB
    stateOrProvinceName         = State or Province Name (full name)
    stateOrProvinceName_default = England
    localityName                = Locality Name (eg, city)
    localityName_default        = MyCity
    organizationName            = Organization Name (eg, company)
    organizationName_default    = MyOrg
    commonName                  = Common Name (e.g. server FQDN or YOUR name)
    commonName_max              = 64
    commonName_default          = localhost
    
    [ req_ext ]
    subjectAltName = @alt_names
    
    [alt_names]
    DNS.1 = localhost
    

    You can change any of this template and indeed you'll need to change the common name for the certificates you're generating. The CN can be changed either on the command line during certificate creation or by changing the default in ssl.conf. You will also need to change the list of names under the "alt_names" section; this list should contain one line for each host name your machine might be known as. The list starts at DNS.1 for the first entry, then you can add DNS.2 for the second entry and so on.

    NOTE: the specification and a lot of the documents available in this space indicate that an IP address can be used in the CN.  My testing seems to indicate that while this is the case, certificates produced in this way will be rejected by modern browsers.  Hence, you should list only hostnames as the CN, but IP addresses still appear to be acceptable as "alt_names".
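
    As an example, an alt_names section covering a couple of hostnames and an IP address might look like this (the names here are purely illustrative):

    [alt_names]
    DNS.1 = localhost
    DNS.2 = myserver.example.com
    IP.1  = 192.168.1.10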

    Create a Certificate Authority
    You'll need a certificate and key file to act as your own CA:

    openssl genrsa -out RootCA.key 4096
    openssl req -x509 -new -nodes -key RootCA.key -sha256 -days 3650 -out RootCA.pem -config ssl.conf

    You can inspect the certificate with:
    openssl x509 -in RootCA.pem -text -noout

    Create a Certificate Signing Request (CSR)
    Now you have a CA you can create a CSR that can be used with your CA certificate to generate a client certificate:

    openssl genrsa -out server.key 4096
    openssl req -new -key server.key -out server.csr -config ssl.conf

    You can inspect the certificate with:
    openssl req -text -noout -verify -in server.csr

    This time it's really important to ensure your host names are listed under the "X509v3 Subject Alternative Name" section of the certificate.

    Generate a Signed Certificate
    You can now use the CSR to create a signed certificate that can be used to serve up content over a secure connection:

    openssl x509 -req -in server.csr -CA RootCA.pem -CAkey RootCA.key -CAcreateserial -out server.pem -days 3650 -sha256 -extensions req_ext -extfile ssl.conf

    Note: if you want to create a different format of certificate here you can simply replace server.pem in the above command with something like server.crt, for example.

    You can inspect the certificate with:
    openssl x509 -in server.pem -text -noout

    Again, it's really important to ensure your host names are listed under the "X509v3 Subject Alternative Name" section of the certificate.

    Use the Certificate Server Side
    You can now put your server.pem and server.key files to work and serve up content over a secure connection.  There are too many ways to do that to list here, but as a couple of examples the pair could be used with a web server to serve HTTPS or with a websockets server to serve a secure socket connection.
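
    If you just want a quick way to check the pair works before wiring them into a real server, openssl's built-in test server is handy (the port is arbitrary):

    openssl s_server -cert server.pem -key server.key -accept 8443 -www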

    If you want to inspect the certificate that's being used on the server, replace <hostname> and <port> in the command below:

    openssl s_client -connect <hostname>:<port> | openssl x509 -noout -text

    Use the Certificate Client Side
    My use case here is with a web browser and so you'll want to import your RootCA.pem into your browser environment.  There are two main ways of achieving this; you can either:
    1. Import directly to the browser
    2. Import to the key store on your operating system
    It's quicker and easier to import directly to the browser but this will of course only cover that one browser on your system whereas if you use the operating system method then any application that consults the OS for certificates will see your CA certificate.

    For Firefox, go to "View Certificates" in the preferences; click the "Authorities" tab and then the "Import" button; select your RootCA.pem file and click OK.

    For Chrome, go to "Manage Certificates" in the settings; click the "Authorities" tab and then the "Import" button; select your RootCA.pem file; click the check boxes to trust the certificate and click OK.

    Building Tensorflow GPU on Fedora Linux

    (Update, 13th September 2019: I have written another post on how to install, rather than build, Tensorflow GPU for Fedora, which uses a different and much simpler method.  See Installing Tensorflow GPU on Fedora Linux.)

    First off, let's say that there are easy ways of configuring Tensorflow for GPU usage, such as using one of the docker images.  However, I'm a bit old school about some things and, having always built from source, I've recently got Tensorflow going on my machine using my GPU that way.  Tensorflow CPU support is quite easy to do and generally works quite nicely using the pip install method.  GPU support, I've always found, is quite a bit more difficult as there are a whole bunch of things that need to be at just the right level for everything to work, i.e. it's quite brittle!

    What follows are my notes (it's in the name of the blog) for how to build Tensorflow from scratch to enable GPU support, and I do this on Fedora Linux.  If you want to know why it's worth bothering to go to this effort, I've tested the Keras MNIST CNN example as a benchmark.  It takes:
    • 11 minutes 7 seconds on my CPU
    • 2 minutes 55 seconds on my GPU
    That's about 3.8 times as fast on my GPU as on my CPU, so for large jobs this will be huge!

    Some info on my machine and config:
    • Lenovo P50 Laptop
    • Intel Core i7-6820HQ CPU @ 2.70GHz (4 core with hyper threading)
    • 32GB RAM
    • Nvidia Quadro M1000M (CUDA compute capability 5.0)
    • Fedora 28 running kernel 4.18.18-200.fc28.x86_64
    Install Required Nvidia RPMs
    You need to get everything Nvidia and CUDA installed on your machine first.   I quite like the Negativo17 repository for Nvidia on Fedora Linux and so I use this but you could also go with RPM Fusion or even download everything directly from Nvidia.  For me, right now, I have this little lot installed:
    cuda-9.2.148.1-2.fc28.x86_64
    cuda-cli-tools-9.2.148.1-2.fc28.x86_64
    cuda-cublas-9.2.148.1-2.fc28.x86_64
    cuda-cublas-devel-9.2.148.1-2.fc28.x86_64
    cuda-cudart-9.2.148.1-2.fc28.x86_64
    cuda-cudart-devel-9.2.148.1-2.fc28.x86_64
    cuda-cudnn-7.2.1.38-1.fc28.x86_64
    cuda-cudnn-devel-7.2.1.38-1.fc28.x86_64
    cuda-cufft-9.2.148.1-2.fc28.x86_64
    cuda-cufft-devel-9.2.148.1-2.fc28.x86_64
    cuda-cupti-9.2.148.1-2.fc28.x86_64
    cuda-cupti-devel-9.2.148.1-2.fc28.x86_64
    cuda-curand-9.2.148.1-2.fc28.x86_64
    cuda-curand-devel-9.2.148.1-2.fc28.x86_64
    cuda-cusolver-9.2.148.1-2.fc28.x86_64
    cuda-cusolver-devel-9.2.148.1-2.fc28.x86_64
    cuda-cusparse-9.2.148.1-2.fc28.x86_64
    cuda-cusparse-devel-9.2.148.1-2.fc28.x86_64
    cuda-devel-9.2.148.1-2.fc28.x86_64
    cuda-docs-9.2.148.1-2.fc28.noarch
    cuda-gcc-7.3.0-1.fc28.x86_64
    cuda-gcc-c++-7.3.0-1.fc28.x86_64
    cuda-gcc-gfortran-7.3.0-1.fc28.x86_64
    cuda-libs-9.2.148.1-2.fc28.x86_64
    cuda-npp-9.2.148.1-2.fc28.x86_64
    cuda-npp-devel-9.2.148.1-2.fc28.x86_64
    cuda-nvgraph-9.2.148.1-2.fc28.x86_64
    cuda-nvgraph-devel-9.2.148.1-2.fc28.x86_64
    cuda-nvml-devel-9.2.148.1-2.fc28.x86_64
    cuda-nvrtc-9.2.148.1-2.fc28.x86_64
    cuda-nvrtc-devel-9.2.148.1-2.fc28.x86_64
    cuda-nvtx-9.2.148.1-2.fc28.x86_64
    cuda-nvtx-devel-9.2.148.1-2.fc28.x86_64
    nvidia-driver-cuda-libs-410.73-4.fc28.x86_64

    You might wonder about some of the above, particularly why you might need a back-level version of GCC.  When Fedora 28 has a quite capable GCC version 8, why on earth would you want version 7?  The answer lies in my comment about things being difficult or brittle: quite simply, CUDA doesn't yet support GCC 8, so you do need a back-level compiler for this.

    Install NVidia NCCL
    This library isn't available through an RPM installation or the Negativo17 repository and so you must:
    • Go to the Nvidia NCCL home page 
    • Click the link to download NCCL (requires an Nvidia developer login account)
    • Agree to the Terms and Conditions
    • Download the NCCL zipped tar file that matches your CUDA version (9.2 for this blog post)

    At the time of writing the file required is nccl_2.3.7-1+cuda9.2_x86_64.txz

    I simply untar this file into /usr/local and create a symbolic link as follows:
    • cd /usr/local
    • sudo tar -xf /path/to/file/nccl_2.3.7-1+cuda9.2_x86_64.txz
    • sudo ln -s nccl_2.3.7-1+cuda9.2_x86_64 nccl   (link the extracted directory rather than the tarball)


    Install the Bazel Build Tool
    You're going to need a build tool called Bazel, which isn't directly available in the Fedora repositories (that I know of at least), but fortunately there's a version in a copr repository you can use; as documented there, run the following commands:
    •  dnf copr enable vbatts/bazel
    •  dnf install bazel
    Get a Copy of Tensorflow Source
    For this it's just as easy to use git as it is anything else.  You can directly clone the 1.12 release of Tensorflow into a new directory by running:
    • git clone --single-branch -b r1.12 https://github.com/tensorflow/tensorflow tensorflow-r1.12
    • cd tensorflow-r1.12
    Simply replace r1.12 in the above commands if you want to use a different Tensorflow release.

    Run the Tensorflow Configure Script
    This step is actually quite simple but you'll need the answers to some questions to hand, simply run:
    • ./configure
    I accept all the default options with the exception of:
    • "location of python" set to /usr/bin/python3 since Fedora still uses Python 2.7 as the default version at /usr/bin/python
    • "build TensorFlow with CUDA support" set to Yes
    • "CUDA SDK version" set to 9.2 (this value should match the cuda version you have installed and at the time of writing 9.2 is the current version from the Negativo17 repository)
    • "location where CUDA 9.2 toolkit is installed" set to /usr
    • "cuDNN version" set to 7.2 (similar to the cuda version above, this value should match the cuda-cudnn package version and 7.2 is the current version from the Negativo17 repository)
    • "NCCL version" set to 2.3
    • "location where NCCL 2 library is installed" set to /usr/local/nccl
    • "Cuda compute capabilities you want to build with" set to 5.0 (but this value should match the CUDA compute capability of the GPU in the machine you're building for)
    • "which gcc" set to /usr/bin/cuda-gcc (to use the back level GCC version 7)


    Fix Bazel Config
    The above config command writes a file but the location isn't compatible with the latest version of Bazel.  Presumably this issue will be fixed at some point in the future; it's not an issue with Bazel 0.18 and below as far as I'm aware, but it has just become one on 0.19.  Simply copy the config to the correct place:
    • cat tools/bazel.rc >> .tf_configure.bazelrc
    Build Tensorflow with GPU Support
    This took around an hour to complete on my machine:
    • bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
    • bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
    The first step is the long one for the build; the second simply builds the python wheel file.

    Install Tensorflow with GPU Support
    You've got your wheel file so simply install and enjoy:
    • pip3 install tensorflow-1.12.0-cp36-cp36m-linux_x86_64.whl 
    Run Some Code
    The first time I attempted to run some code to test I got an error:
    • failed call to cuInit: CUDA_ERROR_UNKNOWN
    This can be solved by making sure you have the nvidia-modprobe package installed.  Alternatively, you can use the little script below.

    This seems to be some sort of permissions issue: running the following simple script, which outputs the GPUs available on my machine, once as root fixed the above error for me.  That is, put the following into a script and run that script as root; after that, any time you want to run code as an unprivileged user the issue is gone and the code will work:
    from keras import backend as K
    K.tensorflow_backend._get_available_gpus()

    If the above works then you can try out the Keras MNIST CNN example code.

    VueJS Example for IBM App ID

    I was recently working on a project in VueJS that needed an authorisation layer added to it.  It turns out there aren't any existing examples of how to do this anywhere, unusually not even on Stack Overflow.  So I set about writing one and thought I would share it.  My work was based upon some other useful examples and information, particularly a blog post from the IBM Cloud blog.

    Before I go any further, the code samples are available and documented on GitHub as follows:

    1. IBM App ID API Server
    2. App ID VueJS Client

    The code is deliberately split into two such that:
    1. the API Server is used to demonstrate how to secure an API on the server side.  This is done with the WebAppStrategy of App ID which is simply an implementation of a strategy package for passportjs.  The code here isn't anything particularly new over existing examples you can find on the web but it's necessary in order to fully demonstrate the capabilities of the client code.
    2. the VueJS Client is used to demonstrate two things:
      1. how to secure a VueJS route for which I can currently find no example implementations on the web
      2. how to call an API that has been secured by App ID by passing credentials through from the client application to the API server
    The API Server should be relatively trivial to get up and running as it's a standard NodeJS API implementation using Express.  If you refer to the WebAppStrategy and the blog post I mention above then you'll see the sample code I've come up with is broadly the same i.e. an amalgamation of the two.
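
    By way of a flavour, the heart of the server-side protection boils down to something like the rough sketch below rather than the actual sample code (this assumes the ibmcloud-appid npm package and uses placeholder credentials; see the repository above for the full, working version):

    const express = require('express');
    const session = require('express-session');
    const passport = require('passport');
    const { WebAppStrategy } = require('ibmcloud-appid');

    const app = express();
    app.use(session({ secret: 'change-me', resave: true, saveUninitialized: true }));
    app.use(passport.initialize());
    app.use(passport.session());
    passport.serializeUser((user, cb) => cb(null, user));
    passport.deserializeUser((user, cb) => cb(null, user));

    // the values below are placeholders taken from your App ID service credentials
    passport.use(new WebAppStrategy({
      tenantId: '<tenant-id>',
      clientId: '<client-id>',
      secret: '<secret>',
      oauthServerUrl: '<oauth-server-url>',
      redirectUri: 'http://localhost:3000/auth/callback'
    }));

    // anything under /protected requires an authenticated App ID session
    app.use('/protected', passport.authenticate(WebAppStrategy.STRATEGY_NAME));
    app.get('/protected/get-some-info', (req, res) => res.json({ hello: 'world' }));

    app.listen(3000);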

    The VueJS Client code can be simple to get up and running as well but it's probably more important to understand how it was created such that you can apply the same principles in your own application(s).  For this then, the explanation is a little longer...

    Start by running the VueJS command line interface (cli) to create a bare project and, for the sample to make sense, you will need to add the VueX and Router components using the tool:
    vue create vue-client
    Then understand the 3 modifications you need to make in order to have a working set of authenticated routes.

    1. A store for state. 
    It doesn't really matter how you achieve this in VueJS; you can use any form of local state storage.  The example code I have come up with uses VueX and a modification to the store.js code you get from the client created above.  The idea is that the client application can cache whether the user has already authenticated.  If they have not, then the client must request authentication via the server.  If they have, then all the credentials required for making an authenticated call to a server-side API are already available in the browser.  Essentially, this is a speed-up mechanism that stops the client from requesting credentials on each API call, since the session store for the authentication actually lives on the server side when using App ID.
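
    To make that concrete, the cached state amounts to little more than this sketch of store.js (the field names match those used by the navigation guard further down):

    import Vue from 'vue'
    import Vuex from 'vuex'

    Vue.use(Vuex)

    export default new Vuex.Store({
      state: {
        // cached view of the server-side session: has the user logged in yet?
        user: { logged: false }
      },
      mutations: {
        // called once the server confirms the user is authenticated
        setUser (state, user) {
          state.user = user
        }
      }
    })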

    2. A new VueJS Component
    This is the component whose route is to be protected via authentication.  In the case of the example code below the standard vue cli "About" component has been used and modified slightly to include an authenticated call to the server API.  The thing to note here is that the credentials from the client side must be sent over to the server with each API call.  Using the fetch API as per the below to implement your GET request means you have to add the credentials: 'include' parameter.

    <template>
      <div class="about">
        <h1>This is a protected page</h1>
        <h2>hello: {{ hello }}</h2>
      </div>
    </template>
    
    <script>
    export default {
      data: function () {
        return {
          hello: undefined
        }
      },
      computed: {
        user () {
          return this.$store.state.user
        }
      },
      methods: {
        getProtectedAPI () {
          fetch('http://localhost:3000/protected/get-some-info',{
                credentials: 'include',
              }).then(res => res.text())
              .then(body => {
                console.dir(body)
                this.hello = JSON.parse(body).hello
              })
        },
      },
      created() {
        this.getProtectedAPI()
      }
    } 
    </script>

    3. A VueJS Navigation Guard
    You need to write a function that will be added as a VueJS middleware upon each route change.  The middleware is inserted automatically by the VueJS route code when using the beforeEnter call on a route.  This is known in VueJS as a Navigation Guard.

    function requireAuth(to, from, next) {
      // Testing authentication state of the user
      if (!store.state.user.logged) {
        // Not sure if user is logged in yet, testing their login
        const isLoggedUrl = "http://localhost:3000/auth/logged"
        fetch(isLoggedUrl, {credentials: 'include'}).then(res => res.json()).then(isLogged => {
          if (isLogged.logged) {
            // User is already logged in, storing
            store.commit("setUser", isLogged)
            next()
          } else {
            // User is not logged in, redirecting to App ID
            window.location.href=`http://localhost:3000/auth/login?redirect=${to.fullPath}`
          }
        }).catch(e => {
          // TODO: do something sensible here so the user sees their login has failed
          console.log("Testing user login failed - D'oh!")
        })
      } else {
        // User already logged in
        next()
      }
    }

    The requireAuth function does the following in plain English:

    1. Using the VueJS client side cache, test if the user is already logged in
    2. If they are not, then ask the server if the user is already logged in
      1. If they are not, then redirect them to the server login page
      2. If they are, then cache the information and load the next piece of middleware
    3. If they are, then simply load the next piece of middleware


    Each route you want to protect with the above function must have a beforeEnter: requireAuth parameter specified on the route.  When this is done, VueJS will call the requireAuth function before the component specified by the route is loaded.

    {
      path: '/protected',
      name: 'protected',
      beforeEnter: requireAuth,
      component: Protected
    }

    Note: there are methods by which you don't have to call window.location.href to redirect the user to the login page (which does seem like a bit of a nasty hack).  However, these methods require modification of the webpack configuration and so were kept out of scope of this example for the purposes of keeping it simple.

    South Downs Way Walk

    I've just finished a pretty extraordinary journey, both physically and mentally, here's the story...

    The Short Version - 100 Miles, 4 Days, 2 Charities

    This bit of the blog post is a summary for the TL;DR brigade...
    • Starting in Eastbourne on 31st May
    • More or less 4 marathons in 4 days
    • Finishing in Winchester on 3rd June
    • Raising funds for:
      • The National Eczema Society
      • Parkinson's UK
    • You can still donate (please do):
    • We raised over £4500 (more than £5500 including gift aid)

    I have made all the pictures available on Flickr in my South Downs Way Walk Album.


    The Longer Version

    My best mate decided he wanted to challenge himself as a consequence of coming to terms with his 40th birthday.  He called a few of us to the pub a few months back.  We settled on the idea of walking the South Downs Way in 4 days for 2 charities.  I was nuts enough to go along with this idea and I'm writing the morning after successfully finishing the entire route with him.

    In the car on the way to get started...
    The people you need to know about in this story are:
    • Matt Wettone - team leader
    • Me (obviously)
    • Andy McGrath - team super hero
    • Pete and Phil - team members
    • Stephen Warwick - honorary team member
    • Tim - Matt's dad, team logistics
    • Linda - Matt's mum, team chef

    All done and in the restaurant at the finish

    The reason for doing this has already been much better explained by Matt than I'll manage to come up with so if you're interested in that then head over to our donation page or I've saved a copy of his text at the bottom of this post.  My personal reasons are fairly simple in as much as I'd find it hard to turn Matt down, doing something for charity is always worthwhile and challenging yourself every now and then has to be worth something as well.

    It turns out that it certainly was a challenge both physically and mentally.  I'll try to be brief below with a summary of each day...

    Day 1 - Eastbourne to Housedene Farm

    Vital statistics: 27.4 miles in 8 hours 26 minutes moving time (3.25 MPH, moving average), about 58,000 steps and 3968 feet of elevation gain.


    We planned Day 1 to be 24 miles long but ended up doing 27.4 miles.  This was due to starting further back than the documented start of the South Downs Way, we started at Eastbourne Pier, another 1.5 miles along the coast.  Also, the distance charts for the Eastern half of the walk seem to be about 10% out in terms of accuracy so we picked up another couple of miles we weren't expecting.

    Matt, Andy and I left Eastbourne in heavy rain.  I think we were all excited and apprehensive but keen to get started.  After the first couple of miles, we climbed up onto the Seven Sisters and for the next 10 miles or so found ourselves repeatedly going up and down each of the 7 cliffs.  That was really hard work, especially in poor weather, and set the tone for the rest of the day as being a really tough day.

    We pretty much dodged a bullet with the weather though, the rain cleared up by around 11am and it became overcast and warm for the remainder of the day, perfect walking weather.  The rest of the country seemingly got drowned in thundery showers that day so we were really lucky.

    We finished the day in good spirits but with Andy starting to struggle a bit with a knee problem and having picked up a blister along the route somewhere.

    Day 2 - Housedene Farm Camp Site to Amberley

    Vital statistics: 29.7 miles in 9 hours 5 minutes moving time (3.23 MPH, moving average), about 62,000 steps and 3186 feet of elevation gain.



    This route was also longer than the planned 26.5 miles we had in mind for day 2.  We picked up the trail where we left off the previous evening at Housedene Farm and walked to Amberley.  Pete joined us, meeting at the start point in the morning and intending to walk the next 2 days.

    Andy put in a heroic effort just to be ready for the start.  From the previous day, all of us thought there was a real danger he may have to pull out for at least day 2, but somehow he managed to be ready and start walking with us again.

    The weather was decent enough for walking in as much as it wasn't sunny and the temperature was probably mid teens.  However, we basically spent the entire day walking through cloud.  The most we saw all day was dew drops forming in our hair from all the moisture in the air.  The dampness did make it a little tougher but we found the biggest drawback of this weather was the lack of reward for climbing some enormous hills.  For example, we walked the length of the south rim of Devil's Dyke and none of us have the foggiest (pun intended) idea what it looks like.  Visibility was between 20 and 100 yards all day!

    At about 18 miles Pete wasn't able to continue, having aggravated an old knee injury.  We decided it was best for him to leave the trail and get picked up early, so he headed to a nearby road for rescue.  At that point, Andy took the tactical decision to accompany Pete on the journey home, giving himself more time and a better chance of recovering enough to finish the route, and leaving just Matt and me.  This was definitely one of the low points for all of us in the team that day.

    Matt and I soldiered on through the cloud for another 11-12 miles after that.  We were due to be picked up at an agreed point, but a traffic accident and the resulting chaos meant Tim couldn't meet us until 40 minutes or so after the agreed time.  Matt and I decided to add some more miles to make day 3 a bit shorter and easier.


    Day 3 - Amberley to Queen Elizabeth Country Park

    Vital statistics: 25.2 miles in 8 hours 3 minutes moving time (3.13 MPH, moving average), about 54,000 steps and 3287 feet of elevation gain.


    Day 3 was a huge mental struggle for me.  We had lost Pete the day before, and Andy was with us again but struggling through.  The main problem, though, was that having eaten fish and chips for dinner the night before and not slept a wink thanks to the indigestion that followed, I was both extremely tired and feeling rather nauseous.  I kept going, putting faith in my physical ability (physically I felt fine), force-fed myself as many calories as I could (I really didn't want to eat), and by around 2pm was starting to feel a little better.

    There were a huge number of highlights on Day 3 that really helped:

    • We were joined, in a fairly last-minute addition to the plan, by Stephen Warwick.  It was lovely to have someone new in the team with fresh legs and a different dynamic.  Stephen's used to lengthy exercise, having walked trails in the Himalayas and ridden from Land's End to John O'Groats.  He really helped me get my head in the right place to soldier on, with some good team talks and some deliberately poor maths.  "We're halfway" he said at 11 miles.  "No we're not, go back and re-sit your GCSE" I replied, knowing full well that we'd be the wrong side of 25 miles by the time we'd finished the day.
    • The weather was much, much better.  We could see.  It was warm, not hot.  That was lovely.
    • We'd previously walked from Buriton to Didling during training, roughly the last third of Day 3, so there was a point when we reached the part of the trail we knew.  That was a huge moment for all of us.  We stopped there and celebrated with a banana each (we know how to live).  Matt and I had now reached the point from which, during training, we'd walked the entire length of the remaining trail into Winchester, most of it in a single day.  We started to believe this might actually be possible.
    • Amy, Matt's sister, joined us at Harting Down and walked the last 8 miles or so with us.  That was another huge boost to us - thank you, Amy!
    • Linda also joined us for the last 2 miles into Queen Elizabeth Country Park.  Since she's suffering with Parkinson's, it felt particularly poignant for her to join us.  I'm really glad she did.
    • Donations were still coming in (we'd had a particularly good donation day on Day 2), and for those people who decided to wait until we started walking - many thanks - you helped keep us going and our spirits up.
    The last few miles were really tough for Andy.  Carrying his injury, he hobbled along to the finish, doubtful about whether he could complete Day 4.  Testament to Andy's sense of humour though, he sat down in a disabled parking space in the car park, causing much hilarity.  Other than being in quite a lot of pain, I'm not sure exactly how he was feeling, but I imagine very proud of getting this far yet worried about the potential disappointment of not being able to finish the whole thing on Day 4.


    Day 4 - Queen Elizabeth Country Park to Winchester

    Vital statistics: 24.1 miles in 7 hours 12 minutes moving time (3.35 MPH, moving average), about 51,000 steps and 1972 feet of elevation gain.


    The start of day 4 was met with mixed feelings.  We had all stayed at home the previous night, our first night at home in 3 nights, and seeing family and sleeping in our own beds was great.  However, Andy hadn't been able to recover from the previous day in time, and we had a message saying he wasn't going to be able to join us for the final day.  That must have been a very hard decision to take, but you'd never have known it from the message, which was delivered with the sense of humour and good grace Andy had shown throughout the walk.  We'd miss him, but we did have Phil joining Matt and me with fresh legs for the final day.

    Matt and I were buoyed by the fact that we'd walked the entire length of this day's route during training, and done so in a single day.  That confidence, plus knowing where we were going and exactly what lay ahead at each stage of the walk, certainly helped and made things a lot easier when we were getting tired.  It was probably down to a mixture of factors, but we walked a little faster on day 4 too.

    Andy parked in Winchester and walked back along our route to meet us and cover the last few miles into town with us.  Still struggling with his injury, he must have found that a hard walk too, but it was great to cross the line with the main team of 3 of us plus Phil, who had joined for the final day.

    We arrived in Winchester a bit earlier than any of us might have expected, but with due warning friends and family were dotted along the last couple of miles to meet and greet us.  That felt amazing, picking up more people as we got closer to the end point.  We were clearly going to make it: extremely tired, but unstoppable at the same time.

    We went for food and drink, then left the restaurant to be driven home.  It was a long way home, yet a shorter distance than we'd walked each day, and that felt weird.

    We're done.  This is now history.  But please consider a donation, if you haven't already.


    Matt's Original Text

    "Thank you so much for visiting my page. My name is Matt and this year I will be turning the grand age of 40. I was inspired to take on a challenge to mark the occasion and hopefully do some good while I'm at it. 
    On 31st May I will be walking the South Downs Way. This is a walk that is 100 miles long and I will be attempting to cover it in 4 days. This means covering something close to a marathon each day. To help me do this I have put together a team of close friends. They will be joining me for either all (Graham & Andy) or part (Phil and Pete) of the walk. 
    Together we are raising money for The National Eczema Society and Parkinsons UK. Here's why... 
    I was diagnosed with atopic eczema aged just 3 months and I still struggle with the disease to this day. With the help of my parents, my family, close friends, some incredible doctors and some equally astounding nurses, my skin is in a far better condition than it ever was as a child. In July of 2017 my wife and I welcomed our second child into the world. Sadly he too has been suffering with eczema.
    The work that the National Eczema Society do is vitally important in the development of new treatments, drugs and support for sufferers of this condition. I am passionate to find something that not only helps my life but helps the generations to come (including my son). 
    In September 2017 my Mum was diagnosed with Parkinson's. Prior to the diagnosis as a family we had seen her deteriorating. She was walking much slower than normal, she dragged her feet, her right arm developed a tremor, her speech was much quieter and she wasn't herself. 
    My sister got married in early September and while helping to setup the day before Mum tripped on a kerb and broke her arm. Having recognised she wasn’t herself, Mum had been having various tests. A cardiologist she’d been sent to contacted the GP to say he thought Mum should be referred to a neurologist because he thought she had Parkinson’s disease. It is very likely that not picking her feet up (due to Parkinson's) caused her to trip and fall. 
    Since then she has been given a medication, seen a Parkinsons nurse and a physiotherapist. This has meant a significant improvement and she is now standing much taller (quite significant given how 'tall' she is), she is speaking more freely, she is interacting with all her grandchildren with a vibrancy that was missing.
    The work that Parkinsons UK do is vital for people with this disease. They provide invaluable support and research that could not be possible without the support of donors. 
    I would be so grateful if you would like to sponsor my team and I as we embark on this challenge and help raise some money for some fabulous charities. 
    Don’t forget to gift aid your donation so we can get even more from the Taxman."

    Last, but very much not least, a great debt of thanks to everyone who has supported us.  Those of you donating your hard-earned cash to our two fantastic causes, thank you.  To our wives and families, whom we abandoned to fulfil our own personal challenge and who are tired too from holding the fort while we were gone, thank you.  Finally, particular thanks to Linda and Tim for the logistical support; you were part of the team as well and we wouldn't have completed the journey without you.