Monitoring Site and Service Uptime

Testing that a site is operating correctly, and that its required services are available, can be challenging. There are various surface metrics you can test, but they are often unreliable and can't give much depth of information about many important factors.

Surface Level Data

When it comes to web services you can get a basic set of working-or-broken tests running with ease. By testing only surface data you can find out with reasonable certainty whether all services are up or not.

There are loads of online systems that offer free site uptime checks. I've used Pingdom for it before but there are many others. The Jetpack WordPress plugin also has an uptime monitor feature which I have used.

Pinging Hosts

Many hosts are accessible through the internet and will respond when asked. You can ping the domain and assume a response from the host means it's okay. Checking ping response times and packet loss is a decent metric as well.

This doesn't check that what you want returned to user requests is what is being sent through. It only checks if the host is accessible.
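If you want to automate that basic check, here's a minimal Python 3 sketch. It assumes a Linux/macOS style ping and uses example.com as a stand-in for your own domain; it just shells out to ping and treats a zero exit code as "host reachable".

#!/usr/bin/env python
# minimal ping check: send one echo request, a zero exit code means the host answered
import subprocess

def host_is_up(host):
    result = subprocess.run(
        ["ping", "-c", "1", host],      # -c 1 sends a single packet (Linux/macOS syntax)
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

print(host_is_up("example.com"))        # True if the host responded to the ping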

Making HTTP Requests

When checking that a website is running you can go a step further than pinging and send an HTTP request to the site. Every HTTP response contains a status code which can be used to determine success or failure.

When the HTTP server returns a 200 status code it indicates success. The downside of relying on HTTP status codes is that even success codes don't necessarily mean a site is running properly. Other services might not be working correctly and the site might not be producing the correct output.

One way to enhance HTTP testing for site and service uptime is to do additional checks when success codes are returned: test the response for some known output (for example, a certain tag in the header, or perhaps the inclusion of a style.css file). If your known output doesn't exist in the response even though a success code was returned, there is a chance a supporting service is down.
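As a rough sketch of that idea in Python, the requests library can fetch the page, check the status code and look for the known output in one go. The URL and the style.css marker below are only placeholders for whatever your site should always return.

#!/usr/bin/env python
# check both the status code and a piece of known output in the response body
import requests

def site_looks_healthy(url, marker):
    try:
        response = requests.get(url, timeout=10)
    except requests.RequestException:
        return False                      # connection refused, timeout, DNS failure etc.
    return response.status_code == 200 and marker in response.text

print(site_looks_healthy("https://example.com", "style.css"))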

Deeper System Health Metrics

Surface level metrics are an easy way to test for "mostly everything works" or "something is broken somewhere". They rarely give any insight into what is broken or how well the working services are performing.

You can get all kinds of information from the server that runs your sites and services if you are able to open a session to the host.

Shared hosts rarely give shell access, and when they do it's always severely limited to ensure security between customers.

System Monitor

Even in a limited shell you can probably get information about your own running processes. Linux shells usually have access to the `top` command. It's essentially a task manager that shows things like CPU usage, memory usage, etc.

In top you should be able to see the total CPU cores, RAM, virtual memory, average system load and some detailed information about the processes running. In limited shells you may only see processes running under your own user account, but on a dedicated server or VM you will probably be able to see all of the processes, which system resources each is using, and how much.

Realtime system metrics like this can show what is happening right now on a host.
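If you want to capture those numbers in a monitoring script rather than an interactive screen, top (the standard procps version) can be run in batch mode, and free and uptime give quick one-shot summaries of memory and load. These flags are common defaults; check your system's man pages if they differ.

top -b -n 1
free -m
uptime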

Checking on Important Services

There are a number of ways to check status of different services.

Upstart Scripts

Many services will provide a way to check their status. Often these are provided as scripts for your operating system to execute. I've heard them called startup scripts, upstart scripts, init scripts.

Depending on your OS, commands like these can be used to check on some service statuses.

service httpd status
service mysqld status
service memcached status
/etc/init.d/mysql status
/etc/init.d/apache2 status
/etc/init.d/memcached status

Checking Log files

Most production software has built-in logging facilities. It can push data into different system logs or to its own logging mechanisms. Usually logs end up as easily readable text files stored somewhere on the system. Many *nix systems store a lot of their logs in /var/log/ or /home/[username]/logs/.

When it comes to running websites the most common setup is a LAMP stack. The default settings in those systems usually log requests, some types of queries and PHP errors.

Reading logs can give you all kinds of useful information about services. There are also ways to configure some services to output more verbose data to the logs.
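The exact paths depend on your distro and configuration, but on a typical Debian/Ubuntu LAMP box commands like these are a quick way to peek at the most recent entries or follow a log live:

tail -n 50 /var/log/apache2/error.log
tail -f /var/log/mysql/error.log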

External Site, Service & Infrastructure Monitors

There are a number of dedicated server health monitoring suites available. Premium services like New Relic and DataDog can track all kinds of deep-level data using purpose-built reporting agents that run on a system and report that data from your servers and processes.

Until very recently I was a New Relic customer for personal sites. I used them especially for infrastructure monitoring and deep error reporting and I would highly recommend them if that's what you're looking for. NOTE: New Relic also offers other services I did not use; check them out to see all the features.

Open Source Monitoring Suites

In addition to the premium services available for monitoring there are also some fairly strong contenders in the open source space that are very capable.

Most can run on a host and check metrics for it and many can also check remote hosts as well.

Nagios springs to mind right away. It can do various service checks, pings, resource monitoring and system tests. Its highly configurable nature makes it extremely powerful.

Munin is another tool I've used to keep track of things like network and disk IO as well as post queue monitoring.

I can recommend both Nagios and Munin as potential monitoring solutions if you want to self-host them.

Two Weeks with Gutenberg

Time flies by. It's been about two to three weeks since I first tried the Gutenberg editor. I've seen three updates bringing various tweaks here and there plus the addition of a new block type.

Gutenberg Editor – Updates are coming, not much is changing.

The Verse block was added in version 0.5.0.

Poets rejoice in the days of Gutenberg.
Published and shared for all to see.

Writing. Editing. Merging of content.

New experiences, outlets, sharing and connecting.
Words. A Voice.

Obviously you can see that I'm no poet. The verse block seems to be not much more than a stylized text box. The same thing could be achieved with a standard textbox or even a blockquote.

Thoughts of Gutenberg So Far

My opinion of the future core editor for WordPress has not changed since I started using it. I feel it's perfectly fine for writing text content, it's still lacking a variety of blocks, and I am concerned about how data is stored.

I provided a short review noting my concerns over data storage on the plugin page in the repo. If you have any feedback you should provide it to the authors as well, either in the plugin repo or at the GitHub project.

From what I can see with some of the block types, the intention is for themes to style them to match their designs. So far none of the themes I have tried with Gutenberg contain any such styles. Until the end result of using Gutenberg exceeds the result of doing a similar thing in the existing editor, I don't think this project is anywhere near ready for inclusion in core.

Basic Social Research Opportunities On Twitter

With a whole bundle of data sitting right at the end of a simple Twitter search, I've always thought it would be an awesome idea to somehow make use of it for user research. Specifically, vernacular and terminology research.

  • How do they arrange what they say?
  • What words do they use together?
  • What way do they ask for help?

I previously had ambitions of building some kind of machine learning system to extrapolate all kinds of awesome metrics from that data. That project is semi-on-the-shelf for the moment but that doesn't mean I can't still somehow use the search data in a more high level way.

Using Tweets to Get an Idea of the Language People Use for a Given Topic

Take a particular blog post that was written some time ago but does not perform as well as you feel it could. Head to Twitter advanced search and enter a few key terms from the post to bring up tweets somehow related to your topic.

Read through the list, note some down, refine the search, and note down more. Be sure to gather a lot: aim for mostly tweets directly related to your topic, but also include some loosely related items and a handful that are borderline.

Partial match data is still good at this point but do exclude any that are obviously entirely unrelated to your needs. In a machine learning environment unrelated items would be good test data but manually they'll just add clutter.

Spotting Connection Phrases and Linking Words

Once you have a nice big list of tweets somehow linked to your topic, take another read through them. Pay attention to the connecting words and phrases people use to bind the topic and objects together. Those are the words you'll use in linking phrases for an article.

Sometimes it's harder to spot commonality within these linking phrases because the words don't carry as much force as the specific key phrases we are searching for. That's why it's important to pay attention to them as much as you can – they are hard to discern from data gathered by searching only for key phrases.

Find the Questions People Have About the Subject

The first thing to do is find the questions people are asking about the subject matter. Are many people familiar with it? Do people have similar complaints? Do you see the same question being asked again and again?

Finding questions can be done in multiple ways. Checking for shares to sites you know people ask questions on is a good way. So is searching for words that can indicate questions ('Who', 'What', 'When', 'Where', 'Why', 'Will', 'How' and '?').
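As a very rough sketch, once your gathered tweets are in a Python list a few lines can pull out the ones that look like questions. The sample tweets below are made up purely for illustration.

# flag tweets that contain a question mark or start with a question word
question_words = ("who", "what", "when", "where", "why", "will", "how")

def looks_like_a_question(tweet):
    text = tweet.lower().strip()
    return "?" in text or text.startswith(question_words)

tweets = [
    "How do I speed up my WordPress site?",
    "Loving the new theme on my blog",
    "Why does my contact form keep timing out",
]
print([t for t in tweets if looks_like_a_question(t)])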

Knowing what questions people ask is a good way to spot any sticking points at various levels of expertise in the subject.

A side benefit of searching for shares to question sites is that it may also lead you to a better description of that question. Sometimes even the answers to many of those questions are at those links.

Knowing both the questions people have and the answers to those questions can be a great place to start refining posts or any content ideas you may have.

Connect the Unrelated Objects to the Related Ones

Sometimes there can be affinities between various topics that are seemingly completely unrelated. In any given group, the people who like one thing might largely like something else too. I cannot think of any real-world examples that have been proven to be accurate, however I can give a few made-up ones.

Let's say in a group of 10 people there are 5 cat owners and 5 dog owners. 4 of the cat owners like smooth peanut butter. 2 of the dog lovers like it too. You could say there is a strange affinity between cat owners and a preference for smooth peanut butter.

Another take on the above example might be that since 6 out of a total 10 pet owners prefer smooth that might imply that pet owners have an affinity with smooth peanut butter.

That's only a single, made-up scenario with two possible perspectives. There are so many unseen affinities within different groups of people and subject matters that being able to correctly identify the ones that fit your audience profile is a huge boost to how likely people are to identify with the content you create for them.

Also, if my above example were true, it would make total sense to somehow include smooth peanut butter in all of your cat-related content. Keep that in mind for the future 😉

One Post A Week In Gutenberg

I don't recall who said it or where I read it (and I frantically scrolled back through many days of social feeds trying to find it), but at some point in the last two weeks I heard someone suggest trying to write a post every week in the new Gutenberg editor.

I'm going to try to do just that and see how it goes. There are going to be regular updates and tweaks, so it may be fun to see the full evolution from now until the merge with core.

Why I Think This Is A Good Idea

There are two reasons why I think this could be a good experiment, especially for myself.

  1. Following development, reporting bugs, creating feature requests. The more I use Gutenberg the more I can be sure that the final product works how I want to use it.
  2. I've been incredibly lax on writing blog posts and creating new content in general in recent years. Partly due to time demands of a growing family but some other factors played a part in that too.

I would like to change the fact that I haven't done nearly as much writing as I could have. I hope that encouraging myself to write something at least once a week will build a habit that I can easily return to.

Ready For Production…?

I don't think Gutenberg is ready to use in production just yet. Perhaps in a month it could be closer, but right now the lack of content blocks (or options for blocks) that I feel would be useful – and the many missing or broken styles – make it so usage is limited to personal sites or side projects.

Posts like this one will be how I use Gutenberg and for the most part I expect primarily to use it for text. Paragraphs, headers, lists and links.

Project Gutenberg

Project Gutenberg is the vision for the new editor experience in WordPress. It offers a content-block based solution for adding different elements within your posts or pages in a simple and convenient way.

The editor will endeavour to create a new page and post building experience that makes writing rich posts effortless, and has “blocks” to make it easy what today might take shortcodes, custom HTML, or “mystery meat” embed discovery.

Matt Mullenweg

The existing solution in WordPress uses TinyMCE. For most people it's a perfectly fine way to write blog posts and create pages for their site.

Using a combination of adding text, the formatting options in TinyMCE, shortcodes, HTML and CSS, most people are able to add the content they want and format it to look how they desire. Essentially the new editor flow would enable you to separate each of those elements out into individual blocks, each with its own settings and content.

Trying Project #Gutenberg

As a WordPress user I’m interested in the future of the editor experience. What am I going to be writing posts in? What will I be working with?

I wanted to try early and make sure the vision for the future meets my needs. If it doesn’t then there’s still time for me to give feedback and make sure that it does 🙂

Currently Gutenberg is available to download from the WP repo as a plugin. Install, activate and give it a test run.

https://twitter.com/Will_Patton_88/status/881065615158644736
I tweeted that out this morning. Right now I’m writing this post in Gutenberg. 

There are some bugs to work out, some issues with theme styling compatibility and so on. What I've found in my limited use so far is that it's not quite polished enough to use everywhere. It's more than sufficient for writing blog posts like this one though.

It’s possible to save posts and swap between editors if you choose. If you need to fill in any meta boxes for a post, such as any SEO or post type specific meta boxes, you’ll still need to use the existing WP editor to fill those.

Providing Feedback – Reporting Bugs/Issues and Making Feature Requests

Development is very active, with a plan to issue one release a week with fixes, improvements and changes. Discussion happens on the make.wordpress.org blog and in the #core-editor Slack channel.

You can report bugs in the plugin’s support forum but a better place is to create issues for them in the Project Gutenberg repo at GitHub.

Just so I can test the button content block, there's one below linking directly to the GitHub issues page for the project.

Peer Review On Stack Overflow – it’s What Makes it Awesome!


Stack Overflow has to be the number 1 place to visit when facing programming challenges. I visit it regularly and every other developer I know has been to it at least once during their career.

There is a swath of different network sites, each with a focus on a specific set of challenges faced by a particular group. The point of all the sites is to get help, find answers and to get peer review for your ideas, questions or solutions.

The reason that Stack Exchange sites are so prevalent, and so useful, is that they follow guidelines which ensure only high quality, useful content.

Stack Exchange Questions

As a general rule, Questions on Stack Overflow should be directly related to programming and Answers should be direct answers to the questions posed. Questions on other SE network sites may not be related to programming – that depends on the site – but they should always be directly related to the site's topic.

This way other people with the same question can find the solution they need a lot quicker than if questions, or answers, are too broad and do not include good examples.

Questions should follow a format that includes a minimal, complete, and verifiable example of the issue at hand. Ensuring questions have reproducible examples detailing the problem gives a greater chance of that question, and its answers, being valuable to people with similar issues.

Peer Review at Stack Overflow

I often run quickly through the question scenarios to verify issues or to give additional clarity to a question. I comment often but answer infrequently.

That is the essence of the peer review system. You provide whatever input or help you can in a given situation and are rewarded with Rep points for useful insight. You do not need to be providing the answers – all you need to do is provide some helpful input.

Rep Points

You gain Rep points from various positive site actions. Reaching certain milestones unlocks some privileges.

The Rep system was not built to be treated as a status symbol. Benefit unlocks are infrequent, and often negligible, but you consistently unlock new ways to be helpful to other users and the site as a whole.

The points you accrue unlock milestones that allow you to be helpful in different ways.

Recently I crossed the 500 Rep mark. It’s not a very high milestone but it’s an important one.

At the 500 point mark you unlock access to some review queues. What I learned from looking through these queues is just how much peer review goes into everything you see on the Stack Exchange network of sites.

Peer Review – Review Queues

All questions from new users are reviewed. First answers are reviewed. New answers added to old questions are reviewed.

Almost every question is checked over to make sure it’s appropriate and that it meets a minimum standard expected for creating good questions.

One of the earliest review queues you can access is the new Documentation review queue. You can access it at 100 Rep. New docs are checked and edits to existing docs are checked to ensure that they add value in some way. This section needs more eyes to build it into a valuable resource but has less activity than some of the other queues.

At 500 points you can access:

  • Triage – to help identify good posts from ones that need work.
  • First Posts – you can use this as an opportunity to teach new users how to ask questions that result in good answers.
  • Late Answers – used to spot hidden Gems that may be missed due to the age of a question. This is also used to help filter out people adding answers purely for the Rep points.

People with review privileges are able to look through these posts and do quality checking on them. Several users check each post so a consensus can be met and people are encouraged to edit or comment on posts if they feel they can be improved.

This gives existing users a way to filter out low quality posts while helping improve useful content. It goes beyond simply spotting spam and removing answers that have no value. It’s about improving things as a whole.

Teach Users to Use Stack Overflow and They’ll Make It a Better Place for Everyone.

Aside from being filled with great answers one of the most instrumental things in making SO so useful is the help that users provide. It’s what keeps SO the dominant source of Questions and Answers to specific programming questions. Users ensure the site contains high quality content – and remove or edit content that is not useful or otherwise considered low quality.

Teach users to ask good questions and they’ll ask them better.

Show users the correct network to ask their question on and they’ll get a better chance at a good answer.

Point out an inefficient method in a question or solution.

Provide links to proper documentation to flesh out a fuller answer.

The point is to improve things in a way that makes them more useful for everyone. Users might be asking the question or providing solutions – it doesn't matter. The end result is a system that enables people to find, provide or ask the right questions and answers for any given problem.

Peer review is instrumental in making that happen.

Best WordPress Plugins For A Successful Blog

Originally I wrote this as a first draft of an answer to a question I read on Quora. I figured it'd be worth posting here as well because it's unlikely to get many views on the question – but I still wanted to answer it anyhow 🙂

In my experience there are, honestly, no plugins required to make a successful blog. In fact, more often than not, my advice to clients is usually about removing plugins that aren't required rather than adding more.

WordPress, right out of the box, is an excellent platform for content management. In terms of being purely a means to share content online (like a blog, as opposed to being an online store or some other product/service provider) there is nothing that fits the bill for as many uses as WordPress does without modification. Personally I think there are a couple of shortfalls, which I'll detail a little later in the answer, but those are easily filled by a small collection of plugins.

  • Form builder/form processor – WordPress has no form builder in core. You can certainly write the markup yourself and use the sanitization and validation functions from WordPress during form processing, but that's custom code and not a feature available out of the box. My recommendation is Gravity Forms (premium) but free alternatives are available. Contact Form 7 is an excellent free plugin that works similarly.
  • Caching – WordPress, on its own, provides an excellent base for database caching – Transients. What the Transients API provides is essentially an object cache that stores the results of certain database queries (querying the database is often one of the slowest parts of sending the end user the page they requested) so that a query only needs to run once and its results can be fetched with a single lookup. I see this as both a benefit and one of the shortfalls – because it stores the objects in the database! It does speed up getting the data on a second request but it still needs a DB lookup all the same. The best extension to this is to put that object cache into RAM using an in-memory cache – such as Memcached. My choice of plugin for doing this (and other cache/performance related tweaks) is W3 Total Cache. The other popular choice is WP Super Cache. Both are good and have very expansive options. WP Rocket is also an incredible caching plugin but it's a premium plugin. Another plugin recommended to me by another WP developer is Simple Cache. It was described as having an on/off switch and no complicated options, and it can put your object cache into Redis/Memcached.
  • Security – before any recommendations are made here it's worth noting that WordPress core is extremely secure and the core team acts incredibly fast when it comes to security exploits. When you hear about WordPress site compromises it's rarely, if ever, the fault of WP core and almost always the fault of code that extends it – such as that in plugins or themes. When it comes to security plugins what you're looking at is enhancement: things like temporarily locking an account after too many failed login attempts, temporary or permanent bans on IP addresses and hosts that repeatedly fail logins, and scanning for file changes when you haven't changed any files. You can do these things with the free version of WordFence.

In addition to form builder/processing, caching and security plugins it's certainly a good idea to take backups. Plugins are available for backing up your site files, uploads and database. Personally I can't make a recommendation amongst the best-of-the-bunch backup plugins because I don't use them on my own sites. I favour a server-side solution for backups because it's usually easier to handle a restore. We all know backups aren't about storing your data – they're about restoring it, right?

BONUS

These 2 plugins are here in the bonus section because many people consider them to be overkill.

Jetpack is a massive plugin, offering many features. Most notably it provides simple off-site stats gathering, social publishing and a 1-click image optimization CDN. It might be said that Jetpack is overkill for these features since the plugin is so huge and offers so much more. There's a lot of truth to that, however I see Jetpack as a relatively good way to get these features easily without any need to worry about complex setup or config – a real bonus if your focus is primarily on creating content rather than spending a lot of time setting up features.

Akismet – for vetting comments and form fills to check them for potential spam. Since Akismet has such a massive database of known spam, IP addresses and identification patterns it's one of the better choices. Some people find certain rules applied by Akismet do block legitimate comments because they look a little bit like spam according to those rules (and no rules are ever perfect).

Akismet is a large plugin (in terms of the sheer amount of code it adds) for what it does and some consider this overkill. If you find it's giving false positives on your site or want a more lightweight solution, one lesser-used option is the Growmap Anti Spambot Plugin. It hasn't been updated in 2 years but I'm certain it still works. It essentially adds a honeypot-type block that is able to stop unsophisticated spambots (which is probably 90% of them or more).

A Raspberry Pi Twitter Bot In Python

NOTE: This post was sitting unpublished for almost exactly 1 year. I went ahead and gave it database storage and implemented scheduled posting. You can find the tweetbot on GitHub and I even have a working version that is daemonized.

I’ve wanted to build a Twitter bot for some time. Mostly just something to send the occasional tweet. That could easily be extended to something that would become a scheduled tweet bot and a database could even be added to store future tweets.

I also wanted to monitor for mentions and notify me of them. Watching for something to occur and then running an action could also be extended in many ways, especially if a live search stream were to be added to the mix.

The basics of what the bot does are relatively simple. It needs to be able to access various streams (my notifications, a search stream). It has to be able to parse them and invoke something based on a given result. And it needs to be capable of posting a tweet from my account.

Since I plan on using my Raspberry Pi for this and Python is a popular language to use on it, I looked around for some reference points. There's a very nice Python library that can do the heavy lifting of sending requests to the Twitter API for me. It's called Tweepy and I found it through GitHub.

Using Tweepy I should be able to easily connect and post/get to the Twitter API. Let’s see how that goes.

You will need to create an app and get some access credentials from Twitter to make your API calls – especially since the plan is to make it actually post to accounts.

Installing Tweepy

First I need to install Tweepy. You can run pip install tweepy to do it – I did that on my laptop and it worked just fine. On my RPi though I will be cloning it from GitHub and installing manually. There are certain base-level dependencies of Tweepy, or of its dependencies, that are probably already installed on most systems. They were not available on my Pi though and the setup.py script doesn't handle those. A quick Google of the problem told me to run pip install --upgrade pip to get them. That worked.

git clone https://github.com/tweepy/tweepy.git
cd tweepy
sudo python setup.py install

Since I also plan to eventually use a database to store things in, I also installed mysql-server, but that's not absolutely necessary right now.

sudo apt-get install mysql-server

Writing the Bot Script

After that I used the code I found on this site to make a bot that was able to tweet things out that it read from a text file. I called the script bot.py and the text file with the tweets tweets.txt.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# from: http://www.dototot.com/how-to-write-a-twitter-bot-with-python-and-tweepy/
import tweepy, time, sys

argfile = str(sys.argv[1])

#enter the corresponding information from your Twitter application:
CONSUMER_KEY = '123456'#keep the quotes, replace this with your consumer key
CONSUMER_SECRET = '123456'#keep the quotes, replace this with your consumer secret key
ACCESS_KEY = '123456-abcdefg'#keep the quotes, replace this with your access token
ACCESS_SECRET = '123456'#keep the quotes, replace this with your access token secret
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)

filename=open(argfile,'r')
f=filename.readlines()
filename.close()

for line in f:
    api.update_status(line)
    time.sleep(60)#Tweet every 1 minute

The script needs to be given a text file containing the tweets you want it to post. Make a .txt file in the same directory containing some tweets. Then call the script passing the .txt file. Assuming the script is called ‘bot.py’ and the tweets are in a file called ‘tweets.txt’ this is the command.

python bot.py tweets.txt

It’ll run for as long as it takes to post all the tweets from your file and it’ll wait 60 seconds between posting each one. When I ran it myself I got an InsecurePlatformWarning. It seems that’s down to the version of Python that I ran it with and the version of requests that it uses. To fix it I ran installed the requests[security] package as per this StackOverflow answer.

As of now you should be totally up and running with a Twitter bot that can post tweets for you. It's not the most useful of things considering it only posts from a list in a text file at a fixed interval.

Next steps in this project will be to add database support and time scheduling into the system.
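As a rough sketch of where that's heading (not the finished bot: the table name and columns here are made up for illustration, and I'm using the built-in sqlite3 module to keep the sketch dependency-free even though I installed mysql-server above), the idea is to keep future tweets in a database and post any whose scheduled time has passed.

# sketch only: store scheduled tweets and post the ones that are due
import sqlite3, time

conn = sqlite3.connect("tweets.db")
conn.execute("""CREATE TABLE IF NOT EXISTS scheduled_tweets
                (id INTEGER PRIMARY KEY, text TEXT,
                 post_at INTEGER, posted INTEGER DEFAULT 0)""")

def post_due_tweets(api):
    now = int(time.time())
    rows = conn.execute(
        "SELECT id, text FROM scheduled_tweets WHERE posted = 0 AND post_at <= ?",
        (now,)).fetchall()
    for tweet_id, text in rows:
        api.update_status(text)             # same Tweepy call used in the script above
        conn.execute("UPDATE scheduled_tweets SET posted = 1 WHERE id = ?",
                     (tweet_id,))
    conn.commit()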

Could I Teach a Machine to Learn?

It’s pretty obvious to anyone who knows me that computers fascinate me. The hardware, the software, their uses. Everything about them intrigues me.

What tells packets where to go once they are out on the open web? How does a computer generate a random number? What allows memory to hold a persistent electrical signal? I encourage you to find out the answers to each of those in your spare time – everything about it is fascinating.

One of the particular things that I am interested in is Artificial Intelligence. It just so happens that one of my favorite YouTube channels, Computerphile, has several recent videos that are extremely informative on AI. They also have videos about Machine Learning and Search Engines from recent months. All worth watching. Each of the topics is somewhat related to the others and yet each is distinctly different.

After watching them it got me thinking about structured data and how exactly the structure is given or defined. At a small scale you can take a dataset, find common attributes and organize it by those criteria.

You manually set the criteria and the number of categories, then sort the items into each pile. It's easy.

How exactly would that be done with data that has no labels or clear set of common attributes? Taking unorganized data and indexing it, assigning labels, working out attributes. Finding better and more efficient ways of doing that is part of the improvement process of Machine Learning.

That’s exactly what I’m going to investigate doing in a long running project. Extremely efficient indexation and giving structure to random data is kind of how search engines work. There’s a strong correlation between the kind of thing I want to do and how search engines provide the most relevant result for a given terms.

I’m going to grab my data from Twitter and store it, index it, categorize it and learn from it. The data from Twitter already has somewhat of a structure to start with but that exact structure might not be what I’m after. I want to structure it in many more ways.

I’m going to make use of what I learn in… maybe no ways at all but I’m gonna do it anyhow haha!

  • Make a Twitter Bot with search capabilities.
  • Store Tweets in a database.
  • Index them.
  • Categorize the data.
  • Learn and Enjoy!

I hope that I’ll learn an awful lot from doing this. Probably not directly from the data I gather but definitely in terms of skills. Plus everyone needs a project to keep them focused. Some of the elements of this have been on my project list for a long time, now is as good a time as any to make some headway.