Open Source Projects That I Rely On To Effectively Do My Job – Part 2

There are a number of things that exist in the open source world without which I do not think I could do my job. I am a web developer. I work on a range of projects using different systems, languages and processes. I work a lot with WordPress as well.

Many aspects of my work revolve around scanning logs, writing and reading code in a text editor and browsing the internet. I have my preferred programs for doing each of those tasks.

This is a set of articles that looks at a lot of the open source projects that I rely on to do my job and do it effectively.

Software And Tool Choices

My job consists of 3 primary task types and I have my preference of what software to use for each of the tasks.

  • Analysing log files.
  • Reading and writing code.
  • Browsing the internet.

Most of the time I opt for open source over closed and choose cross-platform options where available.

Browser Choice – Chrome/Firefox

As a browser I would like to say I use fully open software. I do not. I use Google Chrome primarily (with Firefox, which is open source, as my secondary browser. Half a point for that maybe???).

Chrome is based on the open source Chromium project, so its origins are open, and it may still follow Chromium as upstream. I use Chromium on minimal virtual machines, but not often.

There are tracking features and closed systems built into Chrome, some of which I make use of. Cloud syncing is useful for me.

Chrome is not fully open but it was forked from open software and for me the closed source parts are an acceptable drawback.

Plus it's the most popular browser choice among users. I need to see the web in the same way that most people see it.

Reading and Writing Code – Atom 

Reading and writing code I do in the Atom editor. It's fully open, started internally at GitHub and is built by them and others to be the best open source editor it can be.

For anyone working with code who does not need a specialist proprietary IDE for a given purpose (which is most people working with code), I highly recommend Atom. It's well maintained, and constantly developed and improved based on the needs of the developers using it.

Atom is built with a framework called Electron (again open source, from GitHub) which packages and runs JavaScript (Node) as desktop applications. It makes building for the desktop very similar to building for the web, which means transferable skills for developers.

If Atom didn't exist I would use Lime Text (an open source take on Sublime Text) or Notepad++.

Scanning Logs – Terminal and BASH

I do a lot of work in the terminal, often in several terminals at the same time. Working at the CLI is actually an incredible way to multi-task and effectively monitor progress. Most of the time on the command line I'm using BASH syntax. Sometimes it's PowerShell… let's avoid that conversation lol!

I use Ubuntu on my main dev machine. Ubuntu ships with terminals that run BASH, and most Linux OSes run BASH as well, so connecting to another machine's command line feels familiar regardless of which machine it is.

Logs are usually files containing plain text. Many command line tools exist to read through text files. An incredibly useful tool is called grep. It is used to search input for strings or regex matches.
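
As a quick illustration (the log path is an assumption for the example, and the awk field assumes the common/combined log format), pulling every 404 out of an access log and ranking the most requested missing URLs is a single pipeline:

# find 404 responses, extract the request path, then count and rank them
grep ' 404 ' /var/log/apache2/access.log \
  | awk '{print $7}' \
  | sort | uniq -c | sort -rn | head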

5 Tips To Writing More & Better Blog Posts

I used to write a lot of blog posts on a number of different topics. I even had paid positions for weekly articles.

The last few years I’ve written less and less. Subjects have narrowed to mainly web developer focused topics as I no longer have the time or the inclination to explore such widely diverse topics in-depth to write about them.

What I learned might be useful for others. Here are a few takeaways from sharing blog content online for the last 8-10 years.

1. Write What I Know Already

You don’t always need to write about brand new topics or vary the discussion with other points of view. It’s ok to sometimes just write what you know and are good at.

I am happy to write about what I know. Realizing that fact has allowed me to start writing more frequently and more freely.

The words flow easier, it requires less research and reference material and I can be more confident what I am saying is accurate.

2. Enjoy It, Even When Rambling

When I write I often ramble a lot. A simple idea may be 5 or 6 paragraphs by the time I'm done. During editing it becomes more concise.

I should write it all down while I am enjoying it.

3. Edit After Some Time, But Not Too Much Time

I have a terrible habit of part-writing posts. Writing 1000 words in one session burns me out. I take a break and come back later; sometimes later is weeks later. The longer between sessions, the harder it is to pick the flow back up.

The same is true between writing and editing. If you wait too long you can't remember what you intended during a ramble, and you may not edit it with proper clarity because of that.

4. Keep All Drafts

I write many intros and parts of posts. I sometimes come back to them in a few days or weeks. Sometimes I've even come back to a post in drafts after 3 years.

When you're inspired the words come easily; when you lose that inspiration it's hard to keep going. The inspiration can come back, or something in the future can make the post more relevant or topical.

5. Incoherent Thoughts Are Sometimes Useful

Sometimes when you write stuff down it comes out wrong. Other times it is jumbled and badly arranged. I’ve even written things that on re-read make absolutely no sense.

Even those incoherent thoughts are worth keeping. I mean, there's no reason not to keep them, but you might be surprised how looking back on them can give new ideas or a burst of fresh inspiration.

Open Source Projects That I Rely On To Effectively Do My Job – Part 1

There are a number of things that exist in the open source world without which I do not think I could do my job. I am a web developer. I work on a range of projects using different systems, languages and processes. I work a lot with WordPress as well.

Many aspects of my work revolve around scanning logs, writing and reading code in a text editor and browsing the internet. I have my preferred programs for doing each of those tasks.

This is a set of articles that looks at a lot of the open source projects that I rely on to do my job and do it effectively.

Open Source Operating Systems and Server Software

A lot of open source code is enabled by other software, tools, specifications and systems that are also open source. The most obvious enabler is the availability of open source operating systems. These are used on local machines, but they are even more common in the infrastructure powering systems and services.

Operating Systems

Open source OSes are only possible because of the ability to take many other pieces of OSS and link or modify them in such a way that they work well together as a whole.

I mainly use Linux OSes: Ubuntu, CentOS, CoreOS, Arch. At the heart of them all is the Linux kernel. All open, all developed in public.

Server Software – Specifically HTTP Servers

Another specific type of software that I rely on is HTTP servers. These handle the requests and responses made between clients and servers, returning the rich content we expect on the web today in a user-friendly way.

Two specific pieces of software dominate the HTTP server domain: Apache and NGINX.

I'd guess that 75% or more of all HTTP requests made over the internet are answered by one or the other.

Without both OSs and HTTP servers being available as open source I doubt that the web would be what it is. I expect my job may not exist.

PHP & JavaScript

WordPress is primarily written in PHP with many JavaScript components for use in the browser. PHP is itself an open source language and JavaScript is an open specification.

Coding for WordPress most of the time involves working with pure PHP or JavaScript and then hooking that code into WP with some more code.

MySQL

The application layer of most applications, including WordPress, connects to a data layer that is often a MySQL database. MySQL is another open source project (although around the time MariaDB was forked from it the community was very much up in arms).

Node

Node is another popular system that I work with a lot. Essentially it runs JavaScript without a browser.

Many people are first introduced to Node as part of build tools, especially since task runners became more popular. Grunt and Gulp run in Node. If you've ever run an npm install command you've used Node.

Nginx Reverse Proxy Cache of WordPress on Apache

An NGINX reverse proxy in front of WordPress sites running on Apache is my standard setup for running WP sites. I've got a pretty slick, entirely self-contained setup that uses Docker to proxy multiple WordPress instances: NGINX reverse proxying to WP on Apache with PHP 7.

Every single shared and managed host I've personally used in the last 10-15 years ran Apache as the default HTTP server. The same goes for every client I've ever had with a shared or managed account. Only once have I ever been offered the option of anything different, and even then it was not the default configuration.

NGINX is very capable of doing the exact same thing as Apache but I see it used more commonly as a proxy. You can also use Apache for a proxy if you want to.

Apache and NGINX are both HTTP servers; they are pretty interchangeable if all you are interested in is the end result of a page reaching the requesting user.

Some Key High Level Differences Between Apache and NGINX

Apache is incredibly well supported and used by a huge number of servers. It can be installed and works almost right out of the box. It's modular, works on many systems and is capable of hosting a wide range of sites with relatively minimal configuration.

It's the default http server of choice for so many for a reason – it copes well with most situations and is generally simple to configure.

On the other hand, NGINX has a smaller market share, can be a little trickier to install and get working right, and may require additional setup for particular applications.

It's not as modular (turning on features sometimes requires a complete rebuild from source), but it performs a lot better than non-tuned Apache installs. It is less memory hungry and handles static content far better than Apache. In comparisons it excels particularly when handling many concurrent connections.

Why Put An HTTP Server In Front Of An HTTP Server?

I get asked this by site builders a lot more than I ever thought I would. There are several technical reasons and infrastructure reasons why you may want to do this. There's also performance reasons and privacy reasons. I won't go into great detail about any of them but I encourage you to Google for more detail if you are intrigued.

There are 2 simple reasons why I do this that are both related to separating the access to a site from the operation of a site.

  1. Isolating the front-end from the back-end means that I can have specially tweaked configurations, run the necessary services across multiple host machines and know that all of that is transparent to the end user.
  2. The other reason is performance based. The front-end does nothing dynamic; it serves only static HTML and other static content that it is given by the backend services. It can manage load balancing and handle service failover. It can cache many of the resources it has, which results in less dynamic work generating pages and more work actually serving the pages once they have been generated.
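
As a rough sketch of what that front-end looks like, here is a minimal NGINX server block proxying everything to an Apache backend. The server name, backend hostname and port are placeholders for illustration rather than my exact setup; the caching side is covered in the next section.

server {
    listen 80;
    server_name example.com;

    location / {
        # hand the request to the Apache + PHP backend (placeholder host/port)
        proxy_pass http://wordpress-backend:8080;

        # pass the original request details through so WordPress sees them
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}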

When To Cache A Site At The Proxy

I cache almost every request to WordPress sites when users are not logged in. Images, styles and scripts, the generated html. Cache it all, and for a long time.

That is because the kinds of sites I host are almost completely content-providing sites. They are blogs, service sites and resources. I think most sites fit into that same bucket.

These kinds of sites are not always updated daily, and comments on some posts arrive days or weeks apart. Single pages often stay the same for a long time; homepages and taxonomy pages may need updating more often, but still not so often as to require a freshly generated page every time.

Some Particular Caching Rules and Configs For These Sites

A good baseline config for my kind of sites would follow rules similar to these:

  • Default cache time of 1 month.
  • Default cache scope of public.
  • Cache statics, like images and scripts, on first request and cache them for 1 year.
  • Cache HTML only after 2 requests, and pass 5-10% of requests back to the backend to check for an updated page.
  • Allow serving of stale objects, doing a refresh check in the background when that happens.
  • Clear unrequested objects every 7 days.

A long default cache lifetime is a good starting point; I'd even default to 1 year in some instances. 1 month is more appropriate for most cases though.

Setting the cache scope to public means that not just browsers but also other services sitting between request and response may cache the content.

Static resources are unlikely to ever change, so long cache lifetimes suit them. Some single pages may have content that doesn't ever change, but the markup can still be different sometimes; maybe there's a widget of latest articles or comments that would output a new item every now and again.

Because of that you should send some of the requests to the backend to check for an updated page. Depending on how much traffic you have and how dynamic the pages are you can tweak the percentage.

The reason HTML is not cached for the first 2 requests is that the backend sometimes does its own caching and optimizations that take 1 or 2 requests to kick in. We should let the backend have some requests to prime its cache so that what the proxy caches is the fully optimized version of the page.

Serving stale objects while grabbing new ones from the backend helps to ensure that as many requests as possible are served from the cache. If the backend object hasn't changed then the cached copy just has its date refreshed, but if it has been updated then the cache is updated with the new item.

Clearing out cached items that go unrequested for a while helps to keep the total size of the cache down.
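
Translated into NGINX proxy cache directives, a hedged sketch of those rules might look something like the below. It builds on the earlier proxy sketch; the cache path, zone name, sizes and backend address are placeholders, and the 5-10% revalidation rule is not implemented exactly (split_clients could be used for that).

# in the http {} context: keep objects unrequested for 7 days, then drop them
proxy_cache_path /var/cache/nginx/wp levels=1:2 keys_zone=wpcache:50m max_size=1g inactive=7d use_temp_path=off;

# also in the http {} context: skip the cache for logged-in WordPress users
map $http_cookie $wp_skip_cache {
    default 0;
    ~*wordpress_logged_in 1;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://wordpress-backend:8080;

        proxy_cache wpcache;
        proxy_cache_key $scheme$host$request_uri;

        # only cache HTML after it has been requested twice,
        # giving the backend a chance to prime its own caches
        proxy_cache_min_uses 2;

        # default cache time of 1 month for successful responses
        proxy_cache_valid 200 301 30d;

        # serve stale objects and refresh them in the background
        proxy_cache_use_stale error timeout updating;
        proxy_cache_background_update on;

        proxy_cache_bypass $wp_skip_cache;
        proxy_no_cache $wp_skip_cache;
    }

    # static assets: cache for a year and mark them public for downstream caches
    location ~* \.(css|js|png|jpe?g|gif|svg|woff2?)$ {
        proxy_pass http://wordpress-backend:8080;
        proxy_cache wpcache;
        proxy_cache_valid 200 1y;
        expires 1y;
        add_header Cache-Control "public";
    }
}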

Ensuring Email Deliverability – SPF, DKIM & DMARC

Email deliverability is deceptively complex. For most people it just works. You write an email, send it and it arrives at the other end. A lot goes on between when you click send and when it is accepted at the other end.

What goes on between clients and mail servers, and between mail servers themselves, is complicated enough, but people also need to make sure that when messages get there they don't end up in the SPAM folder.

There is so much SPAM email being sent that almost every email goes through more than one SPAM check on its journey between sender and receiver.

Different places do different kinds of checks. Often when email is sent from your computer or phone it goes up to an external outgoing mail server to be sent. Even at that early stage some checks might be done – your mail client might do SPAM score checking and the mail server should certainly require authentication for outgoing mail.

When it leaves your server it bounces through routers and switches, different hosts and relays, before arriving at the receiving mail server. Checks may be done in the process of its transfer.

When the end server receives the message it will probably do more checks before putting it into the mailbox of the receiver. In the end the receiver might even do additional checks in the mail client.

Securing Your Outgoing Mail

There are a handful of accepted standards to help make sure mail you send gets to where it needs to be and stays out of the SPAM folder. They also help prevent anyone else sending mail that spoofs your address or pretends to be you.

Mail Missing In Transit

Mail from known bad hosts, IP ranges and domains is often terminated en route.

You want this to happen. You should not be sending mail from any known bad addresses.

The most commonly used method to ensure the host sending outgoing mail is authorised to send for that domain is called SPF.

SPF – Sender Policy Framework

At the DNS server you can add records that inform others which hosts and IPs you allow mail to be sent from. You also set a default action to take when messages fail the SPF check.
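
For illustration, an SPF record is just a TXT record on the domain. A hedged example for a hypothetical example.com that allows its own MX hosts plus a third-party sender, and hard-fails everything else:

# query the published SPF record for a domain
dig +short TXT example.com

# a typical record might look like:
# "v=spf1 mx include:_spf.mailprovider.example -all"
#   mx       = hosts in the domain's MX records may send
#   include: = also allow the named provider's published senders
#   -all     = hard-fail anything else (~all would be a soft fail)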

Not everyone treats SPF records with the respect they deserve, largely because a lot of SPF records are misconfigured. Strictly trusting a system that so many have obviously misconfigured would not work out well for everyone.

The next common way to secure your outgoing mail is DKIM.

DKIM – DomainKeys Identified Mail

DKIM is a method of cryptographically signing a message, either as the origin or as an authorised intermediary host. Receivers can use the published key to verify the message's signature and confirm that it is authorised and has not been tampered with.
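
The public half of the DKIM key is published in DNS under a selector chosen when the key was generated. A hedged example, with the selector name and domain as placeholders:

# query the DKIM public key for the selector "mail1" on example.com
dig +short TXT mail1._domainkey.example.com

# a typical record might look like:
# "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEB...truncated public key..."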

Since DKIM requires key generation and is underpinned by a more complex set of sub-systems it is often treated with much more authority than SPF.

The final piece of the trio is DMARC.

DMARC – Domain-based Message Authentication, Reporting & Conformance

Some mail hosts will use SPF or DKIM to validate a message. Some hosts don't. And many treat failures differently.

DMARC allows you to instruct mail servers that listen exactly what you want to happen to messages that fail those SPF or DKIM checks.

You can set a policy of:

  • do nothing
  • quarantine (goes to spam)
  • or reject

You can also set the percentage of mails to apply the policy to (this helps during initial testing and when any changes are made).

It also gives mail receivers an easy way to contact you and report the results of the mail they have processed for you. They will report sending IPs and the results of SPF/DKIM checks, as well as what they did with the message in the end.

That information is extremely useful to anyone managing an outgoing mail server and can be used to spot problems with sending (or fake senders) very quickly.
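
The DMARC policy itself is another TXT record, published at the _dmarc subdomain. A hedged example; the domain, percentage and reporting address are placeholders:

# query the DMARC policy for a domain
dig +short TXT _dmarc.example.com

# a typical record might look like:
# "v=DMARC1; p=quarantine; pct=50; rua=mailto:dmarc-reports@example.com"
#   p=   policy for failing mail (none, quarantine or reject)
#   pct= percentage of failing mail the policy is applied to
#   rua= where aggregate reports from receivers are sent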

When You Want Mail To Be Terminated In Transit

If mail is received and you have not authorised it then you want it to be terminated before it gets into anyone's mailbox. At the very least you will want it to go to SPAM.

Mail failing authorisation is probably using a spoofed from address or is otherwise illegitimate.

SPF, DKIM and DMARC combined help to stop any mail you did not authorise from ending up in front of the user. That prevents server algorithms picking up on cues from users when they delete without opening or throw messages into spam folders.

When Termination In Transit Is A Problem

I'm going to say that you always want unauthenticated mail to be terminated. No exceptions. The problem is that very often other sites spoof your email address for a legitimate reason.

Say you fill in a form online and add your email address; often that notification is sent to the site owner via email with your address as the FROM address.

Those messages will fail your checks (actually sometimes they might not and instead be allowed through but treated as a soft failure).

It's a common practice but I'm going to say it right now: it's just plain wrong. You should never be sending mail with a FROM address that you are not explicitly allowed to send for.

The proper configuration is this, please use it:

  • FROM: [server address]
  • TO: [receiver address]
  • REPLYTO: [form filler address]

Deliverability for Senders with SPF, DKIM and DMARC is Dramatically Improved

No matter what you are sending mail for, whether personal mail or business mail, follow-ups, outreach messages or newsletters, it's always better when it arrives at its destination.

Using these systems helps to build domain trust with receivers and shows you have taken steps to secure your mail. Deliverability of mail that has taken steps to ensure it arrives is generally better than mail sent with no thought given to that.

The only messages you do not want to arrive are SPAM messages you have not authorised. These systems allow you to publish policies instructing receiving servers that you do not want that unauthorised mail to arrive.

Terminating questionable mail before users see it also means that the cues email providers use to spot messages users consider SPAM are never triggered by your messages. This increases domain trust even more.

Gutenberg Update Skips A Week – Pushes Many Fixes

The planned release schedule for the Gutenberg Editor plugin is once a week, on a Friday. Last week's release was missed and it jumped from 0.4.0 to 0.6.0 today.

There are many improvements and tweaks to the editor. Most notable for me was the addition of block validation and detection of modification outside of Gutenberg. I spotted this immediately, as the Cover Image block markup has changed and block validation flagged every block I had previously added as modified.

Modified blocks get locked in the visual editor to prevent breaking any customizations that were added.

Also, since the Cover Image markup was changed, every one I had previously added had broken styles. That is what happens when you use early-access, heavily in-development software lol

New Block – Cover Text

The Cover Text block was added as a variant of the cover image block.

This is mainly a stylized text block with background and text color options.

Multiple lines and text styles can be used as well as adding links. There are 3 style selectors.

This is mainly a stylized text block with background and text color options.

Multiple lines and text styles can be used as well as adding links. There are 3 style selectors.

This is mainly a stylized text block with background and text color options.

Multiple lines and text styles can be used as well as adding links. There are 3 style selectors.

Above are all 3 of the different included formats, each with different colored text. At this exact moment the text color does not change. This is because of a small bug in the output of these blocks. I made an issue and submitted a PR with a fix. Hopefully it's fixed in the next version.

Monitoring Site and Service Uptime

Testing that a site is operating correctly, and that its required services are also available, can be challenging. There are various surface metrics you can test, but often they are not reliable and are unable to give any depth of information about many important factors.

Surface Level Data

When it comes to web services you can get a good set of working or broken tests running with ease. By testing only surface data you can find out with some certainty if all services are up or if they are not.

There are loads of online systems that offer free site uptime checks. I've used Pingdom for it before but there are many others. The Jetpack WordPress plugin also has an uptime monitor feature which I have used.

Pinging Hosts

Many hosts are accessible through the internet and they will respond when you ask them to. You can ping the domain and assume a response from the host means it's ok. Checking ping response times and packet loss is a decent metric as well.

This doesn't check that what you want returned to user requests is what is being sent through. It only checks if the host is accessible.
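
A simple check from the shell (the domain is a placeholder) sends a handful of pings and shows the summary with packet loss and round-trip times:

# send 5 pings and show the closing summary lines
ping -c 5 example.com | tail -n 3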

Making HTTP Requests

When checking that a website is running you can go a step further than pinging and send an HTTP request to the site. Every HTTP response contains a status code which can be used to determine success or failure.

When the HTTP service returns code 200 it indicates success. The downside of relying on HTTP response codes is that even success codes don't necessarily mean a site is running correctly. Other services might not be working and the site might not be giving the correct output.

One way to enhance HTTP testing for site and service uptime is to do additional checks when success codes are returned: test the response for some known output (for example, look for a certain tag in the header, perhaps the inclusion of a style.css file). If your known output doesn't exist in the response while a success code is returned, then there is a chance a supporting service is down.
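
A rough sketch of that kind of check in shell; the URL and the "known output" string are assumptions for the example:

#!/bin/bash
# check the status code and look for a known string in the response body
URL="https://example.com/"

STATUS=$(curl -s -o /dev/null -w '%{http_code}' "$URL")
BODY=$(curl -s "$URL")

if [ "$STATUS" = "200" ] && echo "$BODY" | grep -q 'style.css'; then
    echo "OK: $URL returned 200 and contains the expected markup"
else
    echo "PROBLEM: status $STATUS, or the expected markup is missing"
fi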

Deeper System Health Metrics

Surface level metrics can be an easy way to test for "mostly everything working" or "something is broken somewhere". They often don't give any insight into what is broken or how well the working services are performing.

You can get all kinds of information from the server that runs your sites and services if you are able to open a session to the host.

Shared hosts rarely give shell access, and when they do it's always severely limited to ensure security between customers.

System Monitor

Even in a limited shell you can probably get information about your own running processes. Linux shells usually have access to the `top` command. It's essentially a task manager that shows things like CPU usage, memory usage and so on.

In top you should be able to see the total CPU cores, RAM, virtual memory, average system load and some detailed information about running processes. In limited shells you may only see processes running under your own user account, but on a dedicated server or VM you will probably be able to see all of the processes and which is using which system resources.
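
On most Linux systems you can also limit top to your own processes, which is handy in a restricted shell:

# show only processes owned by the current user
top -u "$(whoami)"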

Realtime system metrics like this can show what is happening right now on a host.

Checking on Important Services

There are a number of ways to check status of different services.

Upstart Scripts

Many services will provide a way to check their status. Often these are provided as scripts for your operating system to execute. I've heard them called startup scripts, upstart scripts, init scripts.

Depending on your OS, commands like these could be used to check on some service statuses.

service httpd status            # via the service wrapper (httpd/mysqld naming is typical of RHEL/CentOS)
service mysqld status
service memcached status
/etc/init.d/mysql status        # calling the init script directly (mysql/apache2 naming is typical of Debian/Ubuntu)
/etc/init.d/apache2 status
/etc/init.d/memcached status
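
On newer distributions that have moved to systemd, the equivalent checks are usually done with systemctl; for example:

systemctl status apache2
systemctl status mysql
systemctl status memcached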

Checking Log files

Most production software has built-in logging facilities. It can push data into the system logs or to its own logging mechanism. Usually logs end up as easily readable text files stored somewhere on the system. Many *nix systems store a lot of their logs in /var/log/ or /home/[username]/logs/.

When it comes to running websites the most common setup is a LAMP stack. The default settings in those systems usually log requests, some types of queries and PHP errors.

Reading logs can give you all kinds of useful information about services. There are also ways to configure some services to output more verbose data to the logs.
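
For example (the paths are assumptions and vary by distro and setup), following an error log live or pulling out recent PHP fatal errors is a one-liner each:

# watch an error log as new lines are written
tail -f /var/log/apache2/error.log

# show the 20 most recent PHP fatal errors
grep 'PHP Fatal error' /var/log/apache2/error.log | tail -n 20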

External Site, Service & Infrastructure Monitors

There are a number of dedicated server health monitoring suites available. Premium services like New Relic and DataDog can track all kinds of deep-level data using purpose-built reporting agents that run on a system and report that data from your servers and processes.

Until very recently I was a New Relic customer for my personal sites. I used them especially for infrastructure monitoring and deep error reporting, and I would highly recommend them if that's what you're looking for. NOTE: New Relic also offers other services I did not use; check them out to see all the features.

Open Source Monitoring Suites

In addition to the premium monitoring services available, there are also some fairly strong contenders in the open source space that are very capable.

Most can run on a host and check its metrics, and many can check remote hosts as well.

Nagios springs to mind right away. It can do various service checks, pings, resource monitoring and system tests in a very configurable way. Its highly configurable nature makes it extremely powerful.

Munin is another piece of software I've used to keep track of things like network and disk IO, as well as post queue monitoring.

I can recommend both Nagios and Munin as potential monitoring solutions if you want to self-host them.

Two Weeks with Gutenberg

Time flies by. It's been about 2-3 weeks since I first tried the Gutenberg editor. I've seen 3 updates bringing various tweaks here and there, plus the addition of a new block type.

Gutenberg Editor – Updates are coming, not much is changing.

The Verse block was added in version 0.5.0.

Poets rejoice in the days of Gutenberg.
Published and shared for all to see.

Writing. Editing. Merging of content.

New experiences, outlets, sharing and connecting.
Words. A Voice.

Obviously you can see that I'm no poet. The Verse block seems to be not much more than a stylized text box. The same thing could be achieved with a standard text box or even a blockquote.

Thoughts of Gutenberg So Far

My opinion of the future core editor for WordPress has not changed since I started using it. I feel it's perfectly fine for writing text content, it's still lacking variety in its blocks, and I am still concerned about how data is stored.

I provided a short review noting my concerns over data storage on the plugin page in the repo. If you have any feedback you should provide it to the authors as well, either in the plugin repo or on the GitHub project.

From what I can see with some of the block types, the intention is for themes to style them to match their designs. So far no themes I have looked at contain any such Gutenberg styles. Until the end result of using Gutenberg exceeds the result of doing a similar thing in the existing editor, I don't think this project is anywhere near close to inclusion in core.

Basic Social Research Opportunities On Twitter

With a whole bundle of data sitting right at the end of a simple Twitter search, I've always thought it would be an awesome idea to somehow make use of it for user research, specifically vernacular and terminology research.

  • How do they arrange what they say?
  • What words do they use together?
  • In what way do they ask for help?

I previously had ambitions of building some kind of machine learning system to extrapolate all kinds of awesome metrics from that data. That project is semi-on-the-shelf for the moment, but that doesn't mean I can't still use the search data in a more high-level way.

Using Tweets to Get an Idea of the Language People Use for a Given Topic

Take a particular blog post that was written some time ago but does not perform as well as you feel it could. Head to Twitter advanced search and enter a few key terms from the post to bring up tweets related to your topic.

Read through the results, note some down into a list, refine the search, note down more. Be sure to gather a lot: try to make sure most of the tweets are directly related to your topic, but also include some loosely related items and a handful that are borderline.

Partial-match data is still good at this point, but do exclude any tweets that are obviously entirely unrelated to your needs. In a machine learning environment unrelated items would be good test data, but manually they'll just add clutter.

Spotting Connection Phrases and Linking Words

Once you have a nice big list of tweets linked to your chosen topic, take another read through them. Pay attention to the connecting words and phrases people use to bind the topic and objects together. Those are the words you'll use in linking phrases for an article.

Sometimes it's harder to spot commonality within these linking phrases because the words don't carry as much force as the specific key phrases we are searching for. That's why it's important to pay attention to them as much as you can; they are hard to discern from data gathered by searching only for key phrases.

Find the Questions People Have About the Subject

The first thing to do is to find the questions people are asking about the subject matter. Are many people familiar with it? Do people have similar complaints? Do you see the same question being asked again and again?

Finding questions can be done in multiple ways. Checking for shares linking to sites you know people ask questions on is a good way. So is searching for words that can indicate questions (‘Who’, ‘What’, ‘When’, ‘Where’, ‘Why’, ‘Will’, ‘How’ and ‘?’).

Knowing what questions people ask is a good way to spot any sticking points at various levels of expertise in the subject.

A side benefit of searching for shares to question sites is that they may also lead you to a better description of the question. Sometimes even the answers to many of those questions are at the links.

Knowing both the questions people have and the answers to those questions can be a great place to start refining posts or any content ideas you may have.

Connect the Unrelated Objects to the Related Ones

Sometimes there can be affinities between various topics that are seemingly completely unrelated. In any given group, the majority of people who like one thing might also like something else. I cannot think of any real-world examples that have been proven to be accurate, however I can give a made-up example.

Let's say in a group of 10 people there are 5 cat owners and 5 dog owners. 4 of the cat owners like smooth peanut butter. 2 of the dog lovers like it too. You could say there is a strange affinity between cat owners and a preference for smooth peanut butter.

Another take on the above example might be that, since 6 out of the 10 pet owners prefer smooth, pet owners in general have an affinity with smooth peanut butter.

That's only a single made-up scenario with 2 possible perspectives. There are so many unseen affinities within different groups of people and subject matters that being able to correctly identify the ones that fit your audience profile is a huge boost to how likely people are to identify with the content you create for them.

Also, if my example above were true, it would make total sense to somehow include smooth peanut butter in all of your cat-related content. Keep that in mind for the future 😉