Making a page template (with a file containing an opening comment something like: Template Name: Front Page) coded specifically for use as the front page is the wrong way to do it. People can make several versions of the exact same front page and have them accessible at several different URLs. Not good.
WordPress has specially designated templates for use as the front page and as the blog page: front-page.php and home.php respectively. If these templates are in place and a static front page is defined, they will be used automatically.
There are a number of things that exist in the open source world without which I do not think I could do my job. I am a web developer. I work on a range of projects using different systems, languages and processes. I work a lot with WordPress as well.
Many aspects of my work revolve around scanning logs, writing and reading code in a text editor and browsing the internet. I have my preferred programs for doing each of those tasks.
This is a set of articles that looks at a lot of the open source projects that I rely on to do my job and do it effectively.
Online Applications
Some of the tools I use are online services or applications. In the open source world people build things and they share them. Since I am in the web developer sphere, a lot of the people in my circles build online software.
Online software is convenient because it is more portable and often accessible from a variety of devices. A lot of online services are powered by open source software (and that's not counting the underlying OS, or the fact that the service probably uses Apache or NGINX to respond to people's browsers).
WordPress
A lot of the work that I do relates back to WordPress in some way. It powers a huge amount of the publicly accessible internet. Sometimes I build for WP or extend it; other times I build things to work alongside it. Sometimes I just build server stacks capable of running it.
If WordPress were closed source, or did not exist, a good portion of my work would not come in.
GitHub – And Git
GitHub is a giant when it comes to source code management. GitHub manages code using underlying software called Git. That software was started by Linus Torvalds, the same man who started the Linux kernel.
GitHub itself is not an open source application. I can't download a copy of it and run my own private version (though you can have private instances set up and managed by them, either hosted in the cloud or in-house). It is powered by open source software and values open source greatly. Most projects hosted there are under some kind of open licence.
Other Online Git Services – BitBucket, GitLab
There are other repo hosts available; Bitbucket and GitLab are both good choices.
GitLab is an online service where you can host your code as well, but it's open source software too. You can download it to run on a server managed by yourself. It is extremely full featured – offering much the same as GitHub and Bitbucket – as well as a lot of integrated CI and tooling.
Communications – Slack
Even talking to yourself can be useful at times, but communication is better when more people can be involved and the conversations can be archived and searched. Slack lets that happen. It's not actually an open source project as such, but a tool for communication that isn't email is essential when working online with others.
Conversations happen in chat rooms, and Slack provides nice rooms in which to have those conversations.
I've been thinking that I should write some development logs for some of the work that I do because they may be useful for others. Plus it gets me writing more, which is something I'm trying my hardest to make a habit.
This log is about some updates I'm making to a theme I have hosted in the .org theme repo.
This theme uses Bootstrap 4 as a framework. It has a top navigation bar with a menu using the navwalker class that I help maintain. It also has a search bar styled with a custom, theme-specific colored button.
The search bar is always on. First I want to make it possible to turn it off if users do not want it; then I plan to offer color choice selections.
Adding an on/off toggle to theme options is easy: a checkbox in the customizer and a test for its value at page generation.
A Checkbox On/Off Toggle In Customizer
Start with adding a section for the header nav options.
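A minimal sketch of that section registration might look like this (the title and priority here are my own illustrative values; the section ID is the one the later controls reference):
$wp_customize->add_section( 'best_reloaded_navbar', array(
	'title'    => __( 'Navbar Options', 'best-reloaded' ), // illustrative title text.
	'priority' => 30,
) );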
The checkbox setting uses a custom sanitization callback that simply checks the value is either 1 or 0 – true or false.
/**
* Sanitization for checkbox input
*
* @param bool $input We either have a value or it's empty, depicting
* a checkbox state.
* @return bool $output
*/
function best_reloaded_sanitize_checkbox( $input ) {
if ( $input ) {
$output = true;
} else {
$output = false;
}
return $output;
}
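With the callback in place, registering the setting and control follows the same pattern as the other controls shown later. A sketch, with label text of my own (the setting ID and default of true match the get_theme_mod() call below):
$wp_customize->add_setting( 'display_navbar_search', array(
	'default'           => 1,
	'sanitize_callback' => 'best_reloaded_sanitize_checkbox',
) );
$wp_customize->add_control( 'display_navbar_search', array(
	'label'    => __( 'Display the search form in the navbar.', 'best-reloaded' ), // illustrative label text.
	'section'  => 'best_reloaded_navbar',
	'settings' => 'display_navbar_search',
	'type'     => 'checkbox',
) );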
The final part of this is testing the value of the option and outputting the search form when it's set to 'on'.
// if the navbar search is on then output search form.
if ( get_theme_mod( 'display_navbar_search', true ) ) {
get_search_form();
}
This is a screenshot of it in action in the customizer.
Navbar Brand Options
The next thing I wanted to add was the ability to add a small branding icon to the navbar. Bootstrap has some styles and classes that allow this, so let's look at what we need:
An On/Off toggle for navbar brand.
Option to select an image from media library
Checkbox to include the site title as text.
This time there are 3 options to add to the customizer: 2 checkboxes again and another control for image upload. Sanitization for an image upload is a little different than with checkboxes.
Sanitizing Values With Image Uploads
When it comes to sanitizing the values from image uploads, what you are actually working with is text strings. URLs, in fact.
You get a string with the URL to the file. First you want to check that it has a valid extension for the file it points to. WP has a function to do this – wp_check_filetype().
Once you're sure it's the right filetype you can escape it as a URL at return.
/**
* Sanitization for image uploads.
*
* @param string $input This should be a direct URL to an image file.
* @return string Return an escaped URL to a file.
*/
function best_reloaded_sanitize_image( $input ) {
// allowed file types.
$mimes = array(
'jpg|jpeg|jpe' => 'image/jpeg',
'gif' => 'image/gif',
'png' => 'image/png',
);
// check file type from file name.
$file_ext = wp_check_filetype( $input, $mimes );
// if filetype matches the allowed types set above then cast to output,
// otherwise pass empty string.
$output = ( $file_ext['ext'] ? $input : '' );
// if file has a valid mime type return it as a valid url.
return esc_url_raw( $output );
}
Controls and Settings for Branding Options and Image Upload
There are 3 sets of controls and settings here, one for each of the options we listed above. The most complicated is the image upload control, as it builds its control from the class of a core control. It's a little more complicated to look at but works essentially the same.
// on/off toggle.
$wp_customize->add_setting( 'display_navbar_brand', array(
'default' => 0,
'sanitize_callback' => 'best_reloaded_sanitize_checkbox',
) );
$wp_customize->add_control( 'display_navbar_brand', array(
'label' => __( 'Enable the navbar branding options which can be a small image and the site-title.', 'best-reloaded' ),
'section' => 'best_reloaded_navbar',
'settings' => 'display_navbar_brand',
'type' => 'checkbox',
) );
// brand image.
$wp_customize->add_setting( 'brand_image', array(
'default' => '',
'sanitize_callback' => 'best_reloaded_sanitize_image',
) );
$wp_customize->add_control(
new WP_Customize_Image_Control(
$wp_customize,
'brand_image',
array(
'label' => __( 'Add a brand image to the navbar.', 'best-reloaded' ),
'section' => 'best_reloaded_navbar',
'settings' => 'brand_image',
'description' => __( 'Choose an image to use for brand image in navbar. Leave empty for no image.', 'best-reloaded' ),
)
)
);
// toggle text on/off in brand.
$wp_customize->add_setting( 'display_brand_text', array(
'default' => 0,
'sanitize_callback' => 'best_reloaded_sanitize_checkbox',
) );
$wp_customize->add_control( 'display_brand_text', array(
'label' => __( 'Select the checkbox to display the site title in the navbar as brand text.', 'best-reloaded' ),
'section' => 'best_reloaded_navbar',
'settings' => 'display_brand_text',
'type' => 'checkbox',
) );
Outputting Navbar Brand in a Bootstrap Theme
Now at this point I realised that output would be slightly more complicated than just echoing values. I also spotted that very long titles could quite easily break the layout of the navbar, so I needed to account for that.
When the brand is turned on you can output 3 things.
The Brand Image
The Site Title
Brand Image + Site Title
Some logic for deciding what is output is needed at runtime, so instead of echoing values in the template file I added an action hook. The hook will trigger, check if we should output a brand, try to build the brand and then ultimately output it if we have a brand to use.
The Hook & Action
The hook is a standard action hook for WP.
/**
* Fires the navbar-brand action hook.
*
* @since 1.2.0
*/
function best_reloaded_do_navbar_brand() {
/**
* Used to output whatever featured content is desired for the navbar brand.
*/
do_action( 'best_reloaded_do_navbar_brand' );
}
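The template then only needs to call that wrapper wherever the brand should appear – inside the navbar markup in this theme's case (the exact placement is an assumption on my part):
<?php best_reloaded_do_navbar_brand(); ?>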
The action calls a function to perform the output logic and stores the returned value. It then tests if it has a value, sanitizes it against a list of accepted HTML tags and attributes, then echoes it to the page.
/**
* Echoes the markup output by the navbar branding function.
*
* @return void
*/
function best_reloaded_output_navbar_brand() {
// try to get the branding markup.
$output = best_reloaded_navbar_branding();
// if we have output to use then sanitize and echo it.
if ( $output ) {
$allowed_brand_tags = array(
'span' => array(
'class' => array(),
),
'img' => array(
'id' => array(),
'class' => array(),
'src' => array(),
'alt' => array(),
'width' => array(),
'height' => array(),
'style' => array(),
),
);
echo wp_kses( apply_filters( 'best_reloaded_filter_navbar_brand', $output ), $allowed_brand_tags );
}
}
add_action( 'best_reloaded_do_navbar_brand', 'best_reloaded_output_navbar_brand' );
Function to Generate Navbar Brand Markup
The function that generates the markup also handles the logic of what is output and deals with the issue of long titles breaking things.
I added a default character cap of 50 chars and another customizer option as an override to allow long titles if the site owner wants them.
$wp_customize->add_setting( 'allow_long_brand', array(
'default' => 0,
'sanitize_callback' => 'best_reloaded_sanitize_checkbox',
) );
$wp_customize->add_control( 'allow_long_brand', array(
'label' => __( 'Very long titles break the default navbar layout, if you want to allow very long titles here then check this box. NOTE: You can also turn off the search form for more space.', 'best-reloaded' ),
'section' => 'best_reloaded_navbar',
'settings' => 'allow_long_brand',
'type' => 'checkbox',
) );
The function that returns the markup looks like this:
/**
* Builds out a .navbar-brand based on options set in the theme.
*
* @return string containing html markup for brand
*/
function best_reloaded_navbar_branding() {
// initial value for the output is false.
$brand_output = false;
// check for image set in theme options.
$brand_image = get_theme_mod( 'brand_image', '' );
// Did we get an image or is the brand text turned on?
if ( $brand_image || get_theme_mod( 'display_brand_text', false ) ) {
// since we have at least 1 of the items then start the output.
$brand_output = '<span class="h1 navbar-brand mb-0">';
if ( $brand_image ) {
// we have an image.
$brand_output .= '<img id="brand-img" class="d-inline-block align-top mr-2" src="' . esc_url( $brand_image ) . '" >';
}
if ( get_theme_mod( 'display_brand_text' ) ) {
// text is toggled on, get site title.
$site_title = get_bloginfo( 'name', 'display' );
// very long site titles break the navbar so cap it at a generous 50 chars.
if ( strlen( $site_title ) <= 50 || get_theme_mod( 'allow_long_brand', false ) ) {
$brand_output .= esc_html( $site_title );
}
}
$brand_output .= '</span>';
}
// this will return the markup if we have any or it will return false.
return $brand_output;
}
Next Steps
Now that this works and I've tested it I will push the update to the .org repo and think about my next set of tweaks and changes.
Software And Tool Choices
My job consists of 3 primary task types, and I have a preference for which software to use for each:
Analysing log files.
Reading and writing code.
Browsing the internet.
Most of the time I opt for open source over closed and choose cross-platform options where available.
Browser Choice – Chrome/Firefox
As a browser, I want to say I use fully open software. I do not. I use Google Chrome primarily (Firefox, which is open source, is my secondary – half a point for that, maybe?).
Chrome is based on the open source Chromium, so its origins are open. It may also still follow Chromium as upstream. I use Chromium on minimal virtual machines, but not often.
There are tracking and closed systems built into Chrome which I make use of. Cloud syncing is useful for me.
Chrome is not fully open but it was forked from open software and for me the closed source parts are an acceptable drawback.
Plus it's the most popular browser choice among users. I need to see the web in the same way that most people see it.
Reading and Writing Code – Atom
Reading and writing code I do in the Atom editor. It's fully open, started internally at GitHub, and is built by them and others to be the best open source editor it can be.
For anyone working with code who does not need a special proprietary IDE for a given purpose (most people working with code), I highly recommend Atom. It's well maintained, constantly developed and improved based on the needs of the developers using it.
Atom is built with a framework called Electron (again open, from GitHub) which compiles and runs JavaScript (Node) as desktop applications. It allows building for the desktop to be very akin to building for the web, meaning transferable skills for developers.
If Atom didn't exist I would use Lime Text (OSS variant of Sublime Text) or Notepad++.
Scanning Logs – Terminal and BASH
I do a lot of work in the terminal, often in several terminals at the same time. Working with the CLI is actually an incredible way to multi-task and effectively monitor progress. Most of the time on the command line I'm using BASH syntax. Sometimes it's PowerShell… let's avoid that conversation!
I use Ubuntu on my main dev machine. Ubuntu ships with terminals that run BASH. Most Linux OSes run BASH as well, so connecting to another machine's command line feels familiar regardless of the machine.
Logs are usually files containing plain text. Many command line tools exist to read through text files. An incredibly useful tool is called grep. It is used to search input for strings or regex matches.
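For example, a couple of typical invocations (the log paths vary by system):
# every line mentioning an error in the Apache error log.
grep -i "error" /var/log/apache2/error.log
# regex match: find 5xx response codes in an NGINX access log.
grep -E " 5[0-9]{2} " /var/log/nginx/access.log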
I used to write a lot of blog posts on a number of different topics. I even had paid positions for weekly articles.
The last few years I’ve written less and less. Subjects have narrowed to mainly web developer focused topics as I no longer have the time or the inclination to explore such widely diverse topics in-depth to write about them.
What I learned might be useful tips for others. Here’s a couple of takeaways from sharing blog content online for the last 8-10 years.
1. Write What I Know Already
You don’t always need to write about brand new topics or vary the discussion with other points of view. It’s ok to sometimes just write what you know and are good at.
I am happy to write about what I know. Realizing that fact has allowed me to start writing more frequently and more freely.
The words flow easier, it requires less research and reference material and I can be more confident what I am saying is accurate.
2. Enjoy It, Even When Rambling
When I write I often ramble a lot. A simple idea may be 5 or 6 paragraphs by the time I’m done. During editing it becomes more concise.
I should write it all down while I am enjoying it.
3. Edit After Some Time, But Not Too Much Time.
I have a terrible habit of part-writing posts. 1,000 words in one session burns me out. I take a break and come back later; sometimes later is weeks later. The longer between sessions, the harder it is to pick back up on the flow.
The same is true between writing and editing. If you wait too long you can’t remember what you intended during a ramble and you may not edit it to give proper clarity because of that.
4. Keep All Drafts
I write many intros and parts of posts. I sometimes come back to them in a few days or weeks. Sometimes I’ve even come back to a post in drafts after 3 years.
When you’re inspired the words come easily; when you lose that inspiration it’s hard to keep going. The inspiration can come back, or something in the future can make the post more relevant or topical.
5. Incoherent Thoughts Are Sometimes Useful
Sometimes when you write stuff down it comes out wrong. Other times it is jumbled and badly arranged. I’ve even written things that on re-read make absolutely no sense.
Even those incoherent thoughts are worth keeping. I mean there’s no reason not to keep them but you might be surprised how looking back on those can give new ideas or a burst of fresh inspiration.
Open Source Operating Systems and Server Software
A lot of open source code is enabled by other software, tools, specifications and systems that are also open source. The most obvious enabler is the availability of open source operating systems. These are used on local machines but are even more common in the infrastructure powering systems and services.
Operating Systems
Open source OSes are only possible because of the ability to take many other pieces of OSS and link or modify them in such a way that they work well together as a whole.
I mainly use Linux OSes: Ubuntu, CentOS, CoreOS, Arch. At the heart of them all is the Linux kernel. All open, all developed in public.
Server Software – Specifically HTTP Servers
Another specific type of software that I rely on is HTTP servers. These servers handle requests and responses between clients and servers, returning the rich content we expect on the web today.
There are 2 specific pieces of software that dominate the HTTP server domain: Apache and NGINX.
I'd guess that 75% or more of all HTTP requests made over the internet are responded to by one or the other.
Without both OSes and HTTP servers being available as open source I doubt that the web would be what it is. I expect my job might not exist.
PHP & JavaScript
WordPress is primarily written in PHP with many JavaScript components for use in the browser. PHP is itself an open source language and JavaScript is an open specification.
Coding for WordPress most of the time involves working with pure PHP or JavaScript and then hooking that code into WP with some more code.
MySQL
The application layer of most applications, including WordPress, connects to a data layer that is often a MySQL database. MySQL is another open source project (although around the time MariaDB was created, the community was very much up in arms).
Node
Node is another popular system that I work with a lot. Essentially it runs JavaScript without a browser.
Many people are first introduced to Node as part of build tools – especially since task runners became more popular. Grunt and Gulp run in Node. If you've ever run an npm install command, you've used Node.
An NGINX reverse proxy for WordPress sites running on Apache is my standard setup for running WP sites. I've got a pretty slick, entirely self-contained setup using Docker to proxy multiple WordPress instances: NGINX reverse proxying to WP on Apache with PHP 7.
Every single shared and managed host I've personally used in the last 10-15 years ran Apache as the default HTTP server, as did every client I've ever had with a shared or managed account. I've only ever once been offered the option of anything different, and it was not the default configuration.
NGINX is very capable of doing the exact same thing as Apache but I see it used more commonly as a proxy. You can also use Apache for a proxy if you want to.
Apache and NGINX are both HTTP servers, and they are pretty interchangeable if you are only interested in the end result of a page reaching the requesting user.
Some Key High Level Differences Between Apache and NGINX
Apache is incredibly well supported and used by a huge amount of servers. It can be installed and works almost right out of the box. It's modular, works on many systems and is capable of hosting a wide range of sites with relatively minimal configuration.
It's the default http server of choice for so many for a reason – it copes well with most situations and is generally simple to configure.
On the other hand, NGINX has a smaller market share, can be a little more tricky to install and make work right, and may require additional setup for particular applications.
It's not as modular (turning on features sometimes requires a complete rebuild from source) but it performs a lot better than non-tuned Apache installs. It is less memory hungry and handles static content far better than Apache. In comparisons it excels particularly when handling concurrent connections.
Why Put An HTTP Server In Front Of An HTTP Server?
I get asked this by site builders a lot more than I ever thought I would. There are several technical and infrastructure reasons why you may want to do this. There are also performance and privacy reasons. I won't go into great detail about any of them, but I encourage you to Google for more detail if you are intrigued.
There are 2 simple reasons why I do this that are both related to separating the access to a site from the operation of a site.
Isolating the front-end from the back-end means that I can have specially tweaked configurations, run necessary services spanning multiple host machines, and know that all of that is transparent to the end user.
The other reason is performance based. The front-end does nothing dynamic, it serves only static html and other static content that it is provided from the backend services. It can manage load balancing and handle service failover. It can cache many of the resources it has – this results in less dynamic work generating pages and more work actually serving the pages once they have been generated.
When To Cache A Site At The Proxy
I cache almost every request to WordPress sites when users are not logged in. Images, styles and scripts, the generated html. Cache it all, and for a long time.
That is because the kinds of sites I host are almost completely content-providing sites. They are blogs, service sites and resources. I think most sites fit into that same bucket.
These kinds of sites are not always updated daily, and comments on some posts are days or weeks apart. Single pages often stay the same for a long time; homepages and taxonomy pages may need updating more often, but still not so often as to require a freshly generated page every time.
Some Particular Caching Rules and Configs For These Sites
A good baseline config for my kind of sites would follow rules similar to these:
Default cache time of 1 month.
Default cache pragma of public
Cache statics, like images and scripts, on first request – cache for 1 year.
Cache HTML only after 2 requests; pass back 5-10% of requests to the backend to check for an updated page.
Allow serving of stale objects and do a refresh check in the background when it occurs.
Clear unrequested objects every 7 days.
A long default cache lifetime is good to start with; I'd even default to 1 year in some instances. 1 month is more appropriate in most cases though.
Setting the cache type to public means that not just browsers will cache, but other services between request and response as well.
Static resources are unlikely to change ever. Long cache lifetimes for these items. Some single pages may have content that doesn't ever change but the markup can still be different sometimes – maybe there's a widget of latest articles or comments that would output a new item every now and again.
Because of that you should send some of the requests to the backend to check for an updated page. Depending on how much traffic you have and how dynamic the pages are you can tweak the percentage.
The reason that HTML is set not to be cached on the first 2 requests is because the backend sometimes does its own caching and optimizations that require 1 or 2 requests to start showing. We should let the backend have some requests to prime its cache, so that the proxy ends up caching the fully optimized version of the page.
Serving stale objects while grabbing new ones from the backend helps to ensure that as many requests as possible are served from cache. If the backend object hasn't changed then the cached copy just has its date refreshed, but if it has been updated then the cache stores the new item.
Every so often, clearing out cached items that were never requested helps to keep the total cache size down.
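To make that concrete, here is a minimal NGINX sketch of those rules. The cache zone name, paths and backend address are placeholders, and the 5-10% pass-back is omitted (in NGINX that takes extra work with something like split_clients):
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=wp_cache:10m
                 max_size=1g inactive=7d use_temp_path=off; # unrequested items expire after 7 days.

server {
    listen 80;
    server_name example.com;

    # static resources: cache on first request, keep for a year.
    location ~* \.(css|js|png|jpe?g|gif|svg|woff2?)$ {
        proxy_pass http://127.0.0.1:8080;    # Apache backend (placeholder address).
        proxy_cache wp_cache;
        proxy_cache_valid 200 1y;
        expires 1y;
        add_header Cache-Control "public";
    }

    # generated html: let the backend prime its own cache first.
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_cache wp_cache;
        proxy_cache_valid 200 1M;            # default cache time of 1 month.
        proxy_cache_min_uses 2;              # only store after repeat requests.
        proxy_cache_use_stale error timeout updating;
        proxy_cache_background_update on;    # serve stale, refresh in background.
        expires 1M;
        add_header Cache-Control "public";
    }
}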
Email deliverability is deceptively complex. For most people it just works. You write an email, send it and it arrives at the other end. A lot goes on between when you click send and when it is accepted at the other end.
What goes on between clients and mail servers – and between mail servers themselves – is complicated enough, but people also need to make sure messages don't end up in the SPAM folder when they get there.
Ensuring Email Deliverability – SPF, DKIM & DMARC
There is so much SPAM email being sent that almost every email goes through more than one SPAM check on its journey between sender and receiver.
Different places do different kinds of checks. Often when email is sent from your computer or phone it goes up to an external outgoing mail server to be sent. Even at that early stage some checks might be done – your mail client might do SPAM score checking and the mail server should certainly require authentication for outgoing mail.
When it leaves your server it bounces through routers and switches, different hosts and relays, before arriving at the receiving mail server. Checks may be done in the process of its transfer.
When the end server receives the message it will probably do more checks before putting it into the mailbox of the receiver. In the end the receiver might even do additional checks in the mail client.
Securing Your Outgoing Mail
There are a handful of accepted standards to help make sure mail you send gets to where it needs to be and that it stays out of the SPAM folder. They also help prevent anyone sending mail and spoofing your address or pretending to be you.
Mail Missing In Transit
Mail from known bad hosts, IP ranges and domains is often terminated en-route.
You want this to happen. You should not be sending mail from any known bad addresses.
The most commonly used method to ensure the host sending outgoing mail is authorised to send for that domain is called SPF.
SPF – Sender Policy Framework
At the DNS server you can add some records that inform others which hosts and IPs you want to allow mail to be sent from. You also set default actions to take when messages fail SPF check.
Not everyone treats SPF records with the respect they deserve. That's because a lot of SPF records are actually misconfigured; strictly trusting a system which many have obviously misconfigured would not be great for everyone.
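For illustration, an SPF record is just a DNS TXT record on the sending domain. Something like this (the IP is a placeholder) authorises the domain's A and MX hosts plus one explicit address, and asks receivers to hard-fail everything else:
example.com.  IN TXT  "v=spf1 a mx ip4:203.0.113.25 -all"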
The next common way to secure your outgoing mail is DKIM.
DKIM – DomainKeys Identified Mail
DKIM is a method to cryptographically sign a message, either as the origin or an authorised intermediary host. Receivers can use the key to confirm the signature of the message and that it's authorised and untampered.
Since DKIM requires key generation and is underpinned by a more complex set of sub-systems it is often treated with much more authority than SPF.
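The public half of the key pair also lives in DNS, in a TXT record under a selector. A trimmed-down example (the selector name is arbitrary and the key is truncated):
mail._domainkey.example.com.  IN TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3..."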
Some mail hosts will use SPF or DKIM to validate a message. Some hosts don't. And many treat failures differently.
DMARC allows you to instruct mail servers who listen exactly what you want to happen to messages that fail those SPF or DKIM checks.
You can set a policy of:
do nothing
quarantine (goes to spam)
or reject
As well as the percentage of mails to apply the policy to (this helps during initial testing and when any changes are made).
What it also does is provide a method for mail receivers to easily contact you and report the results of the mail they have processed for you. They will report sending IPs and results from SPF/DKIM, as well as what they did with the message in the end.
That information is extremely useful to anyone managing an outgoing mail server and can be used to spot problems with sending (or fake senders) very quickly.
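Like SPF, the DMARC policy is published as a DNS TXT record, at _dmarc on your domain. An example that quarantines failures, applies the policy to all mail and requests aggregate reports (the addresses are placeholders):
_dmarc.example.com.  IN TXT  "v=DMARC1; p=quarantine; pct=100; rua=mailto:dmarc-reports@example.com"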
When You Want Mail To Be Terminated In Transit
If mail is received and you have not authorised it then you want it to be terminated before it gets into anyone's mailbox. At the very least you will want it to go to SPAM.
Mail failing authorisation is probably using a spoofed from address or is otherwise illegitimate.
SPF, DKIM and DMARC combined helps to stop any mail you did not authorise to send from ending up in front of the user. That prevents server algorithms picking up on cues from the user when they delete without opening or throw messages into spam folders.
When Termination In Transit Is A Problem
I'm going to say that you always want unauthenticated mail to be terminated. No exceptions. The problem is that very often other sites spoof your email for a legitimate reason.
Say you fill in a form online and add your email address, often that notification is sent to a site owner via email with your address as the FROM address.
Those messages will fail your checks (actually sometimes they might not and instead be allowed through but treated as a soft failure).
It's a common practice but I'm going to say it right now. It's just plain wrong. You should never be sending mail with a FROM address that you are not explicitly allowed to send for.
The proper configuration is this, please use it:
FROM: [server address]
TO: [receiver address]
REPLY-TO: [form filler address]
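In WordPress, for example, this is exactly what the headers argument of wp_mail() is for. A sketch with hypothetical form values:
// send from an address on our own domain, reply to the form filler.
$headers = array(
	'From: Contact Form <forms@example.com>', // placeholder sending address.
	'Reply-To: ' . $visitor_name . ' <' . $visitor_email . '>',
);
wp_mail( get_option( 'admin_email' ), 'New contact form submission', $message, $headers );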
Deliverability for Senders with SPF, DKIM and DMARC is Dramatically Improved
No matter what you are sending mail for – personal mail or business mail, follow-ups, outreach messages or newsletters – it's always better when it arrives at its destination.
Using these systems helps to build domain trust with receivers and shows you have taken steps to secure your mail. Deliverability of mail that has had steps taken to ensure it arrives is generally better than mail sent with no thought about that.
The only messages you do not want to arrive are SPAM messages you have not authorised. These systems allow you to publish policies instructing receiving servers that you do not want that unauthorised mail to arrive.
Terminating mail that is questionable before users see it also means that cues used by email providers to spot messages users consider as SPAM are never shown on your messages. This increases the domain trust even more.
The planned release schedule for the Gutenberg editor plugin is once a week, on a Friday. Last week's release was missed and it jumped from 0.4.0 to 0.6.0 today.
There are many improvements and tweaks to the editor. Most notable for me was the addition of block validation and detection of modification outside of Gutenberg. I spotted this immediately, as the Cover Image block markup has changed and block validation detected every block I had previously added as being modified.
Modified blocks get locked in the visual editor to prevent breaking of any customizations added.
Also, since the cover image markup was changed, every one I had previously added had broken styles. That is what happens when using early-access, heavily-in-development software lol
New Block – Cover Text
The Cover Text block was added as a variant of the cover image block.
This is mainly a stylized text block with background and text color options.
Multiple lines and text styles can be used as well as adding links. There are 3 style selectors.
This is mainly a stylized text block with background and text color options.
Multiple lines and text styles can be used as well as adding links. There are 3 style selectors.
This is mainly a stylized text block with background and text color options.
Multiple lines and text styles can be used as well as adding links. There are 3 style selectors.
Above are all 3 of the different included formats, each with different colored text. At this exact moment the text color does not change; this is because of a small bug in the output of these blocks. I made an issue and submitted a PR with a fix. Hopefully it's fixed in the next version.
Testing that a site is operating correctly, and that its required services are also available, can be challenging. There are various surface metrics you can test, but often they are not reliable and are unable to give any depth of information about many important factors.
Surface Level Data
When it comes to web services you can get a good set of working-or-broken tests running with ease. By testing only surface data you can find out with some certainty whether all services are up or not.
There are loads of online systems that offer free site uptime checks. I've used Pingdom for it before but there are many others. The Jetpack WordPress plugin also has an uptime monitor feature which I have used.
Pinging Hosts
Many hosts are accessible through the internet and they will respond when you ask them to. You can ping the domain and assume a response from the host means it's ok. Checking ping response times and packet loss is a decent metric as well.
This doesn't check that what you want returned to user requests is what is being sent through. It only checks if the host is accessible.
Making HTTP Requests
When checking that a website is running you can go a step further than pinging and send an HTTP request to the site. Every HTTP response should contain a code number which can be used to determine success or failure.
When the HTTP service returns code 200 it indicates success. The downside of relying on HTTP response codes is that even success codes don't necessarily mean a site is running properly. Other services might not be working correctly and the site might not be giving the correct output.
One way to enhance http testing for site and service uptime is to do additional checks when success codes are returned. Testing the response for some known output (for example look for a certain tag in the header, perhaps inclusion of a style.css file). If your known output doesn't exist in the response and a success code is returned then there is a chance a supporting service is down.
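Both of those checks are quick to script with curl (the URL and marker string here are placeholders):
# status code only.
curl -s -o /dev/null -w "%{http_code}" https://example.com/
# count occurrences of a known marker in the response body.
curl -s https://example.com/ | grep -c "style.css"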
Deeper System Health Metrics
Surface level metrics can be an easy way to tell that mostly everything is working, or that something is broken somewhere. They often don't give any insight into what is broken or how well the working services are performing.
You can get all kinds of information from the server that runs your sites and services if you are able to open a session to the host.
Shared hosts rarely give shell access, and when they do it's always severely limited to ensure security between customers.
System Monitor
Even in a limited shell you can probably get information about your own running processes. Linux shells usually have access to the `top` command. It's essentially a task manager that shows things like CPU usage, Memory usage etc.
In top you should be able to see the total CPU cores, RAM, virtual memory, average system load and some detailed information about the running processes. In a limited shell you may only see processes running under your own user account, but on a dedicated server or VM you will probably be able to see all of the processes, which is using what system resource, and how much.
Realtime system metrics like this can show what is happening right now on a host.
Checking on Important Services
There are a number of ways to check status of different services.
Upstart Scripts
Many services will provide a way to check their status. Often these are provided as scripts for your operating system to execute. I've heard them called startup scripts, upstart scripts, init scripts.
Depending on your OS commands like these could be used to check on some service statuses.
service httpd status
service mysqld status
service memcached status
/etc/init.d/mysql status
/etc/init.d/apache2 status
/etc/init.d/memcached status
Checking Log Files
Most production software has built-in logging facilities. It can push data into the different system logs or to its own logging mechanisms. Usually logs end up as easily readable text files stored somewhere on the system. Many *nix systems store a lot of the logs in /var/log/ or /home/[username]/logs/.
When it comes to running websites the most common setup is a LAMP stack. Default settings usually log requests, some types of queries and PHP errors in those systems.
Reading logs can give you all kinds of useful information about services. There are also ways to configure some services to output more verbose data to the logs.
External Site, Service & Infrastructure Monitors
There are a number of dedicated server health monitoring suites available. Premium services like New Relic and DataDog can track all kinds of deep-level data using purpose-built reporting agents that run on a system and report all of that deep data from your servers and processes.
Until very recently I was a New Relic customer for personal sites. I used them especially for infrastructure monitoring and deep error reporting, and I would highly recommend them if that's what you're looking for. NOTE: New Relic also offers other services I did not use; check them out to see all the features.
Open Source Monitoring Suites
In addition to the premium services available for monitoring, there are also some fairly strong contenders in the open source market that are very capable.
Most can run on a host and check metrics for it and many can also check remote hosts as well.
Nagios springs to mind right away. It can do various service checks, pings, resource monitoring and system tests in a very configurable way. Its highly configurable nature makes it extremely powerful.
Munin is another piece of software I've used to keep track of things like network and disk IO, as well as post queue monitoring.
I can recommend both Nagios and Munin as potential monitoring solutions if you want to self-host them.