
Removing SSH keys from authorized_keys file

If you have a server that receives connections from various computers you may have set up key-based authentication. This is a good move, but occasionally you may want to remove a key to revoke access from one of those computers.

To do this you simply need to remove the public key from the authorized_keys file on the server. You can do this in a text editor, but it can get messy if there are dozens of entries in the file and you just want to remove one.

This is where sed is your friend. With this one line you can delete the entry for the hostname you want to revoke, and keep a backup of the file in case you make a mistake:

sed -i.tmp '/hostname/d' ~/.ssh/authorized_keys

“hostname” is the hostname of the computer connecting to the server (it usually appears in the comment at the end of the key line). If you don’t know the exact hostname you can use part of the public key instead; it works the same way.

The command keeps a backup of the original authorized_keys file as authorized_keys.tmp and rewrites authorized_keys without the public key for the hostname you specified.
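
If you only know part of the public key, the same trick works. A minimal sketch, with a purely illustrative key fragment (pick a chunk of the key you want to revoke that contains no “/” characters, or escape them):

# Check which entry the pattern matches before deleting anything
grep 'AAAAB3NzaC1yc2EAAAADAQAB' ~/.ssh/authorized_keys

# Delete the matching entry, keeping a backup as authorized_keys.tmp
sed -i.tmp '/AAAAB3NzaC1yc2EAAAADAQAB/d' ~/.ssh/authorized_keys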


Should I use the keywords meta tag?

In case you are in a hurry, the short answer is No.

Google stopped using the keywords meta tag years ago, yet people still seem to be trying to leverage it to get their website “up the rankings”.

The reason Google stopped using it is that people generally used it to try and climb the rankings and game the system. It rarely provided any genuine information that Google couldn’t already get from analysing the content of the page.

Google does, however, use the meta description tag, often using it (if you have a good quality description) to populate the small excerpt you see on the results page.

So go and spend some time improving and updating the content of your site, but please don’t bother coming up with a list of keywords; it’s a waste of time.


Home backup strategy

Backup strategy is something I am passionate about and something I have dealt with for a lot of my working life. This post comes at it from the angle of a home user, with options to suit the average person who wishes to protect their data.

For most people it is great to have so many photos of friends and family floating around, but stop to think what it would be like if your computer died and you lost it all. It’s worth giving some thought to how you can protect your data.

I wrote a post about backing up your family photos. This post follows on from that and gives a much more bulletproof solution, which you can apply to your whole household.

3-2-1 backup strategy

The concept of the 3-2-1 backup strategy is that you should always have 3 copies of your data, two of which can be in the same place (but on DIFFERENT platforms) and one must be off site.

This can sound a little over the top at first glance, but when you analyse the strategy it makes a lot of sense.

Let’s take a look…

Firstly, what are we trying to protect against?

The answer is loss of data, which can happen for a number of reasons, but they come in two flavours:

Physical loss of data (fire, flood, theft etc).

To mitigate this we must have an OFF SITE copy. This is one of your 3 copies and it can either be kept in the cloud (Dropbox, Google Drive, Crashplan etc.) or it can be taken away on a hard drive (or whatever medium) and stored elsewhere. The hard drive approach only works if the data is fairly static; if you add to it you need to be quick to take a new copy off site. For most of us an automated cloud backup solution is better.

Digital loss of data

This can be data corruption, scratches (in the case of DVDs) or other digital factors. Take DVDs as an example. If you have your data on 2 DVDs and they get scratched then you have lost your data (yes, you probably haven’t taken enough care, but nevertheless), or at the very least you are reliant on the cloud version, which you may or may not have tested. There have been cases where people have gone to their cloud copy and found it was not up to date, because their Internet connection was slow or for other reasons. The cloud is a great backup, but it shouldn’t be your only one.

That’s why the best scenario is one where the other 2 copies you keep locally are on different platforms. This could be a hard drive and DVDs, or two systems like Time Machine and a backup to a local NAS drive.

An example of a good backup system

I often get asked for an example of the perfect setup. I don’t think a perfect system exists, as it all depends on your needs and how much data you have, but here is a good starter for six!

Backup Number 1 – Time Machine

I use a Mac, so Time Machine is built into the operating system. If I plug a hard drive into my Mac it asks me if I want to set it up as a backup. It then backs up my data to this external drive as I create it. You can even get wireless versions that sit on your network. There is the added advantage of being able to recover deleted items and previous versions of changed files.
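
If you prefer the command line, Time Machine can also be driven with the built-in tmutil tool. A minimal sketch, assuming your external drive is mounted at /Volumes/BackupDrive (the name is illustrative):

# Point Time Machine at the external drive and turn on automatic backups
sudo tmutil setdestination /Volumes/BackupDrive
sudo tmutil enable

# Start a backup straight away rather than waiting for the next scheduled run
tmutil startbackup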

Backup Number 2 – The NAS

If you have a home NAS (some routers allow you to plug in a hard drive to create shared storage) then you can schedule your Mac or PC to sync your crucial data to this shared storage either at set intervals or before you shut down. If you don’t want to do this then you can simply copy your data to this location as and when you need it (e.g. when you have downloaded new photos from your camera). Sometimes people use this second backup just to store critical stuff like family photos.
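
As a sketch of the scheduled approach, rsync plus a cron job can push your important folders to the NAS share overnight (both paths here are illustrative and assume the share is already mounted):

# Copy the Pictures folder to the NAS share, transferring only what has changed
rsync -av ~/Pictures/ /Volumes/NAS/backup/Pictures/

# Or run it automatically at 2am every night by adding this line with crontab -e
0 2 * * * rsync -av ~/Pictures/ /Volumes/NAS/backup/Pictures/ >> ~/backup.log 2>&1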

Backup Number 3 – Crashplan (The Cloud)

Crashplan can be set to run in the background and back up your data to the Crashplan servers. There is a charge for the product, but if you have a friend you can set it up to back up to each other’s computers and it won’t cost you a penny other than the cost of the hard drive storage.

Another option for this is to simply use Dropbox, but you have to be careful not to exceed your storage allowance.

Conclusion

There you have it, a simple 3-2-1 backup strategy that gives you peace of mind that your data is safe in case of disaster.


Choosing a NAS drive

More and more data is being captured in various ways, from music to photos to applications, on all sorts of devices. The average household has at least 2 mobile phones full of photos, and often digital cameras, laptops, netbooks and tablets, all holding vast amounts of data. A NAS (Network Attached Storage) is a great way to keep some of this data centrally and to back it up from its primary location, be that your phone or another device.

But what about the cloud?

The cloud is great for certain data, but for some things you can’t beat having a local copy. Often the files are big and the Internet connection is not fast enough to get you to your data as quickly as you need it. You may also have hundreds or thousands of files on your hard drive that you want to offload, and these are usually easier to browse over your home network than via a cloud interface.

Let’s just say the cloud is great and it has its place, but for the purpose of this post let’s focus on NAS systems.

NAS systems are not all equal

In order to explore this statement it is necessary to ask the following questions:

  • What do I need a NAS for?
  • Does it matter if the data gets lost?
  • Do I just need it to store files or do I want it to give me more value?
  • Do I need to access it from outside my home?

First of all I am going to tackle the second point, “Does it matter if the data gets lost?”. This is crucial when choosing a NAS. If the answer is yes then you MUST get a NAS with some form of redundant drive system (RAID or similar). Some NAS systems allow you to use 2 drives and use the total space (e.g. 1TB + 1TB = 2TB). This is useless if a drive fails. What you need is a configuration where 1TB + 1TB = 1TB of usable space, but if a drive fails you still have your data.
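
For illustration, this is roughly what that mirrored setup looks like with Linux software RAID (mdadm). This is a sketch, not a full guide, and the device names are illustrative:

# Create a two-disk mirror (RAID 1): 1TB + 1TB = 1TB usable, and it survives one drive failure
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Put a filesystem on the array and mount it as your shared storage
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/nas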

There are plenty of NAS devices that offer this, but this is where it gets tricky. Some NAS devices use proprietary systems, meaning the drives and the data are dependent on the hardware and the operating system. The drives may be fine, but if the hardware they are connected to fails then you are without your data until you can replace it with the same hardware (or something similar from the same manufacturer). I have seen this happen: someone had a NAS for years, the hardware failed and they could no longer get hold of compatible hardware.

The solution to this is to do one of the following:

  • Build your own Windows/Linux server (requires time and knowledge)
  • Buy spare hardware in case of failure
  • Use a software RAID system that is compatible with most hardware

The third option is quite popular, and my preferred solution is NAS software called UNRAID. This software allows you to connect any number (well, a LOT) of hard drives together, with the largest one acting as the parity drive. Once your parity is in place, the total storage of your other drives is your NAS capacity. There are details of how it works on their site, but it’s really pretty simple and works well. If you lose a drive you take it out, put in a new one and it rebuilds the data. Even if you lose 2 drives you have only lost the data on those drives; the other drives’ data remains in place. If you lose your hardware you just put the drives in a new PC or server, and you can either access the data directly or install UNRAID on that server and you’re good to go again.

I will do a full post on UNRAID another time, but take a look at their site if you want more info.

To go back to the first set of bullet points, if you want more value or access from outside your home then some of the Synology kit is great, as is UNRAID or FreeNAS. They all provide plugins or apps that enable you to share out your music, or access your videos from smart TVs or even over the Internet. UNRAID has a neat Docker system that lets you install Plex, so you can access all your movies from all your devices (phones, tablets, laptops and smart TVs). If your Internet connection is up to it you can even view them when you are out of the house.
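
If you are comfortable with the command line, running Plex in Docker looks roughly like this. The folder paths and timezone are illustrative, and UNRAID’s web interface will normally set this up for you:

# Run the official Plex container, mapping its config folder and your media from the array
docker run -d --name plex --network=host \
  -e TZ="Europe/London" \
  -v /mnt/user/appdata/plex:/config \
  -v /mnt/user/Media:/data \
  plexinc/pms-docker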

Conclusion

Choosing a NAS really comes down to your preferences and needs; different people will make different choices. Here are a couple of options:

For techies (or geeks?)

UNRAID is definitely my choice. It gives you great data protection and the ability to run all sorts of apps, all on your own hardware. Pair it with an HP Microserver (for just over £100) and you have a great little NAS box that can even run virtual machines!

For less techie people

For people who just want it to work, something like the Synology NAS systems is a good choice. They do some of what the UNRAID system does, but you are reliant on their hardware. They are a good brand though, so you shouldn’t get caught out by them going out of business.

One last thought… if your data is important to you it is not enough to have it in one place; you should have it in 2 places, or preferably 3. It’s also preferable to have an off-site backup, which is where Crashplan comes in, but that’s another post in itself.


Force browsers to the HTTPS version of your site

After writing my posts about the benefits of HTTPS I thought it might be a good idea to write a short post letting you know how to easily send visitors to the HTTPS version of your site rather than the HTTP version.

Open your .htaccess file in the root of your website (where your main index file is). If you don’t see one then you either need to tell your FTP client to display hidden files or you may need to create a new .htaccess file.

Enter the text as follows, customising it to the URL of your site:

RewriteEngine On
RewriteCond %{SERVER_PORT} 80
RewriteRule ^(.*)$ https://hostsynergy.co.uk/$1 [R,L]

That’s it, all users going to any page on your site should now have it served up over HTTPS.
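
You can quickly check the redirect is working from the command line. A minimal sketch using curl against your own domain:

# Fetch just the headers of the HTTP version and look for the redirect target
curl -sI http://hostsynergy.co.uk/ | grep -i '^location'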

Simples!


Uptime, and the 99.99% scam

If you have looked at the websites of hosting providers on the web you can’t fail to have seen claims of 99.9%, 99.99% or even 99.999% uptime. The higher the number, the better the deal, right?

The answer is “not necessarily”

It all boils down to the SLA (Service Level Agreement). This is an agreement that the hosting business gives to its customer to keep their site online, or rather it is an agreement that the downtime will not exceed a certain threshold.

So what exactly is 99.9%, or 99.99%?

Availability   Downtime per year   Downtime per month
99%            3.65 days           7.3 hours
99.9%          8.76 hours          43.8 mins
99.99%         52.6 mins           4.38 mins
99.999%        5.26 mins           26.3 seconds

Looking at the above figures you can see how little time per month a site can be down in order not to go over the SLA.
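
If you want to work out the figure for any availability percentage, the sum is simple. A quick sketch using awk, taking an average month as 525,600 / 12 = 43,800 minutes:

# Allowed downtime per month for a given availability percentage
availability=99.99
awk -v a="$availability" 'BEGIN { printf "%.2f minutes per month\n", (100 - a) / 100 * 43800 }'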

Ask yourself: when can they reboot a server, apply security patches and so on? In enterprise environments (banks, medical systems etc.) there is a LOT of infrastructure in place: failover servers, mirrored storage, even backup data centres! In all but the most expensive hosting solutions you cannot expect the same level of availability… yet some places advertise it.

So… what’s the catch? Is it too good to be true?

In a word, yes. Most of these places cannot genuinely guarantee uptime at that level. Realistically, most months they will probably manage it easily, but when something goes wrong (as it does for all companies, even Microsoft) there will be downtime, and it will be more than 4.38 minutes in a month!

So how can companies advertise an uptime level they cannot hit? Are they lying?

This is the clever (or sneaky?) part. An SLA is an “agreement”, not a promise. It means that if the agreed level is breached then the business will compensate the customer, be it in service credits or cash refund. The level of this is completely dependent on the business terms and conditions (or contract).

I have seen some hosting companies (no names) state “we will compensate downtime that is in excess of our SLA at our standard rate”. In this case it was a cloud provider that charged by the minute. In effect they were just saying that for every minute your server is offline you will be refunded a minute’s charge. On that deal they may as well say they have a 100% uptime SLA; it makes no difference.

In enterprise situations there are vast penalties for exceeding SLAs, so a lot of thought and planning goes into meeting them. Your general Internet hosting provider, on the other hand, usually has small print to avoid high penalties, so they can get away with boasting about massive uptime SLAs to bring in new business without worrying about what happens if things go wrong.

So are we all doomed?

No, not at all. Servers are generally quite reliable, and month on month most providers will hopefully give you 100% uptime. The way to judge a good hosting provider is not what uptime SLA they promise, but how they react when things go wrong. Do they react quickly, do they keep you informed, and do they solve the problem and explain why it happened if you ask?

A good web host is worth their weight in gold, just not for the simple reason of a 99.999% SLA.


Backing up your family photos

It’s a while since I have posted about backup strategy but it’s such an important topic I thought it was worth a revisit.

If you are like me you probably have all sorts of data around your house, across multiple computers, phones and devices. While everyone is different, I think most people would agree that if they lost their data en masse, the most devastating loss would be their family photos.

While the risk of losing photos to water damage, sunlight or general wear and tear is much lower than it used to be, the biggest risk nowadays is hard drive failure. I have seen many examples of people taking their dead hard drive to their designated “techie friend” in the vain hope that it could be recovered. Sometimes it can, sometimes it can’t, and even when it is possible it often involves significant cost.

Companies are pushing bigger and bigger hard drives, NAS devices, USB sticks and so on upon us, and it’s great that we can now store years and years of data on them, but in actual fact the situation (and the risk) is getting worse. Where you used to have your photos spread over 4, 5 or 6 devices, now you can fit them all on a single hard drive… so why not? The answer is simple: if that device fails you lose EVERYTHING!

Yes, it is possible to have 2 hard drives and back up everything twice, but in reality how many people do that?

NAS devices also allow you to configure them in a mirrored RAID mode, where two disks hold the same data and it survives if one of them fails. The problem is you can also configure them to use the full capacity of both disks (RAID 0), which looks great on paper… more disk space than you can shake a stick at. The trouble is you’re back to losing everything if a single disk fails.

The other issue is how often you actually back up your data. Most people find that when they have a failure it’s been “quite a while” since they last backed up.

The best situation is to back up automatically, with a system where you don’t have to think about it. Services such as CrashPlan offer this if you have a reasonable internet connection. It’s great because it keeps your data safely off-site, so if you have a flood or fire you can always get your data back. CrashPlan also allows you to set up your own servers (or go peer-to-peer with a friend) so you can back up to those instead (or as well!).

The ideal situation is to back up to multiple places, keeping several copies of your data in multiple locations. That is a lot to take on for many people, which is why I often recommend CrashPlan: it is simple to use and doesn’t cost the earth. If you don’t want to go that route then by all means continue to use hard drives, but please consider buying a second one, using that as well, and keeping it in a different location to the first.

If you have any questions, please leave a comment.


Why you should not use VMWare snapshots

Snapshots in Virtual Machines are a great idea. They allow you to test out changes safe in the knowledge that you can revert if necessary, without any clutter left behind from an uninstall. They are brilliant for that!

This post comes off the back of seeing many people using VMWare snapshots as some form of backup system. DO NOT DO THAT!

I cannot stress enough that a VMWare snapshot is not a backup at all. The snapshot is kept in the same location as the main storage files, so if you lose the storage that your main VM is kept on you also lose the snapshots.

If you want to use a VMWare specific method of backing up you need to be looking at exporting your VM as a template. You can then farm that off to some safe location and use it to restore if necessary.

Unfortunately this is not explained well by VMWare at all, leading to all sorts of confusion and people using the technology for the wrong purpose.

Added to this, when you have a snapshot VMWare stops using your primary storage file and starts writing the changes to other files (delta files). This results in you using a lot more storage than anticipated and also puts a much higher load on the I/O system.
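
If you run ESXi you can get a feel for this from the command line. A rough sketch, assuming shell access to the host; the VM ID, datastore and file names are illustrative and the exact delta file naming depends on the disk format:

# List VMs with their IDs, then show any snapshots a particular VM is carrying
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/snapshot.get 12

# Delta files such as MyVM-000001-delta.vmdk show where snapshot changes are piling up
ls -lh /vmfs/volumes/datastore1/MyVM/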

To conclude, snapshots are great, but use them as intended: for a short time, to prove changes in a system. Once you have done that, delete them quickly. The longer you leave it the longer it will take, as VMWare has to merge all the delta changes back into the main storage file before removing a snapshot, and this is SLOW!

 


HTTP vs HTTPS

HTTPS has always been used to secure websites that handle sensitive information such as credit card numbers, but most website owners tend not to give it much thought outside those requirements.

In 2014 Google announced it was starting to give a slight ranking advantage to HTTPS sites over their HTTP counterparts. This started out as pretty much a tie-breaker: where two sites were otherwise equal, it would rank the HTTPS site first.

Last year Google also started actively looking for HTTPS content ahead of HTTP content. That means if your site supports both protocols Google will automatically look for the HTTPS version.

With the advent of HTTP/2 and its current requirement for HTTPS, now is a good time to consider switching over. As well as giving your users a more secure experience, you will also be in a good position to support HTTP/2 as soon as your host does.


HTTP/2… why you should care!

“HTTP/2 (originally named HTTP/2.0) is the second major version of the HTTP network protocol used by the World Wide Web.”

Now we have that out of the way, there are a few reasons to take notice of this and a few things you may want to do in order to take advantage of it.

What is wrong with normal HTTP?

HTTP is old… in terms of the Internet it is very, very old indeed. It was standardised in 1997, when a lot of today’s web developers were still learning to walk! It did the job, but as websites became bigger and more complex it was a constant struggle to get a site to display at a reasonable speed, even over modern high-speed connections.

The crux of the issue is that sites are made up of lots of files and the HTTP protocol only allows a certain number of transfers at the same time. This limit increased over time, but there has always been a situation whereby files sat in a queue waiting to be downloaded by the browser.

What web developers started to do was use techniques such as merging multiple CSS files into a single one and using CSS sprites so icons were downloaded in a single file, all to get around the queueing system. There was also the problem that if a file got “stuck” then everything behind it had to wait in line, causing very erratic behaviour at times.
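
The merging trick itself is nothing clever; at its crudest it is just concatenation (the file names are illustrative):

# Combine several stylesheets into one file so the browser makes a single request
cat css/reset.css css/layout.css css/theme.css > css/site.css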

How does HTTP/2 help?

HTTP/2 does away with the queuing system by using something called multiplexing. Without going into the finer details it basically means that browsers can download a lot more content at the same time (if the browser and server both use HTTP/2) and things should perform a lot faster.

Server push is also used to speed up the rendering experience. In the pre-HTTP/2 world the browser downloads the full HTML page first, then starts grabbing the assets it needs such as CSS files and JavaScript. With HTTP/2 the server is able to push files it knows the client will need into the cache, so by the time the HTML file is loaded the asset files have also started to arrive. Add in header compression and you have a much more streamlined way of loading pages.
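
You can check whether a server already speaks HTTP/2 with curl, assuming your curl build includes HTTP/2 support (the domain is illustrative):

# Ask for HTTP/2 and print the protocol version the server actually negotiated
curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://example.com/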

So what’s the catch?

While technically there is no requirement for encryption to use HTTP/2, several implementations have said they will only support HTTP/2 over a TLS encrypted connection. There are several reasons for this, which may or may not change over time, but for now you must use an HTTPS connection to take advantage of HTTP/2.

What this means for most site owners is that they must have an SSL certificate for their domain; if not, their users will get nasty messages about unsecured connections and/or mixed content.

Should I use HTTP/2?

Google have already stated they are starting to give sites using HTTPS a slight advantage in the ranking mechanism, so now is a good time to at least consider using HTTP/2 for your sites.

That said, HTTP/2 is very new and currently only supported by a handful of hosts. For now, if you convert your site to use HTTPS you will be in good shape to enable HTTP/2 as soon as it is supported by your host, and thus take advantage of a very real boost in performance!