Good backup strategy = a better night’s sleep. It’s as simple as that. It’s also difficult to get right and needs constant tweaking, not just because the amount of data we’re producing is growing exponentially, but also because the options for quick, secure backup are increasing, and that’s a very good thing!
Conversely, a bad backup strategy is worse than useless: you can pay a lot of money for a big bag of false hope.
Why take backups anyway?
Even when you think that all your data is safely in the cloud, that doesn’t mean it’s always available. SaaS providers do go bust. Even the mighty Amazon has the occasional cloud wobble, and when that happens the rest of the internet wobbles, too.
Why have a ‘backup strategy’?
You have many types of data. Some is business critical and some isn’t. Some changes hourly or daily. Some might need to be archived, or not backed up at all.
However and wherever you back up will also be determined by how quickly you need to get the data back, and in what circumstances you might need to do so.
Every business unit will have different requirements, and often very different ideas about how critical their function is. The backup strategy needs to take all this into account.
The strategy also feeds into the Disaster Recovery and Business Continuity Plans. There can be no recovery or continuity planning unless you know you can restore data in any crisis event.
Here are 6 things you might want to consider when creating a backup strategy. The best thing is that the first letters give you a handy mnemonic.
Bare Dave Stent, or better still: Bare Dave’s Tent. Whoever Dave is, the state of his undress and his choice of abode, temporary or permanent, are not important.
Let’s dive in:
I will get some IT jargon out of the way first. You need to know about RTO and RPO because these terms will keep coming up when talking about backup and recovery.
RTO is the Recovery Time Objective. You may be familiar with this if you have undertaken any Business Continuity planning. This is the maximum period of time that a process (or systems affected by data loss) can be interrupted without causing unacceptable consequences to the business.
RPO is the Recovery Point Objective. It’s a given that you will lose some data when disaster strikes. RPO is the maximum amount of data that can be lost in terms of time, which roughly equates to the point at which the last reliable backup of the affected system was taken.
Both these will change depending on how often the data changes in each system to be backed up and how critical these systems are to the functioning of your organisation. This is where the people with overall responsibility for backups will need to sit down with the owners of the data. Backup strategy is not a one size fits all plan!
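The relationship between backup frequency and RPO can be made concrete with a small sketch. This is an illustrative calculation, not part of any standard: the function names and the example figures (a nightly backup against a four-hour RPO) are hypothetical.

```python
from datetime import timedelta

def worst_case_data_loss(backup_interval: timedelta) -> timedelta:
    """With backups taken every `backup_interval`, a failure just before
    the next run loses up to one full interval of data."""
    return backup_interval

def meets_rpo(backup_interval: timedelta, rpo: timedelta) -> bool:
    """A backup schedule meets the RPO only if the worst-case loss fits inside it."""
    return worst_case_data_loss(backup_interval) <= rpo

# Hypothetical example: a system with a 4-hour RPO backed up once a night.
print(meets_rpo(timedelta(hours=24), timedelta(hours=4)))  # False: nightly is not enough
print(meets_rpo(timedelta(hours=1), timedelta(hours=4)))   # True: hourly gives headroom
```

In other words, the agreed RPO for each system directly dictates the minimum backup frequency, which is exactly why the data owners have to be in the room when it is set.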
What do you need to back up? If you’re lucky then a careless or malicious employee might only delete the folders on the file server that he or she has access to. A hardware or disk fault could take out the whole server, maybe including directory services such as Active Directory. The job of restoring not only all the files but all the configuration, users and permissions could be a very long one.
Do you just need to back up files and folders, or does it need to be whole disks, servers or virtual machines, even those running in the cloud? What is critical to your organisation? You should also consider whether your router or firewall configs and DNS settings should be in scope.
You need something to recover your data to. It could be the same server, right from where the data was lost in the first place, if the hardware is undamaged. But if something needs replacing how long will that take to arrive? If you’ve had a malware attack can you really trust your existing systems even if everything’s been wiped?
How long your backups will take to restore can seriously affect your RTO so prioritising which data needs to be restored first is of the utmost importance.
Any backup is only as good as the quality of the data, and every piece of backup software and storage medium has the potential to fail.
Backups need to be tested regularly. The shorter the RTO, the more important the testing is and the more often it should be done. Problems you may have to overcome when trying to restore a whole server should be practised when you have time to document them, not when the disaster has already happened and the boss is breathing down your neck.
Work with the teams whose data you’re checking so you know that just because the backup software says bytes restores equals bytes backed up, the files still work. That’s peace of mind for you and the people whose work you’re protecting.
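One simple way to put numbers behind that peace of mind is to checksum the restored files against the originals after a test restore. Below is a minimal sketch of that idea, assuming plain files on disk; the function names are my own, and real restore testing should also involve the data owners opening the files.

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of a file, read in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(source: Path, restored: Path) -> list[str]:
    """Return relative paths that are missing or differ after a test restore."""
    problems = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source)
        dst = restored / rel
        if not dst.is_file() or file_digest(src) != file_digest(dst):
            problems.append(str(rel))
    return sorted(problems)
```

An empty list means every file came back byte-for-byte; anything else is a problem to document and fix while there is still time to do so calmly.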
Choosing a data storage solution means balancing cost, speed, reliability, access and location.
As a bare minimum you should consider the age-old 3-2-1 approach:
3 copies of the data, over 2 different media, 1 offsite.
The media could be a different server or NAS RAID array, good old-fashioned tapes, and, increasingly, the infinite storage capacity of the cloud (and by infinite I mean you’ll never fill that up!).
Disks are fast and reliable, and very quick to restore from, but are susceptible to ransomware attacks and any physical disaster that befalls the server they’re connected to.
Tapes haven’t gone away yet and do provide an easy and cheap way to create offsite copies. They also create an air-gap that ransomware can’t get through (and if it can then, well, that means the machines have well and truly taken over!). There is endless talk about how tapes are less reliable than other forms of backup, but for the moment they’re here to stay.
Cloud backup is a convenient and increasingly cheaper option as long as the increase in the amount of data that needs backing up isn’t exceeding your internet bandwidth! It’s now possible to store backups across Amazon regions, but for compliance reasons you will need to ensure that any cloud backup is not being kept in a region or country that it’s not supposed to be in.
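The 3-2-1 rule lends itself to a mechanical check against an inventory of where each copy lives. This is an illustrative sketch (the class and field names are my own invention), but it captures the three conditions exactly as stated above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BackupCopy:
    """One copy of a dataset: the medium it sits on and whether it is offsite."""
    medium: str      # e.g. "disk", "tape", "cloud"
    offsite: bool

def satisfies_3_2_1(copies: list[BackupCopy]) -> bool:
    """3 copies of the data, on at least 2 different media, at least 1 offsite."""
    return (
        len(copies) >= 3
        and len({c.medium for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )

# Hypothetical inventory: the live server disk, a local NAS, and a cloud copy.
inventory = [
    BackupCopy("disk", offsite=False),
    BackupCopy("disk", offsite=False),
    BackupCopy("cloud", offsite=True),
]
print(satisfies_3_2_1(inventory))  # True
```

Note that the traditional count includes the live production copy, so in practice the rule means the primary data plus two backups, one of them somewhere else entirely.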
Backups should be encrypted both at rest and in transit. Whether you’re backing up virtual machines to the cloud over HTTPS, or copying your employee data folder to a USB stick so you can give it to a colleague (which you probably shouldn’t be doing anyway!), you must ensure that strong encryption is used.
This is a cybersecurity issue, a data protection issue and very often a compliance requirement.
Whatever your backup method, don’t forget to make sure that the encryption keys are backed up themselves, and to somewhere you’ll have access no matter what happens!
How is backup data getting from A to B and how fast does it get there? A NAS RAID mirroring a server disk is pretty much instant. Tapes held offsite are as fast as traffic conditions; just don’t leave them in the car on a hot day! The vast potential offered by cloud storage might be limited only by your internet connection, but if you have terabytes of data to back up you might find that your backup speed gets slower and slower as the provider throttles the connection. Restoring those terabytes of data is then often most quickly achieved by the provider dumping your data onto hard disks and shipping them, but at quite a cost!
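It is worth doing the bandwidth arithmetic before committing to cloud backup. The sketch below is a back-of-the-envelope estimate only; the 80% utilisation figure and the 2 TB / 100 Mbit/s example are illustrative assumptions, and real throughput also depends on throttling, contention and protocol overhead.

```python
def transfer_hours(data_gb: float, bandwidth_mbps: float, utilisation: float = 0.8) -> float:
    """Rough time to push a backup over a network link.
    data_gb: gigabytes to move (decimal GB); bandwidth_mbps: link speed in
    megabits per second; utilisation: fraction of the link the backup job
    can realistically use (assumed 0.8 here)."""
    megabits = data_gb * 8 * 1000          # GB -> gigabits -> megabits
    seconds = megabits / (bandwidth_mbps * utilisation)
    return seconds / 3600

# Hypothetical example: 2 TB of data over a 100 Mbit/s line at 80% utilisation.
print(round(transfer_hours(2000, 100), 1))  # roughly 55.6 hours
```

Over two days for a single full backup: numbers like that are why incremental backups, and disk-shipping services for large restores, exist at all.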
Backup strategy is a moveable feast. The amount of data you hold is almost certainly growing. What your priority files are will be constantly changing. Your infrastructure will be evolving, and of course the options you have are growing in capability and changing in cost.
With constant evaluation, testing and communication you can really lay the foundation for effective disaster recovery, no matter how big (or indeed small) the crisis.
TBG Security can work with you to create an effective backup strategy. They have huge experience across a broad range of organisations that are subject to varying levels of compliance. They can help determine what data you should be backing up and its level of criticality, your RTOs and RPOs, and what the most cost-effective solutions are for backup, testing and restoration.