5 Best Practices You Should Know To Ensure Your Data Is Safe
When talking about data security, I often tell the story of a company we worked with years ago that invested in a state-of-the-art tape backup system. The backup ran automatically every night, and they diligently replaced the tapes each day. One day, we got a call from a rather panicked-sounding person; let's say her name was Sarah. Sarah breathlessly asked, "Can you recover our server? Our building is on fire!" After making sure everyone was okay and being assured the fire department was there, we asked if she had her tapes. After a short pause, she replied, "I can see them through the window, sitting on top of the server, about to be burned."
Well, there is plenty wrong with that story if we look at it from today's perspective with today's technology. However, even in today's high-tech environment, there are some very important best practices you should follow to ensure your data is safe.
1) Follow the Rule of 3. This rule says you should always have at least three copies of your data: the original, a local backup at your location, and an offsite copy at another location as far away from your main site as possible. Three distinct copies are necessary for the simple reason that things don't always go as planned; if something goes wrong with one copy, you still have an alternative. Offsite matters for obvious site-wide disasters like the fire Sarah faced. Her company had been instructed to carry tapes offsite, but didn't make it a priority. Today, many companies handle the offsite copy with a cloud-based solution that takes away the trouble and worry of carrying media off-site. However, carry-off using disk drives is still a commonly used option for avoiding monthly storage costs.
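For readers who like to see the nuts and bolts, the Rule of 3 can be sketched in a few lines of shell. This is a simplified illustration only: the paths are made-up placeholders, and a real deployment would use dedicated backup software rather than a plain copy.

```shell
# rule_of_three_backup: hypothetical sketch of the Rule of 3.
# A plain recursive copy stands in for real backup software here.
rule_of_three_backup() {
    src=$1         # copy 1: the original, live data
    local_dst=$2   # copy 2: local backup on separate media
    offsite_dst=$3 # copy 3: offsite copy (e.g. a mounted cloud target)

    mkdir -p "$local_dst" "$offsite_dst"
    cp -R "$src/." "$local_dst/"    # refresh the local backup
    cp -R "$src/." "$offsite_dst/"  # refresh the offsite copy
}

# Example (all paths are placeholders):
# rule_of_three_backup /srv/company-data /mnt/backup-disk /mnt/offsite
```

The point of the sketch is simply that the same data lands in three distinct places, two of which survive if the third is lost.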
2) Use snapshot or image-based software. Newer backup software uses image-based technology that takes a snapshot of your whole server at a given point in time, then updates it as bits and bytes change. This software allows a complete restore of your entire server to a given point in time. It also allows you to restore to a virtual machine, removing the need to recover on the same hardware; it essentially makes your server's OS and data portable to new hardware. In addition, it allows multiple backups to occur during the day. Without this type of software, you are stuck with the old model of one backup every night, and a lengthy, very ugly recovery if a server goes down.
3) Make sure it is automated. This seems like it shouldn't need to be said, but when we complete IT assessments for companies interested in our help, we often find an amazing lack of a good backup. Before we got involved, the IT person backed certain things up when he had time, and users backed other things up on their own. Sometimes we find entire servers full of data that exist outside of the backup plan. This is a recipe for disaster. Backups should be automated to run on a regular schedule, at least every four hours during the business day, and they should be all-encompassing, capturing all of the data.
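As a rough illustration of what "automated" can look like, here is a hypothetical cron schedule. The script names and times below are placeholders, not a recommendation of a specific product; commercial image-based backup tools ship their own schedulers.

```shell
# Hypothetical crontab entries (script names and times are placeholders):
# an incremental backup every 4 hours during the business day, Mon-Fri,
# plus a full backup every night.
0 8-16/4 * * 1-5  /usr/local/bin/backup-incremental.sh
0 23 * * *        /usr/local/bin/backup-full.sh
```

However the schedule is expressed, the key is that backups run themselves on a fixed cadence, with no human remembering required.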
4) Check your recovery time. Make sure your backup software or plan meets your recovery objectives. There are lots of low-cost, non-commercial-grade backup systems that do not allow you to recover quickly. How long can your ERP system be down? How long can you be without CRM? If you are using a file-based cloud backup, recovery often takes days, not minutes or hours. Ask your IT person or IT services provider how long it would take to recover specific files and applications in the event of a failure, and make sure they are in a position to meet your recovery objectives.
5) Understand how it is monitored. Your on-staff IT person or IT services provider should be able to clearly and concisely answer the following questions: Will the backup software alert you by email or by creating a trouble ticket if it fails? What priority do you give that kind of failure? When was the last time you visually inspected the backup software to ensure there are no errors? Will someone know if there is a failure, even when you are out of the office or on vacation?
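To make the monitoring idea concrete, here is a minimal sketch of an automated check that fails loudly when a backup is stale or reported an error. The log path, age threshold, and "error" keyword are assumptions for illustration; a real setup would hook a check like this into a ticketing or alerting system.

```shell
# check_backup: hypothetical monitoring sketch. Returns non-zero (so an
# alerting system can page someone) when the backup log is missing, older
# than the allowed age, or contains an error line.
check_backup() {
    log=$1            # path to the most recent backup log (placeholder)
    max_age_hours=$2  # how stale a backup is allowed to be

    # Alert if no log exists that is newer than max_age_hours.
    if [ -z "$(find "$log" -mmin -$((max_age_hours * 60)) 2>/dev/null)" ]; then
        echo "ALERT: backup log missing or older than ${max_age_hours} hours" >&2
        return 1
    fi

    # Alert if the backup itself logged an error.
    if grep -qi "error" "$log"; then
        echo "ALERT: backup log contains errors" >&2
        return 1
    fi

    echo "OK: backup completed recently with no errors"
}

# Example (path is a placeholder):
# check_backup /var/log/backup/latest.log 6
```

The design point is that the check answers "did the backup actually succeed, recently?" rather than "did the backup job start?", which is the question that matters when a server goes down.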
It is not just small and medium-sized organizations that fall into the trap of not following best practices. In recent news, a California hospital was hit by ransomware that encrypted all of their files and shut down their IT infrastructure. After attempting to recover for several days, they threw in the towel and paid tens of thousands of dollars in Bitcoin to the hackers behind the ransomware so that their files would be decrypted. Their story is that they had backups, but the backups were questionable and would have taken far too long to recover. It was cheaper and faster to pay the ransom.
Securing your data is more important than ever. Paying close attention to these five best practices will help you stay safe and ensure your data is around to stay!
Scott Hirschfeld is the President of CTaccess, an Elm Grove IT support company that has been helping small businesses stop focusing on IT and get back to doing business since 1990. Under his leadership, CTaccess provides the business-minded approach of larger IT companies with the personalized touch of smaller ones. Connect with Scott on LinkedIn.