Top 10 tips on how to protect your data

Your business data is critical to the day-to-day running of your business. Ensuring that it is secure, and that appropriate policies are set for each type of data, provides peace of mind and a cost-effective data management solution.

1. Analyse your data

All primary data needs to be analysed to ascertain the following (see the sketch after this list):

  • The type and age of data
  • Creation, access and modification dates
  • Data volumes
  • Historical data growth
  • The largest servers, users and files
  • Duplicate data
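
As a first pass, a short script can surface much of this information from a file share. The Python sketch below totals volume by file type and by age; the /srv/data path and the age buckets are illustrative assumptions, and a dedicated SRM tool would go much further:

    import time
    from collections import Counter
    from pathlib import Path

    ROOT = Path("/srv/data")   # hypothetical file share to analyse
    NOW = time.time()

    volume_by_type = Counter()   # bytes per file extension
    volume_by_age = Counter()    # bytes per age bucket (last modified)

    for path in ROOT.rglob("*"):
        if not path.is_file():
            continue
        st = path.stat()
        volume_by_type[path.suffix.lower() or "(no extension)"] += st.st_size
        age_days = (NOW - st.st_mtime) / 86400
        if age_days <= 90:
            volume_by_age["0-90 days"] += st.st_size
        elif age_days <= 365:
            volume_by_age["91-365 days"] += st.st_size
        else:
            volume_by_age["over a year"] += st.st_size

    for ext, total in volume_by_type.most_common(10):
        print(f"{ext:>16}: {total / 1e9:.2f} GB")
    for bucket, total in volume_by_age.items():
        print(f"{bucket:>16}: {total / 1e9:.2f} GB")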

2. Decide on the importance of your data

Typically, four types of data can be identified within an organisation's environment, categorised by importance (a rule-based sketch follows this list):

  • Ultra-critical data: This is typically transaction-based application data, where even a few hours of data loss can have a severe business impact.
  • Critical data: This is other application and file data which has been created or accessed in the last 90 days.
  • Inactive / legacy data: This is all the inactive and legacy data, typically sitting on the mail server or local mail stores and on file servers, which needs to be protected for operational or compliance reasons.
  • Duplicate and non-business data: This data is either duplicated, does not meet any business requirement (e.g. an end user's music collection), or falls outside the organisation's retention requirements.
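
As an illustration, the categorisation can be expressed as a simple per-item rule. In this Python sketch the 90-day threshold comes from the list above, while the non-business extensions and the transactional data locations are assumptions for illustration only:

    from datetime import datetime, timedelta
    from pathlib import PurePath

    NON_BUSINESS_EXTENSIONS = {".mp3", ".mp4", ".avi"}   # assumed examples
    TRANSACTIONAL_PATHS = ("/srv/oracle", "/srv/sql")    # assumed locations

    def categorise(path: str, last_accessed: datetime, is_duplicate: bool) -> str:
        """Assign one of the four categories described above."""
        if is_duplicate or PurePath(path).suffix.lower() in NON_BUSINESS_EXTENSIONS:
            return "duplicate / non-business"
        if path.startswith(TRANSACTIONAL_PATHS):
            return "ultra-critical"
        if datetime.now() - last_accessed <= timedelta(days=90):
            return "critical"
        return "inactive / legacy"

    print(categorise("/srv/sql/orders.mdf", datetime.now(), False))   # ultra-critical
    print(categorise("/home/alice/song.mp3", datetime.now(), False))  # duplicate / non-business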

3. Set the correct policies for all your data

The business has to agree on the appropriate retention policies, which are based on the organisation's legal and compliance requirements. This could mean that all e-mail data needs to be protected for seven years and file system data for three years. The retention requirements will vary between different organisations and different sectors.
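
Once agreed, the policies can be captured as a simple schedule that tooling can enforce. A minimal sketch, using the seven-year and three-year figures from the example above (the data class names are assumptions):

    from datetime import datetime, timedelta

    # Retention periods must come from the organisation's own legal and
    # compliance requirements; these mirror the example figures above.
    RETENTION_POLICIES = {
        "email": timedelta(days=7 * 365),
        "file_system": timedelta(days=3 * 365),
    }

    def is_past_retention(data_class: str, created: datetime) -> bool:
        """True once an item has outlived its agreed retention period."""
        return datetime.now() - created > RETENTION_POLICIES[data_class]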

4. Ultra-critical data needs to be replicated or backed up throughout the day

This data could comprise e-mail data and other fast-changing databases such as Oracle or SQL Server. The only way to protect this data as it is created or modified is replication, as this can be carried out without impacting the servers the data resides on. Replication enables fast recovery for business continuity purposes and significantly reduces the recovery point objective (RPO) and recovery time objective (RTO).
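
The RPO improvement is easy to quantify: worst-case data loss is bounded by how often changes leave the primary server. A back-of-the-envelope comparison, with purely illustrative intervals:

    # Worst-case data loss (RPO) is bounded by the replication/backup interval.
    nightly_backup_interval_h = 24.0     # one backup per day
    replication_interval_h = 15 / 60.0   # replicate every 15 minutes (assumed)

    print(f"Nightly backup, worst-case loss: {nightly_backup_interval_h:.2f} hours")
    print(f"Replication, worst-case loss:    {replication_interval_h:.2f} hours")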

5. Critical data needs to be backed up once a day

All ultra-critical as well as critical data needs to be backed up once a day for disaster recovery purposes and retained for the appropriate retention period. Backups then need to be taken offsite daily to protect against site outages, system failures, fire and theft.
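
A simple way to enforce the once-a-day rule is a monitoring check that fails when the last successful backup is too old. A minimal sketch, assuming the backup job touches a marker file such as /var/backups/last_success on completion:

    import sys
    import time
    from pathlib import Path

    MARKER = Path("/var/backups/last_success")  # assumed: touched after each good backup
    MAX_AGE_HOURS = 26                          # daily schedule plus a grace period

    def backup_is_fresh() -> bool:
        if not MARKER.exists():
            return False
        age_hours = (time.time() - MARKER.stat().st_mtime) / 3600
        return age_hours <= MAX_AGE_HOURS

    if __name__ == "__main__":
        if not backup_is_fresh():
            print("WARNING: no successful backup in the last day", file=sys.stderr)
            sys.exit(1)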

6. Inactive and legacy data needs to be archived and retained

All data that is no longer critical, i.e. data that hasn't been accessed for a long time but is still considered important and of business value, should be archived. This process physically removes the data from primary systems and replaces it with placeholders. This will significantly reduce primary server, storage, management and backup costs while shrinking backup windows and improving recovery times.
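
A minimal sketch of the move-and-placeholder step follows; the /srv/data and /mnt/archive paths are assumptions, and commercial archiving products use stubs that clients can resolve transparently, which a plain text placeholder does not:

    import shutil
    from pathlib import Path

    PRIMARY_ROOT = Path("/srv/data")     # assumed primary file server path
    ARCHIVE_ROOT = Path("/mnt/archive")  # assumed secondary storage path

    def archive_with_placeholder(path: Path) -> None:
        """Move a file to archive storage and leave a placeholder behind."""
        target = ARCHIVE_ROOT / path.relative_to(PRIMARY_ROOT)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(path), str(target))
        # The placeholder records where the real content now lives.
        path.with_suffix(path.suffix + ".archived").write_text(str(target))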

7. Non-business and duplicate data needs to be deleted

As part of the archiving process, all non-business or duplicate data needs to be deleted. This ensures that only valuable business data is protected, and it reduces storage costs and management overheads.
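
Duplicates can be found reliably by grouping files by size and then comparing content hashes. The Python sketch below only reports candidates; deletion should remain a deliberate, reviewed step (the /srv/data path is an assumption):

    import hashlib
    from collections import defaultdict
    from pathlib import Path

    ROOT = Path("/srv/data")  # assumed file share

    def sha256(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Group by size first so only same-sized files are hashed.
    by_size = defaultdict(list)
    for p in ROOT.rglob("*"):
        if p.is_file():
            by_size[p.stat().st_size].append(p)

    for size, paths in by_size.items():
        if len(paths) < 2:
            continue
        by_hash = defaultdict(list)
        for p in paths:
            by_hash[sha256(p)].append(p)
        for dupes in by_hash.values():
            if len(dupes) > 1:
                print(f"duplicates ({size} bytes):", *map(str, dupes))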

8. Ensure that all data can be accessed by the data owners

End users need to have access to all their data, whether it is ultra-critical, critical or inactive. Ultra-critical and critical data resides on primary servers and storage and is accessed directly by end users. Inactive data that has been archived resides on archive servers and secondary storage. It is therefore important that all archived data is directly accessible by end users (the owners of the data) and does not require the assistance of IT administrators (the keepers of the data) or IT helpdesks. Replacing archived items with placeholders ensures this direct end-user access to archived data.

9. Ensure that recovery of ultra-critical and critical data is tested regularly

The DR plan needs to be tested regularly to ensure the business can recover its operations successfully and in a timely fashion. DR testing remains a major challenge for most IT departments.

Even though a DR test is a major operational disruption, it shouldn't be treated as a pro forma exercise; it needs to include true end-to-end testing all the way to production. The focus needs to be on recovering applications rather than servers: today's complex client-server and web-based multi-tier applications have components residing on multiple servers, with interdependencies between them. If recovery has not been tested all the way to the application level, it is very likely that problems will occur.
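
Application-level testing can be partly automated with smoke checks run against the recovered environment. A minimal sketch; the endpoints are hypothetical, and real tests should also exercise logins, transactions and inter-application dependencies:

    import sys
    import urllib.request

    # Hypothetical health endpoints in the recovered DR environment.
    CHECKS = [
        ("web front end", "https://dr.example.com/health"),
        ("order application", "https://dr.example.com/api/orders/ping"),
    ]

    failed = 0
    for name, url in CHECKS:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                ok = resp.status == 200
        except OSError:
            ok = False
        print(f"{name}: {'OK' if ok else 'FAILED'}")
        failed += 0 if ok else 1

    sys.exit(1 if failed else 0)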

10. Ensure data retention policies and categorisation of data are reviewed regularly

Regular storage resource management (SRM) audits provide continuous best-practice data management and allow for an objective review of data categorisation and retention policies. These audits will demonstrate whether existing policies deliver improved server and storage performance, reduced backup windows and improved data recovery speeds while reducing overall data management costs.