As many of you know, last week we experienced a major outage, affecting multiple email and database servers for just under 5 days.
It was, without a doubt, the worst technical issue we’ve ever experienced as a company.
Thankfully, all systems were brought back up without any loss of data. While we were happy that we were able to restore our customers’ services, we knew the duration of the incident, as well as the incident itself, was unacceptable.
To make certain that something like this never happens again, we launched a full investigation to determine the root cause of the issue, outline the steps we took to handle it, and help us take preventative measures against this type of problem in the future.
I want to share this report with you, both to satisfy the curiosity of our more tech-savvy customers, and to illustrate the amount of time, work, and research it took to resolve this difficult issue. I also want to share the steps we’ll be taking to prevent this issue in the future, which are included at the end of the report.
So, here is the post-issue follow-up my system administrators delivered to me this morning. I think you’ll find it informative, and I hope it sheds some light on the mess that was last week:
Incident Name: Storage Outage – sas3
Incident Date: 2014-03-02
Report Date: 2014-03-14
Affected System: Storage sas3 on dbmail01
Affected Resources: 93 shared VMs (the mail and MySQL servers for cp9-11), covering 30,366 accounts.
Incident Root Cause:
The existing SAS SANs use a RAID50 configuration consisting of two RAID5 parity groups: one made up of the even-numbered drives and one made up of the odd-numbered drives. The array can survive two simultaneous disk failures as long as they are not part of the same parity group. In our outage, drive 6 failed and spare drive 10 was added to the RAID to rebuild the group. During the rebuild process, drive 0 failed, costing us the even-numbered parity group. This occurred just before 4AM EST on 3/2/2014 and put the RAID into an unrecoverable state. Because the potential for data loss was large, we contacted our hardware vendor’s support line before acting, and were escalated to their engineering team. Total call time was 10 hours.
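To make the failure mode concrete, here is a minimal model (illustrative only, not our controller’s actual logic) of the RAID50 layout described above: two RAID5 parity groups, one of even-numbered drives and one of odd-numbered drives, where each group tolerates exactly one failed member.

```python
# Minimal model of a RAID50 built from two RAID5 parity groups:
# even-numbered drives in one group, odd-numbered drives in the other.
# A RAID5 group tolerates exactly one failed member; two failures in
# the same group take the whole RAID50 offline.

def survives(failed_drives):
    """Return True if the RAID50 stays online given a set of failed drive numbers."""
    even_failures = sum(1 for d in failed_drives if d % 2 == 0)
    odd_failures = sum(1 for d in failed_drives if d % 2 == 1)
    return even_failures <= 1 and odd_failures <= 1

# Drive 6 alone: degraded but online.
print(survives({6}))     # True
# Drives 6 and 1: different parity groups, still online.
print(survives({6, 1}))  # True
# Drives 6 and 0: both even-numbered -- the combination that took us down.
print(survives({6, 0}))  # False
```

This is why the timing mattered: the array had already absorbed the loss of drive 6, so the second even-numbered failure (drive 0) during the rebuild was exactly the one combination the layout cannot survive.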
In order to regain access to the data, we had to manually disable slots 10 and 15 (the spare drives) so that the RAID would not attempt to rebuild. Next, we reseated drive 6 which brought it online, but not as part of the RAID. This allowed the entire RAID to come back online in a degraded state with drive 0 active. Because drive 0 was still failing, we knew the RAID was in a very fragile state and that we had to move forward with great care or we would risk losing data.
Our hardware vendor showed us a binding procedure that allowed us to move the affected volumes off the storage system. We learned that if drive 0 failed again at any point during this process, the RAID would go offline and we could lose access to the data. With this in mind, we migrated the volumes one at a time, reducing stress on the drive and, with it, the chance of another failure. We were methodical and deliberate in the way we approached this and thankfully, we were successful in migrating all data from the storage without triggering another failure in drive 0. The process completed, and all customers were back online as of 3/6/2014 just after 6PM EST. The whole process took just under 5 days.
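The one-at-a-time strategy can be sketched as a simple loop; the volume names and the migration callback below are hypothetical stand-ins for the vendor’s actual binding procedure.

```python
# Illustrative sketch of the sequential migration strategy: only one
# volume transfer is ever in flight, keeping I/O load on the fragile
# drive as low as possible. Names and the callback are placeholders.

def migrate_sequentially(volumes, migrate_one):
    """Migrate volumes strictly one at a time, in order."""
    for volume in volumes:
        # Exactly one transfer in flight at any moment.
        migrate_one(volume)
        # (In practice, array health would be re-checked here before
        # starting the next volume.)

moved = []
migrate_sequentially(["vol-a", "vol-b", "vol-c"], moved.append)
print(moved)  # ['vol-a', 'vol-b', 'vol-c']
```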
Click here to view the timeline of events, from the initial outage to the final server’s reactivation.
What We’re Doing To Prevent This:
Granular Storage Alerts
Currently, our automated hardware checks notify us when a storage system has an issue of any type, but the alerts aren’t specific enough to tell us what the actual problem is. For instance, if a drive fails, we get a general notification rather than a ‘drive x has failed’ message. We are looking into more specific, granular notifications for individual disks.
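As a rough sketch of what those granular alerts look like, assume per-disk health states have already been collected (for example, via SNMP polling); the slot numbers and states below are made up for illustration.

```python
# Hypothetical sketch of granular, per-disk alerting. Assumes a per-disk
# status map has already been gathered (e.g. via SNMP polling); the data
# here is invented for illustration.

def disk_alerts(disk_status):
    """Turn a {slot: state} map into specific, per-disk alert messages."""
    return [f"drive {slot} has {state}"
            for slot, state in sorted(disk_status.items())
            if state != "ok"]

status = {0: "media errors", 6: "failed", 7: "ok", 10: "ok"}
for alert in disk_alerts(status):
    print(alert)
# drive 0 has media errors
# drive 6 has failed
```

The difference from our current setup is exactly the one described above: instead of a single “storage has an issue” message, each unhealthy disk produces its own actionable alert.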
Proactive Hardware Replacement
It may be possible to check specific disks via SNMP for early warning signs, such as disk errors, before they actually fail out of the RAID and trigger a rebuild. This should result in fewer drive failures and fewer rebuilds.
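A minimal sketch of that idea: flag drives whose error counters cross a threshold so they can be swapped proactively. The counter values and the threshold are assumptions for illustration, not real data from our arrays.

```python
# Illustrative sketch of proactive replacement: flag drives whose
# cumulative error counters (as might be read via SNMP) cross a
# threshold, before the controller fails them out of the RAID.
# The threshold and counter values are assumptions, not real data.

ERROR_THRESHOLD = 50  # hypothetical cutoff

def drives_to_replace(error_counters, threshold=ERROR_THRESHOLD):
    """Return the slots whose error count warrants proactive replacement."""
    return [slot for slot, errors in sorted(error_counters.items())
            if errors >= threshold]

counters = {0: 120, 3: 2, 6: 75, 9: 0}
print(drives_to_replace(counters))  # [0, 6]
```

Replacing drives 0 and 6 during a maintenance window is a planned, low-risk operation; waiting for them to drop out of the RAID forces an unplanned rebuild instead.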
Switch All Arrays to More Stable RAID
Our storage arrays currently use RAID50. Though this is standard, a rebuild can take more than eight hours to complete. That is an 8-hour window during which we risk losing a second drive from the same parity group. We can shrink this window significantly by moving to a faster-rebuilding RAID10 setup, which can rebuild in about 3 hours.
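The rebuild times quoted above translate directly into a smaller exposure window; the arithmetic is simple enough to show:

```python
# Back-of-envelope comparison of the rebuild risk windows. The rebuild
# times come from the figures quoted above; the rest is arithmetic.

raid50_rebuild_hours = 8  # observed worst case on our RAID50 arrays
raid10_rebuild_hours = 3  # approximate RAID10 rebuild time

reduction = 1 - raid10_rebuild_hours / raid50_rebuild_hours
print(f"RAID10 shrinks the vulnerable window by {reduction:.1%}")
# RAID10 shrinks the vulnerable window by 62.5%
```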
Thanks for reading and again, we’re so sorry about this inconvenience. If you have questions, feel free to ask them in the comments.