IX Outage – Post Issue Technical Report
As many of you know, last week we experienced a major outage, affecting multiple email and database servers for just under 5 days.
It was, without a doubt, the worst technical issue we’ve ever experienced as a company.
Thankfully, all systems were brought back up without any loss of data. While we were happy that we were able to restore our customers’ services, we knew the duration of the incident, as well as the incident itself, was unacceptable.
To make certain that something like this never happens again, we launched a full investigation to determine the root cause of the issue, outline the steps we took to handle it, and help us take preventative measures against this type of problem in the future.
I want to share this report with you, both to satisfy the curiosity of our more tech-savvy customers and to illustrate the amount of time, work, and research it took to resolve this difficult issue. I also want to share the steps we’ll be taking to prevent this issue in the future, which are included at the end of the report.
So, here is the post-issue follow-up my system administrators delivered to me this morning. I think you’ll find it informative, and I hope it sheds some light on the mess that was last week:
Incident Name: Storage Outage – sas3
Incident Date: 2014-03-02
Report Date: 2014-03-14
Services Impacted:
Storage sas3 on dbmail01
93 shared VMs (mail and MySQL for cp9-11); resources of 30,366 accounts were affected.
Incident Root Cause:
The existing SAS SANs use a RAID50 configuration consisting of two RAID 5 parity groups: one made up of the even-numbered drives and one made up of the odd-numbered drives. The array can tolerate two simultaneous disk failures only if they are not part of the same parity group. In our case, drive 6 failed and spare drive 10 was added to the RAID to rebuild the group. During the rebuild, drive 0 failed, taking down the even-numbered parity group. This occurred just before 4 AM EST on 3/2/2014 and put the RAID into an unrecoverable state. Because the potential for data loss was large, we contacted our hardware vendor’s support line before acting and were escalated to their engineering team. Total call time was 10 hours.
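To make the failure mode concrete, here is a minimal sketch (not our actual controller logic) of why this RAID50 layout tolerates two simultaneous failures only when they land in different parity groups. The even/odd slot grouping mirrors the description above.

```python
def raid50_survives(failed_slots):
    """Return True if a RAID50 made of two RAID 5 parity groups
    (even-numbered slots and odd-numbered slots) is still readable.

    Each RAID 5 group tolerates exactly one failed member, so the
    array survives only if neither parity group has lost two drives.
    """
    even_failures = sum(1 for s in failed_slots if s % 2 == 0)
    odd_failures = sum(1 for s in failed_slots if s % 2 == 1)
    return even_failures <= 1 and odd_failures <= 1

# Drive 6 failing alone: degraded but online.
# Drive 0 failing mid-rebuild put two failures (0 and 6) in the
# even-numbered group, which is exactly what took sas3 offline.
```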
Response:
In order to regain access to the data, we had to manually disable slots 10 and 15 (the spare drives) so that the RAID would not attempt to rebuild. Next, we reseated drive 6, which brought it online, though not as part of the RAID. This allowed the entire RAID to come back online in a degraded state with drive 0 active. Because drive 0 was still failing, we knew the RAID was in a very fragile state and that we had to move forward with great care or risk losing data.
Our hardware vendor walked us through a binding procedure that allowed us to move the affected volumes off the storage system. We learned that if drive 0 failed again at any point during this process, the RAID would go offline and we could lose access to the data. With this in mind, we migrated the volumes one at a time to minimize stress on drive 0 and reduce the chance of another failure. We were methodical and deliberate in our approach, and thankfully we migrated all data from the storage without triggering another failure in drive 0. The process completed, and all customers were back online as of 3/6/2014 just after 6 PM EST. The whole process took just under 5 days.
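For the curious, the migration loop amounted to something like the sketch below. The `migrate_volume` and `drive_healthy` callables are placeholders for vendor tooling, not real commands; the point is the one-at-a-time, check-before-each-step approach that kept load on the fragile drive low.

```python
def migrate_all(volumes, migrate_volume, drive_healthy):
    """Migrate volumes off a degraded array one at a time.

    migrate_volume(name) -> bool  # placeholder for vendor tooling
    drive_healthy()      -> bool  # placeholder health probe for drive 0

    Halts immediately if the fragile drive looks unhealthy, since a
    second failure in drive 0 would take the whole RAID offline.
    """
    moved = []
    for vol in volumes:
        if not drive_healthy():
            raise RuntimeError(
                "drive 0 degrading; halting migration after %r" % moved)
        if migrate_volume(vol):
            moved.append(vol)
    return moved
```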
Timeline:
Click here to view the timeline of events, from the initial outage to the final server’s reactivation.
What We’re Doing To Prevent This:
Improve Monitoring
Currently, our automated hardware checks notify us when a storage system has an issue of any type, but they are not specific enough to tell us what the actual problem is. For instance, if a drive fails, we get a general notification rather than a ‘drive x has failed’ message. We are looking into more specific, granular notifications for individual disks.
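As an illustration, the difference between today’s generic alert and the per-disk alerts we want could look like this sketch (the slot/status map and status strings are hypothetical, not our vendor’s actual fields):

```python
def per_disk_alerts(disk_status):
    """Turn a per-slot status map into specific alert messages.

    disk_status: dict mapping slot number -> status string, e.g.
    "ok", "failed", "rebuilding" (illustrative values only).
    Anything other than "ok" produces a named alert instead of a
    generic "storage system has an issue" notification.
    """
    alerts = []
    for slot, status in sorted(disk_status.items()):
        if status != "ok":
            alerts.append("drive %d has status '%s'" % (slot, status))
    return alerts
```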
Proactive Hardware Replacement
It may be possible to check via SNMP for things like disk errors on specific disks before they actually fail out of the RAID and trigger a rebuild. This should result in fewer drive failures and fewer rebuilds.
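One way to do this is to watch SMART counters that tend to climb before a drive drops out of an array. A minimal sketch (the thresholds are illustrative, and the attribute names follow common SMART naming conventions rather than any specific vendor’s instrumentation):

```python
# Illustrative thresholds; real values would be tuned per drive model.
THRESHOLDS = {
    "Reallocated_Sector_Ct": 10,
    "Current_Pending_Sector": 1,
    "Offline_Uncorrectable": 1,
}

def should_replace(smart_attrs):
    """Flag a drive for proactive replacement when any watched SMART
    counter meets or exceeds its threshold. Missing attributes are
    treated as zero."""
    return any(
        smart_attrs.get(name, 0) >= limit
        for name, limit in THRESHOLDS.items()
    )
```

Running a check like this on a schedule would let us swap a suspect drive during a maintenance window instead of waiting for the controller to eject it mid-day.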
Switch All Arrays to More Stable RAID
Our storage arrays currently use RAID50. Though this is standard, a rebuild can take more than eight hours to complete. That is an 8-hour window during which losing a second drive from the same parity group would take the array offline. We can shrink this window significantly by moving to RAID10, which can rebuild in about 3 hours.
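The risk reduction is easy to see with rough arithmetic: the chance of a second failure among the vulnerable drives scales with the length of the rebuild window. Assuming (purely for illustration) a constant per-drive failure rate and a made-up MTTF, the exposure drops roughly in proportion to the window, 8 hours down to 3:

```python
import math

def p_second_failure(n_vulnerable_drives, rebuild_hours, mttf_hours=1_000_000):
    """Probability that at least one of the vulnerable drives fails
    during the rebuild window, under a constant-failure-rate
    (exponential) model. mttf_hours is illustrative, not measured.
    """
    rate = n_vulnerable_drives / mttf_hours
    return 1 - math.exp(-rate * rebuild_hours)

# For small windows this is nearly linear, so an 8-hour rebuild
# carries roughly 8/3 the exposure of a 3-hour one, all else equal.
```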
Thanks for reading and again, we’re so sorry about this inconvenience. If you have questions, feel free to ask them in the comments.