Don’t forget to make sure your test cases cover time-based events like leap years and Daylight Saving Time. Not that it would have helped much here, since a certificate expiry caused the issue. The moral of the story? Make sure critical apps have a disaster recovery plan that doesn’t rely on a single point of failure (your cloud provider).
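As a minimal sketch of the kind of test the post is suggesting, the snippet below exercises leap-day and DST-boundary dates against a hypothetical `cert_expiry` helper (a stand-in, not Microsoft's actual certificate code), where a naive "add one year" calculation can blow up on February 29:

```python
import datetime
import pytest

def cert_expiry(issued: datetime.date) -> datetime.date:
    """Hypothetical helper: a certificate issued today expires one year out."""
    try:
        return issued.replace(year=issued.year + 1)
    except ValueError:
        # Feb 29 has no counterpart in a non-leap year; fall back to Feb 28.
        return issued.replace(year=issued.year + 1, day=28)

@pytest.mark.parametrize("issued", [
    datetime.date(2012, 2, 29),   # leap day, the classic trap
    datetime.date(2012, 3, 11),   # U.S. DST spring-forward date
    datetime.date(2012, 11, 4),   # U.S. DST fall-back date
    datetime.date(2012, 12, 31),  # year boundary
])
def test_expiry_is_a_valid_future_date(issued):
    expiry = cert_expiry(issued)
    assert expiry > issued
```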
When the clocks struck midnight, things quickly got janky, and a cloud-system domino effect took hold. A large number of Western Hemisphere sites and the U.K. government's G-Cloud CloudStore were among the many stopped cold by the outage. Microsoft has been retracing its steps to determine exactly what happened and hasn't said much yet, although it did report in an Azure team blog that the problem has "mostly" been fixed.