Design or Go Down: Mitigating outage risk through data centre setup

As the number of data centre outages continues to rise each year, Verne Global’s Director of Technical Services, Jorge Balcells, explores how data centre downtime could become a thing of the past.

According to Eaton’s 2015 Blackout Tracker, last year the UK suffered 640 data centre outages, up 23.5% from 2014 and 84% from 2010. These outages lasted an average of 50 minutes each – a combined total of 32,032 minutes (that’s more than 22 days of downtime).

Not only did these outages affect over 2.5 million people; each would also have cost the businesses involved an average of £6,000 every 60 seconds in process-related expenditures and lost opportunity costs.
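To put those figures together, here is a back-of-the-envelope check using only the numbers cited above (note that 640 outages at a rounded 50-minute average gives 32,000 minutes, close to the 32,032 reported):

```python
# Back-of-the-envelope check of the Eaton 2015 Blackout Tracker figures
# cited in this article. All inputs come from the text; nothing new.

OUTAGES = 640                # UK data centre outages in 2015
AVG_DURATION_MIN = 50        # average outage length, minutes (rounded)
COST_PER_MINUTE_GBP = 6_000  # estimated cost per minute of downtime

total_minutes = OUTAGES * AVG_DURATION_MIN      # ~32,000 min
total_days = total_minutes / (60 * 24)          # ~22 days
cost_per_outage = AVG_DURATION_MIN * COST_PER_MINUTE_GBP

print(f"Total downtime: {total_minutes} minutes (~{total_days:.0f} days)")
print(f"Cost of an average 50-minute outage: £{cost_per_outage:,}")
```

Even at the rounded average, a single typical outage costs around £300,000 – which is the scale of risk the design considerations below are meant to address.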

With figures like these, companies can’t afford the risk to revenue, nor the economic impact of damaged brand reputation, customer churn, and lost business.

Keeping the lights on is therefore crucial for businesses looking to mitigate such risk.

So, how can IT leaders begin to alleviate these issues at the core – the data centre itself – during the initial design process? And what solutions should data centre operators be working into blueprints to maximise resiliency and protect against outages?

Power by design

When it comes to data centre design, protecting campuses from external factors is the foremost consideration. For those in charge of the build process, the primary goal should be to balance these inherent risks with technical and economic advantage.

In my experience, there are three key considerations that could contribute to the resolution – or at least the significant reduction – of data centre downtime moving forward:

1.       Site selection

The most important step in eliminating outage risk starts with the selection of the site. Businesses should look to locations that benefit from stable power infrastructure, where the risk of the grid going down – and taking data operations down with it – is much lower.

Now, thanks to the cloud, companies no longer need to sit on their data (that is, keep it on premises). Information can now be housed in global locations with very little (if any) impact on latency and security. This means businesses can release their data from poor-performing (and expensive) grids – like the UK’s, where the grid operates at 96% capacity – to take advantage of some of the world’s most reliable and affordable energy infrastructure.

For compute-intensive applications, regions with hydroelectric and geothermal energy are optimal, so it’s no coincidence that the industry has seen a steady data centre migration to countries like Iceland – where the grid operates at just 10% capacity – Norway, Sweden, and the Province of Quebec over the past few years. Cooler climates also benefit from fewer complexities in the design of mechanical cooling, which means fewer moving parts for operators to worry about.

Once a robust site is chosen to minimise external risks, the next step is to ensure that the technical design not only addresses industry best practices, but is also fine-tuned to the specific risks – and benefits – of the selected site.

2.       Data hall tiering

The second key design consideration for data centres looking to negate risk lies in providing customers with the opportunity to separate their data applications based on required redundancy levels.

Campuses should help IT leaders look at their data sets and identify which applications are ‘Mission Critical’ and require high resiliency (based on application, usage requirements, and impact to the business), and which can be run with lower redundancy at a location that has abundant, stable power.

New technologies continue to emerge which allow companies to spread their data applications across multiple sites or multiple halls on the same site in this way. This has two benefits. First, the enterprise becomes less dependent upon the risk factors that are inherent at a single location. And second, applications can be deployed into the data centre or hall that makes the most sense for the end users, both technically and economically.
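The placement logic described above can be pictured with a minimal sketch. Everything here is illustrative: the tier labels, the downtime-tolerance threshold, and the example applications are invented for this example and are not any vendor’s actual API or policy.

```python
# Illustrative sketch of 'variable resiliency' placement: route each
# application to a data hall based on how much downtime it can tolerate.
# Tier names and the 30-minute threshold are assumptions for the example.

from dataclasses import dataclass

@dataclass
class Application:
    name: str
    max_downtime_min_per_year: float  # tolerance set by the business

def assign_hall(app: Application) -> str:
    """Send mission-critical apps to a high-resiliency hall; run the
    rest more cheaply in a lower-redundancy hall on stable grid power."""
    if app.max_downtime_min_per_year < 30:
        return "high-resiliency hall"
    return "lower-redundancy hall"

apps = [
    Application("payment processing", 5),     # mission critical
    Application("HPC batch simulation", 500), # tolerates interruption
]
for app in apps:
    print(f"{app.name} -> {assign_hall(app)}")
```

The benefit of making this an explicit, per-application decision is exactly the one the article describes: each workload lands in the hall that makes technical and economic sense, instead of everything paying for the highest tier of redundancy.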

This ‘variable resiliency’ model is in practice at the Verne Global campus in Iceland, where data centre space is designed to help customers disaggregate workloads across tiered data halls and modular rack set-ups. The result: customers gain full flexibility over redundant power and cooling – and all the associated cost savings.

3.       Flexible power sources

Another key design consideration is to keep a data centre flexible – especially when it comes to power resources. This ensures it is set up to deal with, and rapidly resolve, issues if the worst were to happen.

At the Verne Global campus, we have the ability to power servers from any one of four electrical systems (all of which are fully isolated and connected to our dedicated on-site substation) – an approach that allows customers to choose the appropriate level of protection for their applications. In addition, companies can rest assured that if the grid goes down, their operations will continue running – saving them valuable time and money.

Building a resolution

We are unlikely to ever be in a situation where we can completely resolve power outages – this would require substantial and ongoing investment to modernise our global power infrastructure. However, by incorporating these considerations into the data centre design process, we can go a considerable way towards alleviating the risk and revenue loss associated with downtime for years to come.
