Build, connect, protect

DCS talks to Chris Scott, Offering Development Executive, Data Centre Site and Facilities, IBM Global Technology Services. With over 28 years' experience in IT services, Chris shares the experience that has led to him becoming a Core Member of the Site and Facilities Community of Practice, a community of IBM specialists creating and sharing intellectual capital and thought leadership in the areas of data centre planning, design and availability, including modular deployment.

Q Whether updating an existing data centre (DC) facility or building a new one (brown- or greenfield), what, in broad terms, are the issues to consider?

A There are three main areas of consideration: cost, operational efficiency, and the provision of an appropriate platform to support the core business.

The first two are intricately linked, as cost includes understanding the balance between the capital investment needed to build a new facility or update an existing one and the operational cost of running and managing the new environment compared to the old. Typically it can cost anywhere between three and five times the build cost to run a data centre over its lifetime, so spending a little more on energy-efficient capital plant and innovative design may save millions of pounds in downstream operational spend.
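
As a purely illustrative sketch of that capital-versus-operational trade-off, the figures below are hypothetical (they are not IBM numbers); the point is simply that a modest uplift in capex on efficient plant can be repaid several times over by the reduced lifetime operating bill.

```python
# Illustrative only: hypothetical figures for a single facility, not IBM data.
# Shows how lifetime operating cost dwarfs the build cost and why a little
# extra spend on efficient plant can pay back downstream.

build_cost = 10_000_000   # hypothetical capital cost to build (GBP)
opex_multiplier = 4       # lifetime opex of 3-5x build cost; take the midpoint
lifetime_opex = build_cost * opex_multiplier

# Suppose more efficient plant adds 5% to capex but trims 10% off lifetime opex
extra_capex = build_cost * 0.05
opex_saving = lifetime_opex * 0.10

print(f"Lifetime operating cost:            £{lifetime_opex:,.0f}")
print(f"Extra capital for efficient plant:  £{extra_capex:,.0f}")
print(f"Lifetime operational saving:        £{opex_saving:,.0f}")
print(f"Net benefit over the facility life: £{opex_saving - extra_capex:,.0f}")
```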

The last point is about ensuring that a flexible environment is planned for the deployment of leading IT technology. This can help business efficiency and facilitate corporate growth whilst providing an energy efficiency message for corporate social responsibility programmes.

Q Is there a simple sum to do to establish the relative costs of upgrading an existing facility as opposed to starting over?

A This wouldn’t be a simple sum; there are simply too many factors to consider. Age of existing facility, lengths of leases on current real estate, cost of new real estate, availability of appropriate land and capacity of electrical grid power all need to be taken into consideration.

Other factors will also play a part. More than three quarters of data centres were built before the dotcom era began, and this ageing infrastructure simply cannot support new generations of IT as efficiently as newer, flexibly planned, scalable facilities. Data centres outlive their IT contents many times over, so flexibility to adapt to new IT technologies is critically important too.

Q What about a sum to work out whether to own the DC facility or to go down the colo route?

A I am reminded of a quote from Ronald Coase, who won the Nobel Prize for Economics in 1991. He wrote: “Firms should only perform internally those functions that cannot be performed more cheaply by the market”.

Judging how to establish what is cheaper will be difficult though, as I think this depends far more on the skills that currently exist within an individual organisation and a decision around ‘core versus chore’ at an enterprise business level. Services will continue to exist to help corporations with both scenarios.

Q Everyone talks about the need for building a DC facility that can adapt to the needs of a dynamic IT load. What does this mean in practice?

A Consider how servers have evolved over the years from floor-standing boxes to rack-mounted blocks, then thin ‘pizza boxes’ and eventually blades with multiple cores and shared fans. Each generation has placed a different demand on the data centre facility in terms of space, power and cooling density.

In addition, peak processing times in various international time zones, batch processing runs, peak trading times for retailers etc. all mean that a data centre needs to be flexible enough to adapt to varying and rapidly changing densities and loads without expensive retrofit.

Q How achievable is it?

A These challenges can be managed if scalability, flexibility and modularity are planned into data centres at the point of design. For example, enabling high power densities by installing the necessary infrastructure but not all of the mechanical and electrical plant on Day 1 can help with both capital and operational cost planning. Software plays a part here too with IT load balancing, energy management and switching servers off rather than running them whilst idle.

Q Energy usage/bills seem to be the number one pain point for any DC facility. Is this the right place to start when it comes to implementing a DC optimisation strategy?

A Energy consumption is definitely one of the primary considerations. Data centres make up just 5% of IBM’s real estate, yet they account for over 40% of IBM’s electricity consumption.

There are other factors too, however. Many data centre estates happen by accident through mergers and acquisitions, and often an optimisation strategy will be formed around operational efficiency in addition to pure energy considerations.

Q The Power Usage Effectiveness (PUE) metric is offered as a means of evaluating DC energy efficiency. How helpful is it, or has it become more abused than used?

A PUE can be abused (some calculations include the power protection infrastructure and some don’t, which will give very different figures), but if properly understood it can be a useful factor to consider.

As a simple efficiency ratio, it does give a good indication of how efficiently a data centre uses and delivers its power, but my feeling is that overall energy consumption, and how reductions can be made, are more important than just defining the ratio between IT power usage and overall power.
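
A minimal worked sketch of the point about inconsistent calculations, using hypothetical loads: the same site produces two different PUE figures depending on whether the IT load is metered downstream of the power protection (UPS) chain, so its losses count as facility overhead, or upstream of it, so they are hidden inside the ‘IT’ figure.

```python
# Illustrative only: hypothetical loads showing why PUE figures are hard to
# compare unless you know where the IT load was measured.

it_load_kw = 1000      # power drawn by the IT equipment itself
ups_losses_kw = 80     # losses in the power protection (UPS) chain
cooling_kw = 400       # mechanical cooling, pumps, fans
other_kw = 70          # lighting, security, other loads on the same meter

total_facility_kw = it_load_kw + ups_losses_kw + cooling_kw + other_kw

# PUE with IT load measured downstream of the UPS (losses counted as overhead)
pue_strict = total_facility_kw / it_load_kw

# PUE with IT load measured upstream of the UPS, so protection losses sit
# inside the 'IT' figure and the same site suddenly looks more efficient
pue_lenient = total_facility_kw / (it_load_kw + ups_losses_kw)

print(f"PUE, IT metered after the UPS:  {pue_strict:.2f}")   # ~1.55
print(f"PUE, IT metered before the UPS: {pue_lenient:.2f}")  # ~1.44
```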

Q Similarly, if more broadly, the Tier I-IV scale is offered by the Uptime Institute. How helpful is it?

A I think the definitions of IT availability levels and how they relate to redundancy, contingency and therefore cost and energy efficiency within a data centre are very important to understand, whether defined by the Uptime Institute or others. IT strategy should also be considered as not all applications will demand the same availability within a corporation.

These factors need to be properly balanced against each other to make sure that data centres are planned, designed and built to meet all of a corporation’s needs.

Q Do you think there is a need for a new DC energy-related standard?

A In all honesty, I don’t believe that additional or new energy-related standards would be particularly helpful. We already have PUE, LEED, BREEAM, Building Regulations Part L and so on, so comparisons can already be made. I think it is more important to look at overall energy consumption, the potential for real reductions and the cost savings that can be made through efficient processes.

Virtualisation of server workloads, increasing the utilisation of IT equipment and switching idle IT off completely should feature in the thinking as much as, if not more than, standards.

Q Power + cooling seem to be joined at the hip in the DC. Is it right to look at these two together, or would it be more helpful to separate the two?

A Power and Cooling are both big topics and rightly so because it is losses in power systems and inefficiencies in cooling systems that contribute most to unnecessary energy consumption in the data centre.

‘Power’ can cover availability, generation method, transport, internal distribution, protection, density and management, whilst ‘Cooling’ could include fresh air, chilled water, use of available external lake water or seawater, containment, reuse of heat, and heat removal at the perimeter, rack or chip level. Although the words “power and cooling” are often used together, they are sufficiently complicated topics to treat separately at first, but their relationship must also be considered in the whole data centre design process.

Q There are many DC power offerings out there. What does an end user need to look for in order to ensure a best-fit solution for any particular DC facility?

A This is where energy efficiency becomes really important. Modern power equipment is much more efficient than ever before, with losses significantly reduced. This is particularly evident in the area of UPS systems, which used to be at their most efficient only when they were close to their load capacity, yet now run at very high levels of electrical efficiency even at very low electrical loads.
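
To make that partial-load point concrete, here is a small, purely illustrative calculation. The efficiency figures are assumptions (85% for an older UPS at roughly 30% load, 96% for a modern unit at the same load); the specific numbers are hypothetical, but the gap in annual losses is the reason this matters.

```python
# Illustrative only: hypothetical efficiency figures comparing a legacy UPS,
# efficient only near full load, with a modern unit that holds its efficiency
# at low loads - the partial-load point made above.

it_load_kw = 100            # hypothetical IT load carried by the UPS
hours_per_year = 8760

legacy_eff_at_30pct = 0.85  # assumed efficiency of an older UPS at ~30% load
modern_eff_at_30pct = 0.96  # assumed efficiency of a modern UPS at the same load

def annual_loss_kwh(load_kw, efficiency):
    """Energy lost in the UPS over a year for a constant load."""
    input_kw = load_kw / efficiency
    return (input_kw - load_kw) * hours_per_year

legacy_loss = annual_loss_kwh(it_load_kw, legacy_eff_at_30pct)
modern_loss = annual_loss_kwh(it_load_kw, modern_eff_at_30pct)

print(f"Legacy UPS losses: {legacy_loss:,.0f} kWh/year")   # ~154,600 kWh
print(f"Modern UPS losses: {modern_loss:,.0f} kWh/year")   # ~36,500 kWh
print(f"Energy saved:      {legacy_loss - modern_loss:,.0f} kWh/year")
```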

This need for efficient infrastructure equipment should also be coupled with a good set of tools and processes to monitor and manage energy efficiency, power usage and overall consumption.

Q Cooling technologies and solutions are just as abundant. Again, how does an end user ensure that the right choice is made?

A Simple processes and practices are the starting point. The adoption of hot and cold aisles for racks, ensuring that blanking plates channel airflow to where it is needed, and preventing exhaust air from mixing with cooled air (containment) must all be considered at the same time as planning choices for cooling technologies and solutions.

This will give a better understanding of whether under-floor, overhead fan-coil, in-row with hot/cold containment or other methods are most suitable. There are no wrong ways here, just different ways, and the appropriate solution will be based on a whole range of circumstances. If change is being considered, the starting point should be an assessment to look at existing infrastructure, intended use and growth plans as well as external factors such as geography and ambient temperatures.

Q There’s a great deal of talk about ‘free cooling’ – is there an agreed definition, and, if so, how achievable is it?

A Free cooling is a general term for taking advantage of external factors such as low ambient external temperatures to provide or facilitate data centre cooling with limited - and sometimes no - reliance on mechanical cooling.

It covers a number of methods and technologies including the use of external air to directly cool IT, passing air through naturally cooled heat exchangers, and creating closed chilled water circuits cooled by river, lake or sea water.

Each of these methods and technologies has its own fixed definition and more innovative solutions will continue to arise as the tolerance of IT equipment improves and temperature operating windows change.

Q The IT folks want to put more and more kit in less and less space – what does this mean for the power and cooling requirement in such a DC facility?

A For power, this means increasing floor-space densities further still, so the data centres with the more flexible designs will be more sustainable than those which cannot continue to grow.

From a cooling perspective, it means that liquid cooling will necessarily replace air cooling as we move to the source of the heat itself – the chips. We are now seeing some fantastic chip-level cooling technologies including the use of water at remarkably high temperatures to remove heat which can then be used for other applications such as building heating systems.

Hardware development cannot be ignored here though. Innovations in 3D processors and newer phase-change storage technologies will also have an effect on the energy consumption (and therefore heat output) of the hardware itself even if the computing power per square metre continues to rise.

Q Do you think that the IT hardware OEMs could do more to help in the quest for DC energy optimisation?

A I think that the major hardware vendors are already doing a great deal in the area of making their servers and storage technologies more energy efficient. This will continue hand in hand with innovation in deployment and cooling methods within the data centre itself.

Q Traditional software applications are IT-hungry. Do you think that it’s time for a new approach to writing applications that are less ‘hardware-intense’?

A Whilst this is outside of my own area of expertise, I know it is receiving good attention in the industry. Open-source application development, as well as IT vendors’ work on legacy and new proprietary software, is taking energy efficiency into account. Middleware and management software are helping further to balance workloads and application efficiencies.

Q For anyone considering moving towards a Cloud model, what are the likely impacts on the DC environment that need to be considered?

A This depends, of course, on whether the Cloud model used is private or public, but the potential to deploy virtualisation to improve IT utilisation will continue to have an enormous impact on data centre energy consumption as Cloud models grow.

The other big impact will be the ability to scale (up and down) or add data centre resource dynamically without significant local investment.

Q For virtualisation, what have been the benefits for the DC environment, and have there been any negatives?

A Without doubt, the largest energy savings and therefore cost reductions in data centre operations have come from the virtualisation of IT workload away from poorly utilised physical servers. It has saved energy, software licence cost and data centre floorspace, as well as making the management of IT and of change simpler.

The perceived negatives have been local, department-based arguments around no longer having dedicated servers for specific confidential applications such as HR or R&D. The debate is really around whether a corporation should have an IT strategy or whether individual departments should determine their own. The centralised approach is almost certainly the most cost effective.

Q How is the current BYOD/IT consumerisation impacting on the DC environment?

A The phenomenal growth of handheld devices and applications generally is having an effect in addition to BYOD in the corporate environment. Behind nearly every handheld app is a data centre providing content and availability.

Smart TVs and other internet devices are adding to the impact too. One of the effects is to increase the demand for content across all data centres, turning them into a true 24 x 7 ‘always on’ model.

Q What are the cabling issues that need to be considered when it comes to ensuring an efficient DC facility?

A The growth of file sizes, bandwidth and content delivery has contributed to a need for sensible structured cabling deployment to be considered alongside data centre design and strategy. Copper prices, the need for fast communications and future-proofing data centres with sustainable cabling systems have all driven growth in fibre connectivity.

Another effect on cabling has been the trend towards IP convergence with VOIP telephony, IP video conferencing, physical security systems and even lighting provided over Ethernet.

Q What about ‘humble’ components, such as the racks and cabinets?

A Racks and cabinets have indeed evolved with data centre design especially in areas like airflow management, cable management and intelligence in PDUs. Doors are now designed to be around 85% ‘open’ to allow good airflow and server racks are complemented by a variety of passive and active rear door heat exchangers to remove server heat before the data centre starts to do its own work.

Q IP lighting seems to offer some obvious benefits; is this a no-brainer, or are there other factors to consider when specifying lighting?

A Well, my first comment is that data centres can generally operate perfectly well in the dark, so whether the lighting is IP-based or traditional should make little difference to overall energy consumption. That said, IP lighting is very efficient, with lumens per watt now exceeding fluorescent lighting.

The real benefit of IP lighting systems however is the intelligence behind the lights. It is in the sensors and the software that they feed, allowing not only lighting control but a variety of other inputs such as movement, temperature, humidity etc. When fed into a good management system, this can be very useful both in the data centre and in the distributed environment too.

Q Are there any other facilities technologies (ie fire suppression, physical security) that can make the difference between a best-practice and a distinctly average DC?

A A good deal of what is available in areas like fire prevention, fire detection and physical security can be down to preference.

For example, there are a number of systems that effectively control the amount of oxygen in a data centre at a level below the point at which combustion can take place but which is still safely breathable. Some companies however are reluctant to deploy such a system in case there are health risks to staff working in the facility.

The use of inert gas extinguishing systems compared with simple water sprinklers or atomised water vapour systems also generates debate. For most corporations, the real asset that needs protecting is the data, therefore Business Continuity and Recovery processes have become a higher priority than worrying about local damage to hardware. IBM has chosen to use water sprinklers in a number of our own data centres.

The move to IP based physical security systems does however offer great benefits around the ease of security camera and access control device installation, lower running costs than mains powered devices and improved management and security software linking to more comprehensive data centre management and control.

Q There are all manner of PDUs out there, some from IT hardware vendors, others from specialist companies. What are the main issues to consider when specifying this technology?

A Intelligence in PDUs, including the ability to switch remotely, is very important, but as with many of these things, it is the intelligence behind them, in the software, that is the true benefit. This includes linking to good Data Centre Infrastructure Management (DCIM) and BMS tools.

Q You mention the relatively new discipline of DCIM. How does this differ from and/or complement Building Management Systems?

A I think the biggest challenge with DCIM is that there isn’t a clear definition of what it really means or includes.

To me, a comprehensive DCIM system should include everything from cabling, patch management and KVM, through environmental monitoring and control, and asset management with warranty and maintenance cycles, right up to energy management, power control and links not only into the building’s BMS itself but also into areas of IT management such as IBM Tivoli. This often means a multi-vendor approach and a deeper consideration of what DCIM value is needed than is generally given at first.
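
As a purely illustrative sketch of that ‘umbrella’ idea, the small structure below simply rolls separate domains up into one view; the class, feed names and readings are hypothetical and do not represent a real DCIM or Tivoli API.

```python
# Minimal, purely illustrative sketch of an 'umbrella dashboard': one view
# pulling together feeds that usually live in separate point tools.
# All domain names and readings here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class DcimDashboard:
    """Aggregates read-only snapshots from separate operational domains."""
    feeds: dict = field(default_factory=dict)

    def update(self, domain: str, snapshot: dict) -> None:
        # domain examples: 'cabling', 'environment', 'assets', 'energy', 'bms'
        self.feeds[domain] = snapshot

    def summary(self) -> dict:
        # One roll-up view rather than several disconnected point solutions
        return dict(self.feeds)

dashboard = DcimDashboard()
dashboard.update("environment", {"cold_aisle_temp_c": 22.5, "humidity_pct": 45})
dashboard.update("energy", {"it_load_kw": 950, "facility_load_kw": 1400})
dashboard.update("assets", {"racks_in_service": 120, "warranty_expiring_90d": 7})
print(dashboard.summary())
```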

Q What does one look for in a DCIM solution?

A End to end data centre value with an ‘umbrella dashboard’ rather than single point solutions.

Q We can’t talk about DC design without acknowledging the trend towards ‘modular data centres’. Is this term helpful, bearing in mind there seem to be many definitions of what a modular data centre looks like?!

A This is a great observation because modular does not necessarily mean in a shipping container or pre-fabricated.

The entire IBM Data Centre Family portfolio is modular in the way it is designed and implemented so that facilities are able to grow in a flexible and scalable way but a relatively small proportion of the data centres we have built over the years has been containerised, portable or pre-fabricated.

There can be superb benefits (particularly around rapid deployment times) to the pre-fabricated approach, and such facilities are certainly modular. Equally, this does not mean that enterprise data centres built in existing buildings cannot be designed and implemented using pre-engineered, pre-designed modules. I do think that the term is useful, but it sometimes needs a little sub-division for clarity.

Q What are the key issues to consider when looking at a modular solution?

A Deployment times, quality control, compatibility assurance, compliance, flexibility to adapt to future IT changes and complexity of implementation.

Q How is IBM positioned when it comes to helping end users with the DC facility issues we’ve talked about?

A IBM is very proud of its physical data centre portfolio. We created our Data Centre Family of offerings which highlighted the importance of modularity and scalability well before the market had started to use the terms in a general way.

But the Data Centre Family is only a part of the value available from IBM. Our breadth includes strategy and planning services to ensure that any changes being considered fit with both business and IT needs. This breadth continues with consultancy and design services around data centres themselves including virtualisation, consolidation and relocation if appropriate.

Servers, storage, networking, security, business continuity and all of the services around these aspects can also be included together with Business Analytics for informed decision making. We are confident this provides comprehensive value to optimise any data centre, whether existing or still in the planning phase.

Q Specifically, you have launched the Smarter Data Centre campaign. What’s the thinking behind it, and what does it offer?

A We have launched two main campaigns in recent years.

The first was ‘Project Big Green’ in which we looked at how we could double our computing capacity internally whilst keeping our energy costs flat. This led us to develop what we had learnt into real offerings for our clients.

More recently, we presented our own views on Smarter Data Centres to the marketplace. We highlighted in summary that Smarter Data Centres:
• Are designed to meet existing needs with the flexibility to respond to future unknowns in business requirements, technology and computing models
• Leverage cost-effective approaches to optimise assets to improve operational efficiency – including hardware, software, data centre infrastructure and people and processes
• Require active monitoring and management capability to provide the operational insights to meet the required availability, capacity planning and energy efficiency.

Q What is/are the IBM USP(s) when compared to other folks in this space?

A The breadth of our thinking, the breadth of our offerings, our people and the expertise we have accumulated from over 50 years of planning and building data centres. This is linked, of course, to the work that takes place in our research laboratories around the world.

Q The fact that you have great experience of designing and building data centre facilities and of manufacturing the kit that goes inside them gives you a unique insight into how everything should work together efficiently?

A Yes, we have been building data centres (infrastructure and IT) since before they were called data centres. IBM has managed millions of square feet of data centre space with both IBM hardware and software and other IT vendor content too. I would also include networking insight, distributed IT experience, business analytics and consulting. There is one other aspect that makes IBM unique and that is one of innovation, research and future thinking to enable a Smarter Planet.

Q To anyone embarking on some kind of a DC refresh, are there any obvious low-hanging fruit, or does the need for a long-term, integrated solution preclude such easy wins?

A There are simple assessment services around energy efficiency or operational efficiency that will always uncover some simple steps with a good ROI and often a short payback time. Start with getting the facts and develop from there.

Q If you had to give just one or two pieces of advice to someone about to start planning a DC facility optimisation strategy, what would these be?

A Consider existing estate first. Can its life be effectively extended or should it be replaced?

Next, consider whether the number of sites is appropriate. Can any consolidation or relocation take place for optimised operations?

If physical changes need to be made, consider modular, scalable, flexible options that allow pay-as-you-grow expansion and sustainable use as IT changes.

Lastly, consider automation and integrated monitoring and management, but make sure you consider them at the point of design.

Q Finally(!), indulge in a little bit of crystal ball-gazing and tell us what the data centre of 2020 might look like?

A This is a good way to attract some ridicule in seven years’ time.

I imagine that power densities will be far higher in some individual server racks (likely up to 70 kW) but the overall data centre floor power densities will not have risen too much as more energy efficient hardware is developed particularly in the area of storage. The higher rack power densities will lead to more direct cooling at the chip level using water or other liquid to remove heat.

We will see the adoption of free cooling in much warmer climates as end users become more comfortable operating their IT at the higher temperatures supported by ASHRAE and other organisations’ calculations but we will not lose our reliance on mechanical cooling as a contingency measure. Cloud computing will continue to grow as a model to a point where utility computing is a genuine option (immediate access to ‘pay for what you use’).

Pre-fabricated modular deployment will have developed to a point where the rapid movement and deployment of pre-engineered modules becomes a matter of course for both IT and the power and cooling services to support it. Decision making around the building of a data centre structure, its internal contents and the security and DCIM systems will converge and fall to the same person. IBM has coined the phrase Build, Connect, Protect in readiness for this trend. ibm.com/smarterdatacentre/uk
 
