The inevitable arrival of AI in Data Center
Infrastructure Management

Maintaining a datacenter is a complex task. Those who have been involved in keeping infrastructure up and running at all times while trying to reduce operational expenses know what we are talking about.

Maintaining uptime is easy when you have unlimited resources. Just make everything 3N+1 redundant or better and your chances of an outage are close to zero.

However, hardly anyone can afford such an expenditure, neither the initial investment nor the operational cost of keeping all that equipment up and running.
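To see why redundancy drives up that bill, consider the arithmetic behind "chances of an outage close to zero". A minimal sketch using a standard binomial availability model; the 99% per-unit availability and the fleet sizes are illustrative assumptions, not figures from any real site:

```python
from math import comb

def system_availability(n_required: int, n_installed: int, unit_avail: float) -> float:
    """Probability that at least n_required of n_installed independent
    units are up, given each unit's standalone availability."""
    return sum(
        comb(n_installed, k) * unit_avail**k * (1 - unit_avail)**(n_installed - k)
        for k in range(n_required, n_installed + 1)
    )

# Hypothetical: each cooling unit is up 99% of the time, 4 carry the full load.
print(system_availability(4, 4, 0.99))   # no redundancy (N)
print(system_availability(4, 5, 0.99))   # N+1
print(system_availability(4, 13, 0.99))  # 3N+1
```

Each extra unit pushes the outage probability down dramatically, which is exactly why 3N+1 is effectively bulletproof, and exactly why it costs so much: you are buying and running more than three times the hardware you strictly need.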


Hunt for the sweet spot

The operations manager of a datacenter has the complex task of finding the sweet spot where the sum of CapEx and OpEx meets uptime. How do you achieve maximum uptime with a minimal investment and the lowest operational cost? Sometimes the uptime is fixed at a certain level, say 99.995%, and the costs should be derived from that. In other cases one needs to find the maximum uptime for a given cost. Either way, the operations manager is looking at a very complex field of options where each decision influences other variables. It is a bit like chess, where you need to be able to look several moves ahead.
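That trade-off can be sketched as a tiny search problem. The designs, availability figures and costs below are hypothetical placeholders; the point is only the shape of the optimisation: among all configurations that meet the uptime target, pick the one with the lowest CapEx plus OpEx over the planning horizon.

```python
# Toy sweet-spot search over hypothetical candidate designs. None of these
# figures come from a real datacenter; they only illustrate the trade-off.
CANDIDATES = [
    {"design": "N",    "availability": 0.99900,  "capex": 1.0e6, "opex_per_year": 0.20e6},
    {"design": "N+1",  "availability": 0.99990,  "capex": 1.3e6, "opex_per_year": 0.24e6},
    {"design": "2N",   "availability": 0.99999,  "capex": 2.0e6, "opex_per_year": 0.35e6},
    {"design": "2N+1", "availability": 0.999999, "capex": 2.3e6, "opex_per_year": 0.38e6},
]

def cheapest_meeting(target: float, horizon_years: int = 5) -> dict:
    """Cheapest design over the planning horizon that meets the uptime target."""
    feasible = [c for c in CANDIDATES if c["availability"] >= target]
    return min(feasible, key=lambda c: c["capex"] + horizon_years * c["opex_per_year"])

best = cheapest_meeting(0.99995)
print(best["design"])  # prints "2N": the cheapest design reaching 99.995%
```

A real datacenter has far more knobs than one redundancy level, and the knobs interact, so the search space explodes combinatorially; that explosion is what makes the problem chess-like.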

AI beats humans time and again

In 1997 the IBM computer Deep Blue took on chess world champion Garry Kasparov and defeated him in a six-game match by 3½ to 2½. In 2017 the AI AlphaGo defeated the then reigning world champion Ke Jie by winning all three games of Go, a game far more complex than chess. Artificial intelligence (AI) has proven to be better than humans at strategic decision making time and again. So it is only natural that AI will be helpful in the very complex decision making in a datacenter.

How complex can it be, running a datacenter?
A datacenter has a large number of variables that are all interlinked. Think of the air temperature at server level, the temperature at the inlet and outlet of the CRAC, the airspeed, and the efficiency of the various infrastructure devices. Then there are the time-dependent energy demands of the servers: some clusters peak during the night, others are more active during office hours.

While you are juggling all these parameters, you need to consider the redundancy of all your machinery at the same time, so that no single failure leads to an outage. If you can still keep all these balls in the air, you also have to schedule maintenance for all that hardware, keep your redundancy intact during each maintenance window, and do all of this while using the minimum amount of energy possible.

Get it? It is an almost impossible task for humans to manage. Machines can do the math many orders of magnitude faster than we can.

Is it all just a matter of calculations?

No, because a computer doing the maths is just calculating what has been put into code; it cannot think for itself. Deep learning enables a computer to analyse massive amounts of data and draw conclusions from them. But even that has its limitations: what about situations that have never occurred before? How can you predict something you have not yet tested? That is where the 'digital twin' concept comes in, a technology that Perf-iT puts to work in the latest version of 4D Cool, a radically new concept for datacenter efficiency management.

Digital Twin and ‘what if’

4D Cool is a management system based on three ingredients: real-life data generated by a limited number of sensors, computational fluid dynamics (CFD) to predict the temperature in every corner of your data room, and a fully digitised version of your datacenter. By comparing the output of the CFD with the real-life data from the sensors, the system constantly improves its predictions. The predictions become reliable enough to analyse hypothetical situations such as: what if a transformer breaks down, or what if we add 500 more servers to this aisle? These 'what if' scenarios can only be calculated with a digital twin, because the digital twin is an exact representation of your real-life datacenter.
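The "constantly improves its predictions" step can be illustrated with a toy feedback loop. The sketch below is not Perf-iT's actual calibration algorithm; it only shows the general idea of learning a per-sensor correction from the gap between model predictions and sensor readings (all names and numbers are invented):

```python
# Illustrative twin calibration: an exponential-moving-average bias
# correction per sensor, folding each (prediction, measurement) pair
# into a running offset that future predictions are adjusted by.

class TwinCalibrator:
    def __init__(self, alpha: float = 0.1):
        self.alpha = alpha                 # how quickly the correction adapts
        self.bias: dict[str, float] = {}   # learned offset per sensor

    def update(self, sensor_id: str, predicted_c: float, measured_c: float) -> None:
        """Blend one new prediction error into the running bias estimate."""
        error = measured_c - predicted_c
        old = self.bias.get(sensor_id, 0.0)
        self.bias[sensor_id] = (1 - self.alpha) * old + self.alpha * error

    def corrected(self, sensor_id: str, predicted_c: float) -> float:
        """Model prediction adjusted by what the sensors have taught us."""
        return predicted_c + self.bias.get(sensor_id, 0.0)

cal = TwinCalibrator()
for measured in [24.8, 24.9, 25.1, 24.7]:   # the model keeps predicting 24.0 °C here
    cal.update("rack-17-inlet", predicted_c=24.0, measured_c=measured)
print(round(cal.corrected("rack-17-inlet", 24.0), 2))  # prints 24.3
```

The essential property is the loop itself: every measurement narrows the gap between the digital model and the physical room, which is what makes the 'what if' answers trustworthy.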

The future of datacenters is digital

The size and complexity of datacenters keeps increasing. No single person is capable of handling so many variables simultaneously and predicting the effects of changes across them. Computers need to step in. But number crunching alone will not do the trick: the computer needs to verify its own calculations by cross-checking them against real-life data. A digital twin does exactly that. The DC manager now has a tool that allows them to analyse the effects of changes in a datacenter and determine the best course of action. 4D Cool is that tool, and we are happy to demonstrate it to you.
