As we collect more and more data about everything around us, smarter cities and facilities are being developed to operate in more efficient and reliable ways.  The same can be done for data centers: the massive amount of data they generate can be leveraged to reduce risk while optimizing how the many data center systems perform.  If you haven’t seen it in action, you can at least read about how Google has introduced AI to control and optimize its cooling systems: DeepMind to take over all of Google data center cooling.

For years, cities and companies have been monitoring their vehicle fleets and then auditing the results to see how they can improve.  This data, growing larger every day, supports major decisions about how and where the vehicles are deployed, and reveals how simple changes can improve drivers’ efficiency, reduce accidents, and cut time spent in traffic.  With more granular monitoring and controls incorporated over the last 10 years, the same can be done with facilities.

For instance, a data center could consider operating its UPS systems in an eco/bypass mode to avoid double conversion losses.  However, bypass mode can be considered a reliability risk since the reaction time back to conditioned power is usually longer.  Instead, detected weather events or other transients can trigger the UPS to switch back to double conversion, avoiding the added reliability risk.
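
A minimal sketch of that kind of rule is shown below: stay in eco/bypass mode for efficiency, but fall back to double conversion whenever a risk signal appears. The signal names and thresholds here are illustrative assumptions, not any particular vendor’s API.

```python
# Hypothetical rule for UPS mode selection based on external risk signals.
# Alert names and the event threshold are assumptions for illustration.

SEVERE_WEATHER = {"thunderstorm", "high_wind", "ice"}

def choose_ups_mode(weather_alerts, recent_voltage_events, event_threshold=2):
    """Return 'double_conversion' when risk is elevated, else 'eco'."""
    if SEVERE_WEATHER.intersection(weather_alerts):
        return "double_conversion"
    if recent_voltage_events >= event_threshold:
        return "double_conversion"
    return "eco"

# Example: two utility sags logged in the last hour during a storm watch.
print(choose_ups_mode({"thunderstorm"}, recent_voltage_events=2))
# -> double_conversion
```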

Now, the data that is collected and analyzed can be stepped up beyond the simple equipment controls that Google’s DeepMind is responsible for optimizing.  Network system monitoring can be overlaid with the facilities information to determine how reactive each system is when changes are made.  Chillers and cooling towers can be studied and targeted to optimize their energy use against the operational loads of the data center, even considering how the peaks for each system can be reduced.  Integrated temperature monitoring, say with both the actual server sensors and a DCIM solution, can allow fans and pumps not just to ensure that the loads are met but also to adjust their settings to just meet the operational needs.
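
As a rough illustration of trimming fans to “just meet” the need, the sketch below nudges fan speed using the hottest reading seen across server sensors and DCIM rack sensors. The target, deadband, and fan limits are assumed values, not recommendations.

```python
# Hypothetical fan-trimming logic combining server inlet sensors and DCIM
# rack sensors. Target, deadband, and fan bounds are illustrative only.

TARGET_INLET_C = 25.0            # desired server inlet temperature
DEADBAND_C = 0.5                 # avoid hunting around the target
FAN_MIN, FAN_MAX = 30.0, 100.0   # percent of full speed

def next_fan_setpoint(server_inlets_c, dcim_rack_c, current_fan_pct, step_pct=2.0):
    """Nudge fan speed up or down based on the hottest reading seen."""
    hottest = max(max(server_inlets_c), max(dcim_rack_c))
    if hottest > TARGET_INLET_C + DEADBAND_C:
        return min(FAN_MAX, current_fan_pct + step_pct)   # need more airflow
    if hottest < TARGET_INLET_C - DEADBAND_C:
        return max(FAN_MIN, current_fan_pct - step_pct)   # can save energy
    return current_fan_pct                                 # within deadband

# Example: servers report 24.1-24.8 C, DCIM sensors 24.5 C -> ease fans back.
print(next_fan_setpoint([24.1, 24.8], [24.5], current_fan_pct=60.0))  # 58.0
```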

Changing how all of this equipment operates dynamically may sound like setting the data center’s temperatures, airflow, and loads up to fluctuate in a yo-yo-like fashion. That, however, would just be reacting to the most current data received.  Making the data center smart means tracking the trends and understanding all of the minute details of how the many systems respond.  Mechanical controls do not react in the near-instantaneous manner that IT equipment does; gauging this lag is crucial, and once it is understood the dynamic changes of a data center can be anticipated much more closely.
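
One way to picture acting on the trend rather than the instantaneous reading is sketched below: project the temperature forward by roughly the mechanical lag and decide on that. The linear projection and the five-minute lag are assumptions chosen for illustration, not a recommended control model.

```python
# Sketch of anticipating mechanical lag by acting on where the temperature
# is heading rather than where it is now. Linear trend is an assumption.

def projected_temp(history_c, sample_interval_s, lag_s):
    """Project temperature lag_s seconds ahead using the recent slope."""
    if len(history_c) < 2:
        return history_c[-1]
    slope_per_s = (history_c[-1] - history_c[0]) / ((len(history_c) - 1) * sample_interval_s)
    return history_c[-1] + slope_per_s * lag_s

# Cooling loop that takes ~5 minutes to respond: decide using the projected
# temperature instead of the current one.
readings = [24.0, 24.2, 24.5, 24.9]          # one sample per minute
future = projected_temp(readings, 60, 300)   # ~300 s of mechanical lag
print(round(future, 1))                      # 26.4 -> start cooling earlier
```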

In the years ahead, the buffer temperatures that we apply to a data center can be minimized to achieve greater and greater savings.  Automating temperatures and the flow of air and water can seek the best performance without hampering IT equipment operation.  What we may need to get used to is that an AI (Skynet?  HAL 9000?) may be the one issuing work orders for us to fulfill, even telling us the exact procedures to follow and the equipment to buy for its own preservation.
