The rapid growth of Artificial Intelligence (AI) is forcing wholesale redesigns of data center infrastructure.  AI's complex workloads require more graphics processing units (GPUs), tensor processing units (TPUs), and other accelerators, which in turn draw more power and generate more heat, and that increase in density ripples through every level of what a facility must support.  Industry analysts project that annual spending on new AI data center infrastructure could reach $76B by 2028, and possibly as much as $200B by 2035.

According to earlier reporting, Nvidia alone sold roughly 900 tons of H100 processors in Q2 2023, each drawing up to 700 watts of power and often requiring liquid cooling.  Simply aggregating that processor power up through the server, rack, row, room, and hall levels shows that the data center itself will need to step up capacity quickly.
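To make that aggregation concrete, here is a minimal back-of-the-envelope sketch.  The 700 W figure comes from the reporting above; the counts per server, rack, row, and hall, and the overhead factor, are illustrative assumptions rather than figures from any specific facility.

```python
# Back-of-the-envelope power aggregation from GPU to hall level.
# All counts and the overhead factor are illustrative assumptions.

GPU_WATTS = 700          # H100 draw, per the reporting above
GPUS_PER_SERVER = 8      # assumption: typical 8-GPU AI server
SERVERS_PER_RACK = 4     # assumption
RACKS_PER_ROW = 10       # assumption
ROWS_PER_HALL = 20       # assumption

SERVER_OVERHEAD = 1.4    # assumption: CPUs, memory, fans, conversion loss

server_w = GPU_WATTS * GPUS_PER_SERVER * SERVER_OVERHEAD
rack_w = server_w * SERVERS_PER_RACK
row_w = rack_w * RACKS_PER_ROW
hall_w = row_w * ROWS_PER_HALL

print(f"Server: {server_w / 1e3:.1f} kW")   # ~7.8 kW
print(f"Rack:   {rack_w / 1e3:.1f} kW")     # ~31.4 kW
print(f"Row:    {row_w / 1e3:.1f} kW")      # ~313.6 kW
print(f"Hall:   {hall_w / 1e6:.2f} MW")     # ~6.27 MW
```

Even with these modest assumptions, a single hall lands in the multi-megawatt range, which is the scale driving the capacity concerns above.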

Until recently, data centers did not need to be robust enough to support such loads, and the industry had grown practiced at designing and operating slimmed-down facilities that closely matched actual power demand.  Now, as those demands ramp upward, existing data centers are consuming whatever buffer capacity remained, whether to support the next generation of cloud offerings or full AI offerings.

Let’s also consider that these systems can take a facility from a near-zero load to full load in mere seconds.  Unlike flipping a light switch, a data center must be designed to absorb such steps, or cascading failures and slow response times may ensue; just ask Meta and Apple about those issues (and whether they were ever truly resolved).  New data center infrastructure may aim to keep everything up as much as possible, but AI teams recognize that they should still plan for outages of servers, racks, and clusters lasting hours to days, as the data centers they have rapidly moved into schedule planned outages to add capacity, perform maintenance, or react to failures.
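One common way AI teams plan for such outages is periodic checkpointing, so that a multi-day training job can resume after a rack or cluster goes down rather than restarting from scratch.  The sketch below is a minimal illustration using PyTorch; the model, optimizer, step count, and save interval are hypothetical placeholders, not anyone's production setup.

```python
import os
import torch

# Minimal periodic-checkpoint sketch so a long training run can survive
# a rack or cluster outage.  Model, optimizer, and interval are
# hypothetical placeholders for illustration.

CKPT_PATH = "checkpoint.pt"
SAVE_EVERY = 500  # steps between checkpoints (assumption)

model = torch.nn.Linear(1024, 1024)     # stand-in for a real model
opt = torch.optim.AdamW(model.parameters())

start_step = 0
if os.path.exists(CKPT_PATH):
    # Resume after an outage instead of restarting from scratch.
    state = torch.load(CKPT_PATH)
    model.load_state_dict(state["model"])
    opt.load_state_dict(state["optimizer"])
    start_step = state["step"] + 1

for step in range(start_step, 10_000):
    loss = model(torch.randn(32, 1024)).pow(2).mean()  # dummy loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % SAVE_EVERY == 0:
        torch.save(
            {"model": model.state_dict(),
             "optimizer": opt.state_dict(),
             "step": step},
            CKPT_PATH,
        )
```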

Many manufacturers, designers, and data center operators have seen calls for AI server racks designed to handle up to 100 kW/rack, compared to the roughly 20 kW average in even ‘modern’ data centers of the last five years.  This is transforming data center layouts from tightly organized, managed rows into dense, heat-intensive clusters.  Specialized racks exist to support this density, but mass production is only now ramping up; until recently they were custom orders.  Rather than fully stacking a rack with servers, some operators fill racks only partially, spreading the load out so it can still be air-cooled at the lower densities of before (a rough sketch of this trade-off follows).  Others go the opposite direction, which has triggered the discussion of whether densities are now high enough to justify immersion cooling systems.
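As a rough illustration of the partial-fill approach, the following sketch computes how many servers a rack can hold under different power budgets.  The per-server draw carries over from the earlier aggregation sketch, and the slot count and budget tiers are assumptions for illustration.

```python
import math

# How many AI servers fit in a rack under a given power budget?
# Per-server draw and budgets below are illustrative assumptions.

SERVER_KW = 7.8  # assumed draw of one 8-GPU AI server (earlier sketch)
RACK_SLOTS = 8   # assumed physical capacity of the rack

for budget_kw in (20, 50, 100):  # legacy air, upgraded air, liquid-ready
    fits = min(RACK_SLOTS, math.floor(budget_kw / SERVER_KW))
    fill = fits / RACK_SLOTS
    print(f"{budget_kw:>3} kW budget: {fits} servers ({fill:.0%} full)")

# 20 kW  -> 2 servers (25% full): the partial-fill, air-cooled approach
# 50 kW  -> 6 servers (75% full)
# 100 kW -> 8 servers (100% full): the density driving immersion talk
```

At the legacy 20 kW budget the rack sits three-quarters empty, which is exactly the spreading-out behavior described above.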

Of course, there is a growing focus on sustainability as this density escalates.  Major data center owners are increasingly relying on renewable energy sources and are calculating the carbon footprint over the full lifecycle of their data centers, including the details needed for ESG Scope 3 reporting.
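As a sketch of what such a lifecycle calculation involves, the following combines operational emissions (energy use times grid carbon intensity) with embodied emissions amortized over the facility's life.  Every figure here is an illustrative assumption, not reporting guidance or real data.

```python
# Sketch of a lifecycle carbon estimate: operational + embodied emissions.
# All figures are illustrative assumptions, not ESG reporting guidance.

IT_LOAD_MW = 6.3              # hall-level load from the earlier sketch
PUE = 1.3                     # assumption: power usage effectiveness
GRID_KG_CO2_PER_KWH = 0.4     # assumption: grid carbon intensity
EMBODIED_TONNES_CO2 = 50_000  # assumption: construction + hardware (Scope 3)
LIFETIME_YEARS = 15           # assumption: facility lifetime

HOURS_PER_YEAR = 8760
annual_kwh = IT_LOAD_MW * 1_000 * PUE * HOURS_PER_YEAR
annual_operational_t = annual_kwh * GRID_KG_CO2_PER_KWH / 1_000
annual_embodied_t = EMBODIED_TONNES_CO2 / LIFETIME_YEARS

print(f"Operational: {annual_operational_t:,.0f} t CO2e/year")  # ~28,700
print(f"Embodied:    {annual_embodied_t:,.0f} t CO2e/year")     # ~3,333
```

Even in this toy version, operational emissions dominate unless the grid supply is decarbonized, which is why renewable sourcing gets so much of the attention.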

Companies are pivoting to experiment more in this space, with large tech organizations fielding PhDs and subject matter experts of their own to complement startups as the two pair up on power and cooling for the ramping AI boom.  Solar and wind projects attributed to data centers have increased, but they are not alone: some operators have committed to sourcing megawatts of power from startups working on fusion energy, clean hydrogen, and next-generation hydroelectric sources.
