Innovative Cooling Strategies for AI and HPC: Ensuring Efficiency and Sustainability

As data centers evolve rapidly, companies like Expert Thermal are pioneering innovative cooling strategies that keep pace with the increasing demand for processing-intensive applications such as artificial intelligence (AI) and advanced analytics. With the consolidation of computing power in smaller spaces, data centers are experiencing higher energy consumption and heat generation per rack. This shift necessitates the implementation of efficient thermal management solutions to ensure safe and effective operations. Traditional air cooling methods, once sufficient for racks consuming less than 20 kilowatts (kW), are now struggling to manage the heat dissipation of modern high-performance racks that often exceed 20 kW, 40 kW, or more. These advanced racks are equipped with CPUs and GPUs featuring significantly higher thermal power densities, making customized cooling solutions from Expert Thermal a critical need for organizations exploring liquid cooling options.

The following tables provide a detailed comparison of high-performance GPU architectures, CPUs, and SoCs used in AI and data center applications. These devices are evaluated on power consumption, performance, networking speed, efficiency, processor type, and thermal characteristics. Understanding these parameters is crucial for optimizing their deployment in data centers, especially for energy efficiency and thermal management.

Table 1: GPU comparison for AI and HPC 

| GPU | Type | Power (W) | FP16 Tensor Dense Performance (TFLOPS) | Networking Speed (GB/s) | Performance per Watt (TFLOPS/W) |
|---|---|---|---|---|---|
| NVIDIA B200 | Discrete | 1000 | 1800 | 400 | 1.8 |
| NVIDIA B100 | Discrete | 700 | 2250 | 400 | 3.2 |
| Intel Gaudi 3 | Discrete | 900 (passive cooling) / 1200 (active cooling) | 1835 | 1200 | 2 / 1.5 |
| AMD Instinct MI300A | SoC/APU | 760 | 980 | 128 | 1.3 |
| AMD Instinct MI300X | Discrete | 750 | 1300 | 128 | 1.7 |

 

Table 2: CPU comparison for AI and HPC 

| Processor | Company | Turbo Clock (GHz) | Feature Size (nm) | Cores | TDP (W) | Socket (mm x mm) | Junction Temperature (°C) | Die Size (cm²) | Heat Flux (W/cm²) | Reference |
|---|---|---|---|---|---|---|---|---|---|---|
| Xeon Platinum 8380 | Intel | 3.4 | 10 | 40 | 270 | 76 x 56.5 | 100 | 12.00 | 22.5 | Intel Specs |
| Xeon Platinum 8570 (used in NVIDIA DGX B200) | Intel | 4 | 10 | 56 | 350 | 77.5 x 56.5 | 105 | 15.26 | 22.9 | Intel Specs |
| Xeon W9-3495X | Intel | 4.8 | 10 | 56 | 350-420 | 77.5 x 56.5 | 105 | 19.08 | 22.0 | Intel Specs |
| EPYC 7763 | AMD | 3.5 | 7 | 64 | 280 | 75.4 x 58.5 | 85 | 10.64 | 26.3 | AMD Specs |
| Ryzen Threadripper PRO 7995WX | AMD | 5.1 | 5 | 96 | 350 | 75.4 x 58.5 (TR4 socket) | 95 | 12.40 | 28.22 | AMD Specs |
| Grace CPU Superchip (2 Grace CPUs) | NVIDIA | 3 | 4 | Up to 144 (Arm) | 300 (estimated) | - | - | 15.48 | 19.37 | NVIDIA Announcement |
| Xeon Sierra Forest (E-core) | Intel | 3.2 | - | 144-288 | 330 | - | - | 10.6 | 31.10 | Intel Roadmap |
| Xeon Granite Rapids (P-core) | Intel | 3.8 | - | 128 | 500 | - | - | 22.76 | 21.96 | Intel Roadmap |
| Turin (Zen 5) | AMD | - | 3 | 192 | 500 (estimated) | - | - | N/A | - | AMD Roadmap |
| AmpereOne | Ampere | 3 | 5 | 192 | 350 | - | 105 | N/A | - | Ampere Announcement |
| Graviton4 | AWS | 3 | - | 96 | 200 | - | - | N/A | - | AWS Announcement |

These tables summarize the thermal and performance characteristics of current GPU architectures and the CPUs deployed alongside them, including their heat flux, a parameter that is central to thermal management and to the design and operation of data centers. There is a noticeable relationship between feature size (measured in nanometers) and heat flux (W/cm²): processors with smaller feature sizes generally exhibit higher heat fluxes. For example, the AMD Ryzen Threadripper PRO 7995WX, built on a 5 nm process, has a heat flux of 28.22 W/cm², compared with 22.5 W/cm² for the 10 nm Intel Xeon Platinum 8380. Shrinking chip features to achieve faster processing speeds comes at a cost: higher transistor density generates more heat per unit area, intensifying the need for advanced thermal management solutions.
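As a rough cross-check of Table 2, average die heat flux is simply the chip's TDP divided by its die area. The short Python sketch below reproduces two of the table's values; the helper function and grouping are illustrative, not taken from any vendor documentation.

```python
# Heat flux is TDP spread over die area: q'' = TDP / A_die.
# Values below are taken from Table 2; the function is an illustrative helper.

def heat_flux_w_per_cm2(tdp_w: float, die_area_cm2: float) -> float:
    """Return average die heat flux in W/cm^2."""
    return tdp_w / die_area_cm2

examples = {
    "Xeon Platinum 8380 (10 nm)": (270, 12.00),       # -> ~22.5 W/cm^2
    "Threadripper PRO 7995WX (5 nm)": (350, 12.40),   # -> ~28.2 W/cm^2
}

for name, (tdp, area) in examples.items():
    print(f"{name}: {heat_flux_w_per_cm2(tdp, area):.1f} W/cm^2")
```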

The data underscores the importance of advanced thermal management solutions for modern processors. As feature sizes shrink, leading to higher heat fluxes, the demand for efficient cooling technologies becomes more critical. This is evident in high-performance CPUs like the AMD EPYC 7763 and Intel Xeon series, where significant heat flux necessitates robust thermal management strategies to ensure optimal performance and reliability. The increased heat flux with smaller feature sizes highlights the ongoing need for innovation in thermal management to handle the evolving thermal challenges of next-generation processors. Official NVIDIA documentation and press releases don’t explicitly mention the Grace CPU’s footprint and power consumption. The Grace CPU comes in two configurations: single-chip and as part of the Grace Blackwell Superchip paired with a Blackwell GPU. This suggests the Grace CPU itself might be relatively compact to fit within the superchip package. 

Figure 1: TDP Forecast

The race for ever-increasing AI performance is heating up! Tech giants like Nvidia, AMD, and Intel are all pushing the boundaries of chip design to cram more processing power into smaller spaces. This miniaturization trend allows for powerful AI capabilities in compact devices.

However, there’s a catch: packing more processing power into a shrinking package increases the heat dissipated per unit of chip area (the heat flux, in watts per unit area), while the total heat a chip is designed to dissipate is characterized by its Thermal Design Power (TDP). According to Omdia’s projections, we’re witnessing a TDP explosion. Just a few years ago, typical CPU and GPU TDPs hovered around 200-300 watts. Now, estimates suggest these numbers could climb to 1,000 watts and beyond in the coming years!

Air cooling has long been a staple in data centers, but with the rise of AI and high-performance computing (HPC), Expert Thermal’s advanced thermal management solutions are leading the way in optimizing data center efficiency. Despite technological advancements that have made air cooling more efficient, the core principle remains the same: cold air is circulated around hardware to dissipate heat. At Expert Thermal, we categorize air cooling systems into three primary types: room-based, row-based, and rack-based systems, each tailored to meet the unique needs of high-density data centers.

Room-Based Systems: These use computer room air conditioning (CRAC) units to push chilled air into the equipment room. Air can be circulated around the room or vented through raised floors near the equipment. Modern room-based systems often employ a hot- and cold-aisle configuration to optimize airflow and reduce energy costs, sometimes using containment to isolate hot and cold aisles. The mission of the cooling system is to move the heat generated by hot computer chips out of the data center. Server fans draw data center air through the servers; this slightly warmer air is then cooled by CRAC units blowing air into pressurized server room floors. Chilled water is pumped from the chiller at 45 °F to the air-handling coils, where it is warmed to 55 °F. The chiller cools the 55 °F return water back to 45 °F and must in turn be cooled by cooling tower water, which enters the chiller at 85 °F and leaves at 95 °F. This 95 °F water is pumped through a cooling tower that finally rejects the data center heat to the outside ambient air.

Figure 2: Room-Based System

Ref: https://2crsi.com/air-cooling
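As a side note on the chilled-water loop described above, the required water flow follows from Q = ṁ·Cp·ΔT. The sketch below assumes a hypothetical 1 MW heat load together with the 10 °F (about 5.6 K) coil temperature rise from this section; the load value and rounded water properties are illustrative assumptions.

```python
# Sketch: chilled-water flow needed for the 45 F -> 55 F loop described above.
# The 1 MW heat load and rounded water properties are illustrative assumptions.

Q_W = 1_000_000          # heat to remove (W); hypothetical 1 MW IT load
CP_WATER = 4186          # specific heat of water, J/(kg*K)
DELTA_T_K = 10 * 5 / 9   # the 10 F coil rise expressed in kelvin (~5.56 K)

m_dot = Q_W / (CP_WATER * DELTA_T_K)   # kg/s, rearranged from Q = m_dot * Cp * dT
gpm = (m_dot / 998) * 15850            # m^3/s -> US gallons per minute (water ~998 kg/m^3)

print(f"Required chilled-water flow: {m_dot:.1f} kg/s (~{gpm:.0f} GPM)")
```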

Row-Based Systems: These systems target cooling more precisely than room-based systems. Each row of racks has dedicated cooling units focusing airflow at specific equipment, which improves efficiency and reduces the power needed for fans, lowering energy usage and costs. Cooling units can be positioned between server racks or mounted overhead.

     

Figure 3: Row-Based System

Ref: https://www.energystar.gov/products/data_center_equipment/16-more-ways-cut-energy-waste-data-center/install-rack-or-row

Rack-Based Systems: Offering the highest precision, rack-based systems dedicate cooling units to specific racks. These units are often mounted on or within the racks, allowing for tailored cooling capacity to meet each rack’s needs. Although this method ensures more predictable performance and costs, it requires more cooling units, increasing overall complexity. 

   

Figure 4: Rack-Based System

Ref: https://www.energystar.gov/products/data_center_equipment/16-more-ways-cut-energy-waste-data-center/install-rack-or-row

Challenges of Air Cooling 

Air cooling faces significant challenges in modern data centers. It struggles to meet the demands of high-density, processing-heavy workloads. The capital expenditure and complexity of air-cooling systems become unjustifiable beyond certain thresholds. Additionally, air cooling is less efficient, with rising energy costs exacerbating the issue. Cooling accounts for a sizable portion of a data center’s overall energy consumption, typically ranging from 35% to 50%.  Water restrictions and the noise from increased cooling fans and pumps add to the drawbacks. Air is an inefficient heat transfer medium, prompting the need for more effective cooling methods. 

 

Figure 5: Cost per Watt vs. Average Rack Power Density

Figure 6: Annual Electrical Cost (x $1,000) vs. Average Rack Power Density

Ref: https://www.energystar.gov/products/data_center_equipment/16-more-ways-cut-energy-waste-data-center/install-rack-or-row

ASHRAE Guidelines for Air Cooling Options 

The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) provides both recommended and allowable ranges for data center temperature and humidity. In each case, ASHRAE gives guidance for relative humidity (the amount of moisture in the air) and for the dew point (the temperature at which water vapor condenses to liquid), assuming constant air pressure. It is also worth noting that ASHRAE's recommendations assume a rate of change of no more than 5 °C (9 °F) per 20-hour period and must be adjusted for elevation; ASHRAE's metrics are based on an elevation of 3,050 meters (10,006.6 feet).

The recommended humidity for A1 to A4 equipment is a dew point range of -9 to 15 °C (15.8 to 59 °F), with relative humidity ranging from 50% to 70%.

Allowable humidity for A1 to A4 equipment ranges from a minimum dew point of -12 to 17 °C (10.4 to 62.6 °F) to a maximum dew point of -12 to 24 °C (10.4 to 75.2 °F). The relative humidity level ranges from 8% to 80% for Class A1 and from 8% to 90% for Class A4.
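For monitoring purposes, a sensor reading can be checked against the recommended envelope quoted above (dew point between -9 and 15 °C, relative humidity 50% to 70%). The sketch below uses the common Magnus approximation for dew point; the thresholds simply restate this section and are not a substitute for the full ASHRAE tables.

```python
import math

# Check a dry-bulb / relative-humidity reading against the recommended envelope
# quoted above. Dew point uses the Magnus approximation (b = 17.62, c = 243.12).

def dew_point_c(temp_c: float, rh_pct: float) -> float:
    b, c = 17.62, 243.12
    gamma = math.log(rh_pct / 100.0) + (b * temp_c) / (c + temp_c)
    return (c * gamma) / (b - gamma)

def within_recommended(temp_c: float, rh_pct: float) -> bool:
    dp = dew_point_c(temp_c, rh_pct)
    return -9.0 <= dp <= 15.0 and 50.0 <= rh_pct <= 70.0

print(dew_point_c(24.0, 55.0))         # roughly 14.4 C
print(within_recommended(24.0, 55.0))  # True under these assumptions
```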

Liquid Cooling

Expert Thermal’s liquid cooling solutions are redefining data center efficiency with cutting-edge technology that maximizes heat dissipation in high-density environments. Liquid cooling, with its superior heat capacity, is far more effective than traditional air cooling methods, which rely on high airflow rates but struggle with heat dissipation over boundary layers. Expert Thermal specializes in providing customized cooling solutions designed to maintain optimal temperatures for high-density computing environments, ensuring both energy efficiency and system reliability. 

Liquid cooling exploits water's high heat capacity for efficient heat absorption, but a more complete picture of its effectiveness requires considering the mass flow rate (ṁ), specific heat capacity (Cp), and temperature difference (ΔT): the product ṁ·Cp·ΔT represents the total amount of heat that can be removed from a system. Additionally, the heat transfer coefficient (HTC), which depends on the Reynolds number (Re) and Prandtl number (Pr) of the flowing liquid, plays a crucial role. The HTC governs the rate at which heat transfers from the heat source to the coolant. In liquid cooling, the HTC typically ranges between 200 and 1000 W/m²K, significantly outperforming air cooling, which generally falls in the range of 0.5 to 100 W/m²K. By addressing both factors (ṁ·Cp·ΔT and HTC), liquid cooling achieves superior thermal performance compared to air cooling. This improved efficiency reduces reliance on high-speed airflow and minimizes hot spots within the data center.
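To see what those HTC ranges imply, Newton's law of cooling (Q = h·A·ΔT) gives the effective heat-transfer area needed to move a given load at a given surface-to-coolant temperature rise. In the sketch below, the 350 W load, 40 K rise, and representative h values are assumptions chosen only to illustrate the ranges quoted above.

```python
# Required effective heat-transfer area from Q = h * A * dT (Newton's law of cooling).
# The 350 W load, 40 K rise, and representative h values are illustrative assumptions.

Q_W = 350          # chip power in watts, similar to several CPUs in Table 2
DELTA_T_K = 40     # allowed surface-to-coolant temperature rise

for label, h in [("air cooling, h = 100 W/m^2*K", 100),
                 ("liquid cooling, h = 1000 W/m^2*K", 1000)]:
    area_cm2 = Q_W / (h * DELTA_T_K) * 1e4   # A = Q / (h * dT), converted to cm^2
    print(f"{label}: ~{area_cm2:.0f} cm^2 of effective heat-transfer area needed")
```

The order-of-magnitude difference in required area is why air cooling depends on large finned heat sinks and high airflow, while a comparatively compact cold plate suffices with liquid.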

Moreover, liquid cooling offers advantages in thermal management by leveraging the superior heat transfer properties of liquids. Unlike immersion cooling, which achieves direct contact with heat sources, standard liquid cooling systems rely primarily on forced convection. In these systems, the coolant circulates through water blocks (cold plates) that rest on top of the chip being cooled. Thermal paste and a baseplate improve heat transfer between the water block and the CPU. The relatively cooler liquid circulating through the water block absorbs heat from the baseplate, pulling heat away from the chip and enabling efficient cooling. This method offers significant improvements over air cooling thanks to the liquid's higher heat capacity and higher heat transfer coefficient. The thinner thermal boundary layers that liquids form lead to a more uniform temperature distribution within the system, reducing thermal resistance compared to traditional air-based methods.

 

Liquid cooling is categorized into three main types: direct-to-chip cooling, rear-door heat exchangers, and immersion cooling. 

Direct-to-Chip Cooling: This method integrates cooling directly into the computer’s chassis. Cool liquid is piped to cold plates mounted on components like CPUs and GPUs. The liquid absorbs heat from the components, circulates to a cooling device or heat exchanger, and is then recirculated. 

Rear-Door Heat Exchangers: In this approach, a heat exchanger replaces the back door of the rack. Server fans blow warm air through the exchanger, where liquid coolant dissipates the heat. This closed-loop system effectively manages heat, often using local cooling units or larger systems, such as underground piping. 

Immersion Cooling: This innovative technique submerges all internal server components in a non-conductive dielectric fluid within a sealed container. The fluid absorbs heat from the components and is either continuously circulated and cooled (single-phase) or allowed to vaporize and condense (two-phase). Immersion cooling is highly efficient, using less energy, reducing noise, and extending hardware lifespan.

Compared to air cooling, liquid cooling directly addresses the data center’s biggest energy guzzlers. Traditional air coolers on high-temperature chips are replaced with water-based cooling, eliminating the need for server fans altogether. This domino effect extends to the large air movers that circulate cold air and the chillers that supply them, both becoming obsolete with liquid cooling. By targeting these critical energy consumers, liquid cooling offers a more efficient and environmentally friendly approach to data center cooling.  

Data center owners are constantly seeking ways to reduce their environmental impact and operational costs. This relentless pursuit of efficiency has been fueled by the Power Usage Effectiveness (PUE) metric. PUE measures the ratio of total facility power consumption (including cooling) to the power consumed by the IT equipment itself (servers, storage, etc.). While some data centers boast impressive PUEs below 1.2, the average facility hovers around 1.8, indicating significant room for improvement. 

This is where liquid cooling steps in as a game-changer. Compared to traditional air cooling, liquid cooling systems can slash energy consumption by up to 80%. Considering that cooling often accounts for over half of a data center’s non-IT energy usage, this shift can be transformative. For example, a data center with a PUE of 1.8 could potentially achieve a PUE of 1.3 by adopting liquid cooling. This significant reduction translates to substantial cost savings and a smaller environmental footprint. In essence, liquid cooling emerges as the champion of energy efficiency in the data center arena. 

 

The direct-to-chip liquid cooling method is highly efficient, extracting heat directly from server components and reducing additional energy consumption. With chip TDPs projected to exceed 1,000 W, liquid cooling is becoming essential. Water's superior heat capacity allows liquid cooling to use up to seven times less power than air cooling, enabling higher inlet temperatures and lowering energy requirements, thus reducing carbon emissions.

Impact on AI and High-Wattage Processors 

As technology advances, so does the power consumption of AI chips, making efficient cooling critical. Traditional air-cooling struggles to manage the heat from high-wattage processors, leading to higher energy usage. Liquid cooling addresses this by providing a more efficient heat transfer method, reducing power consumption by up to 40%. 

Improved Power Usage Effectiveness (PUE) 

Liquid cooling significantly improves a data center's PUE, the ratio of total facility power to the power used by IT equipment. For example, a data center with a PUE of 1.5 using air cooling could achieve a PUE of 1.1 with liquid cooling, roughly a 26.7% reduction in total facility energy and an 80% cut in the non-IT (cooling) overhead.
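To make the arithmetic explicit, here is a minimal sketch of the PUE comparison above; the 10,000 MWh annual IT load is an arbitrary assumption used only to show how the percentages fall out.

```python
# PUE = total facility energy / IT energy, so total = PUE * IT energy.
# The 10,000 MWh/yr IT load is an arbitrary assumption for illustration.

IT_ENERGY_MWH = 10_000

def total_energy(pue: float, it_energy: float = IT_ENERGY_MWH) -> float:
    return pue * it_energy

air, liquid = total_energy(1.5), total_energy(1.1)
print(f"Air-cooled total:    {air:,.0f} MWh")
print(f"Liquid-cooled total: {liquid:,.0f} MWh")
print(f"Overhead reduction:  {(0.5 - 0.1) / 0.5:.0%}")    # 80% less non-IT energy
print(f"Facility reduction:  {(air - liquid) / air:.1%}")  # ~26.7% less total energy
```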

Environmental Benefits 

A significant portion of data center emissions stems from electricity usage, which counts as Scope 2 emissions for data center operators. With power generation producing roughly 357 g of CO₂e per kWh, optimizing energy consumption is crucial. By decreasing overall energy consumption, liquid cooling reduces reliance on fossil fuels for electricity generation, thereby minimizing greenhouse gas emissions. This makes liquid cooling a crucial component in making AI advancements more sustainable and environmentally friendly.

Advantages and Drawbacks of Liquid Cooling 

Liquid cooling offers numerous benefits, including better handling of high densities, lower energy consumption, reduced water usage, and a smaller footprint. However, it also has downsides, such as higher capital expenditure, risk of leakage, and the need for specialized skills and maintenance frameworks. The market’s immaturity and the potential for vendor lock-in add to the concerns. 

 

Figure 7: Air Cooling vs. Liquid Cooling Based on CPU Power

Ref: https://www.ashrae.org/file%20library/technical%20resources/bookstore/emergence-and-expansion-of-liquid-cooling-in-mainstream-data-centers_wp.pdf

 

Table 3: Air Cooling vs. Liquid Cooling Comparison

| Feature | Air Cooling | Liquid Cooling |
|---|---|---|
| Heat transfer | Less efficient | More efficient |
| Energy consumption | Higher | Lower (up to 20% savings) |
| Upfront cost | Lower | Higher |
| Maintenance | Simpler | Complex, but reliable |
| Leak risk | Low | High (potential for equipment damage) |
| Noise level | High | Low |
| Space efficiency | Less space-efficient (depending on system) | More space-efficient (depending on equipment) |
| Water usage | High (evaporative cooling) | Lower |
| Technology maturity | Mature | Developing |
| Staff expertise | Widely available | Requires training |
| Ideal for | Low- to medium-density data centers | High-density data centers |

While air cooling continues to be a familiar choice, its limitations become increasingly apparent. Liquid cooling offers a powerful alternative for improved efficiency and sustainability, albeit with higher upfront costs and operational complexity. Data centers need to carefully weigh their specific needs and resources when choosing the optimal cooling solution.

Data Center Cooling Options: A Guide to Air and Liquid Cooling Solutions

This guide explores various data center cooling options, outlining their effectiveness and suitability for different server power densities. A small selection sketch follows the options below.

Air Cooling (3 kW to 10 kW per Rack): 

  • Efficient for low-density data centers. 
  • Liquid cooling offers improved efficiency but is not essential for server operation in this range. 

Rear Door Heat Exchanger (10 kW to 25 kW per Rack): 

  • Features a chilled water heat exchanger panel on the back of the server rack. 
  • Server fans draw air across the panel for cooling. 
  • Considered liquid cooling, but still relies heavily on fan energy and lower water temperatures. 
  • This reduces the potential energy savings compared to other liquid cooling options. 

Direct-to-Chip Cooling (25 kW to 50 kW per Rack): 

  • Focuses on cooling the hottest components: CPUs and GPUs. 
  • Liquid-cooled heat exchangers directly replace air-cooled fins on these chips. 
  • Popular for its performance boost due to cooler chip temperatures. 
  • Requires some air cooling for other server components. 

Immersion Cooling (Over 50 kW per Rack): 

  • Submerges the entire server in a non-conductive liquid bath. 
  • Eliminates air cooling entirely, with full liquid coverage for all components. 
  • Requires a separate heat exchanger system for efficient cooling. 
  • Server maintenance becomes more complex due to the horizontal vat design. 
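As a quick summary of the ranges above, here is a minimal selection helper in Python. The breakpoints simply restate this guide; a real decision would also weigh the cost, maintenance, location, and sustainability factors discussed in the following sections.

```python
# Minimal helper that encodes the rack-density ranges listed above.
# The breakpoints restate this guide and are not a full selection methodology.

def suggest_cooling(rack_kw: float) -> str:
    if rack_kw <= 10:
        return "Air cooling (liquid optional)"
    if rack_kw <= 25:
        return "Rear-door heat exchanger"
    if rack_kw <= 50:
        return "Direct-to-chip liquid cooling (with supplemental air)"
    return "Immersion cooling"

for kw in (8, 18, 40, 80):
    print(f"{kw:>3} kW/rack -> {suggest_cooling(kw)}")
```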

The rise of AI and demanding processing applications has ignited a critical conversation around data center cooling. While air cooling, the familiar workhorse, remains widely used, its limitations are increasingly apparent for modern, high-density workloads. Liquid cooling emerges as a compelling alternative, offering superior heat transfer and efficiency gains that translate to operational cost savings and a smaller environmental footprint. However, its higher upfront costs and increased complexity necessitate careful consideration. To navigate this decision, organizations must evaluate a holistic range of factors: cost-effectiveness, maintenance expertise, water usage impact, local climate, and future scalability needs. By meticulously assessing these aspects, data centers can make informed choices between air and liquid cooling, ensuring efficient, sustainable operation in the face of ever-growing digital demands.

Navigating the Data Center Cooling Landscape: A Strategic Approach 

The decision between air and liquid cooling extends beyond the technology itself. Data centers with lower-density server configurations can still benefit from liquid cooling alongside air cooling to optimize efficiency. Ultimately, a strategic evaluation of cost, maintenance needs, location factors, future growth plans, and sustainability goals will guide data center operators towards the most suitable cooling solution for their specific environment. 

Data center owners can consider these liquid cooling options alongside air cooling for lower-density servers to optimize energy efficiency. However, as power density increases, the selection of the most suitable liquid cooling method becomes crucial. Not every data center will be housing high-performance servers or have a need for high-density power usage. Nevertheless, the efficiency gains associated with liquid cooling are still available. Whether you retrofit your existing servers or wait until you replace servers, there are multiple options to successfully implement liquid cooling in your data center. 

Despite the drawbacks of air cooling, liquid cooling isn’t always the best choice. There are several factors to consider. 

Cost 

Power usage is among the most expensive line items on a data center’s balance sheet and cooling cost represents about 40 percent of that total, on average. Although liquid cooling comes with higher capital costs, it may be more cost-efficient over time due to lower operational costs. Still, producing a total cost of ownership (TCO) comparison can be difficult due to intangibles such as management complexity and vendor lock-in. 

Maintenance 

The cost and complexity of maintenance is a related factor. Immersion cooling systems come with unique maintenance challenges, given that system components are submerged in dielectric liquid. Data centers might have to rely on vendors for most maintenance tasks. 

Location 

Liquid cooling is less attractive for data centers in colder climates, which can rely on free cooling by bringing in outside air. Data centers in warmer climates and areas with stressed water supplies are more likely to transition to liquid cooling. 

Strategic Roadmap 

The implementation of advanced applications such as AI and advanced processor chipsets could drive the transition to liquid data center cooling. Data center operators should consider whether these applications are on the strategic roadmap or whether there is a need to consolidate workloads or grow within a small footprint. 

Sustainability 

Many data centers are facing mandates to increase sustainability. New developments are facing pushback from local communities as energy and water usage rises. As a result, resource usage is an important consideration when choosing cooling technology. Continued collaboration between AI developers, data center operators, and cooling solution providers is crucial to achieve sustainable AI advancements. By optimizing energy efficiency through innovative cooling technologies, we can ensure a future where AI thrives alongside environmental responsibility. 

Calculation Reference: 

Energy Consumption and GHG Evaluation: Air Cooling vs. Liquid Cooling for Data Center Applications 

Based on the following assumptions: 

Floor area: 5,000 sq. ft

Operating hours: 8,600 hr/yr

Electricity cost: $0.17/kWh

GHG emission factor: 0.357 kg/kWh

Table 4: Cooling Load Calculation

| Item | Data | Heat Gain |
|---|---|---|
| Servers and racks | 200 racks with 8 servers each (200 x 8 = 1,600 servers) | - |
| Server power consumption | 725 W per server | 3,957,920 Btu/hr (1,160 kW) |
| Switches | 2 switches per rack | 100,800 Btu/hr (30 kW) |
| UPS with battery power and distribution | Max capacity of 72 kW | 245,664 Btu/hr (72 kW) |
| Lighting | 2 W x floor area (sq. ft) | 34,120 Btu/hr (10 kW) |
| Personnel | 50 employees, 100 W per employee | 17,060 Btu/hr (5 kW) |
| Total cooling load | | 4,355,564 Btu/hr (1,277 kW) |

Cooling load in tons: 4,355,564 / 12,000 ≈ 363 tons.
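For reference, the sketch below reproduces the Table 4 total and the tons conversion, using the standard factors of roughly 3,412 Btu/hr per kW and 12,000 Btu/hr per ton of refrigeration; the heat-gain values are taken directly from the table.

```python
# Reproduce the Table 4 total: sum the Btu/hr heat gains, then convert to kW
# and to refrigeration tons. All heat-gain figures are taken from the table.

heat_gains_btu_hr = {
    "servers (1,600 x 725 W)": 3_957_920,
    "switches": 100_800,
    "UPS and power distribution": 245_664,
    "lighting": 34_120,
    "personnel": 17_060,
}

total_btu_hr = sum(heat_gains_btu_hr.values())
total_kw = total_btu_hr / 3412      # ~3,412 Btu/hr per kW
tons = total_btu_hr / 12_000        # 12,000 Btu/hr per ton of refrigeration

print(f"Total cooling load: {total_btu_hr:,} Btu/hr (~{total_kw:,.0f} kW, ~{tons:.0f} tons)")
```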

 

Table 5: Cooling System Calculation and Comparison

| Cooling System | Energy Requirement | PUE | TUE | COP | Annual Cooling Energy | Annual Operating Cost | GHG Emissions |
|---|---|---|---|---|---|---|---|
| CRAC, DX system | 1.4 kW/ton (508 kW) | 1.40 | 1.41 | 2.5 | 4,454,435 kWh | $751,909 | 1,590 tons |
| Liquid cooling system, centrifugal chillers | 0.65 kW/ton (236 kW) | 1.19 | 1.20 | 5.4 | 2,068,131 kWh | $349,100 | 738 tons |

Energy and Cost Savings: 54%

GHG Reduction: 852 tons
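The comparison figures above follow directly from the annual energy values in Table 5 and the stated assumptions ($0.17/kWh and 0.357 kg CO₂e/kWh). The sketch below re-derives them; small differences from the published cost figures come from rounding in the source table.

```python
# Derive the Table 5 comparison from annual energy and the stated assumptions.
# Annual kWh values are taken from the table; cost and GHG are computed.

RATE_USD_PER_KWH = 0.17
GHG_KG_PER_KWH = 0.357

systems = {
    "CRAC / DX": 4_454_435,                 # annual kWh from Table 5
    "Liquid cooling (chillers)": 2_068_131,
}

for name, kwh in systems.items():
    cost = kwh * RATE_USD_PER_KWH
    ghg_tons = kwh * GHG_KG_PER_KWH / 1000
    print(f"{name}: ${cost:,.0f}/yr, {ghg_tons:,.0f} t CO2e/yr")

air, liquid = systems.values()
print(f"Energy savings: {(air - liquid) / air:.0%}")                          # ~54%
print(f"GHG reduction:  {(air - liquid) * GHG_KG_PER_KWH / 1000:,.0f} t/yr")  # ~852 t
```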

It is estimated that data centers consumed close to 400 terawatt-hours (TWh) of electricity in 2020, at a cost of close to $34 billion. Data center electricity usage continues to rise as cloud storage, artificial intelligence, and other computing technologies advance. Most data centers are air-cooled; if they converted to liquid cooling, an estimated $14 billion per year in electricity costs and 68.4 million tons of carbon emissions could be avoided.

Conclusion 

Expert Thermal’s innovative liquid cooling solutions play a crucial role in reducing total power consumption and minimizing greenhouse gas emissions in data centers, enhancing cooling efficiency while lowering Power Usage Effectiveness (PUE). Though liquid cooling systems may require a higher initial investment, the long-term savings in operational costs, coupled with the environmental benefits, make them an ideal solution for modern data centers with high-density computing requirements. At Expert Thermal, we recognize that efficient cooling solutions are not just about optimizing hardware performance; they are essential in mitigating the environmental impact of advanced technologies like AI and high-performance computing (HPC). By adopting cutting-edge thermal management technologies, Expert Thermal is committed to ensuring a sustainable future for AI advancements, allowing businesses to harness its full potential while prioritizing environmental responsibility. 

Efficient cooling solutions are not just about maintaining optimal hardware performance; they play a vital role in mitigating the environmental impact of AI. By embracing innovative cooling technologies, we can ensure a sustainable future for AI advancements, allowing us to harness its potential without sacrificing our planet’s well-being. 

The quest for optimal data center cooling necessitates a nuanced approach, moving beyond a “one-size-fits-all” mentality. While liquid cooling offers undeniable advantages for high-density server deployments, its superior heat transfer and lower energy consumption come at a cost. Upfront investments and the increased complexity of managing a liquid cooling system demand careful consideration. 

For many data centers, a hybrid solution may prove most effective. This strategic approach involves implementing liquid cooling for specific high-heat areas, such as high-performance computing clusters, while air cooling, with its familiarity and relative simplicity, continues to support less demanding applications within the data center.

Expert Thermal recognizes the evolving landscape of data center cooling needs.  We offer a comprehensive suite of flexible data center infrastructure solutions. This empowers you with the agility to adapt and scale your cooling technology as your requirements dictate. Don’t settle for a static cooling solution that might hinder your future growth. Our team of experts is here to guide you through the various options and help you craft a customized cooling strategy. This strategy will optimize efficiency, minimize energy consumption, and perfectly align with your unique data center environment.  Contact us today to discuss how Expert Thermal can empower your data center’s agility and sustainability through a tailored cooling solution. 

 

 

References: 

https://www.missioncriticalmagazine.com/articles/94760-is-liquid-cooling-right-for-your-data-center#:~:text=Based%20on%20the%20direct%20comparison,1.80%20PUE%20into%20a%201.3. 

https://www.techtarget.com/searchdatacenter/tip/Avoid-server-overheating-with-ASHRAE-data-center-guidelines 

https://www.techtarget.com/searchdatacenter/feature/Liquid-cooling-vs-air-cooling-in-the-data-center 

https://www.techtarget.com/searchdatacenter/tip/How-to-calculate-data-center-cooling-requirements#:~:text=An%20overall%20data%20center%20cooling%20calculation&text=Because%20most%20HVAC%20systems%20are,t%20of%20max%20cooling%20needed. 

https://www.datacenterdynamics.com/en/opinions/what-happens-when-you-introduce-liquid-cooling-into-an-air-cooled-data-center/ 

https://teamsilverback.com/knowledge-base/data-center-power-series-2-watts-amps-and-btus/#:~:text=Using%20standard%20conversion%20equations%20(Watts,degrees%20Fahrenheit%20in%20one%20hour.) 

https://www.boydcorp.com/blog/energy-consumption-in-data-centers-air-versus-liquid-cooling.html 

https://www.cedengineering.com/userfiles/M05-020%20-%20HVAC%20Cooling%20Systems%20for%20Data%20Centers%20-%20US.pdf 

https://pictures.2cr.si/Images_site_web_Odoo/Landings/Air/air_cooling.gif  

https://2crsi.com/air-cooling  

https://www.storagenewsletter.com/2023/02/17/single-phase-immersion-cooling-study-for-storage-systems/  

https://www.semianalysis.com/p/sound-the-siryn-ampereone-192-core  

https://www.tomshardware.com/news/heres-the-cpu-intel-accidentally-revealed-then-pulled-from-public-view  

https://www.techpowerup.com/313820/intel-288-e-core-xeon-sierra-forest-out-to-eat-amd-epyc-bergamos-lunch#:~:text=Despite%20being%20based%20on%20an,for%20ECC%20DDR5%2D6400%20speed 

https://www.upsite.com/blog/how-computer-chips-are-being-upgraded-to-serve-ai-workloads-in-data-centers/  
