Artificial intelligence (AI) and machine learning (ML) applications consume significant power and generate considerable heat in data centers. High-performance AI accelerators — such as graphics processing units (GPUs), tensor processing units (TPUs), and application-specific integrated circuits (ASICs) — increasingly require more efficient cooling methods to maintain safe and optimal thermal operating levels.
This article discusses the growing energy demands of AI and ML and explores the rise of liquid cooling for these high-performance workloads. It also reviews key design requirements for liquid-cooling connectors and highlights evolving industry standards formulated by the Open Compute Project (OCP).
The increasing energy demands of AI and ML
Accounting for 10% to 20% of all energy consumed in US data centers (Figure 1), AI-driven applications are considerably more power-intensive than many conventional workloads. For example, a ChatGPT query draws ten times more energy than a standard Google search. As computational power requirements for AI model training double every nine months, data centers may soon consume as much energy as entire countries.
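To put a nine-month doubling time in perspective, the short Python sketch below computes the growth it implies over a few years. It is an illustration of the trend only; the time horizons chosen are arbitrary assumptions, not a forecast.

```python
# Quick arithmetic on the growth implied by a nine-month doubling time for
# AI training compute. Purely illustrative; the time horizons are arbitrary.

def growth_factor(years: float, doubling_months: float = 9.0) -> float:
    """Multiplicative growth after `years`, given the doubling period in months."""
    return 2.0 ** (years * 12.0 / doubling_months)

for years in (1, 3, 5):
    print(f"After {years} year(s): ~{growth_factor(years):,.0f}x the training compute")
```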

With thermal design power (TDP) requirements reaching 1,500 W and average rack power increasing from 8.5 kW to 12 kW, effective cooling systems are critical to maintaining optimal data center temperatures of 70 to 75°F (21 to 24°C). Cooling infrastructure now accounts for approximately 40% of total energy consumption in some facilities, prompting organizations such as The Green Grid to develop a Liquid Cooling Total Cost of Ownership Calculation Tool (tggTCO).
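As a rough illustration of how these figures combine, the sketch below estimates total facility power when cooling consumes a fixed share of the total. The rack count and the simplification that everything other than cooling is IT load are hypothetical assumptions, not data from any specific facility.

```python
# Back-of-envelope estimate of facility power when cooling consumes a given
# fraction of total energy. All inputs are illustrative assumptions.

def facility_power_kw(it_load_kw: float, cooling_fraction: float) -> float:
    """Total facility power if cooling is `cooling_fraction` of the total
    and the remainder is assumed to be IT load (other overheads ignored)."""
    return it_load_kw / (1.0 - cooling_fraction)

racks = 100                  # hypothetical rack count
rack_power_kw = 12.0         # average rack power cited above
it_load_kw = racks * rack_power_kw

total_kw = facility_power_kw(it_load_kw, cooling_fraction=0.40)
cooling_kw = total_kw - it_load_kw

print(f"IT load:      {it_load_kw:,.0f} kW")
print(f"Cooling load: {cooling_kw:,.0f} kW")
print(f"Facility:     {total_kw:,.0f} kW")
```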
The rise of liquid cooling for AI and ML workloads
Many liquid cooling systems circulate dielectric fluids or water-based solutions through pipes or channels placed near or directly on components like GPUs. This process effectively dissipates thermal buildup in data centers running a wide range of high-performance AI and ML applications, large language models (LLMs), and training workloads. These coolants offer superior thermal conductivity and greater heat transfer capacity than traditional air cooling, fan-based systems, or passive heat sinks.
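The heat-capacity advantage can be sketched with the basic relation Q = ρ × flow × c_p × ΔT. The Python example below compares how much heat one liter per minute of air versus water can carry for an assumed 10 K temperature rise; the fluid properties are approximate textbook values, and the ΔT is an assumption chosen for illustration.

```python
# Rough comparison of how much heat one liter per minute of coolant can carry
# for a given temperature rise, using Q = rho * V_dot * c_p * dT.
# Fluid properties are approximate textbook values near room temperature.

def heat_removed_w(flow_lpm: float, rho: float, cp: float, delta_t: float) -> float:
    """Heat carried away (W) by a volumetric flow at a given temperature rise."""
    flow_m3_s = flow_lpm / 1000.0 / 60.0   # L/min -> m^3/s
    return rho * flow_m3_s * cp * delta_t

DELTA_T = 10.0  # assumed coolant temperature rise in kelvin

air_w = heat_removed_w(1.0, rho=1.2, cp=1005.0, delta_t=DELTA_T)
water_w = heat_removed_w(1.0, rho=997.0, cp=4180.0, delta_t=DELTA_T)

print(f"Air:   {air_w:6.2f} W per LPM at dT = {DELTA_T} K")
print(f"Water: {water_w:6.0f} W per LPM at dT = {DELTA_T} K")
print(f"Ratio: ~{water_w / air_w:,.0f}x")
```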

Data centers typically implement liquid cooling using two primary methods: cold plate and immersion cooling (Figure 2). Cold plate cooling circulates dielectric coolant over or near the hottest components, delivering high performance at the chip level yet still relying on supplemental air cooling to dissipate residual heat. As rack densities increase, cold plate liquid cooling scales more efficiently than stand-alone air-cooling systems, which often struggle to dissipate heat from densely packed equipment.
Immersion cooling further improves energy efficiency by significantly reducing the use of auxiliary fans and by capturing and reusing nearly 100% of the generated heat. This cooling method, however, often requires new facility designs, structural modifications, and upgraded or new power distribution systems.
Precision liquid cooling, which occupies a middle ground between cold plate and immersion cooling, uses minute amounts of dielectric coolant to target the hottest components and effectively cool the entire system. This hybrid method, which eliminates water use for cooling, can reduce energy consumption by up to 40%.
Key performance requirements for liquid cooling connectors
When designing liquid-cooled AI systems, data center architects select connectors that meet key performance requirements, such as withstanding temperatures up to 50°C (122°F), handling coolant flow rates up to 13 liters per minute (LPM), and maintaining pressure drops of around 0.25 psi.
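As a sanity check on these numbers, the sketch below estimates the heat a 13 LPM water-based loop can carry for an assumed 10 K coolant temperature rise, and the ideal hydraulic power lost across a 0.25 psi pressure drop. The temperature rise is an assumption for illustration, not part of any connector specification.

```python
# Illustrative check of what the connector-level numbers imply: heat carried
# by a 13 LPM water-based loop at an assumed 10 K rise, and the ideal pumping
# power dissipated across a 0.25 psi pressure drop.

PSI_TO_PA = 6894.76

def loop_heat_w(flow_lpm: float, delta_t_k: float,
                rho: float = 997.0, cp: float = 4180.0) -> float:
    """Heat carried by the coolant loop: Q = rho * V_dot * c_p * dT."""
    return rho * (flow_lpm / 1000.0 / 60.0) * cp * delta_t_k

def pump_power_w(flow_lpm: float, dp_psi: float) -> float:
    """Ideal hydraulic power lost across a pressure drop: P = dP * V_dot."""
    return dp_psi * PSI_TO_PA * (flow_lpm / 1000.0 / 60.0)

print(f"Heat carried at 13 LPM, dT = 10 K: {loop_heat_w(13.0, 10.0) / 1000:.1f} kW")
print(f"Hydraulic loss across 0.25 psi:    {pump_power_w(13.0, 0.25):.2f} W")
```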

Additionally, these connectors ensure easy serviceability and compatibility with water-based or dielectric fluid mixtures (Figure 3), preventing corrosion and leaks. Liquid cooling connectors also integrate seamlessly with in-rack manifolds and existing cooling infrastructure.
Additional key liquid cooling connector features include:
- Quick disconnect: facilitates easy, dripless connection and disconnection for routine maintenance and emergency access in AI and ML data centers.
- Large diameter: accommodates high flow rates, typically with a 5/8-inch inner diameter for server cooling in AI racks.
- Thermal resistance: optimizes heat transfer by reducing thermal resistance, which is critical for cooling efficiency.
- Manifold compatibility: aligns fluid connectors with three-inch square stainless-steel tubing for optimized coolant distribution.
- Hybrid designs: combine high-speed data transfer and liquid cooling channels for AI systems.
- Rugged designs: ensure durability and prevent leaks in challenging conditions, such as fluctuating temperatures, abrupt pressure drops, and strong vibrations.
Many companies, such as CPC (Colder Products Company), Koolance, Parker Hannifin, Danfoss Power Solutions, and CEJN, offer liquid cooling connectors for high-performance AI workloads in the data center. These manufacturers provide quick disconnect fittings, couplings, and other components designed to support efficient thermal management.
Evolving industry standards for liquid-cooling connectors
Industry organizations like the Open Compute Project (OCP) are developing open standards for liquid cooling connectors in data centers. The evolving OCP Large Quick Connector Specification outlines a universal quick connect, with standardized interface dimensions and performance requirements.
These include a working pressure of 35 psi at 60°C, a maximum operating pressure of 175 psi (12 bar), a flow rate of over 100 liters per minute (LPM), and ergonomic designs limiting mating torque to less than 5 Nm. Connectors must also handle temperatures from -4°F to 140°F (-20°C to 60°C), with shipping ranges of -40°F to 158°F (-40°C to 70°C). Additional criteria specify fluid loss under 0.15 mL per disconnect and a service life of at least 10 years of continuous use.
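One practical way to use these figures is as a simple screening check against a vendor datasheet. The sketch below encodes the limits quoted above and flags values that miss them; the datasheet fields and candidate values are hypothetical and are not defined by the OCP specification itself.

```python
# Minimal sketch of a screening check against the OCP Large Quick Connector
# figures quoted above. The datasheet dictionary, its field names, and the
# candidate values are hypothetical and are not defined by the OCP spec.

OCP_LIMITS = {
    "max_operating_pressure_psi": 175,   # 12 bar
    "min_flow_lpm": 100,
    "max_mating_torque_nm": 5,
    "operating_temp_c": (-20, 60),
    "max_fluid_loss_ml": 0.15,           # per disconnect
    "min_service_life_years": 10,
}

def check_connector(ds: dict) -> list[str]:
    """Return human-readable findings for datasheet values that miss the spec."""
    findings = []
    if ds["max_operating_pressure_psi"] < OCP_LIMITS["max_operating_pressure_psi"]:
        findings.append("maximum operating pressure below 175 psi")
    if ds["rated_flow_lpm"] < OCP_LIMITS["min_flow_lpm"]:
        findings.append("rated flow below 100 LPM")
    if ds["mating_torque_nm"] >= OCP_LIMITS["max_mating_torque_nm"]:
        findings.append("mating torque not under 5 Nm")
    lo, hi = OCP_LIMITS["operating_temp_c"]
    if ds["operating_temp_c"][0] > lo or ds["operating_temp_c"][1] < hi:
        findings.append("operating range narrower than -20 to 60 degrees C")
    if ds["fluid_loss_ml"] > OCP_LIMITS["max_fluid_loss_ml"]:
        findings.append("fluid loss per disconnect above 0.15 mL")
    if ds["service_life_years"] < OCP_LIMITS["min_service_life_years"]:
        findings.append("service life under 10 years")
    return findings

# Hypothetical connector datasheet, for illustration only.
candidate = {
    "max_operating_pressure_psi": 180,
    "rated_flow_lpm": 110,
    "mating_torque_nm": 4.2,
    "operating_temp_c": (-25, 65),
    "fluid_loss_ml": 0.10,
    "service_life_years": 10,
}

issues = check_connector(candidate)
print("PASS" if not issues else "FAIL: " + "; ".join(issues))
```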
Summary
High-performance AI accelerators increasingly require efficient cooling to maintain safe, optimal thermal levels in data centers. Liquid cooling systems, which circulate dielectric fluids or water-based solutions near or directly on GPUs and TPUs, provide superior thermal conductivity and heat transfer capacity compared to traditional air cooling, fan systems, or passive heat sinks. Liquid cooling connectors, designed for demanding environments, must withstand temperatures up to 50°C (122°F), handle flow rates up to 13 LPM, and maintain pressure drops of around 0.25 psi.
Related EE World Online content
How Are High-Speed Board-to-Board Connectors Used in ML and AI Systems?
Driving Standards: The Latest Liquid-Cooling Cables and Connectors for Data Centers
Where Are Liquid Cooled Connectors and Connectors for Liquid Cooling Used in EVs?
Where Are Liquid-Cooled Industrial Connectors Used?
Liquid Cooling For High-Performance Thermal Management
References
The Basics of Liquid Cooling in AI Data Centers, FlexPower Modules
High-Power Liquid Cooling Design: Direct-to-Chip Solution Requirements for 500-kW Racks, ChillDyne
Six Things to Consider When Introducing Liquid Cooling Into Your Data Center, Data Center Dynamics
Harnessing Liquid Cooling in AI Data Centers, Power Electronics News
How AI Is Fueling a Boom in Data Centers and Energy Demand, Time
Cooling the AI Revolution in Data Centers, DataCenterFrontier
Data Center Cooling: The Unexpected Challenge to AI, Spectra
Supporting AI Workloads: The Future of Data Center Cooling, DataCenterPost
The Advantages of Liquid Cooling, Data Center Frontier
Answering the Top FAQs on AI and Liquid Cooling, Schneider Electric Blog
Large Quick Connector Specification, Open Compute Project
How Immersion Cooling Helps Reduce Operational Costs in Data Centers, GRC