While requirements for power supplies have been steadily increasing, the need to address space and heat limitations is taking on greater importance for modern connector design.
Contributed by Ken Stead, director, power products, Molex
Today, across the world, nearly 90 million internet transactions are conducted per minute, or roughly 1.5 million each second. All of these emails, app downloads, video streams, social media interactions, retail purchases and more are processed through a network of worldwide data centers. These data centers contain upwards of 10,000 servers supported by a network of switches, routers and cooling equipment, all of which rely on an increasing amount of electricity.

With U.S. data center power consumption projected to double every five years, electricity use is drawing more and more attention from data center owners footing the bill, the electric utilities that must provide power on demand, and government officials concerned with the wide-ranging effects of the massive power generation required.
Power is delivered to data centers via the same grid that serves homes and businesses; however, while U.S. homes generally receive power at 120/240 V, data centers must take service at thousands of volts to accommodate the massive amount of power needed to run the processors at the heart of the computing that drives the internet.
Conversion and distribution
Data centers use a measure called Power Usage Effectiveness, or PUE, to evaluate the efficiency of the power architecture. PUE is the total power delivered to the data center divided by the power delivered to the critical load (the servers); the ideal PUE is 1.0. For example, a PUE of 1.7 means that for every watt delivered to the load, 0.7 W is lost in power distribution and cooling. In 2018, reported PUE levels for data centers measured around 1.6.
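As a quick illustration of that arithmetic, the short Python sketch below computes PUE and the per-watt overhead. The facility and IT-load figures are assumed values chosen to reproduce a 1.7 ratio, not measurements from any particular data center.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by power delivered to the IT load."""
    return total_facility_kw / it_load_kw

# Illustrative figures only: a facility drawing 1,700 kW to serve a 1,000 kW critical load.
total_kw, it_kw = 1700.0, 1000.0
ratio = pue(total_kw, it_kw)
overhead_per_watt = ratio - 1.0  # watts lost to distribution and cooling per watt at the load

print(f"PUE = {ratio:.2f}")                                     # PUE = 1.70
print(f"Overhead = {overhead_per_watt:.2f} W per W delivered")  # Overhead = 0.70 W per W delivered
```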
One of the most critical conversions occurs at the rack itself. Thousands of servers are required to deliver the necessary compute power. In addition to servers, there are switches that manage communication both among the servers and between the servers and the outside world.
Bulk power is delivered to racks containing 30 to 35 1U servers that are increasingly being powered by 3 kW power supply units (PSUs). These PSUs, typically located at the bottom of the rack, convert power to rails at various voltage levels. Power that enters the PSU at 208 Vac is converted to 3.3-, 5- and 12-V rails to meet the needs of different components inside the servers and switches, such as motherboards with processors, adapters and video cards, PCIe and memory.
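The sketch below illustrates the basic relationship at work in that conversion, I = P / V, for the rail voltages mentioned above; the per-rail power split is an assumed example, not a figure from any specific PSU.

```python
# Illustrative sketch: current drawn on each PSU output rail for an assumed load split.
# The rail voltages come from the text above; the per-rail wattage is a made-up example.
rail_loads_w = {3.3: 150.0, 5.0: 250.0, 12.0: 2200.0}  # watts assumed on each rail

for voltage, power_w in rail_loads_w.items():
    current_a = power_w / voltage  # I = P / V
    print(f"{voltage:>4} V rail: {power_w:6.0f} W -> {current_a:6.1f} A")
```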
In addition, racks contain large numbers of fans required to provide cooling airflow. Much of the energy that is delivered to the server is converted into heat. This heat loss occurs as a natural part of the conversion process as power is converted from ac to dc and dc to dc.
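A minimal sketch of how those conversion losses add up, assuming illustrative ac-dc and dc-dc efficiency figures rather than measured values:

```python
# Rough sketch of conversion losses becoming heat inside the rack; the efficiency
# figures below are assumptions, not measured values.
input_power_w = 3000.0     # one 3 kW PSU, as described above
ac_dc_efficiency = 0.94    # assumed front-end ac-dc conversion efficiency
dc_dc_efficiency = 0.90    # assumed downstream dc-dc (rail) conversion efficiency

power_at_load_w = input_power_w * ac_dc_efficiency * dc_dc_efficiency
heat_w = input_power_w - power_at_load_w

print(f"Delivered to the load: {power_at_load_w:.0f} W; dissipated as heat: {heat_w:.0f} W")
```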
The challenge of space
Managing this increasing amount of power has created significant challenges in packaging space and thermal management. While requirements for power supplies have been steadily increasing, the space allotted for both the power supply and the critical connector behind it has not changed. In the early days of server development, server system infrastructure (SSI) requirements called for 400 to 600 W power supplies, and the power I/O used four to six power blades rated at 30 A per blade to deliver the required power to the servers. Today, connector companies are being asked for power I/Os that carry triple the current in the same space.
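The back-of-the-envelope sketch below works through that scaling using the figures quoted above; the 75 A per-blade value is simply the midpoint of the 70 to 80 A benchmark discussed in the next paragraph.

```python
# Back-of-the-envelope scaling of the power I/O figures quoted above; the 75 A
# per-blade rating is an assumed midpoint of a 70 to 80 A benchmark.
legacy_blades, legacy_amps_per_blade = 6, 30.0
legacy_total_a = legacy_blades * legacy_amps_per_blade   # 180 A of rated capacity

tripled_total_a = 3 * legacy_total_a                     # "triple the current in the same space"
modern_amps_per_blade = 75.0
blades_needed = tripled_total_a / modern_amps_per_blade

print(f"Legacy I/O rating: {legacy_total_a:.0f} A; tripled demand: {tripled_total_a:.0f} A")
print(f"At {modern_amps_per_blade:.0f} A per blade, roughly {blades_needed:.1f} blades cover that demand")
```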
A benchmark specification might call for six to eight power blades capable of handling 70 to 80 A per blade while generating no more than a 30°C temperature rise (or T-rise). When rating these power connectors, measuring the current is straightforward; measuring T-rise is more complicated. Issues such as the location of thermocouples within the connector can affect the temperature measurement. Design choices in the PSU PCB, including the number of copper layers, layer thickness and footprint design, can also contribute to temperature rise. During thermal evaluations, heat is often observed transferring from the PCB to the connector, which leads to discussions of proper thermal balancing, since the connector supplier would rather not have the connector act as a heat sink.
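The sketch below shows why the T-rise limit becomes harder to meet as per-blade current climbs: contact heating scales with the square of the current. The 0.2 mΩ contact resistance is an assumed, illustrative value, not a Molex specification.

```python
# Simple I^2 * R estimate of the heat generated in a single power blade contact;
# the 0.2 milliohm contact resistance is an assumed, illustrative value.
contact_resistance_ohm = 0.2e-3

for current_a in (30.0, 70.0, 80.0):
    dissipation_w = current_a ** 2 * contact_resistance_ohm
    print(f"{current_a:4.0f} A -> {dissipation_w:.2f} W dissipated in the contact")
```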
Gaining density
Connector designers are now being forced to come up with creative solutions to manage heat and current. While airflow cannot be factored into the rating of a connector, venting is now often designed into the housings to allow heat to escape and prevent overheating.
Basic physics tells us that to carry more current, you simply need more copper. Advances are being made in copper alloys to increase conductivity, but these advances will not keep up with the demand for higher current densities. Likewise, improvements in contact design can reduce the power loss typically found at the interface between the PSU and the connection point, whether that is the mating half of the interconnect or sometimes a PCB card edge, but these improvements cannot be relied upon to provide significant gains in current density.
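The relationship behind that statement is R = ρL/A, with conduction loss of I²R; the sketch below uses assumed blade dimensions purely for illustration.

```python
# Sketch of why more current calls for more copper: R = rho * L / A, loss = I^2 * R.
# The blade dimensions below are assumed for illustration only.
RHO_COPPER = 1.68e-8                  # ohm-m, resistivity of copper at room temperature
length_m = 0.02                       # assumed 20 mm current path through a blade
cross_section_m2 = 0.8e-3 * 3.0e-3    # assumed 0.8 mm x 3.0 mm blade cross-section

resistance_ohm = RHO_COPPER * length_m / cross_section_m2

for current_a in (30.0, 80.0):
    loss_w = current_a ** 2 * resistance_ohm
    print(f"{current_a:4.0f} A -> {resistance_ohm * 1e3:.3f} mOhm path, {loss_w:.2f} W of conduction loss")
```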
Customers are now asking connector designers to decrease the centerline spacing between the power contacts; however, decreasing that spacing causes mutual heating issues both at the PCB footprint and within the connector itself.
For the past 40 years, connector development has centered around higher densities. However, the industry is approaching the point at which it must consider adding more space for more power or examining the conventions used to evaluate and rate connector performance. Just a 1% improvement in data center electrical efficiency is believed to result in millions of dollars in savings. With potential savings that significant, active and lively discussions will surely continue for some time between data center owners, electrical utility providers and government officials.
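To put a rough, purely illustrative number behind that savings claim, the sketch below assumes a 500 MW aggregate fleet and an $0.08/kWh blended electricity rate; neither figure comes from any specific operator.

```python
# Rough, illustrative arithmetic behind the savings claim; the fleet size and
# electricity price are assumptions, not reported figures.
fleet_draw_mw = 500.0       # assumed aggregate draw for a large data center fleet
price_per_kwh = 0.08        # assumed blended electricity rate, $/kWh
hours_per_year = 8760

annual_cost = fleet_draw_mw * 1000 * hours_per_year * price_per_kwh
savings_from_1pct = 0.01 * annual_cost

print(f"Annual energy bill: ${annual_cost / 1e6:.0f}M; a 1% improvement saves ${savings_from_1pct / 1e6:.1f}M")
```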
Molex Inc.
www.molex.com