InfiniBand is a very high-speed server fabric I/O network with several lane-count and connector/cabling options. Current high-volume HPC and data-center installations primarily use EDR interconnects at 25.7 Gb/s per lane, most often as a 4-lane cable link between a ToR switch and a rack leaf server using the QSFP28 connector system. Although the InfiniBand Trade Association's roadmap still lists 1- and 12-lane options, those lane counts have seen little use, especially in the current product speed generations.
Back in the 2.5 Gb/s-per-lane era, years ago, the HSSDC-2 connector was used for a few single-lane link applications. The 12-lane cable assembly option has been supported using the 2-row CXP connector on each end, providing ToR-to-EoR or EoR-to-Core, switch-to-switch FatPipe or MultiLink trunk cables. There have also been moderate-volume applications for 1 CXP to 3 QSFP and 1 CXP to 12 SFP breakout cable assemblies in higher-density switch-port-to-leaf-server applications.
As work on the next-generation InfiniBand HDR specification at 50 Gb/s per lane begins in earnest, and the NDR 100 Gb/s-per-lane generation is being planned and partially developed, new connector and cabling topologies are needed that introduce new lane counts while continuing to use the 4-lane link as well. The new link widths at 50 Gb/s per lane are 8-lane and 2-lane.
Usage of 10-lane Ethernet and 12-lane InfiniBand links has diminished; at today's very high speed rates it is even more difficult to coordinate electrical signal-integrity performance across so many differential pairs. The lesser 8-lane option, however, is of interest to InfiniBand users as the new IEEE 802.3bs 400G Ethernet standard (8 lanes x 50G = 400G) becomes a popular Core-to-EoR and EoR-to-ToR switch FatPipe link. The very new 8-lane QSFP-DD pluggable optical module is becoming very popular for reaches over 10 m inside the data center. Because many data-center active equipment systems run both InfiniBand and Ethernet networks, using the standard QSFP-DD interconnects is very economical, with only memory-map differences inside the plug.
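To make the lane-count arithmetic concrete, here is a minimal Python sketch; the helper name is my own and the per-lane rates simply mirror the figures discussed above, so treat it as an illustration rather than anything taken from a specification.

```python
# Aggregate link bandwidth is just lane count x per-lane rate.
# Rates below are nominal per-lane signaling rates; effective data
# rates are slightly lower once line encoding is accounted for.

def link_bandwidth_gbps(lanes: int, per_lane_gbps: float) -> float:
    """Return the aggregate one-direction bandwidth of a multi-lane link."""
    return lanes * per_lane_gbps

examples = {
    "InfiniBand EDR, 4 lanes (QSFP28)": link_bandwidth_gbps(4, 25.78125),  # ~100G data rate
    "InfiniBand HDR, 2 lanes":          link_bandwidth_gbps(2, 50.0),      # 100G
    "InfiniBand HDR, 4 lanes (QSFP56)": link_bandwidth_gbps(4, 50.0),      # 200G
    "400G Ethernet, 8 lanes (QSFP-DD)": link_bandwidth_gbps(8, 50.0),      # 400G
}

for name, gbps in examples.items():
    print(f"{name}: {gbps:.0f} Gb/s")
```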
Rather than defining a new connector for the new 2-lane InfiniBand interface, pragmatic developers are using either the QSFP56 or the µQSFP56 in half-populated configurations. In this breakout-style, two-leg cable assembly, one QSFP56 plug connector has all four lanes populated, handling a total of 200 Gbps of bi-directional bandwidth, while each of the two separate cable legs carries two lanes handling 100 Gbps, usually terminated with the smaller µQSFP56 plug connector. These are two separate links that can be of different lengths, whether passive copper or active optical cable assemblies. It seems that the next InfiniBand NDR specification at 100 Gbps per lane will also use this 2-lane breakout cabling option.
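As a rough illustration of that breakout topology, the sketch below models a QSFP56 head connector whose four lanes are split across two 2-lane legs, so each leg carries half of the 200 Gbps aggregate. The class names and structure are illustrative assumptions of mine, not taken from any connector MSA or InfiniBand document.

```python
from dataclasses import dataclass
from typing import List

PER_LANE_GBPS = 50.0  # HDR-class per-lane rate used in this example


@dataclass
class CableLeg:
    connector: str  # e.g. "uQSFP56", half populated
    lanes: int

    @property
    def bandwidth_gbps(self) -> float:
        return self.lanes * PER_LANE_GBPS


@dataclass
class BreakoutAssembly:
    head_connector: str  # e.g. "QSFP56", all four lanes populated
    legs: List[CableLeg]

    @property
    def head_bandwidth_gbps(self) -> float:
        # The head connector must carry the sum of the lanes on all legs.
        return sum(leg.lanes for leg in self.legs) * PER_LANE_GBPS


# One QSFP56 head (4 lanes, 200G) fanned out to two independent 2-lane legs (100G each).
assembly = BreakoutAssembly(
    head_connector="QSFP56",
    legs=[CableLeg("uQSFP56", 2), CableLeg("uQSFP56", 2)],
)

assert assembly.head_bandwidth_gbps == 200.0
assert [leg.bandwidth_gbps for leg in assembly.legs] == [100.0, 100.0]
print(assembly.head_bandwidth_gbps, [leg.bandwidth_gbps for leg in assembly.legs])
```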
The new Gen-Z interconnect architecture supports 1 to 256 lanes, far more than the older PCIe, which supports link widths of 1 to 32 lanes.
It is likely that Gen-Z external cable lane options will also largely mirror Ethernet and InfiniBand, with 4- and 8-lane options, and that the Gen-Z 50 Gbps speed rate will use the new 2-lane link option as well as the same connectors. Using newer lane-bonding technologies, Gen-Z's internal lane options for 3D memory modules and NVM storage applications will be much larger than PCIe currently supports.