
By Ed Cady, Contributing Editor
QSFP-DD (double density) is an eight-lane interconnect: 8 lanes x 25 Gbps with NRZ modulation yields 200 Gbps, and 8 lanes x 50 Gbps with PAM4 modulation yields 400 Gbps. It is based on the widely adopted QSFP interconnect system used especially in datacenters and HPC centers. The effort has evolved from a birds-of-a-feather coalition into the new QSFP-DD MSA consortium, which has announced its intent to produce and release a fairly detailed specification.
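The aggregate numbers follow directly from lane count, symbol rate, and bits per symbol. The snippet below is only a sketch of that arithmetic, using the figures quoted above; nothing in it comes from the MSA draft itself.

```python
# Illustrative arithmetic only: aggregate QSFP-DD throughput per configuration.
def aggregate_gbps(lanes: int, symbol_rate_gbaud: float, bits_per_symbol: int) -> float:
    """Aggregate data rate = lanes x symbol rate x bits carried per symbol."""
    return lanes * symbol_rate_gbaud * bits_per_symbol

# 8 lanes x 25 GBd x 1 bit/symbol (NRZ)  -> 200 Gbps
print(aggregate_gbps(8, 25, 1))   # 200.0
# 8 lanes x 25 GBd x 2 bits/symbol (PAM4) -> 400 Gbps, i.e. 50 Gbps per lane
print(aggregate_gbps(8, 25, 2))   # 400.0
```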
Once the consortium releases this specification, further work will likely involve the SFF committee and the I/O interface organizations' PHY-layer standards subcommittees, such as IEEE 802.3bs Ethernet and InfiniBand HDR. Key developers include OEMs and ODMs that are active, contributing members of several of these technical and marketing committees. I expect various consortium plug-fest testing events and live conference demos within a year from now.

The QSFP-DD hot-pluggable system includes new modules, a PCB connector, a cage, a heatsink system, and passive and active optical and copper cable assemblies. The www.qsfp-dd.org site shows partially detailed concept images, including two staggered-length cable plug ends going individually into an integrated, double-stacked, single-edge receptacle with an integrated double-stack metal shield cage. The two cable plugs also appear to be keyed differently from each other, so two individual cable assembly types (or a two-legged assembly approach) may be necessary for some copper cable types because of bend-radius and routability issues. So will there be two different active E/O modules going into the top and bottom interface ports? And how would two different modules work together without adding intra-link skew?

The primary QSFP-DD application seems to be longer reaches, so optical modules with passive optical cables will likely account for most of the volume at first. Newer, larger datacenters with many medium-length reaches, however, will be served by active optical cable assemblies for EoR-to-ToR applications. Expect to see ToR switches connected to many leaf servers within a rack using hydra, multi-legged active optical cable assemblies.
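On the skew question above: if the two legs of one eight-lane link differ in routed length, the lanes they carry arrive at different times. The sketch below is back-of-the-envelope arithmetic only; the length stagger and velocity factors are assumed values, not figures from the MSA.

```python
# Rough lane-to-lane skew from a length mismatch between two cable legs.
# Velocity factors are typical assumed values, not vendor specifications.
C = 0.2998  # speed of light, m/ns

def skew_ns(length_delta_m: float, velocity_factor: float) -> float:
    """Skew = extra physical length / signal propagation speed in the cable."""
    return length_delta_m / (velocity_factor * C)

# e.g. a 0.1 m stagger between copper legs (velocity factor ~0.70 assumed)
print(round(skew_ns(0.1, 0.70), 3), "ns")  # ~0.477 ns
# the same stagger in fiber (group index ~1.47, so velocity factor ~0.68)
print(round(skew_ns(0.1, 0.68), 3), "ns")  # ~0.491 ns
```

Whether a fraction of a nanosecond matters depends on how much lane-to-lane skew the silicon at each end of the link can absorb.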

Passive and active copper cables may still be needed for system failover links or short-reach inter-switch aggregation links, but such cables will need 16 twin-axial cable elements, resulting in a large-diameter assembly that will be difficult to install and route within a rack.
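To make that bulk concrete, here is a crude bundle-diameter estimate; the per-element diameter and packing factor are my own assumptions for illustration, not measurements of any shipping cable.

```python
# Crude bundle-diameter estimate for a 16-element twin-ax cable (assumptions only).
import math

def bundle_diameter_mm(elements: int, element_dia_mm: float, packing: float = 0.75) -> float:
    """Approximate bundle diameter for round elements packed at a given areal density."""
    total_area = elements * math.pi * (element_dia_mm / 2) ** 2
    return 2 * math.sqrt(total_area / (packing * math.pi))

# 16 elements at ~2.5 mm each (a guess for 30 AWG twin-ax), before jacket and shield
print(round(bundle_diameter_mm(16, 2.5), 1), "mm")  # ~11.5 mm
```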
Will the QSFP-DD receptacle connector have a contact pitch like that of the microQSFP, QSFP28, or QSFP56 connectors? In other words, will it effectively be a double-stacked microQSFP, or a double-stacked QSFP28 or QSFP56? Right now you could mount one QSFP56 receptacle on top of the PCB and mirror-mount another on the bottom to create an equivalent 8-lane link solution, though with a different thermal profile.
Another competing interconnect solution is the new but larger CFP8 400 Gbps system, which targets mostly very-long-distance applications that require higher power consumption and more thermal management. The CFP8 will likely cost much more than the developing QSFP-DD, so each will serve its own market segments and applications.
Will there be a QSFP-DD 56 Gbps receptacle connector option with back-to-back edge contacts that allows a ribbon twin-axial cable or FinRail PCB assembly to plug into the internal edge, providing a straight-through transmission path with a better dielectric for optimized signal integrity? The other end of the internal ribbon cable or FinRail PCB assembly would connect very close to the switch chip on the blade board, or directly onto a switch chip module or the chip itself.
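The signal-integrity appeal of such a straight-through path is easy to see with rough channel-loss arithmetic. The per-inch loss and connector figures below are assumed ballpark values at roughly 12.9 GHz (the Nyquist frequency of a 25.78 GBd lane), not measured data for any specific laminate or cable.

```python
# Hedged comparison: conventional PCB route vs. a near-chip cable "flyover" path.
def channel_loss_db(length_in: float, loss_per_inch_db: float, connectors_db: float = 0.0) -> float:
    """Total channel loss = per-inch loss x length, plus connector penalties."""
    return length_in * loss_per_inch_db + connectors_db

# ~10 inches of mid-grade laminate at ~0.9 dB/in plus one ~1 dB connector
print(channel_loss_db(10, 0.9, 1.0))   # 10.0 dB
# the same reach over twin-ax ribbon at ~0.15 dB/in with two ~1 dB connectors
print(channel_loss_db(10, 0.15, 2.0))  # 3.5 dB
```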
QSFP-DD evangelists are promoting their solution for Ethernet 400GBASE-CR8, VSR8, SR8, and LR8 applications as well as the InfiniBand HDR and RapidIO 50G standards. Expect newer, higher-end interfaces such as Mellanox's ConnectX and Intel's True Scale/Omni-Path interconnect solutions to include QSFP-DD interconnects.