The current 10 Gbps-per-lane Extoll IO interface uses high-speed interconnects based on Samtec's HD16 connectors and cable assemblies. This 12-lane connector system was chosen because it is smaller and lighter than the 12-lane CXP InfiniBand connector and increases port density on the front plate of a network switch. This IO interface has been used for connecting high-performance computing systems.
At the time of its first-generation ASIC in 2003, it offered very low latency, about a 3 µs ping between nodes. Its power consumption is efficient at 2 W at 3.3 V. Current chip latency is
When it was first launched back in 2003, Extoll IO signals worked well over copper twin-axial cable assembly links of about 2 m. These passive copper cables were used mostly within a rack's cable routing channel, so the developers chose bare twin-axial elements with no outer cable jacket or shielding; just an occasional thin tape wrap holds the bundle of elements together. The HD16 design has no latches or other fasteners. The plug PCB has fairly good retention in the mated edge-card receptacle, but these cables need to be supported with cable-management components to stay connected. The large orange pull tabs are easy to see and use compared with many designs found on various SFP, QSFP and CXP plugs. The twin-axial wires are soldered to the plug paddleboard PCB. It appears that the twin-axial elements exiting the plug shell may have trouble passing some Telcordia axial pull-test requirements. The large orange dust covers are easy to use, but in a big installation there would be buckets of plastic dust covers to recycle. One wonders whether they are, or could be, sent back to Samtec for reuse.
For their AOC design, an improved metal-shielded receptacle cage, connector and shell were developed to handle the heat generated by the active optical-engine chip embedded within the cable plug. The very low latency HyperTransport IO network interface has also used the Samtec HD16 connector within its HT 3.1 specification.
Extoll's AOCs use 24 OM3 MMF fibers for 12 lanes at 10 Gbps per lane, a 120 Gbps link. They also use 850 nm VCSELs to drive photons through glass fiber over reaches up to 100 m within datacenters. These cables have good outer jackets for installation in racks and building infrastructure.
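The lane and fiber arithmetic above can be checked with a quick sketch (the figures are taken from this article; the variable names are my own):

```python
# Bandwidth and fiber-count arithmetic for the Extoll AOC described above.
# All values come from the article; nothing here is measured.

LANES = 12              # HD16 connector lanes
GBPS_PER_LANE = 10      # signaling rate per lane
FIBERS_PER_LANE = 2     # one OM3 MMF fiber for TX, one for RX

link_gbps = LANES * GBPS_PER_LANE        # aggregate link rate
fiber_count = LANES * FIBERS_PER_LANE    # fibers in the cable

print(link_gbps, fiber_count)  # 120 24
```

The 24-fiber count follows directly from needing a separate transmit and receive fiber per lane, which is exactly what makes the MMF bundle bulky next to a four-fiber SMF design.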
AOC cables are much lighter than 24-pair copper twin-axial cabling, but 24-fiber MMF cables are considered heavy and large compared with the SMF used with CWDM signaling, silicon photonics optical engines and two- or four-fiber optical cabling. Will a possible next-generation Extoll AOC be based on silicon photonics optical engines that would use much lighter four-fiber SMF cables? Copper cables and connector shells have been a major weight factor when calculating datacenter floor thickness and the cost of the floor. Older datacenter floors have cracked under the weight of new equipment, including cabling.
After reading its website announcement and other sources, it appears Extoll GmbH is poised to announce its next-generation interface this coming July 19-23 at ISC in Frankfurt. Will they announce a new, higher-speed generation of the Extoll IO interface and interconnect, perhaps at 28 Gbps NRZ or even 50 Gbps PAM-4 per lane? Would a new-generation interface use only AOCs, or some active copper cables as well?
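For readers comparing those two signaling options, the per-lane arithmetic is simple (a minimal sketch; the function name is mine, and the baud rates are the standard ones implied by each scheme):

```python
# Per-lane line rate = symbol rate (GBd) x bits encoded per symbol.
# NRZ carries 1 bit per symbol; PAM-4 uses 4 amplitude levels, so 2 bits.

def lane_rate_gbps(baud_gbd: float, bits_per_symbol: int) -> float:
    """Line rate in Gbps from symbol rate and modulation depth."""
    return baud_gbd * bits_per_symbol

nrz_28 = lane_rate_gbps(28, 1)    # 28 Gbps NRZ at 28 GBd
pam4_50 = lane_rate_gbps(25, 2)   # 50 Gbps PAM-4 at only 25 GBd

print(nrz_28, pam4_50)  # 28 50
```

The point of PAM-4 is visible here: it nearly doubles the lane rate without doubling the symbol rate the connector and channel must carry.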
Extoll NICs and cables have seen a modest volume of installations and customers but have been used in some very large HPC datacenters. How well would a newer-generation Extoll interface compete against even newer interfaces such as Omni-Path, BIX-3 and next-generation Connect-X IOs?
Will an improved, higher-speed design of the HD16 connector support the next higher-speed Extoll interface? Will it compete against the smaller, nascent 12-lane microCXP connector that might be based on 4-lane microQSFP connector technology? Perhaps a new double-stacked 6/6-lane version could be developed, like the new 8-lane QSFP-DD system?
I would call it HSFP-DD, for Hexa Small Form-factor Pluggable Double Density. Anybody want to talk about that?