How accelerator IO interface connectors and cables offer more options

January 3, 2022 By Ed Cady

Accelerator I/O connectors and cables provide secure electrical contact and transmission for high-speed data. They typically account for one of the largest unit percentages of data-center equipment in terms of use and assembly, particularly compared to top-of-rack (TOR) switching fabric network interface cables, such as Ethernet or InfiniBand.

TOR switching is a data-center architecture in which computing equipment (such as servers or switches) is placed within the same rack and connected to an in-rack network switch. This type of architecture typically places a network fiber switch in every rack so it can connect more easily with every other device in that rack.

TE’s QSFP-DD and double-stack QSFP-DD.

In hyperscale data-center systems, however, accelerator interfaces offer more options, running eight- to 16-lane cabled links. In comparison, Ethernet usually runs over a four-lane QSFP (quad small form-factor pluggable) cable, which has been the primary type of high-speed IO interconnect until now.

Accelerator devices are driving new possibilities, such as eight-lane QSFP-DD and OSFP (octal small form-factor pluggable) interconnects, as well as 16-lane double-stack QSFP-DD and OSFP-XD interconnects. These span the connectors and cables as well as the active Ethernet and active optical cable modules.

There are now more options. For instance, by using at least four cable legs and two QSFP-DD plugs in a single cable assembly, and then plugging it into a double-stack QSFP-DD receptacle, it's possible to build a 1.6T link as a 1m passive or 2m active DAC solution. Granted, using the OSFP-XD is still more practical because it offers better thermal performance, higher faceplate port density, and a lower channel operating budget.
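As a quick sanity check on that arithmetic, here is a minimal sketch, assuming roughly 100G of usable payload per 112G PAM4 lane (real links add FEC and encoding overhead not modeled here):

```python
# Rough aggregate-bandwidth arithmetic for common pluggable form factors.
# Assumes ~100G of usable payload per 112G PAM4 electrical lane; real
# links add FEC/encoding overhead not modeled here.

FORM_FACTORS = {
    "QSFP (quad)": 4,
    "QSFP-DD / OSFP (octal)": 8,
    "double-stack QSFP-DD / OSFP-XD": 16,
}

USABLE_PER_LANE_GBPS = 100  # assumed usable rate per lane

for name, lanes in FORM_FACTORS.items():
    total = lanes * USABLE_PER_LANE_GBPS
    print(f"{name}: {lanes} lanes x {USABLE_PER_LANE_GBPS}G = {total}G aggregate")

# 16 lanes x ~100G per lane is where the 1.6T link figure comes from.
```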

The OSFP & OSFP-XD Consortia module-cage connector.

Few full-line, high-speed interconnect suppliers can afford to test, simulate, source, and build the 112+G per-lane assemblies and components, due to the cost and limited supply chains.

GPU accelerators, such as Nvidia's NVLink 3.0, AMD's Instinct MI200, and Intel's Ponte Vecchio 2.0, appear to favor standard pluggable interconnects and internal 112G PAM4 per-lane types, with fewer proprietary connectors and cables than in earlier designs. They mostly use eight- and 16-lane implementations.

These systems consist of several 1U and 2U GPU boxes of accelerator blades and switches connected in a single rack. The racks are typically connected by external cables, while the boxes employ newer internal cables and connectors.

System accelerators, such as AMD's Infinity Fabric, IBM's BlueLink, Nvidia's BlueField, HPE's Slingshot, and the EU's EPI Rhea, typically use eight-, 16-, and 32+ lane links. Customized cabling solutions that support the ideal system fabric network do exist; one unique implementation of BlueLink included 96 lanes. But these IO interfaces tend to be eight lanes, using a pluggable series of interconnects.

Memory-storage accelerators, such as the latest versions of OpenCAPI, the Open Memory Interface, CXL, GenZ, and custom CCIX, use eight and 16 lanes internally, inside the box and inside the rack with external cables and connectors. Often, internal Twinax pluggable jump-over cables are used, connecting to faceplates and external pluggable connectors and cables. Internal-only applications can use the standard options or one of the newer proprietary 112G cables and connectors, such as Samtec's NovaStack.

The Open Compute Project (OCP) Accelerator Module development spec, which defines the form factor and common specifications for a compute accelerator module and a compliant board design, accepts the use of SFF-TA-1020 internal I/O connectors and cables. This is also true of the OCP NIC 3.0 specification, as seen in Intel's new Ice Lake CPU-based applications.

However, some applications will continue to rely on the SFF-TA-1016 interconnects, which are rated for 106 and 112G per-lane link budget requirements.

New 224G connectors and cables are also in development, as seen on Samtec's NovaRay roadmap.

It also seems that the new Smart Data Accelerator Interface (SDXI) memory-to-memory switching links use internal Twinax cables with SFF-1020 or SFF-1016 connectors (also called MCIO types).

Additionally, right-angle (R/A), vertical (VT), and orthogonal straddle-mount PCB paddleboard plug options are possible when using 0.6mm-pitch SFF-TA-1020 connectors.

Cable accelerators
An overall preference for accelerator links, especially in large hyperscale data centers, has led to increased use of active chips embedded in cable plugs. These chips use different technologies and can function as equalizers, re-timers, signal conditioners, and gearboxes.
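As a rough illustration of what one of these embedded chips does, here is a conceptual feed-forward equalizer sketch. The tap values and channel model are invented for illustration and bear no relation to any vendor's silicon:

```python
# Conceptual feed-forward equalizer (FFE): a short FIR filter that boosts
# high-frequency content attenuated by a lossy copper channel.
# Tap weights here are illustrative, not from any real chip.

def ffe(samples, taps):
    """Apply FIR tap weights to a sampled signal (one value per UI)."""
    out = []
    for i in range(len(samples)):
        acc = 0.0
        for k, tap in enumerate(taps):
            if i - k >= 0:
                acc += tap * samples[i - k]
        out.append(acc)
    return out

# A 3-tap FFE with a main cursor and pre/post cursors (illustrative values).
taps = [-0.15, 1.0, -0.25]

# A crude channel model: each symbol smears into the next (inter-symbol
# interference), which the equalizer partially undoes.
tx = [1, -1, 1, 1, -1, -1, 1, -1]
rx = [tx[0]] + [0.7 * tx[i] + 0.3 * tx[i - 1] for i in range(1, len(tx))]

print("received :", [round(v, 2) for v in rx])
print("equalized:", [round(v, 2) for v in ffe(rx, taps)])
```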

Originally, active DAC cables were supported by several IO interfaces with similar speed rates; now they're offered as 112G PAM4 per-lane active copper cables, or ACCs.

More recently, a 106G per-lane active Ethernet cable product launched, referred to as an AEC by its chip developer. This chip supports several Ethernet data rates. Another company also recently developed Smart Plug modules with a chipset for use in 112G PAM4 per-lane active electronic cables, also abbreviated AEC. The acronyms are the same, so be aware of the context.

A few other companies offer 106 or 112G options. Spectra7 has a new 112G PAM4 per-lane linear-equalizer, dual-channel chip, which supports top- or bottom-mounting and routing embedded in SFP, SFP-DD, DSFP, QSFP, QSFP-DD, and OSFP cable plugs. Several leading cable-assembly companies currently use the Spectra7 embedded chips.

Cameo offers 112G PAM4 re-timer and gearbox chips, embedded in its own active cable assembly product family.

Astera Labs has developed a new 106G PAM4 per-lane re-timer chip with advanced fleet-management and deep-diagnostic functions. This chip supports multiple Ethernet speed rates.

When choosing the ideal option, system engineers must consider the available power, cable-routing options, cooling costs, form factor, topologies, and overall cost for each chip and ACC type.
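One way to structure that trade-off is a simple weighted-scoring pass. The sketch below is hypothetical; the weights and per-option scores are placeholders rather than vendor data:

```python
# Hypothetical weighted scoring of cable options across the criteria named
# above. All weights and per-option scores are illustrative placeholders.

WEIGHTS = {"power": 0.25, "routing": 0.15, "cooling_cost": 0.20,
           "form_factor": 0.15, "topology_fit": 0.10, "total_cost": 0.15}

# Scores on a 1-10 scale (10 = best); invented for illustration only.
OPTIONS = {
    "passive DAC": {"power": 10, "routing": 4, "cooling_cost": 9,
                    "form_factor": 6, "topology_fit": 5, "total_cost": 9},
    "ACC/AEC":     {"power": 7, "routing": 7, "cooling_cost": 7,
                    "form_factor": 7, "topology_fit": 8, "total_cost": 6},
    "AOC":         {"power": 5, "routing": 9, "cooling_cost": 6,
                    "form_factor": 8, "topology_fit": 9, "total_cost": 4},
}

for name, scores in OPTIONS.items():
    total = sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)
    print(f"{name}: weighted score {total:.2f}")
```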

OEMs and end users typically dictate the use of a specific active chip and plug type, as well as the assembly configuration. Although typical cables can use 34 AWG Twinax wire conductors, a 16-lane cable uses 64 wires, which form a substantial bundle diameter.
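To see why the bundle grows so quickly, the back-of-the-envelope sketch below estimates bundle diameter from assumed pair dimensions (the pair OD and packing efficiency are illustrative, not from a datasheet):

```python
import math

# Rough bundle-diameter estimate for a 16-lane Twinax cable.
# Assumptions (illustrative): each 34 AWG Twinax pair has ~1.2 mm outside
# diameter including insulation and shield, and circular packing achieves
# ~80% area efficiency.

LANES = 16
PAIRS_PER_LANE = 2            # one TX pair + one RX pair -> 64 conductors
PAIR_OD_MM = 1.2              # assumed insulated/shielded pair OD
PACKING_EFFICIENCY = 0.8      # assumed close-packing efficiency

pairs = LANES * PAIRS_PER_LANE
pair_area = math.pi * (PAIR_OD_MM / 2) ** 2
bundle_area = pairs * pair_area / PACKING_EFFICIENCY
bundle_dia = 2 * math.sqrt(bundle_area / math.pi)

print(f"{pairs} pairs ({pairs * 2} conductors) -> "
      f"~{bundle_dia:.1f} mm bundle diameter")
```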

Final thoughts
Advanced 212 and 224G per-lane chips are now in development to support short-reach copper links. Features include minimal power consumption and greater functionality, although cost will be an important consideration.
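For context on those lane rates, PAM4 encodes two bits per symbol, so a 224G PAM4 lane runs at roughly 112 GBaud. A minimal sketch, ignoring FEC and encoding overhead:

```python
# Bit rate vs. symbol rate for NRZ and PAM4 signaling. PAM4 encodes
# 2 bits per symbol, halving the symbol rate needed for a given bit rate
# (FEC/encoding overheads ignored).

def symbol_rate_gbaud(bit_rate_gbps, bits_per_symbol):
    return bit_rate_gbps / bits_per_symbol

for rate in (112, 224):
    nrz = symbol_rate_gbaud(rate, 1)   # NRZ: 1 bit per symbol
    pam4 = symbol_rate_gbaud(rate, 2)  # PAM4: 2 bits per symbol
    print(f"{rate}G lane -> NRZ {nrz:.0f} GBaud, PAM4 {pam4:.0f} GBaud")
```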

It's also important to compare the CMIS (Common Management Interface Specification), COM (channel operating margin), IL (insertion loss), and BER (bit error rate) parameters and test data of the various accelerator connectors, cables, and PCBs. Interoperability between these devices will be key to supporting heterogeneous networks.
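A practical way to use that test data is a simple pass/fail screen against link-budget thresholds. In the sketch below, the threshold values and sample measurements are placeholders, not from any specification:

```python
# Illustrative pass/fail screen of interconnect test data against
# link-budget thresholds. All numbers are placeholders.

THRESHOLDS = {
    "COM_dB_min": 3.0,   # channel operating margin, higher is better
    "IL_dB_max": 28.0,   # end-to-end insertion loss at Nyquist
    "BER_max": 1e-6,     # pre-FEC bit error rate
}

SAMPLES = {
    "cable A": {"COM_dB": 3.4, "IL_dB": 26.1, "BER": 2e-7},
    "cable B": {"COM_dB": 2.1, "IL_dB": 29.5, "BER": 4e-5},
}

for name, d in SAMPLES.items():
    ok = (d["COM_dB"] >= THRESHOLDS["COM_dB_min"]
          and d["IL_dB"] <= THRESHOLDS["IL_dB_max"]
          and d["BER"] <= THRESHOLDS["BER_max"])
    print(f"{name}: {'PASS' if ok else 'FAIL'} {d}")
```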

Many 112G PAM4 and most 224G PAM4 accelerator applications will require embedded active copper chips that minimize the cable-bundle diameter. For small bundle diameters and longer lengths, eight-lane QSFP-DD and OSFP and 16-lane OSFP-XD AOCs are the better options, and likely required for achieving 224G per-lane link reaches.

The 32+ lane Twinax cables work for internal applications but not as external cables, as multiple jacketed and shielded copper fan-out cable legs quickly become cumbersome and costly. To date, the cost of competing AOCs is decreasing while that of AECs has remained steady.

Developers are wise to keep in mind that customers typically go for proven, value-added technology and performance matched with a reliable supply chain.