By Ed Cady, contributing editor
MXC connectors are a new type of fiber-optic connector and cabling system, initially optimized for 850 nm VCSELs, that is being deployed at a growing rate in 21st-century data centers. The MXC connector brand was developed by US Conec, which supplies the components to interconnect suppliers that produce the cable assemblies.
While the MPO format remains the high-density, cable-to-cable connector of choice for structured cabling applications requiring low insertion loss, MXC connectors are targeted at emerging point-to-point links with embedded optics. Applications include cloud data centers, high-performance computing, and some enterprise data centers. The telecom market segment, which has relied primarily on the MPO fiber-optic interconnect for embedded optical architectures for two decades, is now turning toward the MXC format for switch and routing fabrics. Data-center consortia such as the Open Compute Project have also embraced the MXC interconnection system.
This success can be attributed to the connector's much smaller size, wider range of fiber-count options, elimination of fiber polishing, fewer component parts, lower cost, lighter weight, higher performance through expanded-beam lensed ferrules, and lower sensitivity to debris. Successful connector designs usually attract more financial and human resources for further manufacturing improvements and product-family development, so expect this solution set to continue to grow.
At the heart of the MXC connector format is the PRIZM MT expanded-beam ferrule technology. During 2016, we will likely see this ferrule technology employed in both metal and plastic, as well as in over-molded connectors used in industrial, factory, and building automation network systems. Eventually, ruggedized MXC connectors will be used in some military network systems, and perhaps even in aerospace applications.
Applications include switch-to-server top-of-rack (TOR), middle-of-row (MOR), and end-of-row (EOR) fan-out fiber-optic cables with up to 64 fibers per connector. Point-to-point uplink cables are among the more prominent outside-the-box applications. It remains to be seen whether the MXC connector format will be employed with standardized pluggable Tx/Rx interfaces. Candidate multi-fiber formats include the released IEEE 802.3bm 4×25 Gbps = 100 G Ethernet, the developing IEEE 802.3bs 8×50 Gbps = 400 G Ethernet, and the developing InfiniBand 12×50 Gbps = 600 G HDR specifications. Additionally, potential 4×50 Gbps = 200 G and 4×100 Gbps = 400 G standard iterations may emerge for some developing applications.
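As a quick sanity check on those lane-rate figures, aggregate interface capacity is simply lane count times per-lane signaling rate. Here is a minimal Python sketch; the entries mirror the standards named above and are illustrative only, not a normative reference:

```python
# Aggregate bandwidth = lane count x per-lane rate (Gbps).
# Entries mirror the standards discussed above; illustrative only.
INTERFACES = {
    "100 G Ethernet (IEEE 802.3bm)": (4, 25),
    "400 G Ethernet (IEEE 802.3bs)": (8, 50),
    "600 G InfiniBand HDR (12x)":    (12, 50),
    "200 G (potential)":             (4, 50),
    "400 G (potential)":             (4, 100),
}

for name, (lanes, rate_gbps) in INTERFACES.items():
    print(f"{name}: {lanes} x {rate_gbps} Gbps = {lanes * rate_gbps} Gbps")
```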
Could two- and four-fiber MXC connectors and cables be developed for consumer applications, such as a potential, perhaps nascent, USB 3.2 "Type D" running 2×20 Gbps = 40 Gbps? Time will tell.
There are many inside-the-box applications as well: MXC internal fiber-optic ribbon cables interconnect mid-board active optical modules within the chassis, through the backplane or midplane.
Other success factors include application-optimized product features, favorable competitive comparisons, and higher-fiber-count versions that perform well with the new expanded-beam lens technology in the PRIZM MT ferrule while maintaining fiber/light alignment precision. Aqua, green, and black hoods or strain reliefs are available per transmission-standard requirements, such as aqua for 10 G multimode fiber. For high energy efficiency and lowest operating cost, newer data centers use all-white interior walls, ceilings, and floors to maximize lighting efficiency. These data centers also require white active equipment and external cabling to help reflect the available light, so we may see white MXC cabling become available this year.
Using 64 fibers, each carrying one wavelength at 25 Gbps, a single MXC connector assembly can support a 1.6 Tbps link. Emerging silicon photonic optical engines run at 50 Gbps per fiber and could enable 3.2 Tbps links. The newest engines in development run at 100 Gbps per fiber, so there is potential for future 6.4 Tbps links in next-generation exascale data centers.
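The same arithmetic scales to the full 64-fiber assembly. A minimal sketch, assuming one wavelength per fiber at the three per-fiber rates mentioned above:

```python
# Capacity of a fully populated 64-fiber MXC assembly at the three
# per-fiber rates discussed above (one wavelength per fiber assumed).
FIBERS = 64

for rate_gbps in (25, 50, 100):
    tbps = FIBERS * rate_gbps / 1000  # convert Gbps to Tbps
    print(f"{FIBERS} fibers x {rate_gbps} Gbps = {tbps:.1f} Tbps")
```

Running it reproduces the 1.6, 3.2, and 6.4 Tbps figures cited above.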