The rise of 400G is changing the data center landscape

Tony Campbell, Molex

Tony Campbell is a business development and marketing professional at Molex, focused on networking, data center, and optical communication solutions.

Higher Ethernet speeds, cloud computing, IoT, and virtual data centers have upped the ante for data center operators. Hyperscale data center operators are driving broader adoption of 100G links and module technologies. Concurrently, 400G form factors and optical modules are on the cusp of a full-scale launch, slated to roll out progressively through 2019. This shift will double the port density of the proven QSFP28 (Quad Small Form Factor Pluggable 28G) module while delivering up to quadruple the bandwidth, and a single 400G module consumes less power overall than four 100G modules.

With the development of increasingly powerful 56G PAM-4 ASIC (application-specific integrated circuit) chips for network switches from companies such as Broadcom, Innovium, Nephos, and Barefoot Networks, demand continues to grow for next-generation optical interconnects and modules.

These new ASICs deliver 12.8 Tbps of bandwidth, enabling next-generation switches that provide 32 ports of 400 Gbps. Alternatively, if a data center architecture requires a higher radix, the ASICs can run in a reverse gearbox mode to provide 128 ports of 100 Gbps. Both traditional OEMs such as Cisco and Arista and white box manufacturers such as Accton, QCT, and Celestica are racing to produce these higher-speed switches, with many already released to the market. As 400G switches become readily available, it is critical that optical and copper interconnects also be qualified and available to support actual deployment.
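The radix trade-off described above is simple arithmetic on the ASIC's bandwidth budget. A minimal sketch, using only the figures cited in the text:

```python
# Illustrative sketch: a fixed switch ASIC bandwidth budget trades
# per-port speed against port count (radix).

ASIC_BANDWIDTH_GBPS = 12_800  # 12.8 Tbps switch ASIC, as cited above

def port_count(asic_gbps: int, port_speed_gbps: int) -> int:
    """Front-panel ports the ASIC can drive at a given per-port speed."""
    return asic_gbps // port_speed_gbps

print(port_count(ASIC_BANDWIDTH_GBPS, 400))  # 32 ports of 400G
print(port_count(ASIC_BANDWIDTH_GBPS, 100))  # 128 ports of 100G ("reverse gearbox" mode)
```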

Key drivers of 400G

What factors are driving new needs? Data center storage requirements are increasing by more than 50 percent annually, according to IDC, with digital information projected to increase to 40 zettabytes by 2020 and 163 zettabytes by 2025. There are several key contributors to this growth, including a wave of transitions to cloud storage, open systems, edge computing, machine learning, deep learning, and artificial intelligence.

Virtual reality has only begun to gain traction on a wide scale. And the prospect of driverless vehicles going mainstream in the foreseeable future will place exponential strain on data center infrastructures.

Planned obsolescence is always a fact of life for hyperscale data centers, which on average upgrade their overall network architecture about every two years to keep pace with bandwidth demands.

The data center supply chain has stepped up to create ever more powerful, energy efficient and scalable solutions. At present, 100G technologies deliver the fastest connections for Ethernet links. Implementations of both 100G and 400G Ethernet technologies will continue to rise in coming years, with the latter ultimately taking the lead to become the prevailing speed in switch chips and network platforms.

Stay ahead of the curve

What do we see as we look ahead? An impressive array of data center infrastructure solutions designed to address expanding hyperscale requirements for higher bandwidth and power. Next-generation solutions leveraging copper and optics deliver high signal integrity, lower latency and lower insertion loss for maximum efficiency, speed, and density.

Copper direct attach cables (DACs) capable of achieving 400G already exist, while optical transceivers enabling 400G switch connections are rapidly being qualified in preparation for full-scale launch. 100G Single Lambda and 400G transceivers, currently in beta sampling, will hit the market soon. The ramp for 400G will begin in mid-to-late 2019 as early adopters that require higher bandwidth deploy these products, even before supply chain costs come down and price erosion begins.

Many data centers will continue to deploy 100G CWDM4 transceivers for longer reach links, while demand for 100G PSM4 is quickly disappearing and suppliers are exiting the market. As 100G Single Lambda (100G-DR or 100G-FR) transceiver products become available in early 2019, they are predicted to cannibalize the 100G CWDM4 market, given lower price expectations and the ability of single lambda products to interoperate directly with 400G transceivers in a breakout topology.
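The breakout topology mentioned above is a fan-out: one 400G port carries four parallel 100G lanes, each terminating on an independent 100G single-lambda port. A hypothetical sketch (all device and port names below are invented for illustration):

```python
# Hypothetical breakout mapping: one 400G-DR4 switch port fans out over
# parallel single-mode fiber to four independent 100G-DR endpoints.
# All device and port names are invented for illustration.

dr4_port = "switch-a:eth1/1"  # hypothetical 400G-DR4 port

breakout_links = {
    f"{dr4_port}/lane{lane}": f"server-{lane}:100G-DR"
    for lane in range(4)
}

for local, remote in breakout_links.items():
    print(f"{local} (100G) -> {remote}")
```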

As bandwidths migrate higher, the industry will continue to experience a steady phase-out of 10G and 40G technologies as they are replaced with optical transceivers, direct attach cables (DAC) and active optical cables (AOC) supporting 100G, 200G, 400G and beyond in a range of inter- and intra-data center communications. QSFP-DD (Quad Small Form-Factor Pluggable Double Density) transceivers stand to play an important role in that upward spiral.

The shape of things

The QSFP-DD transceiver features an eight-lane electrical interface, with each lane capable of up to a 50G data rate. Supporting up to 20W of module power (per QSFP-DD MSA Rev 5.0), a QSFP-DD module can deliver 400G performance across a range of reaches with the help of innovative heat sink capabilities. That matters because advanced ASICs consume more power and generate more heat, which the QSFP-DD form factor can dissipate efficiently with an effective thermal management strategy.
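A quick back-of-envelope check of those numbers, using only the lane structure and power ceiling cited above:

```python
# Back-of-envelope check of the QSFP-DD figures cited above: eight 50G
# electrical lanes aggregate to 400G, and the 20 W module power ceiling
# implies a worst-case energy cost per bit.

LANES = 8
LANE_RATE_GBPS = 50
MAX_MODULE_POWER_W = 20.0  # per QSFP-DD MSA Rev 5.0

rate_gbps = LANES * LANE_RATE_GBPS                 # 8 x 50G = 400G
pj_per_bit = MAX_MODULE_POWER_W / rate_gbps * 1e3  # (W / Gbps) -> pJ/bit

print(f"{rate_gbps}G module, worst case {pj_per_bit:.0f} pJ/bit")
```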

The wider and deeper OSFP (Octal Small Form Factor Pluggable) form factor also supports 400G. One key advantage of the QSFP-DD transceiver over the OSFP is that it is fully backward compatible with existing QSFP+ and QSFP28 transceivers. 56G PAM-4 technology is widely seen as the key to enabling QSFP-DD and OSFP form factor transceivers. Platforms integrating QSFP-DD and OSFP optical module form factors are being introduced to support 400G Ethernet in cloud applications. These new platforms provide backward compatibility with 100G ports to enable staggered implementation across a data center or enterprise.

Regardless of form factor, 400G transceivers require the use of a DSP ‘gearbox’ to create four 100G optical channels from eight 50G electrical lanes. This will be a critical component in the supply chain and could play a significant role in the ability of transceiver suppliers to deliver products and ramp volume to address the high demands of data center consumers. Availability of less power-hungry 7nm DSPs in 2019 will further disrupt this supply chain as transceiver suppliers seek to differentiate products.
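Conceptually, the gearbox pairs electrical lanes into optical channels. A minimal sketch of that mapping (lane labels and the adjacent-pair ordering are illustrative, not taken from any MSA):

```python
# Conceptual sketch of the DSP 'gearbox' function described above:
# eight 50G electrical lanes are paired into four 100G optical channels.
# Lane labels and pairing order are illustrative, not from any MSA.

ELECTRICAL_LANES = 8
ELECTRICAL_RATE_GBPS = 50

optical_channels = [
    (f"e{2 * i}", f"e{2 * i + 1}")  # two 50G electrical lanes per channel
    for i in range(ELECTRICAL_LANES // 2)
]

for ch, (a, b) in enumerate(optical_channels):
    print(f"optical channel {ch} @ {2 * ELECTRICAL_RATE_GBPS}G <- {a} + {b}")
```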

Molex has demonstrated 100G FR QSFP28 and 400G DR4 QSFP-DD products in compliance with the 100G Lambda MSA. The technology ecosystem for next-generation networking equipment promotes 112G PAM-4 as a foundation to support 400G solutions for high volume data centers. The MSA specifications address the technical design challenges of achieving optical interfaces using 100G-per-wavelength PAM-4 technology, as well as multi-vendor interoperability. PAM-4 technology enables 100G optical links with reaches of 2 and 10 kilometers, and 400G links with a reach of 2 kilometers, over duplex single-mode fiber. A PAM-4 platform can effectively lay the initial foundation for a cost-effective, complete migration to 400G. By aggregating four 100G-per-wavelength lanes, the technology platform can support 400G variants such as 400G DR4, 400G FR4 and 4x100G for breakout applications.

Navigating the 400G evolution

Modern communications networks demand greater bandwidth to meet a data explosion on a global scale. As a result, the data center switch and transceiver markets are growing and evolving rapidly. High-speed optical transceivers, flexible and scalable optical transport products, compact connectors and fiber management are all vital capabilities to build 400G network equipment to serve high volume telecommunications providers, enterprises and hyperscale data centers.

Data center fiber management at 400G and beyond is important to evaluate, and products such as Molex’s fiber aggregation boxes provide efficient solutions for high-fiber-count systems and organized fiber management. These products can reduce or eliminate dead channels and provide a compact, passive switching location that requires no power or cooling. They can also bridge the connectivity gap between current LC duplex patching and next-generation high-density MPO connector solutions. For example, a data center with an existing LC duplex fiber plant that used CWDM4 transceivers at 100G may now be moving to DR4 at 400G, which requires a parallel fiber infrastructure.

Beyond 400G

Data center switch ASIC suppliers have already announced the general availability of 56G PAM-4 12.8 Tbps ASICs and are now working on 112G PAM-4 25.6 Tbps ASICs, which would be capable of driving a 32-port switch with each port running at 800 Gbps. This capability will create a number of challenges related to signal integrity, thermals, power, and loss, to name a few, while raising the question of whether interconnects can or should remain modular and, if so, which form factor can realistically support them.
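The scaling pattern implied here keeps the 32-port switch face constant while each ASIC doubling doubles the per-port speed. A small illustrative tabulation of the two generations named in the text:

```python
# Illustrative scaling across the switch ASIC generations cited above:
# the 32-port front panel stays constant while per-port speed doubles.

PORTS = 32
generations = {
    "56G PAM-4 SerDes":  12_800,  # 12.8 Tbps ASIC
    "112G PAM-4 SerDes": 25_600,  # 25.6 Tbps ASIC
}

for serdes, asic_gbps in generations.items():
    print(f"{serdes}: {PORTS} ports x {asic_gbps // PORTS}G")
```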

As data center operators map out plans to scale quickly and cost-effectively, the design of 100G and 400G infrastructure can be optimized by working closely with suppliers who possess the capabilities, expertise, and scalability that today’s data centers demand. Implementation requires orchestration to coordinate data transmission between hundreds or thousands of components to achieve the optimal data center structure to mitigate overall risk and address dynamic requirements in the future.
