The Technologies of Next-Generation Optical Transceivers — PAM4 and 64QAM

PAM4 and 64QAM Technologies

The shift to cloud services and virtualized networks has put the data center at the center of our connected world, meaning that connectivity within and between data centers has a huge impact on the delivery of business and personal services. Hyperscale data centers are being installed across the world and they all need connecting. To meet this demand, optical transceiver suppliers are delivering new solutions based on PAM4 and on coherent 64QAM modulation that will drive down the cost of connectivity and increase the bandwidth of each connection.

Connections to many servers are already 25G and links between switches in large data centers are already 100G. The introduction of SFP28 and QSFP28 transceivers integrating new technologies and built using efficient manufacturing techniques has driven down the cost of these connections and allowed massive growth in the market. The next stage is the introduction of 100G single lambda solutions and cost-effective 400G transceivers for links between switches. The PHY devices needed for this next step are already becoming available, 12.8T switch devices are in production, and the first 400G QSFP-DD and OSFP optical transceivers are sampling.

QSFP-DD

The rise of the hyperscale data center operator has dramatically changed the market. The switch to 25G and 100G from 10G and 40G has happened very quickly. The sheer scale and number of data centers being installed or upgraded mean that new technologies can ship in volume as soon as the price is right, the components have been qualified, and the production lines are operational. We are now seeing the first 400G PHY devices and optical transceivers for data centers becoming available, and companies are vying for market position as we wait for the leading hyperscale operators to commit to large deployments.

Many of those companies that have benefited from 25G and 100G are putting their investments into single lambda PAM4 100G and 400G solutions for the data center. This has required new PAM4 PHY devices designed to meet the power constraints of 400G OSFP and QSFP-DD transceivers. A few companies have also invested in 50G and 200G PAM4 PHYs, enabling a cost-effective upgrade from 25G and 100G. 50G SFP56 and 200G QSFP56 transceivers are expected to be interim solutions, but it is unclear how widespread their use will be or for how long. 40G was an interim solution that lasted for many years.
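
As a minimal sketch of why PAM4 matters here: it carries 2 bits per symbol (4 amplitude levels) versus 1 bit for NRZ, so a lane can double its data rate without doubling its symbol rate. The per-lane line rates below are nominal IEEE 802.3 values used only for illustration.

```python
# Minimal sketch: bits per symbol for NRZ vs PAM4 and the resulting symbol rates.
# Per-lane line rates are nominal IEEE 802.3 values, used for illustration only.

BITS_PER_SYMBOL = {"NRZ": 1, "PAM4": 2}

LINE_RATES_GBPS = {
    ("25G lane", "NRZ"): 25.78125,   # e.g. 100G = 4 x 25G NRZ (QSFP28)
    ("50G lane", "PAM4"): 53.125,    # e.g. 50G SFP56, 200G = 4 x 50G (QSFP56)
    ("100G lane", "PAM4"): 106.25,   # e.g. 100G single lambda, 400G = 4 x 100G
}

for (name, modulation), rate in LINE_RATES_GBPS.items():
    baud = rate / BITS_PER_SYMBOL[modulation]
    print(f"{name:>9} ({modulation}): {rate:.3f} Gb/s at {baud:.4f} GBd")
```

A 50G PAM4 lane therefore runs at roughly the same symbol rate as a 25G NRZ lane, which is why 50G SFP56 and 200G QSFP56 are seen as natural upgrades from 25G and 100G.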

Coherent technology, originally developed for 100G long-haul networks, is now widely used for long-haul connections (including subsea), metro networks, and Data Center Interconnect (DCI) between data centers. The market for DCI has grown rapidly, with many systems vendors offering solutions with 80km to 500km reach. For long-haul and metro applications, several leading equipment manufacturers continue to use in-house coherent Digital Signal Processor (DSP) designs. Coherent DSP solutions are now also available to optical transceiver vendors such as Gigalight, which plans to ship 400G transceivers based on these designs. The latest DSP ASICs are enabling 600G (64Gbaud 64QAM) solutions and CFP2-DCO transceivers. The next step is the introduction of 7nm DSPs that will enable the cost-effective 400G ZR transceivers planned for 400G links up to 100km starting in 2020.
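
As a back-of-the-envelope check on the 600G figure, the sketch below works out the raw line rate of a 64Gbaud dual-polarization 64QAM carrier. The FEC/framing overhead fraction used here is an assumption for illustration only; real DSPs vary.

```python
import math

# Back-of-the-envelope check on the 600G figure: a 64 GBd dual-polarization
# 64QAM carrier. The overhead fraction below is an assumption for illustration.

def coherent_raw_rate_gbps(baud_gbd: float, qam_order: int, polarizations: int = 2) -> float:
    """Raw line rate = symbol rate x bits per symbol x number of polarizations."""
    return baud_gbd * math.log2(qam_order) * polarizations

raw = coherent_raw_rate_gbps(64, 64)      # 64 GBd x 6 bits x 2 pol = 768 Gb/s
net = raw * (1 - 0.22)                    # ~22% FEC/framing overhead assumed
print(f"raw line rate: {raw:.0f} Gb/s, net payload: ~{net:.0f} Gb/s")  # ~600G
```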

This continues to be a market in flux. Lumentum has completed the acquisition of Oclaro, Cisco has completed the acquisition of Luxtera, and several Chinese optical transceiver vendors have joined the charge to 400G in the data center. The PAM4 PHY devices required for 100G single lambda and 400G in the data center are proving to be very challenging to deliver. PAM4 PHY solutions in 28nm and 14/16nm technology have been sampling for more than six months and these are now being joined by 7nm solutions.

Related articles: PAM4 — The High-Speed Signal Interconnection Technology of Next-Generation Data Center

Which Is Better for 80km Links? PAM4 or Coherent Technology

A significant portion of Data Center Interconnections (DCIs) and telecom router-to-router interconnections rely on simple ZR or 80km transceivers. The former are mostly based on C-band DWDM transceivers carrying 100Gbps within a 100GHz ITU-T grid window, while the latter are mostly 10G or 100G grey-wavelength transceivers. In DWDM links, the laser wavelength is fixed to a specified grid, so that with a DWDM Mux and Demux 80 or more wavelength channels can be transported through a single fiber. Grey wavelengths are not fixed to a grid and can be anywhere in the C-band, limiting capacity to one channel per fiber. DCI links tend to use DWDM because the extremely high-volume traffic between data centers means they have to utilize the optical fiber bandwidth as fully as possible.
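
To see where channel counts like "80 or more wavelengths per fiber" come from, the short sketch below divides an approximate C-band width by the ITU-T grid spacing. The band edges, and therefore the counts, are rounded assumptions for illustration.

```python
# Approximate DWDM channel counts in the C-band for common ITU-T grid spacings.
# Band edges are rounded assumptions (~191.3 THz to ~196.1 THz, about 4.8 THz).

C_BAND_START_THZ = 191.3
C_BAND_END_THZ = 196.1

def channel_count(grid_spacing_ghz: float) -> int:
    """Number of grid slots that fit in the assumed C-band width."""
    band_width_ghz = (C_BAND_END_THZ - C_BAND_START_THZ) * 1000
    return int(band_width_ghz // grid_spacing_ghz)

for spacing_ghz in (100, 50):
    print(f"{spacing_ghz} GHz grid: ~{channel_count(spacing_ghz)} channels per fiber")
# 100 GHz spacing -> ~48 channels; 50 GHz spacing -> ~96 channels,
# which is where figures of 80+ wavelengths per fiber come from.
```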

Another emerging 80km market is the Multi-System Operator (MSO), or CATV, optical access network. This need emerges because MSOs are running out of access fiber and need a transmission technology that allows them to grow to very large capacity over the fibers that remain. For this reason they need DWDM wavelengths to pack more channels into a single fiber.

The majority of the 10G transceivers on 80km links will be replaced by 100G or 400G transceivers in the coming years. Two modulation techniques can enable 80km 100G transceivers (compared roughly in the sketch after this list):

  • 50G PAM4 with two wavelengths in a 100G transceiver
  • Coherent 100G dual-polarization Quadrature Phase Shift Keying (DP-QPSK)
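
A rough side-by-side of the two options, assuming nominal line rates and an illustrative FEC overhead rather than vendor-specific figures:

```python
import math

# Rough comparison of the two 80km 100G options. Symbol rates are approximate;
# actual values depend on FEC overhead, which is assumed here for illustration.

# Option 1: two 50G PAM4 wavelengths, direct detection
pam4_bits_per_symbol = math.log2(4)               # 4 amplitude levels -> 2 bits/symbol
pam4_baud_gbd = 53.125 / pam4_bits_per_symbol     # ~26.6 GBd per wavelength

# Option 2: single-carrier coherent 100G DP-QPSK
qpsk_bits_per_symbol = math.log2(4) * 2           # 2 bits/symbol x 2 polarizations = 4
dp_qpsk_baud_gbd = 128 / qpsk_bits_per_symbol     # ~32 GBd with ~28% overhead assumed

print(f"PAM4 option   : 2 wavelengths at ~{pam4_baud_gbd:.1f} GBd each, direct detection")
print(f"DP-QPSK option: 1 wavelength at ~{dp_qpsk_baud_gbd:.1f} GBd, coherent detection")
```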

Generally speaking, PAM4 is a low-cost solution but requires active optical dispersion compensation (which can be a big headache as well as an extra expense for data center operators) and extra optical amplification to make up for the insertion loss of the dispersion compensators. By contrast, coherent approaches do not need any optical dispersion compensation, and their price is coming down rapidly, especially since the same hardware can be configured to upgrade the data rate per wavelength from 100G to 200G (by using DP-16QAM modulation).
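
To see why dispersion compensation is the sticking point for direct-detect PAM4 at 80km, the sketch below estimates the chromatic dispersion accumulated over standard single-mode fiber. The coefficient (~17 ps/nm/km near 1550nm) is a typical G.652 value; the PAM4 tolerance mentioned in the closing comment is only an assumed order of magnitude, not a specification.

```python
# Accumulated chromatic dispersion over standard single-mode fiber (G.652),
# using a typical coefficient of ~17 ps/nm/km near 1550 nm.

DISPERSION_PS_PER_NM_KM = 17.0

def accumulated_dispersion_ps_per_nm(length_km: float) -> float:
    """Total chromatic dispersion (ps/nm) after length_km of fiber."""
    return DISPERSION_PS_PER_NM_KM * length_km

for reach_km in (10, 40, 80):
    print(f"{reach_km:>2} km: ~{accumulated_dispersion_ps_per_nm(reach_km):.0f} ps/nm")

# A direct-detect PAM4 receiver tolerates only a small dispersion window
# (order of ~100 ps/nm, assumed here), so ~1360 ps/nm at 80 km forces optical
# dispersion compensation, whereas a coherent DSP equalizes it digitally.
```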

When 400G per wavelength is needed in a DCI network within a 100GHz ITU-T window, coherent technology is the only cost-effective solution, because it is not feasible for PAM4 to achieve the required spectral efficiency of 4 bit/s/Hz.
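
The 4 bit/s/Hz figure follows directly from fitting 400G into a 100GHz channel, as the quick calculation below shows; the ~60 GBd DP-16QAM symbol rate is the approximate value mentioned in the next paragraph, not an exact specification.

```python
import math

# Spectral efficiency of 400G in a 100 GHz DWDM channel, and the bit budget
# DP-16QAM provides at roughly 60 GBd (approximate values for illustration).

net_rate_gbps = 400
channel_width_ghz = 100
print(f"required spectral efficiency: {net_rate_gbps / channel_width_ghz:.1f} bit/s/Hz")

bits_per_symbol = math.log2(16) * 2        # 16QAM (4 bits) x 2 polarizations = 8
baud_gbd = 60                              # approximate symbol rate
raw_gbps = baud_gbd * bits_per_symbol      # 480 Gb/s raw line rate
print(f"raw DP-16QAM rate at {baud_gbd} GBd: {raw_gbps:.0f} Gb/s")
# ~480 Gb/s raw leaves roughly 80 Gb/s for FEC and framing around a 400G payload.
```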

On the standards front, many standards organizations are adopting coherent technology for 80km transmission. The Optical Internetworking Forum (OIF) will adopt coherent DP-16QAM modulation at up to 60Gbaud (400G per wavelength) in an implementation agreement on 400G ZR. This is initially for DCI applications with a transmission distance of up to around 80km, and vendors may come up with various derivatives for longer transmission distances. Separately, CableLabs has published a specification for 100G DP-QPSK coherent transmission over a distance of 80km aimed at MSO applications. In addition, IEEE 802.3ct is in the process of adopting coherent technology for 100G and 400G per-wavelength transmission over 80km.

As data rates increase from 100G to 400G, with per-fiber capacity requirements driven by DCI needs and assisted by volume-driven cost reductions in coherent optics and coherent DSPs, we expect coherent transmission to become the technology of choice for 80km links.

What is Data Center Interconnect/Interconnection?

Data Center Interconnection refers to the implementation of Data Center Interconnect (DCI) technology. As DCI technology has advanced, better and cheaper options have become available, and this has created a lot of confusion, compounded by the fact that many companies are trying to enter this market because there is a lot of money to be made. This article is written to straighten out some of that confusion.

According to the application, data center interconnections fall into two parts. The first is intra-Data Center Interconnect (intra-DCI), meaning connections within the data center, whether within one building or between data center buildings on a campus; these connections range from a few meters up to 10km. The second is inter-Data Center Interconnect (inter-DCI), meaning connections between data centers from 10km up to 80km. Connections can of course be much longer, but most of the market activity for inter-DCI is focused on 10km to 80km; longer connections are considered metro or long-haul. For reference, please see the table below.

DCI         Distance          Fiber Type   Optics Technology   Optical Transceivers
intra-DCI   300m              MMF          NRZ/PAM4            QSFP28 SR4
            500m              SMF                              QSFP28 PSM4
            2km                                                QSFP28 CWDM4
            10km                                               QSFP28 LR4
inter-DCI   10km              SMF          Coherent            QSFP28 4WDM-10
            20km                                               QSFP28 4WDM-20
            30km to 40km                                       QSFP28 4WDM-40
            80km to 2000km                                     CFP2-ACO
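
For convenience, the table above can be read as a simple reach-to-module lookup; the toy function below encodes it, with module names taken from the table and the distance thresholds treated as nominal rather than definitive selection rules.

```python
# Toy lookup that mirrors the table above: pick a 100G module class by reach.
# Thresholds are nominal; real selection also depends on fiber plant and cost.

REACH_TABLE = [
    (0.3,    "QSFP28 SR4 (MMF)"),
    (0.5,    "QSFP28 PSM4 (SMF)"),
    (2.0,    "QSFP28 CWDM4 (SMF)"),
    (10.0,   "QSFP28 LR4 / 4WDM-10 (SMF)"),
    (20.0,   "QSFP28 4WDM-20 (SMF)"),
    (40.0,   "QSFP28 4WDM-40 (SMF)"),
    (2000.0, "CFP2-ACO coherent (SMF)"),
]

def pick_module(reach_km: float) -> str:
    """Return the first module class from the table whose reach covers the link."""
    for max_km, module in REACH_TABLE:
        if reach_km <= max_km:
            return module
    return "long-haul coherent solution"

print(pick_module(0.2))   # QSFP28 SR4 (MMF)
print(pick_module(15))    # QSFP28 4WDM-20 (SMF)
print(pick_module(80))    # CFP2-ACO coherent (SMF)
```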

Intra-DCI

The big bottlenecks are in intra-DCI, and therefore the highest volume of optical transceivers is sold here, generating the most revenue; however, it is low-margin revenue because there is so much competition. In this space, many of the connections are less than 300m, and Multi-Mode Fiber (MMF) is frequently used. MMF has a wider core, and components are cheaper because the tolerances are not as tight, but the light disperses as it bounces around in the wide core, so 300m is the limit for many types of high-speed transmission over MMF. The 100G QSFP28 SR4 shown below, for example, supports transmission distances up to 100m over OM4 MMF.

Gigalight 100GBASE-SR4 100m QSFP28 Optical Transceiver

100G QSFP28 SR4 for MMF up to 100m

In a data center, everything is connected to servers by routers and switches. Sometimes a data center is one large building bigger than a football field; other times data centers are built on a campus of many buildings spanning many blocks. In the case of a campus, the fiber is brought to one hub and the connections are made there. Even if the building you want to connect to is only 200m away, the fiber runs to a hub that can be more than 1km away, so this routing increases the fiber distance. Some distances between buildings can be 4km, requiring Single Mode Fiber (SMF), which has a much narrower core, making it more efficient but also increasing the cost of all related components because the tolerances are tighter. As data centers grow and connections within them get longer, the need for SMF grows as well. With SMF you have the option to drive high bandwidth with coherent technology, and we'll see more of this in the future. Previously coherent was only used for longer distances, but with cost reductions and greater efficiency versus other solutions, coherent is now being used for shorter reaches in the data center.

Gigalight 100GBASE-LR4 Lite 4km QSFP28 Optical Transceiver

100G QSFP28 LR4L for SMF up to 4km

500m is a new emerging market, and because the distance is shorter, a new technology is emerging for it: silicon photonics modulators. In an EML (Externally Modulated Laser), the modulator is integrated on the same chip as the laser but sits outside the laser cavity, hence "external". With silicon photonics, the laser and modulator are on different chips and usually in different packages. Silicon photonics modulators are built on the CMOS manufacturing process, which offers high scale and low cost, so a continuous-wave laser with a silicon photonic modulator is a very good solution for 500m applications, while EMLs are more suitable for longer reaches such as 2-10km.

100GE PSM4 2km QSFP28 Optical Transceiver

100G QSFP28 PSM4 for SMF up to 500m/2km

100GE CWDM4 2km QSFP28 Optical Transceiver

100G QSFP28 CWDM4 for SMF up to 2km

100GBASE-LR4 10km QSFP28 Optical Transceiver

100G QSFP28 LR4 for SMF up to 10km

Inter-DCI

Inter-DCI is typically between 10km and 80km, including 20km and 40km links. Before we talk about data center connectivity, let's talk about why data centers are set up the way they are and why 80km is such an important connection distance. While it is true that a data center in New York might back up to tape in a data center in Oregon, that is considered regular long-haul traffic. Some data centers are geographically situated to serve an entire continent and others are focused on a specific metro area. Currently, the throughput bottleneck is in the metro, and this is where data centers and connectivity are most needed.

100GE 4WDM-20 20km QSFP28 Optical Transceiver

100G QSFP28 4WDM-20 for SMF up to 20km

100GE 4WDM-40 40km QSFP28 Optical Transceiver

100G QSFP28 4WDM-40 for SMF up to 40km

Say you have a Fortune 100 retailer running thousands of transactions per second. The farther away a backup data center is, the more secure the data is against natural disasters, but with increased distance more "in flight" transactions are at risk of being lost due to latency. Therefore, for online transactions there might be a primary data center that is central to the retail locations and a secondary data center around 80km away. That is far enough away not to be affected by local power outages, tornadoes, and so on, but close enough that the added propagation latency is well under a millisecond, so in the worst case only a small number of transactions would be at risk.
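
For reference, the one-way propagation delay over fiber is t = L x n / c; assuming a typical group index of about 1.47 for single-mode fiber, 80km adds well under a millisecond each way.

```python
# One-way propagation delay over fiber: t = L * n / c.
# A group index of ~1.47 for standard single-mode fiber is an assumed typical value.

SPEED_OF_LIGHT_M_PER_S = 299_792_458
GROUP_INDEX = 1.47

def fiber_delay_ms(length_km: float) -> float:
    """Propagation delay in milliseconds over length_km of fiber."""
    return length_km * 1000 * GROUP_INDEX / SPEED_OF_LIGHT_M_PER_S * 1000

print(f"80 km one way   : {fiber_delay_ms(80):.2f} ms")       # ~0.39 ms
print(f"80 km round trip: {2 * fiber_delay_ms(80):.2f} ms")   # ~0.78 ms
```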

In another example of inter-DCI, if a certain video is getting a lot of views, the video is not only kept in its central location; copies are pushed to metro data centers where access is quicker because the content is stored closer to the user, and the traffic doesn't tie up long-haul networks. Metro data centers can grow to a certain size before their sheer size becomes a liability with no additional scale advantage, and then they are broken up into clusters. Once again, to guard against natural disasters and power outages, data centers should be far apart; counterbalancing this, they need low-latency communication between them, so they shouldn't be too far apart. The compromise, the magic distance, is about 80km for a secondary data center, which is why you'll hear about 80km data center interconnect a lot.

It used to be that on-off keying could provide sufficient bandwidth between data centers, but now, with 4K video and metro bottlenecks, coherent transmission is being used for shorter and shorter distances. Coherent is likely to take over the 10km DCI market; it has already taken over the 80km market, but it may take time before coherent reaches 2km. The typical data center bottlenecks are at 500m, 2km, and 80km, and as coherent moves to shorter distances, this is where the confusion arises.

The optical transceiver modules that were only used within the data center are gaining reach, and they’re running up against coherent solutions that were formerly only used for long distances. Due to the increasing bandwidth and decreasing cost, coherent is being pulled closer into the data center.

The other thing to think about is installing fiber between data centers. Hopefully this is already done, because once you dig, it's a fixed cost, so you put down as many fibers as you can. Digging just to install fiber is extremely expensive. In France, for example, fiber is laid with another economic driver: whenever train tracks are put in, fiber goes in at the same time, even if it is not yet needed, because the digging is happening anyway, so the fiber is almost free. Fibers are leased to data centers one at a time; therefore, data centers try to get as much bandwidth as possible onto each fiber (this is also a major theme in the industry). You might ask, why not own your own fiber? You need a lot of content to justify owning your own fiber; the cost is prohibitive, and to make the fiber network function, all the nodes need to use the same specification, which is hard. Therefore, carriers are usually the ones to install the full infrastructure.

Article Source: John Houghton, a Silicon Valley entrepreneur, technology innovator, and head of MobileCast Media.