From your online grocery purchase to the movies you stream and the banking transactions you manage via an app, more of your daily activities rely on data centers than you may have realized. These days, data centers, especially the hyperscale variety managing petabytes or more, are taking center stage. With the volume of data in our digital sphere rising every year (a trend IDC projects will continue), high bandwidth coupled with super-fast network speeds is essential to keeping our online lives humming.
The next frontier for fast network speeds is 1.6T Ethernet. Anything slower is just not enough to move the data between compute, networking, and storage components in fast-growing applications such as AI, autonomous vehicles, high-performance computing (HPC), and cloud computing.
What's a swift and low-risk path to get to 1.6T? 224G Ethernet PHY IP.
While we're starting to see a handful of 224G Ethernet design starts this year, we anticipate the first wave of deployments in 2026. Early adopter applications include retimers, switches, AI scaling, optical modules, I/O chiplets, and FPGAs. Now is the time to ready yourself for designs that deliver blazingly fast connectivity. Read on to learn more about the drivers, design challenges, and opportunities for 1.6T Ethernet designs.
While I/O speeds continue to increase, they're still not keeping pace with compute power. As Moore's law slows and the laws of semiconductor physics waver, a gap is widening between compute power and I/O bandwidth. To increase compute resources, more transistors can be added to a chip (and multi-die systems present an opportunity to add even more transistors to scale compute power). Techniques such as parallelizing CPUs and multi-threading also increase system performance. But I/O performance has improved by less than five percent for every doubling of logic density, and, since the emergence of 45nm process technology, cost per mm² has continued to rise. Given the volume and complexity of today's data, the I/Os are becoming the bottleneck.
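To make that gap concrete, here is a rough back-of-the-envelope sketch. The growth rates are illustrative assumptions based on the trend described above, not measured figures, but they show how quickly a per-generation doubling of logic density outruns a roughly five percent I/O improvement:

```python
# Illustrative only: compound the compute-vs-I/O gap over several process
# generations, assuming logic density roughly doubles per generation while
# I/O bandwidth improves by only ~5% per generation (assumed rates).
COMPUTE_GROWTH_PER_GEN = 2.00   # assumed: logic density doubles each node
IO_GROWTH_PER_GEN = 1.05        # assumed: <5% I/O improvement each node

def compute_io_gap(generations: int) -> float:
    """Ratio of cumulative compute growth to cumulative I/O growth."""
    return (COMPUTE_GROWTH_PER_GEN ** generations) / (IO_GROWTH_PER_GEN ** generations)

for n in range(1, 6):
    print(f"after {n} generation(s): compute outpaces I/O by {compute_io_gap(n):.1f}x")
```

Even with these rough numbers, the imbalance roughly doubles every node, which is why simply adding transistors no longer translates into system-level performance.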
To support our unrelenting demands for data, hyperscale data centers are moving to network architectures that are faster, flatter, and more scalable. A flatter architecture of no more than three layers of switches lowers latency and drives up the need for higher bandwidth and efficient connectivity over longer distances. Answering the connectivity call is the Ethernet high-speed interface, long considered the data connectivity backbone for the internet.
Each generation of the Ethernet standard has delivered double the speed, addressing the demands of an increasingly hyper-connected world. In fact, hyperscalers have become big influencers of the Ethernet roadmap, driving the evolution toward 1.6T. Historically, revisions of interface standards have arrived roughly every four years; these days, that cadence is shrinking to address the I/O bandwidth gap.
The Ethernet protocol also provides a level of flexibility that is appealing for data center SoC designers, from speed negotiation to backwards compatibility with the software stack to the ability to use different kinds and classes of media. Optical fiber, copper cables, and PCB backplanes are all supported. Being able to use optical fiber connections is particularly enticing for avoiding the I/O bottleneck, as copper cables are starting to run out of steam at today's networking speeds.
The latest generation of the Ethernet standards will deliver 224G data rates, providing the foundation to support 1.6T Ethernet. In addition to bandwidth demands from the explosion of complex data in today's applications, server front-panel density is also creating a need for 224G connectivity. Data centers are fast approaching the limits of front-panel pluggable module density, with only so much space for pluggable optical modules. As a result, SerDes interfaces need to operate faster and faster.
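As a quick sanity check, the sketch below shows how 224G-class PAM-4 lanes aggregate into a 1.6T port. The lane parameters are assumptions (exact rates depend on the final IEEE 802.3 and OIF electrical specifications), but they illustrate why doubling the lane rate halves the number of lanes, and therefore the front-panel resources, needed per port:

```python
# Rough sketch of how 224G-class lanes aggregate into a 1.6T Ethernet port.
# Assumptions: each electrical lane signals PAM-4 at ~112 GBd (~224 Gb/s raw)
# and carries roughly 200 Gb/s of usable MAC throughput after FEC and coding
# overhead; exact figures depend on the final IEEE 802.3 / OIF specifications.
SYMBOL_RATE_GBD = 112          # assumed PAM-4 baud rate per lane
BITS_PER_SYMBOL = 2            # PAM-4 carries 2 bits per symbol
USABLE_LANE_RATE_GBPS = 200    # assumed MAC-level throughput per lane

raw_lane_rate = SYMBOL_RATE_GBD * BITS_PER_SYMBOL   # ~224 Gb/s per lane
lanes_224g = 1_600 // USABLE_LANE_RATE_GBPS         # 224G-class lanes for 1.6T
lanes_112g = 1_600 // 100                            # 112G-class lanes for 1.6T

print(f"raw lane rate: {raw_lane_rate} Gb/s")
print(f"224G-class lanes for a 1.6T port: {lanes_224g}")
print(f"112G-class lanes for the same port: {lanes_112g}")
```

Halving the lane count per port is exactly what a density-constrained front panel needs.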
224G Ethernet also brings other advantages. By reducing the number of cables and switches needed in high-density data centers, it enables better network efficiency. Its backwards compatibility with other Ethernet standards simplifies its integration into existing networks.
Designing for 224G Ethernet will entail some unique considerations, as design margins will be extremely tight. For one, the optics will need to be placed closer to the SoC because beyond 400G, the power needed to drive the electrical signals to the modules becomes too much. Co-packaged optics, a single-package integration of electrical and photonic dies, enables a shorter and lower power electrical link between the host SoC and the optical interface. In addition, 224G SerDes needs to deliver at least one-third less power per bit compared to 112G SerDes.
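That power target can be made concrete with a simple back-of-the-envelope calculation. The pJ/bit figure below is a hypothetical placeholder, not vendor data; the point is to show why holding energy per bit constant is not an option when the lane rate doubles:

```python
# Back-of-the-envelope lane power budget, using a hypothetical efficiency
# number (not vendor data) to show why 224G SerDes must cut energy per bit.
LANE_RATE_112G_GBPS = 112
LANE_RATE_224G_GBPS = 224
PJ_PER_BIT_112G = 6.0                        # assumed 112G efficiency, pJ/bit
PJ_PER_BIT_224G = PJ_PER_BIT_112G * (2 / 3)  # "at least one-third less" per bit

def lane_power_mw(rate_gbps: float, pj_per_bit: float) -> float:
    """Lane power in mW: Gb/s (1e9 bit/s) x pJ/bit (1e-12 J) = 1e-3 W = mW."""
    return rate_gbps * pj_per_bit

print(f"112G lane:                    {lane_power_mw(LANE_RATE_112G_GBPS, PJ_PER_BIT_112G):.0f} mW")
print(f"224G lane at same pJ/bit:     {lane_power_mw(LANE_RATE_224G_GBPS, PJ_PER_BIT_112G):.0f} mW")
print(f"224G lane at 1/3 lower pJ/bit:{lane_power_mw(LANE_RATE_224G_GBPS, PJ_PER_BIT_224G):.0f} mW")
```

Even with the one-third reduction, per-lane power still rises with the data rate, which is why shortening the electrical link with co-packaged optics matters so much.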
It will be mission-critical to optimize individual analog blocks to reduce impairments. There are novel yet minimalistic analog architectures designed to maximize bandwidth and reduce parasitics and noise distortion. More parallelism will be essential to process the higher speeds, but this will require meticulous design care in terms of architecture, circuit, and layout. Innovative digital signal processing (DSP) can compensate for analog limitations and provide better noise immunity.
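As a simple illustration of that last point (a generic textbook technique, not a description of any particular vendor's DSP architecture), the sketch below applies a short feed-forward equalizer to PAM-4 symbols distorted by an assumed inter-symbol-interference channel, and shows the symbol error rate dropping once the DSP cleans up the signal:

```python
# Minimal, illustrative example of receiver-side DSP compensating for
# analog/channel impairments: a short feed-forward equalizer (FIR filter)
# re-opens a PAM-4 eye closed by inter-symbol interference.
import numpy as np

rng = np.random.default_rng(0)
levels = np.array([-3, -1, 1, 3])                   # PAM-4 symbol levels
tx = rng.choice(levels, size=1000).astype(float)    # random transmit symbols

channel = np.array([0.1, 1.0, 0.35, 0.15])          # assumed ISI channel response
rx = np.convolve(tx, channel, mode="same") + rng.normal(0, 0.05, tx.size)

ffe_taps = np.array([-0.08, 1.0, -0.3, -0.05])      # hand-tuned FFE taps (illustrative)
eq = np.convolve(rx, ffe_taps, mode="same")

def slice_pam4(x):
    """Decide the nearest PAM-4 level for each received sample."""
    return levels[np.argmin(np.abs(x[:, None] - levels[None, :]), axis=1)]

errors_raw = np.mean(slice_pam4(rx) != tx)
errors_eq = np.mean(slice_pam4(eq) != tx)
print(f"symbol error rate without equalization: {errors_raw:.3f}")
print(f"symbol error rate with FFE:             {errors_eq:.3f}")
```

Real 224G receivers use far more sophisticated equalization and clock recovery, but the principle is the same: digital processing recovers margin that the analog front end alone cannot provide.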
Interoperability can pose another challenge if the different sublayers of a design come from different vendors.
As the first company to demonstrate 224G Ethernet PHY IP, Synopsys has valuable insights to offer. At ECOC 2022 in Basel, we demonstrated a 224G link with <1e-6 bit-error rate (BER) over a 20dB+ channel. At this year's DesignCon, we showed wide-open 5nm 224G PAM-4 TX eyes, and at OFC 2023 we demonstrated <7e-8 BER.
Synopsys 224G Ethernet PHY IP, part of the Synopsys High-Speed SerDes IP Portfolio, meets growing high bandwidth and low latency requirements while delivering signal integrity and jitter performance that exceeds the electrical specifications of the IEEE 802.3 and OIF standards. The silicon-proven IP also reduces integration risks, ensures interoperability between digital and mixed-signal layers, and facilitates faster time to market.
As you plan your 224G Ethernet designs (and we anticipate an influx of them in a few years), you can get a head start by working with an IP vendor who can help smooth your path to delivering high-performance data center applications.