
How Does VSR Transcend Physical and Power Limits?

Manuel Mota

Aug 15, 2022 / 7 min read

Picture this scenario: You're getting ready for work but aren't sure how warmly to dress. Shuffling between outfits, you ask your smart home device, "What's the weather like today?" Within a fraction of a second, it responds with the current temperature and the forecast for the rest of the day.

But what really happens on the backend? Your "command" travels as data packets over the internet and across global fiber networks, covering miles to reach one of many data centers that receive, map, and relay the information you need. Now imagine an entire country's population using their smart home devices, streaming Netflix movies, and attending group Zoom meetings at the same time. That is an overwhelming amount of data.

This growth in data traffic has driven demand for faster data network and interface speeds, with a focus on high reach while maintaining lower latency and power than legacy architectures offer. Modern data centers rely heavily on interconnects to deliver this connectivity. As the industry moves to higher transmission speeds, think 100 gigabits per second (Gbps) per lane, communicating over long-reach connections becomes challenging, and power and integration issues turn into bigger bottlenecks.

Thanks to very short-reach (VSR) connectivity, teams can now overcome this pitfall: where the physical distance to cover is short, a VSR link can take the place of a long-reach one, resulting in better power efficiency and simpler power management.

Read on to learn more about VSR connectivity, its advantages, the trends driving this transformation, its use cases, and solutions for overcoming the reach limits of existing copper interconnects.


What is VSR Connectivity?

Simply put, connectivity is the ability to link systems seamlessly and to carry information reliably from system A to system B. A standardized metric for an interface is the distance a signal can travel over the communications channel, known as its "reach." The longer the reach, the more power the interface consumes.

Until recently, copper dominated networks because of its high conductivity, malleability, thermal resistance, and low cost. However, as network speeds and functional complexity increased, overall system-on-chip (SoC) sizes also grew for artificial intelligence (AI), hyperscale data center, and networking applications.

Long-reach (LR) connectivity does not always deliver ideal results. At higher data rates per lane, the distance a traditional long-reach copper interconnect can reliably drive shrinks dramatically. Additionally, the circuitry needed to push signals over longer distances becomes more complex and more costly to design and manufacture.

Imagine you need to connect a server to a switch in a data center. Traditionally, long-reach copper interconnects were used to make that connection. But if you need the switch to run at higher speeds, thicker and denser pipes are needed for more data to pass through.

As the industry moves from 100 to 200 Gbps per lane in the next few years, carrying the signal over an electrical copper interconnect through a PCB interface to the switch becomes an arduous and impractical approach. While it can still be achieved with higher-grade cables or with active cables, not only do insertion loss and power consumption become significant, but mechanical problems such as cable rigidity get worse, making it difficult to access and close the backs of server racks.
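To see why higher lane rates strain copper so much, here is a rough, back-of-the-envelope sketch of PCB trace loss at the Nyquist frequency. The loss coefficients are assumed ballpark values chosen for illustration, not measured figures for any particular material.

```python
import math

# Illustrative estimate of PCB trace loss at the Nyquist frequency.
# The coefficients are assumed ballpark values, not measured laminate data.
SKIN_LOSS_DB_PER_IN_SQRT_GHZ = 0.15   # conductor (skin-effect) loss term, assumed
DIELECTRIC_LOSS_DB_PER_IN_GHZ = 0.08  # dielectric loss term, assumed

def nyquist_ghz(lane_gbps: float) -> float:
    """PAM4 carries 2 bits per symbol, and Nyquist is half the symbol rate."""
    symbol_rate_gbaud = lane_gbps / 2
    return symbol_rate_gbaud / 2

def trace_loss_db(length_in: float, f_ghz: float) -> float:
    """Approximate loss: skin effect grows roughly with sqrt(f), dielectric loss with f."""
    per_inch = (SKIN_LOSS_DB_PER_IN_SQRT_GHZ * math.sqrt(f_ghz)
                + DIELECTRIC_LOSS_DB_PER_IN_GHZ * f_ghz)
    return length_in * per_inch

for lane_gbps in (100, 200):
    f = nyquist_ghz(lane_gbps)
    for length_in in (2, 10):          # short VSR-style hop vs. long LR route
        loss = trace_loss_db(length_in, f)
        print(f"{lane_gbps}G PAM4 lane, {length_in}-inch trace: "
              f"~{loss:.1f} dB at {f:.1f} GHz Nyquist")
```

Even with these optimistic numbers, a long trace at 200 Gbps per lane loses several times more signal than a short hop at 100 Gbps, which is exactly the gap VSR links are meant to avoid.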

Trends Driving VSR and Optics into the Data Center

Compared to their copper counterparts, optical interconnects carry data as light, supporting faster transmission, higher bandwidth, and lower latency and power. The fundamental shift to higher bandwidth and new architectures in the data center is primarily driving the adoption of optical links within the data center and the rack.

In this advanced data generation and processing environment, there are three underlying market trends triggering this transformation:

  1. The Rise in Data Traffic Within Data Centers: Data traffic within the data center alone is growing 5x faster than total internet traffic, and industry reports project it to keep growing at a steady 30% CAGR. For all that data to move between nodes efficiently, it is critical to have denser interconnects between the different hierarchical levels within data centers, from servers and racks down to individual ports. This intra-data-center traffic calls for "fatter" data pipes that carry more data over shorter distances, prompting many teams to prefer optics over traditional copper interconnects.
  2. Flattening of Networks for Low Latency: A typical data center houses around 100,000 servers. For data to move efficiently between those servers, interconnects need to carry traffic with low latency, which means traffic cannot hop through the many tiers of a traditional architecture. Low latency pushes designs toward no more than three layers of switches, flattening the overall network. Because fewer, flatter layers serve the same large number of servers, each network switch grows bigger and must deliver higher bandwidth at lower power, adding more pressure on the switches; the sizing sketch after this list shows why the switch radix grows so quickly.
  3. Aggregation of Homogeneous Resources in the Data Center Rack: There was a time when data centers were organized around hyperconverged servers, where the fundamental building blocks (storage, compute, and memory) were rolled into a single box and connected with copper interconnects. That organization is shifting toward pools of homogeneous resources, a trend known as server disaggregation. In contrast to hyperconverged servers, homogeneous resource pools share and adapt compute, memory, and connectivity for bandwidth steering. This enables platform flexibility and higher utilization, while leveraging very dense optical interconnects with low latency and power.
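As promised in the second trend above, here is a rough sizing sketch of why flattening the network drives up switch size. The formulas are the textbook fat-tree and leaf-spine approximations; real designs use oversubscription and other techniques, so treat this purely as an illustration.

```python
import math

# Simplified topology math: classic fat-tree and leaf-spine approximations.
# Real deployments oversubscribe links, so these figures are only illustrative.
SERVERS = 100_000

# A 3-tier fat tree built from radix-k switches connects about k**3 / 4 servers.
k_three_tier = math.ceil((4 * SERVERS) ** (1 / 3))

# A 2-tier leaf-spine built from radix-k switches connects about k**2 / 2 servers
# (half of each leaf's ports face servers, half face the spine layer).
k_two_tier = math.ceil(math.sqrt(2 * SERVERS))

print(f"3-tier network: switch radix of roughly {k_three_tier} ports")
print(f"2-tier network: switch radix of roughly {k_two_tier} ports")
# Flattening from three tiers to two pushes the radix from ~74 toward ~450 ports;
# at 100G per port that approaches the 51.2 Tbps switch class discussed later.
```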

Pluggable Optical Modules and Co-Packaged Optics

With the above trends driving multiple use cases for optical interconnects, optics is now moving closer to the server and host SoC (known as co-packaged optics). But from an implementation perspective, pluggable modules are more of a reality today.

Pluggable modules do introduce power issues, but those can be addressed by using low-power SerDes in the host SoC and as retimers in active copper cables. With a retimer in the path, the remaining electrical connections can run over the VSR PHY standard, lowering both power and area and removing the need for a power-hungry long-reach interface. Retimers existed in previous-generation switches as well, but the copper signal traces in the PCB and connectors introduced notable insertion loss and limited the channel reach, so additional retimers had to be inserted along the way to compensate.
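To make the retimer trade-off concrete, here is a minimal link-budget sketch. The per-PHY loss budgets are assumed, illustrative numbers rather than values from the OIF or IEEE specifications; the point is simply that a long, lossy copper path forces retimers into the channel, while a short VSR hop does not.

```python
import math

# Minimal link-budget sketch with assumed, illustrative loss budgets.
LR_BUDGET_DB = 28.0    # assumed loss a long-reach (LR) receiver can recover
VSR_BUDGET_DB = 16.0   # assumed loss a very-short-reach (VSR) receiver can recover

def retimers_needed(channel_loss_db: float, per_segment_budget_db: float) -> int:
    """Each retimer restores the signal, splitting the channel into segments."""
    segments = math.ceil(channel_loss_db / per_segment_budget_db)
    return max(segments - 1, 0)

for loss in (12.0, 30.0, 55.0):   # short hop, long PCB route, worst-case path
    print(f"{loss:>4.0f} dB channel: "
          f"LR PHY needs {retimers_needed(loss, LR_BUDGET_DB)} retimer(s), "
          f"VSR PHY needs {retimers_needed(loss, VSR_BUDGET_DB)}")
# The VSR approach keeps the electrical hop short (low loss) so the link fits
# within the smaller budget without stacking up retimers.
```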

VSR connectivity can standardize this interface without the use of retimers. The industry is certainly advancing toward co-packaged optics to decrease power consumption and boost bandwidth density in data center network switches, but it will take a couple of years before broad adoption takes place. Until then, pluggable optical modules connected with VSR links will be the primary way to address optical connectivity requirements. And because these modules are pluggable, it is easier to upgrade network infrastructures to support 400G, 800G, and 1.6T Ethernet.
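For reference, these face-plate rates map onto the per-lane electrical rates discussed above roughly as follows; these are common configurations, not an exhaustive list of standardized variants.

```python
# Common electrical lane configurations behind pluggable module face-plate rates
# (typical examples only, not an exhaustive list of standardized variants).
ETHERNET_LANE_CONFIGS = {
    "400G": "8 x 50G PAM4 or 4 x 100G PAM4",
    "800G": "8 x 100G PAM4",
    "1.6T": "16 x 100G PAM4 or 8 x 200G PAM4",
}

for rate, lanes in ETHERNET_LANE_CONFIGS.items():
    print(f"{rate:>5} Ethernet module: {lanes}")
```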

Use Cases Where VSR Is Beneficial

Let's take the case of a pluggable optical module, as shown in the image below. In an optical module, data arrives over an optical fiber, is converted to an electrical signal, and then must be transmitted toward the host over an electrical connection. These modules are small and tightly built, constrained by both footprint and power budgets, so every single component needs to be power efficient. Compounding these challenges, thermal limits come into play: the small form factor leaves no room for built-in cooling, so wasted power quickly leads to module overheating.

This is a critical area where VSR becomes advantageous. When an LR interconnect is swapped for a VSR link, users save significant power, enabling them to meet stringent power budgets. Today, optical modules are one of the most important areas where VSR is implemented.
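A quick, hypothetical power-budget check shows why that saving matters inside a module. Every figure below is an assumed ballpark number chosen for the arithmetic, not a datasheet value for any particular product.

```python
# Hypothetical power-budget check for an 800G-class pluggable module.
# Every figure is an assumed ballpark number, not a datasheet value.
MODULE_BUDGET_W = 14.0   # assumed total power budget for the module

def module_power_w(dsp_w: float, optics_w: float, host_if_w: float) -> float:
    """Sum of the module's major power consumers."""
    return dsp_w + optics_w + host_if_w

# Same DSP and optics; only the host-side electrical interface changes.
lr_total = module_power_w(dsp_w=6.0, optics_w=4.5, host_if_w=3.5)   # LR-style SerDes
vsr_total = module_power_w(dsp_w=6.0, optics_w=4.5, host_if_w=1.7)  # VSR SerDes

for name, total in (("LR host interface", lr_total), ("VSR host interface", vsr_total)):
    headroom = MODULE_BUDGET_W - total
    print(f"{name}: {total:.1f} W total, {headroom:+.1f} W headroom "
          f"against a {MODULE_BUDGET_W:.0f} W budget")
```

In a package with no room for active cooling, that extra watt or two of headroom is often the difference between a module that stays within its thermal envelope and one that overheats.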

Pluggable optical module (Source: Synopsys)

On the host SoC side of the optical module, things work differently. For a switch SoC, VSR connectivity becomes extremely beneficial for overcoming both area and power bottlenecks. As the industry moves from the 25.6 Tbps switch generation to the next generation at 51.2 Tbps, the die area consumed by long-reach interconnects grows significantly and can quickly push the die beyond what is economical to fabricate.

Even with alternatives like splitting the die and using extra-short-reach (XSR) interconnects between the pieces, total system power remains substantial and impacts overall system performance. Here again, VSR is the knight in shining armor, offering low latency, power efficiency, and high throughput, reducing die area and cutting power consumption by a significant 50%.
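A back-of-the-envelope calculation illustrates the scale. The pJ/bit efficiencies below are assumed, illustrative figures (the roughly 2x gap mirrors the 50% reduction mentioned above), not product specifications.

```python
# Back-of-the-envelope aggregate SerDes power for a 51.2 Tbps switch SoC.
# The pJ/bit efficiencies are assumed, illustrative figures, not product specs.
SWITCH_THROUGHPUT_GBPS = 51_200
LANE_RATE_GBPS = 100
LANES = SWITCH_THROUGHPUT_GBPS // LANE_RATE_GBPS   # 512 electrical lanes

def serdes_power_w(pj_per_bit: float) -> float:
    """Energy per bit times aggregate bit rate gives total SerDes power."""
    return pj_per_bit * 1e-12 * SWITCH_THROUGHPUT_GBPS * 1e9

LR_PJ_PER_BIT = 5.0    # assumed long-reach SerDes efficiency
VSR_PJ_PER_BIT = 2.5   # assumed very-short-reach SerDes efficiency

print(f"{LANES} lanes at {LANE_RATE_GBPS}G per lane:")
print(f"  LR  SerDes total: ~{serdes_power_w(LR_PJ_PER_BIT):.0f} W")
print(f"  VSR SerDes total: ~{serdes_power_w(VSR_PJ_PER_BIT):.0f} W")
```

Saving on the order of a hundred watts per switch SoC, before even counting the matching savings on the module side, is why the interface choice matters so much at this scale.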

Empowering Teams with Complete Optical Integration

With the market opportunities for VSR evolving and its power, performance, and area (PPA) benefits being recognized by the industry, extensive integration support and ecosystem interoperability are key. Synopsys 112G Ethernet PHY IP supports long-, medium-, very short-, and extra-short-reach (LR, MR, VSR, XSR) electrical channels, as well as CEI-112G-Linear and CEI-112G-XSR+ optical interfaces. The power-efficient PHYs deliver superior signal integrity and jitter performance that surpasses the electrical specifications of the IEEE 802.3ck and OIF standards.

Customers now have a comprehensive, compliant solution that is optimized for VSR channels with more than 20 dB of loss at the 28 GHz Nyquist frequency, and one that accounts for the industry's power- or thermal-constrained system bottlenecks. Recognizing that performance is a multi-dimensional challenge, we offer design engineers ongoing integration, test, validation, and system analysis support that addresses key challenges and goes beyond the IP we offer.
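For readers wondering where the 28 GHz figure comes from: on a PAM4 lane, the Nyquist frequency is half the symbol rate, which is itself half the bit rate. A quick check, assuming a round 112 Gbps line rate:

```python
# Where the 28 GHz Nyquist figure comes from, assuming a round 112 Gbps lane rate.
line_rate_gbps = 112.0                     # assumed serial line rate per lane
symbol_rate_gbaud = line_rate_gbps / 2     # PAM4: 2 bits per symbol -> 56 GBd
nyquist_ghz = symbol_rate_gbaud / 2        # half the symbol rate -> 28 GHz

print(f"{line_rate_gbps:.0f} Gbps PAM4 -> {symbol_rate_gbaud:.0f} GBd -> "
      f"{nyquist_ghz:.0f} GHz Nyquist")
```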

To enable interoperability with other transceivers, we also perform exhaustive cross-vendor validation, giving customers greater freedom and confidence when designing their chips. Synopsys' 112G Ethernet PHY IP for VSR is emerging as an ideal solution for 800G optical modules.

Summary

As workload demands and data rates increase, companies thriving in data-intensive fields like cloud computing, e-commerce, and social media that build their own data centers will need VSR connectivity to manage the growth in data traffic. Undoubtedly, VSR connectivity will extend the pluggable market by five years and make better use of the servers we have today. With the amount of data generated and processed growing multifold, VSR is the perfect stopgap to mitigate current copper interconnect challenges while providing a reliable electrical interface between the switch and the front panel.
