
Bandwidth Considerations for PCI Express 3.0 Designs

By Rita Horner, Technical Marketing Manager, PCI Express PHY IP

 

PCI Express (PCIe) is a widely accepted standard adopted across multiple markets. It is used in clients, servers, and storage devices, and increasingly in switches and routers, for chip-to-chip, board-to-board, and even chassis-to-chassis interconnects. Because PCIe supports multiple lanes and can achieve higher bandwidth through lane aggregation, it has become a major player across multiple market segments.

It is critical for PCIe designers to understand the challenges of meeting the industry's increased demand for bandwidth, which is driving higher data rates and higher densities. The PCI Express 3.0 standard raised the supported data rate to 8 Gbps which, combined with more efficient encoding, effectively doubles the bandwidth of the previous 5 Gbps generation. While the data rate was increased, no improvements were imposed on the channel, even though the channel exhibits significantly more loss at 8 Gbps than at 5 Gbps. This was done mainly for ease of adoption, backward compatibility, and high-volume manufacturability.

To compensate for the increased channel loss, the PCIe 3.0 specification requires enhanced equalization in the PHY (physical layer). PCIe designers must understand the channel's bandwidth limiters so that they can implement sufficient equalization in their next-generation designs.

This article examines the challenges of meeting increasing bandwidth demands as well as the physical limitations that constrain bandwidth. Understanding these issues, and why improved equalization is necessary at higher data rates, will enable designers to implement more efficient PCIe 3.0 systems.

Data-intensive applications driving demand for network bandwidth

The increase in demand for higher bandwidth is driven by the ever-growing number of users, user devices, and systems being deployed every day. PCI Express has kept pace with this demand by defining a faster data rate every three to four years. But designing at higher data rates, especially at 8 Gbps, can be quite challenging because of bandwidth limiters such as printed circuit board (PCB) traces, connectors, and even IC packages.

According to Cisco's Visual Networking Index Forecast, global IP traffic increased eightfold over the past five years and will increase fourfold by 2016, implying a compound annual growth rate of 29%. Overall IP traffic is expected to reach 110 exabytes (EB) per month by 2016. (An exabyte is 10^18 bytes, or one million terabytes.) The increase in IP traffic is driven by the growth of a wide range of data-intensive applications such as video, voice, network storage, and even distance learning.

Figure 1: Number of networked devices will be double the size of the entire global population
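As a quick check of the growth arithmetic quoted above, the sketch below (a minimal illustration, not part of the Cisco forecast itself) converts between an overall growth multiple and the compound annual growth rate it implies: a fourfold increase over five years corresponds to roughly 32% CAGR, while a 29% CAGR sustained over five years yields roughly 3.6x, which the forecast rounds to fourfold.

    # Quick check of the growth arithmetic (illustrative only; the traffic
    # figures themselves come from Cisco's VNI forecast).

    def cagr(growth_multiple: float, years: int) -> float:
        """Compound annual growth rate implied by an overall growth multiple."""
        return growth_multiple ** (1.0 / years) - 1.0

    def multiple(cagr_rate: float, years: int) -> float:
        """Overall growth multiple produced by a CAGR over a number of years."""
        return (1.0 + cagr_rate) ** years

    print(f"4x over 5 years   -> CAGR ~ {cagr(4, 5):.1%}")           # ~31.9%
    print(f"29% CAGR, 5 years -> ~{multiple(0.29, 5):.1f}x growth")  # ~3.6x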

This bandwidth growth is not limited to the high-end networking market; it spans the entire networking infrastructure, including consumer and business applications. Eighty-eight percent of 2016 traffic is expected to be consumer traffic, which largely means internet video. Internet video streaming and downloads continue to take a large share of the bandwidth and already comprise half of all consumer internet traffic. Greater bandwidth demand translates into demand for higher data rates, higher performance, and higher densities across the entire network infrastructure, because higher speeds and greater densities are what enable designers to deliver that bandwidth.

Network infrastructure bandwidth

Higher data rate standards are being adopted across the entire network infrastructure, from the client level (at the bottom of Figure 2) to the core backbone layers. Higher data rates are needed not only for interconnecting high-end systems and boxes, but all the way down to the line cards and interconnects across the access layers, effectively touching the consumer application space.

Figure 2: Increased bandwidth across the entire network infrastructure 

The clients, servers, and switches at the bottom of Figure 2, running at 1 Gigabit Ethernet (GE) today, feed the 10 GE, 40 GE, and 100 GE systems that connect to the core. The 1 GE port adoption rate has already started its downward ramp as 10 GE grows quickly; Dell'Oro Group forecasts 10 GE port shipments to grow at a CAGR of almost 50% over the next five years. As clients, servers, and switches migrate from 1 GE to 10 GE, the higher layers of the network infrastructure will also migrate to higher data rates to meet the increased bandwidth demand.

PCI Express is used in almost everything that connects to the access layer, as shown in Figure 2. In storage, servers, and switches, almost every application has a PCIe interface, whether through a host bus adapter card, LAN on Motherboard (LOM), or a network interface card (NIC).

PCI Express bandwidth doubling with each generation

As shown in Table 1, the PCI Express specification is keeping pace with the industry's increasing bandwidth demands.

From PCIe 1.x at 2.5 Gbps, the specification doubled to PCIe 2.x at 5 Gbps, enabling 500 MBps per lane in each direction; a 16-lane PCIe 2.x link offers an aggregate transfer rate of 16 GBps. PCIe 3.0 doubles PCIe 2.x's transfer rate, delivering roughly 1 GBps per lane, or 32 GBps in a 16-lane configuration. Driven by the industry's demand for still higher bandwidth, PCI-SIG announced the start of the PCIe 4.0 specification in November 2011. PCIe 4.0 is slated to run at 16 GT/s (gigatransfers per second) and is targeted for release in late 2014 to early 2015.

Table 1: PCIe bandwidth doubling every 3 to 4 years
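The per-lane and per-link figures above follow directly from each generation's signaling rate and encoding overhead. The sketch below (purely illustrative) reproduces that arithmetic; note that the 16 GBps and 32 GBps x16 totals count both directions of the link, the usual convention when quoting PCIe bandwidth.

    # Effective PCIe throughput from signaling rate and encoding overhead.
    # Figures match Table 1; the x16 totals count both directions of the link.

    def lane_throughput_GBps(rate_GTps: float, payload_bits: int, total_bits: int) -> float:
        """Usable bytes per second per lane, per direction."""
        return rate_GTps * payload_bits / total_bits / 8.0

    gens = [
        ("PCIe 1.x", 2.5, 8, 10),     # 8b/10b encoding
        ("PCIe 2.x", 5.0, 8, 10),     # 8b/10b encoding
        ("PCIe 3.0", 8.0, 128, 130),  # 128b/130b encoding
    ]

    for name, rate, payload, total in gens:
        per_lane = lane_throughput_GBps(rate, payload, total)
        x16_both_dirs = per_lane * 16 * 2
        print(f"{name}: {per_lane:.2f} GBps per lane per direction, "
              f"~{x16_both_dirs:.0f} GBps aggregate for a x16 link")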

Bandwidth limiters at high PCI Express data rates

Copper loss increases with signal frequency, as shown in Figure 3. Higher data rates mean higher loss, which translates into shorter achievable transmission distances. Even a moderate printed circuit board (PCB) trace length on the same PCB material exhibits increased insertion loss at higher frequencies, creating signal integrity (SI) problems. These SI issues include amplitude and phase distortion and inter-symbol interference (ISI), which close the eye of a signal.

Figure 3: PCB trace response: copper loss vs. signal frequency
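A common first-order model treats trace loss as a skin-effect term growing with the square root of frequency plus a dielectric term growing linearly with frequency. The sketch below uses purely illustrative coefficients (they are not taken from Figure 3, and real values depend on the PCB material and geometry) simply to show why moving from 2.5 Gbps to 8 Gbps substantially increases the loss a trace presents at the signal's Nyquist frequency.

    import math

    # First-order PCB trace loss model: skin-effect loss grows with sqrt(f),
    # dielectric loss grows linearly with f. The coefficients are illustrative
    # placeholders, not values measured from Figure 3.
    SKIN_COEFF_DB = 0.2        # dB per inch per sqrt(GHz)  (hypothetical)
    DIELECTRIC_COEFF_DB = 0.1  # dB per inch per GHz        (hypothetical)

    def trace_loss_db(freq_ghz: float, length_in: float) -> float:
        per_inch = SKIN_COEFF_DB * math.sqrt(freq_ghz) + DIELECTRIC_COEFF_DB * freq_ghz
        return per_inch * length_in

    # NRZ signaling puts the fundamental (Nyquist) frequency at half the data rate.
    for rate_gbps in (2.5, 5.0, 8.0):
        nyquist_ghz = rate_gbps / 2.0
        print(f"{rate_gbps} Gbps -> Nyquist {nyquist_ghz} GHz, "
              f"~{trace_loss_db(nyquist_ghz, 20):.1f} dB over a 20-inch trace")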

Bandwidth limiters on PCBs

Most traces on a PCB are not isolated; they have neighboring signals. An originally clean signal can therefore be distorted by crosstalk from adjacent signals. Crosstalk grows roughly linearly with the length of trace running in parallel with its neighboring aggressor. Even at a relatively low speed of 2.5 Gbps, crosstalk begins to cause some distortion (Figure 4), and as the data rate increases to 5 Gbps the crosstalk impact on the signal increases.

Figure 4: Crosstalk effects at 2.5 and 5.0 Gbps

As shown in Figure 5, differential crosstalk can be reduced by increasing the aggressor distance, the distance between the two traces (Figure 6).

Figure 5: Increasing the aggressor distance reduces crosstalk

Figure 6: Aggressor distance: The distance between the differential pair and the aggressor 

While crosstalk is a limiting factor, it is manageable, to a point. The cost is in increasing the aggressor distance: a greater distance means a larger trace area and lower signal density, and not every design can afford that increase.
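The two levers discussed above, parallel run length and aggressor spacing, can be captured in a toy model: coupled noise grows roughly linearly with the coupled length until it saturates, and falls off quickly as the aggressor is moved away (the spacing roll-off below uses a common rule of thumb). Every constant in this sketch is a hypothetical placeholder; real numbers require field-solver analysis or measurement of the actual stack-up.

    # Toy crosstalk model reflecting the trends described above. All constants
    # are hypothetical; real values depend on the stack-up geometry.

    def crosstalk_fraction(parallel_len_in: float,
                           spacing_mils: float,
                           height_mils: float = 5.0,        # trace height over reference plane (assumed)
                           saturation_len_in: float = 4.0,  # length where coupling stops growing (assumed)
                           k_max: float = 0.06) -> float:   # peak coupling at minimum spacing (assumed)
        # Rule-of-thumb spacing roll-off: coupling ~ 1 / (1 + (s/h)^2)
        spacing_factor = 1.0 / (1.0 + (spacing_mils / height_mils) ** 2)
        # Coupling grows roughly linearly with coupled length until saturation.
        length_factor = min(parallel_len_in, saturation_len_in) / saturation_len_in
        return k_max * spacing_factor * length_factor

    for spacing in (5, 10, 20):  # mils between victim and aggressor
        x = crosstalk_fraction(parallel_len_in=3.0, spacing_mils=spacing)
        print(f"spacing {spacing:>2} mils -> ~{x * 100:.1f}% of aggressor swing coupled")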

Crosstalk in a backplane environment

A backplane environment is a more complex system, as shown in Figure 7. The complete channel starts with a line card trace from which the signal is launched, passes through an edge connector to the backplane trace, goes through a second edge connector, and ends with another line card trace where the receiving integrated circuit (IC) resides. The backplane channel has bandwidth limiters beyond the PC board traces alone. These include the IC package vias connecting the packages to the line cards, the line card and backplane board-to-connector vias, and the backplane connectors themselves, each of which can cause dispersion, crosstalk, or reflection. At the channel input, the output of the transmitting IC (TX), the eye is wide open. But as the signal propagates through the channel, it experiences dispersion through the PCB traces, resulting in loss and an output eye that may effectively be closed.

Another limiter is the crosstalk caused by adjacent signals on the PCB traces, within the connector pins, or inside the IC packages, which makes it important to maintain proper differential impedance through the connectors. Crosstalk and frequency-dependent losses cause signal integrity issues such as ISI. In addition, reflections from via stubs, signal amplitude distortion, and dispersion can all increase ISI.

Figure 7: Complex backplane environment includes multiple potential crosstalk locations
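One way to see why the eye arrives closed is to sum the insertion loss of each element in the channel just described. The budget below is a minimal sketch with hypothetical per-element numbers (a real budget would come from S-parameter models of each element at the operating Nyquist frequency), but it shows how the individual contributions accumulate.

    # Illustrative end-to-end loss budget for the backplane channel described
    # above. Every number is a hypothetical placeholder.

    channel_elements_db = [
        ("TX package and via",           1.5),
        ("line card trace, TX side",     3.0),
        ("line-card-to-connector via",   0.5),
        ("backplane connector #1",       1.0),
        ("backplane trace",             12.0),
        ("backplane connector #2",       1.0),
        ("connector-to-line-card via",   0.5),
        ("line card trace, RX side",     3.0),
        ("RX package and via",           1.5),
    ]

    total_db = sum(loss for _, loss in channel_elements_db)
    # Convert the total insertion loss to the fraction of launch amplitude left.
    remaining = 10 ** (-total_db / 20.0)
    print(f"Total channel loss: {total_db:.1f} dB "
          f"(~{remaining * 100:.0f}% of the launched amplitude remains)")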

Figure 8 walks through the common locations of signal reflection and dispersion in a backplane.

  1. The fast edge rate of the signal as it is launched on the first line card triggers the first spike on the reflected-pulse plot, even with only minor impedance discontinuities on the PCB. This spike results from package loss and the reflection at the package-to-PCB via.
  2. As the signal is launched into the backplane connector at the edge of the first line card, the losses from the line card PCB trace and the reflection at the line-card-to-connector via trigger a second spike in the reflected pulses.
  3. As the signal passes through the first backplane connector onto the backplane board, another spike occurs due to the reflection at the connector-to-backplane via.
  4. The signal then travels across the backplane, where dispersion in the backplane PCB trace causes significant loss.
  5. As the signal enters the second backplane connector at the far end of the backplane, two more reflected pulses occur, caused by the backplane-to-connector via and the connector-to-line-card via. These pulses are smaller than the earlier ones because the signal edges are no longer as fast as they were at the initial launch.

Figure 8: Common locations of backplane signal reflection and dispersion

PCI Express 3.0 standard enhancements address bandwidth limiters

PCI Express is a widely adopted standard that can take advantage of low-cost PCB materials and connectors. While the bandwidth limitations discussed so far can be mitigated by using lower-loss PCB materials and connectors, those options may be cost-prohibitive for certain applications.

The PCIe 3.0 standard definition strives to address these bandwidth limiters without requiring the high-end connectors or exotic PCB materials that would otherwise be needed to improve overall channel performance.

PCIe 3.0 uses 128b/130b encoding with data scrambling for DC balance, in place of the 8b/10b encoding used in the previous two generations. This reduces the encoding overhead from 20% to about 1.5%, so an 8 Gbps link delivers roughly the payload bandwidth that a 10 Gbps link would need with 8b/10b encoding, while keeping the signaling rate, and therefore the frequency-dependent channel loss, lower.
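A minimal check of that encoding arithmetic (per-lane payload rates only; protocol overhead such as packet headers is ignored):

    # Payload bandwidth comparison: 8b/10b vs. 128b/130b encoding.

    def payload_gbps(rate_gtps: float, payload_bits: int, total_bits: int) -> float:
        return rate_gtps * payload_bits / total_bits

    pcie2 = payload_gbps(5.0, 8, 10)              # PCIe 2.x: 5 GT/s, 8b/10b
    pcie3 = payload_gbps(8.0, 128, 130)           # PCIe 3.0: 8 GT/s, 128b/130b
    hypothetical_10g = payload_gbps(10.0, 8, 10)  # what 8b/10b would need to match

    print(f"PCIe 2.x payload:    {pcie2:.2f} Gbps per lane")             # 4.00
    print(f"PCIe 3.0 payload:    {pcie3:.2f} Gbps per lane")             # 7.88
    print(f"10 GT/s with 8b/10b: {hypothetical_10g:.2f} Gbps per lane")  # 8.00
    print(f"8b/10b overhead: {1 - 8/10:.0%}, 128b/130b overhead: {1 - 128/130:.1%}")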

The PCI Express 3.0 standard also adds enhancements to the transceiver (transmitter and receiver) equalization requirements, including an equalization training algorithm and the need for adaptive equalization. These enhancements enable PCI Express 3.0 adoption while minimizing the impact on material cost budgets.
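As an illustration of what transmitter equalization does, the sketch below applies a 3-tap FIR (pre-cursor, cursor, post-cursor), the general structure used by PCIe 3.0 transmitters, to an NRZ symbol stream. The tap weights here are illustrative rather than any specific PCIe 3.0 preset; in a real link the presets and final coefficients are negotiated during the equalization training described above.

    # Minimal sketch of a 3-tap transmit FIR equalizer applied to NRZ symbols.
    # Tap weights are illustrative, not a PCIe 3.0 preset.

    def tx_fir(symbols, c_pre=-0.1, c_main=0.7, c_post=-0.2):
        """Return pre-distorted launch amplitudes for +/-1 NRZ symbols."""
        out = []
        for i in range(len(symbols)):
            pre = symbols[i + 1] if i + 1 < len(symbols) else 0  # upcoming symbol (pre-cursor)
            post = symbols[i - 1] if i - 1 >= 0 else 0           # previous symbol (post-cursor)
            out.append(c_pre * pre + c_main * symbols[i] + c_post * post)
        return out

    bits = [1, 1, 1, -1, 1, -1, -1, 1]  # NRZ symbols (+1 / -1)
    print(tx_fir(bits))
    # Bits at a transition keep nearly full amplitude while repeated bits are
    # attenuated (pre-shoot/de-emphasis), boosting the high-frequency content
    # that the channel attenuates most.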

Conclusion

The continual increase in bandwidth demand has created challenges for both bandwidth and signal integrity. While the PCI Express 3.0 standard offers enhancements to address them, designers will need PHY performance that meets and exceeds the base specification while maintaining interoperability across different channels.

The multi-channel Synopsys PHY IP for PCI Express 3.0 includes Synopsys' high-speed, high-performance transceiver to meet today's applications' demands for higher bandwidth. The PHY provides a cost-effective solution designed to meet the needs of today's PCIe designs while remaining extremely low in power and area.

Using leading-edge design, analysis, simulation, and measurement techniques, Synopsys' PCI Express 3.0 PHY IP delivers exceptional signal integrity and jitter performance that exceeds the PCI Express standard's electrical specifications. The PHY IP reduces both product development cycles and the need for costly field support by employing internal test features. The multi-tap transmitter and receiver equalizers, along with advanced built-in diagnostics and ATE test vectors, enable customers to control, monitor, and test for signal integrity without the need for expensive test equipment.

As the leading provider of PCI Express IP, Synopsys offers a complete PCI Express 3.0 IP solution, including digital controllers, PCIe 3.0 PHY, and verification IP from a single vendor. Accessing all the IP from one provider allows designers to lower the risk and cost of integrating the 8.0 Gbps PCI Express interface into their high-performance SoC designs.