
Which SDRAM Standard to Use and When

Vadhiraj Sankaranarayanan, Sr. Technical Marketing Manager, Synopsys

The primary function of a memory subsystem is to feed the host (CPU or GPU) the necessary data or instructions as quickly and reliably as possible, in a wide range of applications from cloud computing and artificial intelligence (AI) to automotive and mobile. System-on-chip (SoC) designers have several categories of memory technologies to choose from, each with distinct characteristics and advanced features. Double Data Rate (DDR) Synchronous Dynamic Random-Access Memory (SDRAM) has emerged as the de facto technology for main system memory due to its high density, simple capacitor-based storage cell, low latency, high performance, virtually unlimited access endurance, and low power.

Selecting the right memory technology is often the most critical decision for achieving optimal system performance. This article describes the different memory technologies to help SoC designers select the memory solution that best fits their application requirements.

DDR DRAM Standards

Designers continue to add more cores and functionality to their SoCs; however, increasing performance while keeping power consumption low and the silicon footprint small remains a vital goal. DDR SDRAMs (DRAMs for short) meet these memory requirements by offering a dense, high-performance, and low-power memory solution, either on a dual in-line memory module (DIMM) or as a discrete DRAM solution. JEDEC has defined and developed the following three categories of DRAM standards to help designers meet the power, performance, and area requirements of their target applications:

  • Standard DDR targets servers, cloud computing, networking, laptop, desktop, and consumer applications, allowing wider channel-widths, higher densities, and different form-factors. DDR4 has been the most popular standard in this category since 2013; DDR5 devices are expected to become available in the near future.  

  • Mobile DDR targets the mobile and automotive segments, which are very sensitive to area and power, offering narrower channel-widths and several low-power operating states. The de facto standard today is LPDDR4 with LPDDR5 devices expected in the near future.

  • Graphics DDR targets data-intensive applications requiring a very high throughput, such as graphics-related applications, data center acceleration, and AI. Graphics DDR (GDDR) and High Bandwidth Memory (HBM) are the standards in this category.

The above three DRAM categories use the same DRAM array for storage, with a capacitor as the basic storage element. However, each category offers unique architectural features to optimally meet the requirements of its target applications. These features include data-rate and data-width customizations, connectivity options between the host and the DRAMs, electrical specifications, termination schemes for the I/Os (inputs/outputs), DRAM power states, reliability features, and more. Figure 1 illustrates JEDEC's three categories of DRAM standards.

Figure 1: JEDEC has defined three categories of DRAM standards to fit the design requirements of various applications
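
To make the wide-versus-fast trade-off behind these categories concrete, here is a minimal back-of-the-envelope sketch in Python, illustrative only, that computes the theoretical peak bandwidth of a single channel from representative channel widths and per-pin rates cited in this article. Real parts span wider ranges, and sustained bandwidth is always lower than these theoretical peaks.

    # Rough per-channel peak-bandwidth comparison using representative figures
    # cited in this article (illustrative only; ignores protocol overhead).
    categories = {
        # name                   (channel width in bits, per-pin rate in Gbps)
        "Standard DDR (DDR4)":   (64, 3.2),
        "Mobile DDR (LPDDR4)":   (16, 4.267),
        "Graphics DDR (GDDR6)":  (32, 16.0),
        "HBM2 (per channel)":    (128, 2.4),   # 8 such channels per stack
    }

    for name, (width_bits, rate_gbps) in categories.items():
        peak_gb_s = width_bits * rate_gbps / 8  # bits/s -> bytes/s
        print(f"{name}: {width_bits}-bit channel @ {rate_gbps} Gbps/pin ~ {peak_gb_s:.1f} GB/s")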

Standard DDR

Standard DDR DRAMs are ubiquitous in applications such as enterprise servers, data centers, laptops, desktops, and consumer devices, providing high density and performance. DDR4 is the most popular standard in this category, offering several advantages over its predecessors, DDR3 and DDR3L (a low-power version of DDR3):

  • Higher data rate, up to 3200Mbps, as compared to DDR3 operating up to 2133Mbps

  • Lower operating voltage (1.2V, as compared to 1.5V in DDR3 and 1.35V in DDR3L)

  • Higher-performance features (e.g., bank groups), lower-power features (e.g., data-bus inversion), and improved Reliability, Availability, and Serviceability (RAS) features (e.g., post-package repair and data cyclic redundancy check)

  • Higher densities due to an increase in individual DRAM die densities from 4Gb to 8Gb and 16Gb

DDR5, actively under development at JEDEC, is expected to increase operating data rates up to 4800Mbps at an operating voltage of 1.1V. DDR5 includes several new architectural and RAS features to handle these high speeds effectively and to minimize system downtime due to memory errors. Integrated voltage regulators on modules, better refresh schemes, an architecture targeted at better channel utilization, internal error-correcting code (ECC) on the DRAMs, more bank groups for higher performance, and higher capacity are a few of the key DDR5 features.
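
For a rough sense of what these generational data-rate increases mean at the interface level, the following back-of-the-envelope sketch, illustrative only and ignoring refresh, protocol overhead, and controller efficiency, computes the theoretical peak bandwidth per 64 bits of standard DDR interface at the rates cited above:

    # Theoretical peak bandwidth per 64 bits of standard DDR interface at the
    # top per-pin rates cited above (back-of-the-envelope figure only).
    CHANNEL_WIDTH_BITS = 64

    for name, rate_mtps in [("DDR3", 2133), ("DDR4", 3200), ("DDR5", 4800)]:
        peak_gb_s = rate_mtps * CHANNEL_WIDTH_BITS / 8 / 1000
        print(f"{name}-{rate_mtps}: ~{peak_gb_s:.1f} GB/s per 64-bit interface")
    # Prints roughly 17.1, 25.6, and 38.4 GB/s: each generation raises channel
    # bandwidth while lowering the operating voltage.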

Mobile DDR

Compared to standard DDR DRAMs, Mobile DDRs, also called Low-Power DDR (LPDDR) DRAMs, have several additional features for cutting power, which is a key requirement for mobile and battery-operated applications such as tablets, mobile phones, and automotive systems, as well as SSDs. LPDDR DRAMs can run faster than standard DRAMs to achieve high performance, and they offer low-power states that help deliver power efficiency and extend battery life.

LPDDR DRAM channels are typically 16 or 32 bits wide, in contrast to standard DDR DRAM channels, which are 64 bits wide. Just as with standard DRAM generations, each successive LPDDR generation targets higher performance and lower power than its predecessor, and no two LPDDR generations are compatible with one another.

LPDDR4 is the most popular standard in this category, capable of data rates up to 4267Mbps at an operating voltage of 1.1V. LPDDR4 DRAMs are typically dual-channel devices, supporting two x16 (16-bit wide) channels. Each channel is independent and hence has its own dedicated Command/Address (C/A) pins. The two-channel architecture gives system architects flexibility when connecting the SoC host to an LPDDR4 DRAM.
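
As a quick illustration, assuming a single dual-channel x16 LPDDR4 device at the 4267Mbps rate mentioned above, the aggregate theoretical peak bandwidth works out as follows (SoCs often combine multiple such devices or channels for a wider total interface):

    # Aggregate peak bandwidth of one dual-channel (2 x 16-bit) LPDDR4 device
    # at the 4267 Mbps per-pin rate cited above (illustrative only).
    channels = 2
    bits_per_channel = 16
    rate_mbps = 4267  # per pin

    peak_gb_s = channels * bits_per_channel * rate_mbps / 8 / 1000
    print(f"Dual x16 LPDDR4-{rate_mbps}: ~{peak_gb_s:.1f} GB/s")  # ~17.1 GB/s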

LPDDR4X, a variant of LPDDR4, is identical to LPDDR4 except that additional power savings are obtained by reducing the I/O voltage (VDDQ) from 1.1V to 0.6V. LPDDR4X devices can also run up to 4267Mbps.
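
A rough, first-order way to see the benefit of the lower VDDQ: I/O switching power scales approximately with C*V^2*f, so at the same data rate and load capacitance the I/O term scales with the square of the voltage ratio. The sketch below is only a rough bound under that assumption; it ignores core (VDD) power, termination, and leakage.

    # First-order estimate of the I/O switching-power savings from dropping
    # VDDQ from 1.1 V (LPDDR4) to 0.6 V (LPDDR4X), assuming power ~ C*V^2*f.
    vddq_lpddr4 = 1.1   # volts
    vddq_lpddr4x = 0.6  # volts

    scale = (vddq_lpddr4x / vddq_lpddr4) ** 2
    print(f"LPDDR4X I/O switching power: ~{scale:.0%} of LPDDR4")  # ~30%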

LPDDR5, the successor to LPDDR4/4X, is expected to run up to 6400Mbps and is actively under development at JEDEC. LPDDR5 DRAMs are expected to provide many new low-power and reliability features, making them ideal for mobile and automotive applications. One such important low-power feature for extending battery life is the "deep sleep mode," which is expected to provide substantial power savings during idle conditions. In addition, several new architectural features allow LPDDR5 DRAMs to operate seamlessly at these high speeds at a lower operating voltage than LPDDR4/4X.

Graphics DDR

The two disparate memory architectures targeting high-throughput applications, such as graphics cards and AI, are GDDR and HBM.

GDDR Standard

GDDR DRAMs are specifically designed for graphics processing units (GPUs) and accelerators. Graphics cards, game consoles, and high-performance computing systems for automotive, AI, and deep learning are a few of the applications where GDDR DRAM devices are commonly used. The GDDR standards (GDDR5, GDDR5X, and GDDR6) are architected as point-to-point (P2P) interfaces, capable of supporting up to 16Gbps. GDDR5 DRAMs, always used as discrete DRAM solutions and capable of supporting up to 8Gbps, can be configured to operate in either x32 mode or x16 (clamshell) mode, which is detected during device initialization. GDDR5X targets a transfer rate of 10 to 14Gbps per pin, almost twice that of GDDR5. The key difference between GDDR5X and GDDR5 DRAMs is that GDDR5X DRAMs have a prefetch of 16N instead of 8N. GDDR5X also uses 190 pins per chip, compared to 170 in GDDR5, so the two standards require different PCBs. GDDR6, the latest GDDR standard, supports a higher data rate, up to 16Gbps, at a lower operating voltage of 1.35V, compared to 1.5V in GDDR5.
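
For context, the per-device theoretical peak bandwidth at these per-pin rates in the x32 configuration works out roughly as follows. This is illustrative only; GDDR5X is shown at the upper end of its 10 to 14Gbps range, and graphics cards reach their headline numbers by placing many such devices around the GPU.

    # Theoretical peak bandwidth per GDDR device in x32 mode at the per-pin
    # rates cited above (GDDR5X at the upper end of its quoted range).
    WIDTH_BITS = 32

    for name, rate_gbps in [("GDDR5", 8), ("GDDR5X", 14), ("GDDR6", 16)]:
        peak_gb_s = WIDTH_BITS * rate_gbps / 8
        print(f"{name} x32 @ {rate_gbps} Gbps/pin: ~{peak_gb_s:.0f} GB/s per device")
    # ~32, ~56, and ~64 GB/s per device, respectively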

HBM/HBM2 Standards

HBM is an alternative to GDDR memories for GPUs and accelerators. GDDR memories deliver throughput through higher per-pin data rates on narrower channels, while HBM memories solve the same problem with eight independent channels and a much wider data path per channel (128 bits per channel), operating at lower per-pin speeds of around 2Gbps. For this reason, HBM memories provide high throughput at lower power and in a substantially smaller area than GDDR memories. HBM2 is the most popular standard in this category today, supporting data rates up to 2.4Gbps.
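
A quick calculation shows how this wide-and-slow approach pays off: at the 2.4Gbps per-pin rate, the eight 128-bit channels of a single HBM2 stack form a 1024-bit interface with a theoretical peak of roughly 307GB/s (illustrative only, ignoring protocol overhead).

    # Peak bandwidth of one HBM2 stack at the 2.4 Gbps per-pin rate cited
    # above: 8 independent channels x 128 bits each = a 1024-bit interface.
    channels = 8
    bits_per_channel = 128
    rate_gbps = 2.4  # per pin

    peak_gb_s = channels * bits_per_channel * rate_gbps / 8
    print(f"One HBM2 stack: ~{peak_gb_s:.0f} GB/s")  # ~307 GB/s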

HBM2 DRAMs stack up to eight DRAM dies, including an optional base die, offering a small silicon footprint. The dies are interconnected using through-silicon vias (TSVs) and micro-bumps. Commonly available densities include 4GB or 8GB per HBM2 package.

Besides supporting a higher number of channels, HBM2 also provides several architectural changes to boost performance and reduce bus congestion. For example, HBM2 has a 'pseudo channel' mode, which splits every 128-bit channel into two semi-independent sub-channels of 64 bits each; the sub-channels share the channel's row and column command buses but execute commands individually. Increasing the number of channels also increases overall effective bandwidth, because restrictive timing parameters such as tFAW apply per channel, allowing more banks to be activated per unit time. Other features in the standard include optional ECC support, providing 16 error-detection bits per 128 bits of data.

HBM3 is expected to hit the market in a few years and provide higher density, greater bandwidth (512GB/s), lower voltage, and lower cost.

Table 1 shows a high-level comparison of GDDR6 and HBM2 DRAMs:

Table 1: GDDR6 and HBM2 offer unique advantages for system architects

Summary

To provide a wide selection of DRAM technologies with unique features and benefits, JEDEC has defined and developed three main categories of DDR standards: standard DDR, mobile DDR, and graphics DDR. Standard DDR targets server, data center, networking, laptop, desktop, and consumer applications, allowing wider channel-widths, higher densities, and different form-factors. Mobile DDR, or LPDDR, targets mobile and automotive applications, which are very sensitive to area and power, offering narrower channel-widths and several low-power DRAM states. Graphics DDR targets data-intensive applications requiring very high throughput. JEDEC has defined GDDR and HBM as the two graphics DDR standards. SoC designers can select from a variety of memory solutions or standards to meet the needs of their target applications, and the selected memory solution impacts the performance, power, and area of their SoC.

Synopsys' DesignWare® Memory Interface IP solutions, with silicon-proven PHYs and controllers, support the latest DDR, LPDDR, and HBM standards. Synopsys is an active member of the JEDEC work groups, driving the development and adoption of the standards. Synopsys' memory interface IP solutions can be configured to meet the exact requirements of SoCs for a wide range of applications including AI, automotive, mobile, and cloud computing.