
Why DTCO is Critical to Modern Memory Design Techniques

Anand Thiruvengadam, Ricardo Borges

Aug 24, 2022 / 5 min read

As new technology nodes have become available, memory applications have aggressively adopted advanced process technology to meet continually strong demand for memory from an array of electronic devices. With each new node, memory capacity has grown dramatically, and performance per watt has increased.

In adopting new technologies, memory designers have been able to move forward with confidence that their products will be denser as well as faster. Given the custom nature of memory design, teams have handcrafted new cells, cell arrays, and the sensing and control circuits on the periphery, with fairly predictable results.

In addition to scaling to new nodes, the world of memory has seen many other innovations. Can you imagine today's electronic devices without multiple generations of double data rate (DDR) technology or content-addressable memory (CAM) for caches? Developing new memories has generally happened independently of process development, yet as new technologies were adopted, memories stayed at the leading edge of semiconductor development.

However, today's trend of increasing chip complexity in our deep submicron age has not bypassed memory. Given this, there's a need for much closer cooperation between the design and process teams to drive continued improvements in memory density and performance. In this blog post, adapted from a previously published article, we discuss the need for memory design technology co-optimization (DTCO).


What's Driving the Change in Memory Design?

Several factors are driving the changes we're seeing in memory design:

  • As Moore's law slows, scaling on its own can no longer deliver the regular, predictable benefits that it has in the past
  • The end of Dennard scaling brings changes of its own: early design/architectural optimization, more detailed optimization of physical layout design rules, and development of new process recipes
  • Bitline and wordline parasitics have an increased effect in DRAM arrays; the associated need to maintain sufficiently high storage capacitor values drives higher-aspect-ratio capacitor structures and the integration of materials with higher dielectric constants (see the capacitor sketch after Figure 1)
  • Cell capacitance, cell contact resistance, and row hammer effects have made DRAM scaling more challenging
  • Process variability, which reduces the design margin for the sensing circuits, is having a larger effect on the scaling of DRAM and NAND periphery
  • As shown in Figure 1, the number of layers in 3D NAND devices keeps growing: currently around 200, with projections of more than 500. This growth brings new challenges to the high-aspect-ratio etching processes used to define the vertical channels and is driving research into process techniques that improve channel conductivity

Figure 1: Growth in number of layers in NAND memories
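
To make the storage-capacitor bullet above concrete, here is a back-of-the-envelope sketch in Python. It treats the DRAM cylinder capacitor as a parallel plate wrapped around the pillar's lateral surface and asks how tall the pillar must be to hold a target cell capacitance; the radii, dielectric thickness, dielectric constants, and the ~10 fF target are all illustrative assumptions, not process data.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m


def height_for_target(c_target_f, radius_nm, t_diel_nm, k):
    """Pillar height (nm) needed to reach a target capacitance, using a
    parallel-plate approximation over the pillar's lateral surface:
    C ~ eps0 * k * (2 * pi * r * h) / t_dielectric."""
    area_needed = c_target_f * (t_diel_nm * 1e-9) / (EPS0 * k)
    return area_needed / (2 * math.pi * radius_nm * 1e-9) * 1e9


C_TARGET = 10e-15  # assumed cell capacitance target, ~10 fF
for radius_nm, k in [(40, 25), (25, 25), (25, 60)]:  # shrink, then higher-k
    h = height_for_target(C_TARGET, radius_nm, t_diel_nm=5, k=k)
    print(f"r={radius_nm} nm, k={k}: height ~{h:.0f} nm, "
          f"aspect ratio ~{h / (2 * radius_nm):.0f}:1")
```

Shrinking the pillar radius pushes the required aspect ratio up sharply, while a higher-k dielectric pulls it back down, which is exactly the trade-off the bullet describes.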

The technology-design gap that has emerged from these effects is now producing suboptimal devices and process recipes, suboptimal memory performance, and late-stage design changes that stretch time-to-market (TTM). To minimize this gap, memory designers need to optimize materials, processes, and device structures together. Doing so will only become more important with emerging memory technologies such as resistive random-access memory (RRAM), phase-change memory (PCM), magnetoresistive RAM (MRAM), and ferroelectric RAM (FeRAM).

DTCO Drives Closer Collaboration Between Process and Circuit Development

What's needed for effective memory design is DTCO, which drives a much closer collaboration between process and circuit development. Ideally, a memory DTCO flow should simulate the impact of process variability on the critical high-precision analog circuits in the memory periphery, such as the sense amplifiers. An optimal flow encompasses these phases:

  • Transistor modeling – technology computer-aided design (TCAD) simulates the fabrication process with its variability sources, followed by simulation of the transistor electrical characteristics and generation of data for subsequent extraction of a SPICE model
  • Parasitic extraction – a 3D representation of the circuit is created, using as inputs a description of the interconnect process flow and a layout of the circuit element (for example, a sense amplifier), and is fed to a parasitic field solver that extracts a circuit netlist annotated with RC parasitics
  • SPICE simulation – the SPICE model and annotated netlist are simulated, with variability modeling capabilities used to compute variation-aware circuit metrics (illustrated in the sketch after this list)
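
The variation-aware step in the last phase can be pictured with a small Monte Carlo sketch. A production flow would drive a SPICE simulator with mismatch models extracted from TCAD; the Python sketch below only mimics the statistics, applying a Pelgrom-style threshold-voltage mismatch that grows as device area shrinks and propagating it to a simplified sense-amplifier input offset. The Pelgrom coefficient, bitline signal, and device sizes are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

A_VT = 2.5e-3     # assumed Pelgrom mismatch coefficient, V*um
SIGNAL_MV = 60.0  # assumed differential bitline signal at sensing, mV
N = 100_000       # Monte Carlo samples


def read_yield(w_um, l_um):
    """Fraction of sense amps whose input-referred offset stays below the
    available bitline signal, for a mismatched input pair whose offset is
    ~ sqrt(2) * sigma_Vt (simplified latch-offset model)."""
    sigma_vt = A_VT / np.sqrt(w_um * l_um)  # Pelgrom area scaling
    offset_mv = np.abs(rng.normal(0.0, np.sqrt(2) * sigma_vt * 1e3, N))
    return np.mean(offset_mv < SIGNAL_MV)


for w_um, l_um in [(0.5, 0.2), (0.25, 0.1), (0.12, 0.05)]:  # shrinking pair
    print(f"W={w_um} um, L={l_um} um -> read yield ~{read_yield(w_um, l_um):.4%}")
```

Even this toy model shows the margin collapse that makes variation-aware simulation essential: a circuit that senses reliably at larger device sizes starts failing a measurable fraction of reads as the input pair shrinks.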

From this flow comes a virtual process development kit (PDK) that's used for early and rapid design exploration before wafers in the new process are available. The tight fusion of TCAD and SPICE technology provides design enablement with high-quality models that can be further refined when wafers are available and fabrication data can be gathered. Virtual PDKs can be used to create the layout, with assessment of power, performance, and area (PPA) from both pre-layout and post-layout netlists. Moving optical proximity correction (OPC) simulation, as well as lithography rule check (LRC) and debug, into the layout process enables design closure. In other words, memory designers can take advantage of true lithography-aware custom memory design that can handle the latest deep submicron nodes and emerging memory technologies.
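
To see why the post-layout netlist matters in that PPA assessment, consider a bitline modeled as a distributed RC line: a pre-layout estimate that ignores wire parasitics misses the dominant delay term. The sketch below computes the standard Elmore delay of an N-segment RC ladder; the driver resistance, load capacitance, and bitline RC values are illustrative assumptions, not process figures.

```python
def elmore_delay(r_drv, c_load, r_wire, c_wire, n_seg=100):
    """Elmore delay of an RC ladder: sum over nodes of
    (total upstream resistance) * (capacitance at that node)."""
    r_step, c_step = r_wire / n_seg, c_wire / n_seg
    delay, r_upstream = 0.0, r_drv
    for i in range(n_seg):
        r_upstream += r_step
        c_node = c_step + (c_load if i == n_seg - 1 else 0.0)
        delay += r_upstream * c_node
    return delay


R_DRV, C_LOAD = 5e3, 2e-15  # assumed driver resistance and sense-node load
R_BL, C_BL = 40e3, 80e-15   # assumed bitline wire parasitics

pre = elmore_delay(R_DRV, C_LOAD, 0.0, 0.0)     # pre-layout: wire RC ignored
post = elmore_delay(R_DRV, C_LOAD, R_BL, C_BL)  # post-layout: annotated RC
print(f"pre-layout estimate : {pre * 1e12:7.1f} ps")
print(f"post-layout estimate: {post * 1e12:7.1f} ps")
```

With these numbers the annotated netlist is roughly two orders of magnitude slower than the ideal one, which is why pre-layout-only optimization can badly mislead.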

One example of a memory DTCO solution that brings these benefits comes from Synopsys. As shown in Figure 2, the central element of this flow is Synopsys PrimeSim SPICE, a high-performance SPICE simulator for analog, RF, and mixed-signal designs, including memories. The transistor modeling phase uses the Synopsys Sentaurus Process advanced 1D, 2D, and 3D process simulator, which simulates the transistor fabrication steps; the Synopsys Sentaurus Device advanced multidimensional device simulator, which simulates transistor performance; and the Synopsys Mystic TCAD-to-SPICE solution, which extracts SPICE models from the TCAD output. The SPICE netlist is generated by the Synopsys Process Explorer fast 3D process emulator and the Synopsys Raphael FX resistance and capacitance extraction tool.


Figure 2: Synopsys DTCO flow for memory sense amplifiers

Another part of the Synopsys DTCO solution is a data-to-design workflow that allows fab data to be directly consumed by the SPICE and FastSPICE simulators in the Synopsys PrimeSim Continuum product family. With this workflow, process technologists and design engineers can skip the cumbersome, time-consuming compact model extraction step inherent to non-standard process technologies and instead consume fab data directly to perform design PPA assessments (a conceptual sketch follows Figure 3). With either the traditional DTCO flow or the data-to-design workflow, design engineers can then perform a more complete PPA assessment through early-layout and post-layout simulations using various products in the Synopsys Custom Design Family.


Figure 3: Synopsys data-to-design flow with TCAD-to-SPICE direct link
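
One way to picture the data-to-design idea is a table-based device model: instead of fitting compact-model parameters, measured I-V points are gridded and interpolated directly at simulation time. The Python sketch below stands in synthetic square-law data for the fab measurements and uses SciPy interpolation; it illustrates the concept only and is not the PrimeSim workflow itself.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Synthetic stand-in for measured fab data: Id sampled on a (Vgs, Vds) grid.
vgs = np.linspace(0.0, 1.2, 25)
vds = np.linspace(0.0, 1.2, 25)
VGS, VDS = np.meshgrid(vgs, vds, indexing="ij")
VTH, K = 0.35, 5e-4  # illustrative device constants for the synthetic data
vov = np.maximum(VGS - VTH, 0.0)
ID = np.where(VDS < vov,
              K * (vov - 0.5 * VDS) * VDS,  # triode region
              0.5 * K * vov**2)             # saturation region

# Table model: the simulator evaluates the device by interpolating the
# measured grid, skipping compact-model parameter extraction entirely.
id_model = RegularGridInterpolator((vgs, vds), ID, method="linear")

bias = [[0.8, 1.0]]  # query at Vgs = 0.8 V, Vds = 1.0 V
print(f"Id at {bias[0]} V ~ {id_model(bias)[0] * 1e6:.1f} uA")
```

The appeal of this approach in a DTCO context is turnaround time: fresh fab measurements can be re-gridded and simulated immediately, rather than waiting on a compact-model extraction cycle.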

Summary

As devices move to smaller process nodes and incorporate new technologies, memory design is becoming more challenging. It's no longer a given that designing independently of process development will generate optimal outcomes. This is why a technology-aware design development process, memory DTCO, is required.
