Memory is ubiquitous. Anything that has an on/off switch has memory inside. But it's no longer just smartphones, video game consoles, and cameras that need to quickly move data between the device's processor and its memory to work smoothly and seamlessly. Compute-intensive applications such as cloud-based high-performance computing (HPC), big data analytics, machine learning (ML), and advanced driver assistance systems (ADAS) in modern cars demand large and fast memory resources to perform as expected.
A great example of a demanding application is cloud-based AI training, which must process huge volumes of data and therefore requires high-density, high-bandwidth memories. Automotive ADAS applications likewise require large, fast memories, but these must also remain safe and reliable in extremely harsh operating environments. By contrast, AI inference at the edge requires fast, power-efficient real-time processing, calling for small, low-power memory chips.
Catering to the requirements of a wide variety of applications while meeting time-to-market and cost targets is forcing designers to rethink memory design and verification. In a landscape marked by hyper-convergence and hyper-customization, three key care-abouts remain high on the list of priorities for memory chip designers:
- Meeting aggressive power, performance, and area (PPA) targets
- Accelerating design turnaround to meet time-to-market goals
- Ensuring silicon reliability and safety
I'll discuss each of these care-abouts in this blog post, while also highlighting how electronic design automation (EDA) tools are rising to the occasion to meet these challenges.
Meeting aggressive PPA targets is essential in any silicon design, and memory is no exception. To hit their goals, memory designers are pursuing technology scaling as well as stepping into the hyper-convergent space, combining multiple technologies and architectures into highly complex designs that deliver the best possible PPA. High-bandwidth memory (HBM), based on 3D-stacked DRAM dies, is one such example. To prevent memory bandwidth from becoming a bottleneck, HBM employs a 1,024-bit-wide data bus that, in turn, requires a 2.5D interposer to connect the host to the DRAMs. The vertical stacking, together with the wide data bus, delivers the high bandwidth, low power consumption, and compact form factor needed for applications like graphics cards, high-performance computing, networking, and AI accelerators.
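To put that wide interface in perspective, here is a back-of-the-envelope bandwidth calculation, a minimal sketch in Python. The per-pin data rate of 6.4 Gb/s is an assumption (roughly HBM3-class); actual rates vary by memory generation and vendor.

```python
# Back-of-the-envelope peak bandwidth for a wide, HBM-style interface.
# The per-pin data rate is an assumption (roughly HBM3-class); actual
# rates depend on the memory generation and vendor.

def peak_bandwidth_gb_per_s(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Peak bandwidth in gigabytes per second for one memory stack."""
    return bus_width_bits * data_rate_gbps_per_pin / 8  # convert bits to bytes

# 1,024-bit HBM-style bus at an assumed 6.4 Gb/s per pin
print(peak_bandwidth_gb_per_s(1024, 6.4))   # ~819 GB/s per stack

# Compare with a conventional 64-bit DIMM interface at the same per-pin rate
print(peak_bandwidth_gb_per_s(64, 6.4))     # ~51 GB/s
```

The comparison makes the point in the paragraph above concrete: the bandwidth advantage comes largely from the sheer width of the bus, which is only practical to route through a 2.5D interposer.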
Implementing hyper-convergent designs on advanced technology nodes requires a design-technology co-optimization (DTCO) approach to explore new technologies, analyze and eliminate technology-design gaps, and accelerate design enablement in a cost-effective manner. For example, a virtual process development kit (PDK) flow that's integrated with TCAD and SPICE simulation tools can enable fast and efficient evaluation and selection of new transistor architectures, materials, and other process options. In addition, integration with memory design implementation flows allows for early assessment of design PPA metrics before "hardening" of technology, device, and process options. These initial, "directionally correct" choices can then be fully optimized in regular memory development flows with analog simulation, timing characterization, and design implementation tools.
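As a rough illustration of this kind of early, directionally correct screening, the sketch below ranks hypothetical process/device options by a crude PPA figure of merit before anything is hardened. All option names and metric values are invented for illustration; in a real DTCO flow the numbers would come from virtual-PDK-driven TCAD/SPICE evaluation and early implementation trials.

```python
# Hypothetical early DTCO-style screening of process/device options.
# All option names and metric values are illustrative assumptions,
# not real technology data.

candidates = {
    "finfet_option_a": {"freq_ghz": 3.1, "power_mw": 120.0, "area_mm2": 0.85},
    "finfet_option_b": {"freq_ghz": 3.3, "power_mw": 135.0, "area_mm2": 0.80},
    "gaa_option_a":    {"freq_ghz": 3.4, "power_mw": 110.0, "area_mm2": 0.78},
}

def figure_of_merit(m: dict) -> float:
    """Crude PPA score: favor higher frequency, lower power, smaller area."""
    return m["freq_ghz"] / (m["power_mw"] * m["area_mm2"])

# Rank the candidate options so the most promising ones move forward
# to full optimization in the regular memory development flow.
ranked = sorted(candidates.items(), key=lambda kv: figure_of_merit(kv[1]), reverse=True)
for name, metrics in ranked:
    print(f"{name}: FoM = {figure_of_merit(metrics):.5f}")
```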
New memory protocols bring advances in performance with each generation. At the same time, memory chip designers are engaging in hyper-customization to serve the unique needs of different applications. This makes it more important than ever for designers to thoroughly understand how their designs will perform pre-silicon and to promptly mitigate any defects. Compounding all of this are continued time-to-market pressures, as the end markets demanding these memory solutions are highly competitive.
The painstaking efforts required to deliver robust chips can hamper turnaround times. What's needed in memory chip design is a "shift left" in the development process to enable designers to find and fix problems earlier on. Some tools and techniques that would make this possible include:
- Ultra-fast circuit simulation, increasingly driven by machine learning, for earlier and more thorough pre-silicon analysis
- "Digitized" memory implementation flows spanning timing characterization, digital-on-top verification, and place and route
- Pre-silicon defect analysis and mitigation as part of a broader defect and yield management strategy
By shifting memory design left, memory chip designers would be well positioned to deliver the hyper-customized designs that many of today's applications demand while meeting their time-to-market and cost targets.
Advanced nodes introduce not only technology-design gaps but also design-silicon gaps. These gaps are further exacerbated by the adoption of new architectures, including multi-die integration and faster interfaces, which open the door to new silicon reliability issues. These issues can manifest as defects during manufacturing or in the field during normal chip operation, an especially critical concern for life-impacting applications such as medical, aerospace and defense, and automotive.
As an example, let's consider memory chips for automotive applications, which need to operate across a wide process, voltage, and temperature (PVT) range and must also meet functional safety standards (ISO 26262). As they model their designs, engineers must be able to assess chip performance at the extremes of the operating conditions; done incorrectly, this can allow reliability issues into the silicon, causing failures and compromising safety.
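To make the margining task concrete, here is a minimal sketch of enumerating PVT corners for simulation. The process labels, supply tolerance, and automotive-style temperature extremes below are illustrative assumptions, not values taken from any particular standard or PDK.

```python
# Minimal sketch: enumerate PVT corners for margin analysis.
# Corner values are illustrative assumptions (automotive-style temperature
# range, nominal supply +/-10%), not limits from a specific standard or PDK.
from itertools import product

processes = ["ss", "tt", "ff"]      # slow, typical, fast process corners
voltages  = [0.99, 1.10, 1.21]      # e.g., 1.1 V nominal +/- 10% (assumed)
temps_c   = [-40, 25, 150]          # automotive-style temperature extremes (assumed)

corners = list(product(processes, voltages, temps_c))
print(f"{len(corners)} corners to simulate")
for p, v, t in corners:
    # In practice, each corner would drive a circuit simulation run of the
    # memory's critical paths (sense amps, bit lines, timing circuits).
    print(f"process={p}, vdd={v:.2f} V, temp={t} C")
```

Even this toy example yields 27 corners; real margin analysis multiplies quickly across blocks, modes, and aging conditions, which is why fast simulation and automation matter.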
To address these challenges, memory designers need tools that help them perform margin analysis and push their designs to the extremes, so they can understand how the chips will behave under those conditions and what they must do to mitigate the impacts. Robust defect and yield management solutions are needed, particularly ones that support pre-silicon defect analysis and mitigation, as are advanced modeling and verification techniques to bridge the gaps. In-field defect monitoring enables robust defect management after the chips have been deployed. The ability to manage chips across their full lifecycle, from pre-silicon design through production manufacturing and in-field deployment, helps ensure safe and reliable operation throughout the chip's operating lifetime.
To help memory chip designers shift left, Synopsys provides the industry's most complete, end-to-end design and verification flow. Our portfolio includes DTCO solutions augmented by ultra-fast conventional and machine learning-driven simulation, as well as "digitized" memory design implementation flows that use digital tools spanning timing characterization, digital-on-top verification, and place and route to enable fast and accurate PPA optimization. We also offer full lifecycle reliability verification, including memory-specific electrical rule checking, fast chip-level electromagnetic/IR analysis with power delivery network analysis, functional safety solutions for ISO 26262 compliance, post-silicon and in-field defect management, and an integrated multi-die solution.
Aside from our EDA flows, we offer embedded memories and memory interface IP, aligned with the latest protocols, to help meet performance, bandwidth, latency, and power requirements, as well as verification IP to help accelerate runtime, debug, and coverage closure.
Memory design is highly specialized. Each organization employs its own customizations, taking great care with placement, routing, and connections, and tweaking designs to the nth degree to achieve its PPA targets. By deploying tools and flows that address the key memory design challenges while shifting the development process left, memory designers can accelerate development and get ahead of the competition with their differentiated designs.