Integrated Chip Design Tools for IC Hyperconvergence

Anand Thiruvengadam, Raja Tabet

Apr 06, 2021 / 4 min read

In our data-driven world, applications like high-performance computing (HPC) and artificial intelligence (AI) are taking center stage, delivering intelligence and insights that are transforming our lives. However, the growing complexities of HPC and AI designs are driving the need for much more complex semiconductor devices. Increasingly, multiple components and technologies are coming together in hyper-convergent designs to meet demands for bandwidth, performance, and power for these compute-intensive applications. To achieve power, performance, and area (PPA) targets, such complex chips need to be analyzed as a single system, an approach that's difficult to support via traditionally disparate tools. In this blog post, we'll examine the trend of IC hyperconvergence and explain why the traditional, disaggregated approach to circuit simulation is no longer sufficient.

What is IC hyperconvergence? Simply put, a hyper-convergent IC design is one composed of disparate components integrated on the same die or in the same package. It's like our familiar system-on-chip (SoC), but packed with a lot more functionality. A single die or package, for instance, can feature a diverse set of analog, digital, and mixed-signal components, some built on different process nodes and catering to a variety of functions. The complexity increases further when the various components are integrated vertically using 2.5D/3D architectures in a system-in-package (SiP).

From one technology generation to the next, SoCs have grown more complex and more integrated in response to application needs. As recently as 2015, advanced-node SoCs were primarily digital designs, with separate discrete analog components on mature nodes and fairly low data rates for on-chip IO. Fast-forward to 2020 and you'll notice the increasing prevalence of advanced-node SoCs with integrated analog components, larger and faster embedded memory, and complex IOs with 100+ Gb/s data rates. And today, we're seeing the emergence of high-bandwidth memory (HBM) designs consisting of large 3D stacked DRAM integrated with the SoC on a 3DIC or in a SiP.

While today's highly integrated designs provide a way for designers to stretch the limits of Moore's Law, the evolution also points to increased scale complexity and system complexity. From a scale standpoint, we're seeing reduced margins and increased parasitics at advanced nodes. Larger and more complex circuits also demand better quality of results (QoR), faster time-to-results, and lower cost-of-results. On the system side, complex multi-function and multi-technology silicon integrations are driving designers' need for unified workflows around a common circuit simulation solution. In other words, the disparate tools that we've long been accustomed to are not adequate to meet the evolving needs of this environment.

Multi-Dimensional Circuit Simulation Challenge

To illustrate the circuit simulation needs of today's complex designs, let's consider HBM. Adopted by JEDEC as an industry standard in 2013, HBM provides a high-speed memory interface for 3D stacked synchronous DRAM (SDRAM). It's used with high-performance graphics accelerators, AI ASICs, and FPGAs in high-performance data centers, network devices, and some supercomputers. In these memory chips, multiple DRAM dies are vertically stacked with a memory controller, all interconnected by through-silicon vias (TSVs) and microbumps on a silicon interposer. This achieves higher bandwidth while using less power in a smaller form factor than DDR4 or GDDR5.
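
To put rough numbers on that bandwidth advantage, here is a minimal back-of-envelope sketch in Python. The interface widths and per-pin rates are representative assumptions (e.g., a 1024-bit HBM2 interface at 2.0 Gb/s per pin), not figures for any specific product:

```python
# Back-of-envelope peak-bandwidth comparison. Widths and per-pin rates
# are representative assumptions, not specifications for a given product.

def peak_bandwidth_gb_per_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: interface width (bits) x per-pin rate (Gb/s) / 8."""
    return bus_width_bits * pin_rate_gbps / 8

# One HBM2 stack: 1024-bit interface (8 channels x 128 bits) at 2.0 Gb/s per pin.
print(f"HBM2 stack:  {peak_bandwidth_gb_per_s(1024, 2.0):6.1f} GB/s")  # 256.0
# One GDDR5 device: 32-bit interface at 8.0 Gb/s per pin.
print(f"GDDR5 chip:  {peak_bandwidth_gb_per_s(32, 8.0):6.1f} GB/s")    #  32.0
# One DDR4 module: 64-bit channel at 3.2 GT/s.
print(f"DDR4 module: {peak_bandwidth_gb_per_s(64, 3.2):6.1f} GB/s")    #  25.6
```

The wide-but-slow interface is exactly what the interposer, TSVs, and microbumps make possible: thousands of short vertical connections instead of a narrow board-level bus.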

Hyper-convergent designs present a multi-dimensional verification challenge. HBM designers, for instance, need to verify the entire memory sub-system present in a SiP, which means performing complex multi-dimensional analysis at both the component and sub-system levels. The constraints are more stringent, and new complexities must be addressed to achieve power and performance targets. Circuit simulation tools need to be able to support the following (a short sketch after the list shows how quickly these dimensions multiply):

  • Analysis of multiple technologies and multiple components (logic, analog, memory, I/O)
  • Different types of analyses (analog, digital, mixed-signal)
  • Large capacities for sub-system and chip-level analysis
  • Advanced reliability analyses (electrical, thermal, electro-thermal, temporal)
  • Signal integrity
  • Variability analysis (process, structural)
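
To get a feel for how these dimensions multiply, here is a small illustrative sketch. The category and corner lists are hypothetical examples, not an actual signoff plan:

```python
# Illustrative only: enumerate the verification matrix implied by the
# dimensions above. Categories and corner lists are hypothetical examples.
from itertools import product

components = ["logic", "analog", "memory", "io"]
analyses   = ["analog", "digital", "mixed-signal"]
stress     = ["electrical", "thermal", "electro-thermal"]
corners    = ["tt", "ff", "ss", "fs", "sf"]  # typical process-corner labels

matrix = list(product(components, analyses, stress, corners))
print(f"{len(matrix)} component/analysis/stress/corner combinations")  # 180
```

Each of those combinations may in turn require many simulation runs, and variability analysis multiplies the count further, as discussed below.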

What's more, as these designs continue to scale to advanced technology nodes, the number of simulations required to ensure the design is reliable and meets yield targets increases substantially. Familiar challenges remain but are exacerbated: signal integrity, for example, must now be analyzed through the interposer, and issues such as electrothermal stress and larger parasitics must be addressed to deliver the chip reliability that manufacturing at scale requires.
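
To see why the simulation count grows so quickly, consider variability analysis. The sketch below is a toy Monte Carlo yield estimate: `delay_ps` is a hypothetical stand-in for a simulated path delay, and the mean, sigma, and spec values are assumptions chosen only for illustration. In a real flow, each sample would be a full circuit simulation, so the sample count translates directly into simulation cost:

```python
# Toy Monte Carlo yield estimate. `delay_ps` and all numbers below are
# illustrative assumptions; a real flow would invoke a circuit simulator
# for every sample.
import random

def delay_ps(vth_mv: float) -> float:
    """Hypothetical stand-in for a simulated path delay vs. threshold voltage."""
    return 100.0 + 0.8 * (vth_mv - 300.0)

random.seed(42)
N, SPEC_PS = 100_000, 120.0
# Sample threshold-voltage variation and count how often the spec is met.
passes = sum(delay_ps(random.gauss(300.0, 15.0)) <= SPEC_PS for _ in range(N))
print(f"Estimated yield: {passes / N:.2%} over {N} samples")
```

Resolving a high-sigma yield tail to useful accuracy requires far more samples still, and every sample is multiplied across the combinations enumerated above.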

From a design enablement perspective, this presents a multi-dimensional challenge that calls for workflows optimized for PPA and cost convergence. As a result, design teams and electronic design automation (EDA) tool providers must collaborate closely to address the complexity and costs of developing these hyper-convergent designs.

Hyperconvergence Redefines Circuit Simulation

IC hyperconvergence is redefining how circuit simulation should be done. To meet the design and signoff requirements of hyper-convergent designs, circuit simulation tools need to come together in a unified workflow that:

  • Enables a holistic and cohesive verification of complex multi-technology/multi-function designs
  • Delivers greater performance while supporting much more capacity
  • Understands both the digital and analog worlds, and what happens when both are integrated in a complex device
  • Delivers a rich and consistent verification experience across all tools

It's time for EDA tool providers to close the gaps that arise when disparate tools and disparate environments are applied to hyper-convergent designs. As silicon chip designers continue to find innovative new ways to extend, or go beyond, Moore's Law, a unified workflow is needed to support PPA, reliability, and yield targets while also reducing design costs and turnaround time to meet the increasing verification demands of hyper-convergent designs.

Learn More at SNUG World 2021

Learn more about the challenges of hyper-convergent designs at our upcoming SNUG World 2021 event. This virtual experience takes place from April 20 to 22, providing you with an opportunity to share best practices on an array of electronic design and verification topics, network with peers in the global community, and gain practical technical knowledge you can start using right away.
