

Introduction

From autonomous cars to surgery-performing robots, our smart everything world is driving new and increased demand for semiconductors. The unprecedented market shifts brought on by the global pandemic and the ensuing supply chain pressures have highlighted chip shortages at a time when users expect their electronic products to deliver increasingly sophisticated functionality. Such an environment brings promising opportunities in the electronics industry, with new players entering the semiconductor landscape. What design teams are finding, however, is that traditional, monolithic semiconductor designs no longer meet the cost, performance, or functionality needs of certain compute-intensive, workload-heavy applications. Following the path of Moore's law and migrating to smaller process nodes also has its limits.

How can the electronics industry continue as Moore's law slows, system complexity increases, and the number of transistors balloons to trillions?

Multi-die systems have emerged as the solution to go beyond Moore's law and address the challenges of systemic complexity, allowing for accelerated, cost-effective scaling of system functionality, reduced risk and time to market, lower system power with increasing throughput, and rapid creation of new product variants. For applications like high-performance computing (HPC), highly automated vehicles, mobile, and hyperscale data centers, multi-die systems are becoming the system architecture of choice.

Multi-die systems are an elegant solution, to be sure, but not without challenges in areas including software development and modeling, power and thermal management, hierarchical test and repair, die-to-die connectivity, system yield, and more. How do you ensure that your multi-die system will perform as intended? How do you do it all efficiently and swiftly? From design exploration all the way to in-field monitoring, what are all the key steps in between that are important to consider from an overall system standpoint?

In short, designing multi-die systems is quite different from designing monolithic systems-on-chip (SoCs). Every step that you know, like partitioning, implementation, verification, signoff, and testing, must be performed from a system perspective, going from one die to multiple dies. What works for monolithic SoCs may not be adequate for these more complex systems. Read on for a deeper understanding of multi-die systems: their market drivers; how key steps including architecture exploration, software development, system validation, design implementation, and manufacturing and reliability can be adapted for the system; and opportunities for continued semiconductor innovation.

What is a Multi-Die System?


First, let's define exactly what we mean by a multi-die system. Simply put, a multi-die system is a massive, complex, interdependent system composed of multiple dies, or chiplets, in a single package. There are different approaches to creating this type of architecture. One approach consists of disaggregation: the partitioning of a large die into smaller dies to improve system yield and cost compared to monolithic dies. The disaggregated approach applies to heterogeneous designs as well as homogeneous designs. For the former, one example is an automotive system that is disaggregated into different dies for different functions, like sensors, object detection, and general compute. For the latter, an example is a design that is disaggregated into multiple instances of the same compute die.

Another approach for executing multi-die systems involves assembling dies from different process technologies to achieve optimal system functionality and performance. For example, such a system could contain dies for digital compute, analog, memory, and optical compute, each at a process technology ideal for its target functionality. By including proven and known good dies in the mix, such as reusable IP blocks, teams can reduce their design risks and effort. Regardless of the approach, it's also more cost-effective (and better from a yield standpoint) to fabricate a design based on multiple, smaller dies versus a large, monolithic SoC.


Figure 1: Compared to their monolithic counterparts, multi-die systems lead to better PPA and yields. The smaller die with improved yield offsets higher silicon area and package/test cost.
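
To make the yield argument concrete, here is a minimal sketch using a simple Poisson defect-density yield model. The defect density, die areas, and the assumption that bad dies are screened out before assembly (known good dies) are illustrative placeholders, not data from the figure.

```python
# Illustrative only: rough silicon-cost comparison between one large monolithic
# die and the same logic split across four smaller dies, using a simple
# Poisson yield model Y = exp(-A * D0). Defect density, areas, and the
# assumption of known-good-die (KGD) screening before assembly are all
# placeholder assumptions, not measured data.
import math

D0 = 0.10  # assumed defect density, defects per cm^2

def die_yield(area_cm2: float) -> float:
    """Probability that a die of the given area has zero random defects."""
    return math.exp(-area_cm2 * D0)

def silicon_cost_per_good_unit(areas_cm2: list[float]) -> float:
    """Wafer area consumed per good die set, assuming bad dies are screened
    out individually (known good dies) before packaging."""
    return sum(a / die_yield(a) for a in areas_cm2)

monolithic = [8.0]                  # one large die near the reticle limit
multi_die  = [2.2, 2.2, 2.2, 2.2]   # split into four dies; total area is a bit
                                    # larger to account for die-to-die interfaces

print(f"monolithic yield:          {die_yield(monolithic[0]):.1%}")
print(f"per-die yield (split):     {die_yield(multi_die[0]):.1%}")
print(f"wafer area per good unit:  monolithic {silicon_cost_per_good_unit(monolithic):.1f} cm^2"
      f" vs. multi-die {silicon_cost_per_good_unit(multi_die):.1f} cm^2")
```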

Various types of packages are available for multi-die systems, whether they use side-by-side or vertically stacked die placement. Advanced packaging types offer varying advantages in performance, area, and connectivity, along with differences in complexity and assembly:

  • Silicon interposers are silicon chips that serve as conduits for electrical signals passing between elements. Because they provide a large conduit for the signal, silicon interposers shorten the distance between the system's IP blocks and minimize parasitic delay.
  • Redistribution layer (RDL) interposers, thanks to the RDL architecture, allow for fan-out of the circuitry and for lateral communication between the chips attached to the interposer, making them an integral element of 2.5D and 3D IC integration.
  • Fan-out wafer-level packaging results in a smaller package footprint and better thermal and electrical performance compared to conventional packages. This packaging type also supports more contacts without increased die size.
  • Hybrid bonding delivers the highest density of the types discussed here, along with power efficiency. With very small bump pitches and through-silicon vias (TSVs) for connectivity, hybrid bonding allows two wafers to be bonded together to work as one.


Figure 2: Advances in packaging are enabling multi-die systems.

Industry Standards Ensure Quality and Interoperability

The history of semiconductor design has taken a smoother path thanks in part to industry standards, which play a critical role in ensuring quality, consistency, and interoperability. Two key standards for multi-die systems are HBM3 and UCIe. HBM3 provides tightly coupled, high-density memory, which can help alleviate or remove bottlenecks. UCIe, which enables customizable, package-level integration of dies and accommodates designs at 32 Gbps per pin, offers promise as the de facto standard for die-to-die interconnects.


Figure 3: UCIe, which supports standard as well as advanced packaging, meets high-bandwidth, low-power, and low-latency requirements for today's and tomorrow's multi-die systems.

Die-to-die interfaces are integral to bringing multi-die systems to life. Consisting of a physical layer (PHY) and a controller block, they provide the data interface between two silicon dies assembled in the same package. Disaggregated designs rely on die-to-die connectivity architectures that can sustain high data rates, which is where UCIe stands out. Other key characteristics for die-to-die interfaces include:

  • Modularity
  • Interoperability
  • Flexibility
  • High bandwidth efficiency
  • High power efficiency
  • Low latency
  • Robust, secure known good dies
  • Short-reach, low-loss channels without any significant discontinuities

Die-to-die controller and PHY IP can help ensure that the interfaces are designed to deliver on these criteria. Controller IP with error recovery mechanisms provides high levels of data integrity and link reliability. PHY IP provides the high bandwidth and low latency to support compute-intensive workloads. UCIe controller and PHY IP support standard and advanced package types and the most popular interfaces like PCI Express (PCIe) and Compute Express Link (CXL) as well as user-defined streaming protocols. PCIe 5.0/6.0, CXL 2.0/3.0, 112G/224G Ethernet, and others are important for connectivity beyond the package.
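
For a rough sense of why these links can feed compute-intensive workloads, the sketch below does the raw bandwidth arithmetic for a UCIe-style die-to-die module. The lane counts (16 per standard-package module, 64 per advanced-package module) and the 32 GT/s per-pin rate follow commonly cited UCIe figures; treat them as assumptions to verify against the specification for any real design, and note that protocol and link-layer overhead reduce the usable number.

```python
# Back-of-the-envelope raw bandwidth for a UCIe-style die-to-die link.
# Lane counts and data rate below follow commonly cited UCIe figures
# (16 lanes per standard-package module, 64 per advanced-package module,
# up to 32 GT/s per pin); confirm against the actual spec for a real design.

def raw_bandwidth_gbytes(lanes: int, gt_per_s: float) -> float:
    """Unidirectional raw bandwidth in GB/s (before protocol/link overhead)."""
    return lanes * gt_per_s / 8.0

for package, lanes in [("standard package module", 16),
                       ("advanced package module", 64)]:
    bw = raw_bandwidth_gbytes(lanes, 32.0)
    print(f"{package}: {lanes} lanes x 32 GT/s -> ~{bw:.0f} GB/s raw, per direction")
```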

More I/O interfaces present more potential attack surfaces. Die authentication, die-to-die interface encryption, and debug are some of the ways to address security risks in multi-die designs. Standardization initiatives to align security for these systems are underway at various standards organizations, including for UCIe.

As will be discussed later in this paper, applying a co-optimization approach that simultaneously addresses the system, the dies, and the package helps to optimize performance and power.


Why Are Multi-Die Systems in Demand?

Now, what exactly is driving the demand for multi-die designs? We are in the SysMoore Era, a time of rising systemic and scale complexities that are pushing the limits of Moore's law. Greater demand for Smart Everything applications like AI, smart and connected vehicles, and the IoT is disrupting market dynamics and changing how we must move innovation forward. Abundant-data applications like data centers manage growing volumes of data (petabytes worth, in many cases). At the same time, the data itself has become more complex with the emergence of bandwidth-hungry machine-to-machine communication.

Today's SoCs have become rather large to support these compute-intensive applications, boasting trillions of transistors and a size similar to that of a postage stamp. As die sizes hit the reticle limit of the manufacturing equipment, adding more transistors to support application demands requires adding more chips. The problem is, there's a steep learning curve to ramp up production to achieve desired yields. Splitting the SoC into smaller dies addresses the learning curve and yield concerns. By reusing silicon-proven dies in a multi-die system, teams can accelerate their system time to market.

With more chips in a package, however, the cost savings move from silicon to the package and, as a result, the package cost becomes significant. Nevertheless, the march toward multi-die systems continues, driven by the convergence of four key drivers:

  • Cost, as it has become prohibitively expensive over time to achieve acceptable yields for the large chips that address SysMoore complexities
  • Growing functionality, requiring higher bandwidth, lower latency, and substantially greater compute performance in the face of reticle limit challenges
  • The power dilemma, which can be better addressed by splitting up a large design
  • Demands of multiple end market opportunities, which create a need for optimal, modular architectures

Traditional chipmakers aren't the only ones getting into the multi-die system space. Hyperscalers with their massive data centers, carmakers developing autonomous functions, and networking companies are among the businesses that are designing their own chips and, in many ways, propelling the move to multi-die system architectures to support their compute-intensive applications. These system companies are essentially striving to build architectures optimized to achieve differentiation for their own unique market needs; in other words, domain-specific designs. For example, they may have particular requirements for performance, security, safety, or reliability that multi-die system designs can help them achieve. However, this does require a deep understanding of the chip, the software, and the package.


The Need for a Comprehensive Design Approach

Hyperscalers as well as verticals are placing big demands on silicon chips to support their domain-specific needs, and many have the deep pockets that silicon design requires. It's no surprise that many of these companies are designing their own chips and turning to multi-die systems to deliver the compute-density requirements that such market segments demand. Some may require a specialized architecture to optimize the performance of deep learning algorithms. For others, it might be a system that strikes the right balance between power and performance for a mobile consumer device or an automotive subsystem. For example, one major automaker relies on a heterogeneous design whose dies are disaggregated for functions related to sensor inputs as well as object detection and general compute. As another example, a major player in the optical computing space integrates into its system dies from different process technologies for digital compute, analog, memory, and optical compute. Suffice it to say, the semiconductor landscape is experiencing massive changes.

When designing (or procuring) the individual dies, it's important to consider the packaging, the interconnects, and the system as a whole. How should the dies be split? Should the logic component be placed atop memory, or vice versa? What kind of packaging would be ideal for the end application? Every choice and decision should be made with each part in mind, along with how each will affect the design's overall PPA targets.


Figure 4: The move from monolithic SoCs to multi-die systems comes with unique challenges that must be addressed holistically.

In the 2D world, it's common practice for one team to work on their portion and turn the results over to the next team. With multi-die system design, all the teams should ideally address the challenges together. Important parameters like power consumption, signal integrity, proximity effects, and heat dissipation can no longer be analyzed independently because one area impacts the other. Front-end logical design must account for the back-end physical design. Otherwise, time-consuming iterations between the front-end and back-end design could result, impacting time to market and overall design costs.

In this new design environment, EDA companies must up their game, stepping in to help customers with everything from system planning to implementation and firmware/hardware/software co-development. Traditional flows and methodologies for design and verification, prototyping, IP integration, testing, and silicon lifecycle management are no longer enough to support multi-die designs, nor is it effective to stitch together disparate point tools. The nature of a multi-die system is multi-dimensional, so the market needs a scalable, cohesive, and comprehensive solution created to handle the complexity of these designs, drive greater productivity to meet time-to-market targets, and achieve PPA optimizations.

Architecture Exploration: Explore, Refine, Converge

The design starting point, architecture exploration, must take an analysis-driven approach that considers macro-architecture decisions such as IP selection, hardware/software partitioning, system-level power analysis, and interconnect/memory dimensioning. Additionally, there are multi-die macro-architecture decisions pertaining to aggregation (assembling the system from dies) and disaggregation (partitioning the application onto multiple dies).

To understand the questions that must be answered during this phase, consider a complex application like a hyperscale data center. How many dies of each type would be needed, what process nodes should they be on, and how would they be connected? For each die, how would the functionality of different subsystems be partitioned into local processing elements? How would the system, with its different memories and compute dies, be assembled? Even if you've ensured that the dies are correctly designed, how do you guarantee that the entire system will meet your power and performance targets once it has been assembled? An analysis-driven approach allows you to iterate through your many choices early to optimize your multi-die system as well as its costs.

For safety-critical applications like automotive, predictability is an important criterion. Ultimately, a data-driven architectural specification approach utilizing modeling, analysis, simulation, and experimentation will guide the way.


Figure 5: Early architecture exploration of multi-die systems is geared toward optimizing performance, power, and thermal key performance indicators.

Early architecture decisions on several key areas can enhance the design process:

  • Multi-die system partitioning into dies to optimize chip-to-chip traffic
  • Chip-to-chip communication considerations to ensure effective throughput and latency
  • Trade-offs between interface power consumption, throughput, and die placement
  • Performance impact of different fabrication and packaging technologies
  • Die-to-die protocols and interfaces

Aside from making these early architecture decisions, engineering teams must also address chip-to-chip performance bottlenecks. Modeling latency and performance based on partitioning and die-to-die interface choices can help here. Finally, the other big challenge is to meet power and thermal key performance indicators (KPIs) by addressing system power consumption as well as the thermal impact of multiple dies in one package.
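
A minimal sketch of the kind of early model this implies, in the spirit of the spreadsheet-level estimates such flows automate: it scores hypothetical partitioning options by the traffic that must cross a die-to-die interface and the link power that traffic costs. The traffic matrix, candidate partitions, and energy-per-bit figure are all invented for illustration.

```python
# Toy design-space exploration: score hypothetical partitioning options by the
# die-to-die traffic they generate and the energy that traffic costs.
# Traffic matrix, partitions, and energy-per-bit values are invented examples.

# Average traffic between functional blocks, in GB/s (illustrative workload profile)
traffic_gbs = {
    ("cpu", "l3cache"): 400, ("cpu", "npu"): 80,
    ("npu", "hbm"): 600,     ("cpu", "io"): 30,
}

# Candidate partitions: block -> die assignment
candidates = {
    "A: compute+cache together": {"cpu": 0, "l3cache": 0, "npu": 1, "hbm": 1, "io": 2},
    "B: cache on its own die":   {"cpu": 0, "l3cache": 1, "npu": 0, "hbm": 1, "io": 2},
}

PJ_PER_BIT_D2D = 0.5   # assumed die-to-die link energy (pJ/bit), placeholder

def cross_die_traffic(partition: dict) -> float:
    """Sum of traffic (GB/s) that must cross a die-to-die interface."""
    return sum(gbs for (a, b), gbs in traffic_gbs.items()
               if partition[a] != partition[b])

for name, part in candidates.items():
    t = cross_die_traffic(part)
    watts = t * 8e9 * PJ_PER_BIT_D2D * 1e-12   # GB/s -> bits/s -> W
    print(f"{name}: {t:.0f} GB/s crosses dies, ~{watts:.2f} W of link power")
```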

What's helpful to know is that automation available in today's tool flows has elevated architecture exploration beyond the manual, spreadsheet-based predictions of years past. Looking ahead, unified design space exploration could further elevate the accuracy and productivity of this process.

Ensuring a Robust System with a Solid Thermal Foundation

Since multi-die systems aim for significantly more functionality in a much smaller footprint than their monolithic counterparts, performance per watt is the key measure of how efficient these systems are. Integrating multiple components, however, creates several challenges related to thermal stress. Much higher transistor density generates a lot of heat, and the architecture leaves little room to dissipate it. If the heat isn't dissipated and temperatures go beyond the optimal range for the device, die function could be hampered by mechanical stress or warping.

Heat sinks and other cooling structures in the multi-die system can help, though these components do add to the device area and cost. Designing the power grid to ensure that enough power is supplied to all areas of the system also becomes more complex in a multi-die system architecture.

A well-planned architecture following an iterative process can alleviate thermal stress. Following the initial architecture and physical planning, the team can analyze the resulting thermal behavior. Then, they can revise the architecture and perform physical planning to improve the thermal behavior. Iterations continue until the thermal constraints, along with performance requirements, are satisfied.
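
The sketch below captures the shape of that loop with a deliberately crude lumped thermal model (die temperature approximated as ambient plus power times a per-die thermal resistance). Real flows would substitute proper thermal analysis; every number, and the "shave power from the hottest die" revision step, is a placeholder standing in for an actual architectural change.

```python
# Shape of the iterate-until-thermal-constraints-met loop, with a deliberately
# crude lumped model: die temperature ~ ambient + power * thermal resistance.
# All powers, resistances, and limits are placeholders for illustration.

T_AMBIENT_C = 45.0
T_LIMIT_C = 105.0

# Candidate floorplan: per-die power (W) and lumped thermal resistance (C/W).
# Stacked dies see a higher effective resistance than dies next to the heat sink.
dies = {
    "compute": {"power_w": 120.0, "r_theta": 0.6},
    "cache":   {"power_w": 25.0,  "r_theta": 1.2},   # stacked under compute
    "io":      {"power_w": 10.0,  "r_theta": 0.9},
}

def hottest_die(plan: dict) -> tuple[str, float]:
    temps = {name: T_AMBIENT_C + d["power_w"] * d["r_theta"] for name, d in plan.items()}
    name = max(temps, key=temps.get)
    return name, temps[name]

# Revise the plan (here: shave power from the worst offender, standing in for
# repartitioning or moving a die closer to the heat sink) until constraints hold.
for iteration in range(1, 10):
    name, t = hottest_die(dies)
    print(f"iteration {iteration}: hottest die {name} at {t:.1f} C")
    if t <= T_LIMIT_C:
        print("thermal constraints met")
        break
    dies[name]["power_w"] *= 0.9   # placeholder for an architectural revision
```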

As part of this iterative process, "what if?" exploration at the front end helps avoid getting locked into a partitioning structure that could eventually turn out to be sub-optimal from a power perspective. System architecture teams can use modeling tools to abstract out pieces of their chip into models for performance analysis and implementation of power tradeoffs before the design is locked into its partitions. By mapping a workload onto a multi-die system, the design team can determine the activity per processing element and per communication path. Modeling the hardware and software together also becomes more critical to generate a design that's fundamentally robust and thermally sound, since each die in the design will have its own software stack. Continued monitoring during the RTL, synthesis, place and route, and other design steps is also valuable. As tool flows evolve to become more thermally aware, this process will become more automated.

From a thermal standpoint, embedding sensors in each die to monitor and regulate health on an ongoing basis (silicon lifecycle management technology) provides indications on whether to, for instance, dial down performance to cool down the system. In-chip sensors are commonly used in applications like automotive and mobile and are likely to become mainstream practice for applications like HPC and AI.
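
A minimal sketch of the kind of policy such sensors enable is shown below: read per-die temperature monitors and step performance down or back up to keep the package in range. The sensor names, thresholds, and throttle steps are illustrative assumptions, not a description of any particular SLM product.

```python
# Sketch of the in-field policy the text describes: read per-die thermal
# sensors and dial performance down (or back up) to keep the package in range.
# Sensor names, thresholds, and the throttle steps are illustrative assumptions.

THROTTLE_C = 100.0   # start reducing clocks above this reading
RESUME_C   = 85.0    # restore performance once comfortably below

def throttle_decision(sensor_readings_c: dict[str, float], current_level: int) -> int:
    """Return a performance level (0 = full speed, higher = more throttled)."""
    hottest = max(sensor_readings_c.values())
    if hottest > THROTTLE_C:
        return current_level + 1          # step performance down to shed heat
    if hottest < RESUME_C and current_level > 0:
        return current_level - 1          # recover performance when cool
    return current_level

# Example: readings collected from monitors embedded in each die of the package
readings = {"compute_die": 104.2, "memory_die": 91.7, "io_die": 78.3}
print(throttle_decision(readings, current_level=0))   # -> 1 (throttle one step)
```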

Tackling Multi-Die System Implementation Challenges

While multi-die systems are an answer to increasing systemic and scale complexities, they do have inherent design challenges that engineering teams need to address. This is to be expected in a system with tens of chips, high integration densities (typically 10,000 up to one million I/Os per mm²), and 3D heterogeneous designs and hybrid architectures. An important step is to explore scalability options and architectures to achieve optimal PPA per mm³. And an important approach is to co-optimize the full system for PPA, physical constraints, and cost.

 


Figure 6: Given all of the interdependencies in a multi-die system, it's important to co-optimize the entire system to achieve optimal PPA.

An easier and more productive transition from 2D to 2.5D/3D designs will benefit from consistent data management across dies and technologies. This is where a disjointed flow consisting of point solutions can be particularly detrimental to outcomes as well as productivity. To address the unique requirements of multi-die systems, what's needed is a unified approach for die/package co-design that spans design, analysis, and signoff. Ideally, an integrated environment should:

  • Provide the integration capacity and efficiency for hundreds of billions of transistor connections
  • Support faster design closure via a concurrent workflow at all stages of the design, as well as a common data model and database with common tech files and rules
  • Foster productivity with a single software environment and GUI for multi-die/package co-design
  • Deliver fast convergence on optimal PPA, while accelerating time to package
  • Optimize the design (and costs) early on and at a system-wide level

Addressing Multi-Die System Software Development and Software/Hardware Validation

When it comes to validation, it's much too simplistic to consider a multi-die design as being a much larger system than an SoC. It is, but effectively emulating very large systems brings capacity into question. Multi-die systems also tend to be heterogeneous, with dies developed on different process nodes and, in some cases, reused, limiting access to any proprietary RTL.

For multi-die software development and software/hardware validation, there are a few key considerations and solutions:

  • Software bring-up of one die with software dependency on other dies. Multi-abstraction system modeling can leverage fast, scalable execution platforms that make use of virtual prototyping and hardware-assisted verification.
  • Validation of the die-to-die interface. Pre-silicon validation can take advantage of IP blocks verified and characterized using analog/mixed-signal (AMS) flows. Pre-silicon validation and compliance testing can also be handled via a UCIe controller IP prototype with a UCIe protocol interface card.
  • Multi-die system software/hardware validation. Each die can be mapped onto its own emulation setup and connected via die-to-die transactors (UCIe, etc.). Realistic application workloads executed with hardware-assisted verification can yield insights into multi-die system performance and support fast turnaround time on power validation. A die under test can also be connected via a speed adaptor to prototypes of mature dies.

Let's dive in a little deeper on these points. Given such a complex system running very complex software, it's essential to start the validation process early, creating virtual prototypes of the multi-die system to support software development. Specifying system behavior up front with a virtual model and running software on that model allows the system specs to become more solidified and the software to become better defined before emulation.

In multi-die systems, it's important to optimize the die-to-die connections at the protocol level (the digital parts) and the analog level (the PHY). AMS emulation helps to reduce the risks that something will go wrong post-silicon.

Heterogeneous setups can facilitate validation of multi-die systems. Consider a design consisting of three dies developed by one semiconductor supplier, who provides the RTL, and a fourth die from another supplier, without RTL access but with an existing die. The three dies with RTL can be emulated in a large-scale setup with UCIe transactors providing the bridge between emulators, mirroring the connectivity of the actual multi-die system. The fourth die can be packaged in a test chip on a test board that connects to the emulator through a UCIe speed adaptor. With capacity concerns addressed, emulation can then support debugging and validation of the design's software with its hardware. Through this process, teams can get the guidance needed to make the right decisions. For example, by determining the power consumption of each die in a system early on, the team can determine whether die stacking is feasible based on the power budget of each die.

 

Verifying Multi-Die Systems for Functional Correctness

Whether we're talking about a single die or multiple dies, the entire system must be validated to ensure that it is functionally correct vis-à-vis its design specifications. In other words, does the design do what it is intended to do? Individual dies are verified before they're assembled together. More exhaustive verification at the die level reduces the chance of multi-die system bugs. After assembly, however, tests must be performed at the connectivity level to ensure that data pushed through one port lands in the right place, and at the system level to ensure proper system performance.
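
As a minimal illustration of a connectivity-level check, the sketch below compares the die-to-die connections extracted from an assembled design against the intended connection map, flagging any port whose data would land in the wrong place. The die and port names, and the deliberately mis-wired entry, are invented.

```python
# Minimal connectivity-level check: does every die-to-die connection in the
# assembled system match the intended connection map? Port names and maps
# below are invented for illustration.

# Intended connectivity from the system specification: (die, tx port) -> (die, rx port)
intended = {
    ("compute0", "d2d_tx0"): ("memory0", "d2d_rx0"),
    ("compute0", "d2d_tx1"): ("io0",     "d2d_rx0"),
}

# Connectivity extracted from the assembled design (e.g., from the package netlist)
extracted = {
    ("compute0", "d2d_tx0"): ("memory0", "d2d_rx0"),
    ("compute0", "d2d_tx1"): ("memory0", "d2d_rx1"),   # mis-wired on purpose
}

def check_connectivity(intended: dict, extracted: dict) -> list[str]:
    errors = []
    for src, dst in intended.items():
        actual = extracted.get(src)
        if actual != dst:
            errors.append(f"{src} should reach {dst}, but reaches {actual}")
    return errors

for err in check_connectivity(intended, extracted):
    print("CONNECTIVITY ERROR:", err)
```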

As EDA vendors continue to enhance tool flows, the design community can look for investments in areas to address the verification challenges of multi-die systems. For example, cloud-based hybrid emulation that takes advantage of the cloud's elasticity could address capacity concerns. Transaction-level capture that streams only relevant data quickly over the cloud from distributed nodes, to be analyzed together later, could make debugging of large systems manageable. Distributed simulation techniques that repurpose multiple nodes in a cloud to use, say, 1,000 cores in the network for parallel simulation could accelerate multi-die system verification.

 

Accelerating System Signoff for Silicon Success

Design signoff is a multi-step process that involves going through an iterative series of checks and tests to ensure that the design is free of defects before it is taped out. Signoff checks are complex, covering areas such as voltage drop analysis, signal integrity analysis, static timing analysis, electromigration, and design rule checking. Multi-die system signoff follows a similar methodology, but on a much more massive scale given all the system interdependencies.

An efficient and comprehensive extraction flow can model various multi-die system architectures for accurate performance and silicon results supporting advanced process technologies. Engineering change orders (ECOs) for multi-die systems need to be executed quickly and in concert with all the ecosystem partners involved, so that changes can be identified quickly and designs can be reconciled efficiently. This can only be done with golden signoff tools that offer comprehensive, hierarchical ECO capabilities that also accelerate PPA closure. In addition, being able to accurately analyze your multi-die system design helps find problems prior to tapeout. Golden signoff tools can provide the assurance that each parameter in a multi-die system can be closed with accuracy, completeness, and expediency.

 

Testing, Testing: Pinpointing Availability of Known Good Dies

To ensure the quality of a multi-die system, a thorough pre-assembly test is needed to obtain known good dies (KGD) at the die level, along with a post-bond test at the interconnect and system levels. Individual dies for a multi-die system are thoroughly tested in order to meet minimal test escape requirements, as measured in DPPM (defective parts per million). This requires advanced design-for-test (DFT) capabilities that are built into the design blocks. For example, logic and memory built-in self-test (BIST) require hardware engines to be integrated into the design to apply the tests and perform repairs, followed by diagnosis. Redundancy in the memories (and the interconnects, for that matter) allows yield optimization during repair.

When it's time to test the die at the wafer level, teams may find that many bumps are too small and dense to physically probe, so dedicated pads for wafer-based testing at the pre-assembly phase may be needed. These are sacrificial pads and will not be bonded into the final design. After an individual die has been tested and repaired thoroughly, it can move into the die-to-die space for assembly and bonding. Once the memory and logic dies are partially or fully bonded, testing the interconnects helps to determine whether the die-to-die connections are good or if repair is needed. All interconnects undergo such a test, repair, and retest process after assembly. The final step is testing the multi-die stack and package to assess whether the dies are still fully operational and to repair them in case they were damaged during transportation, mounting, or assembly.
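
The sketch below mimics that test, repair, retest loop for a die-to-die link with spare lanes: failing lanes are remapped onto redundant ones and the link is retested until it passes or the spares run out. Lane counts and the failing-lane example are invented; a real flow would drive this through the DFT hardware rather than software.

```python
# Toy model of the post-bond interconnect test/repair/retest loop: lanes that
# fail test are remapped onto spare (redundant) lanes, then the link is retested.
# Lane counts and the failing-lane set are invented for illustration.

ACTIVE_LANES = 64
SPARE_LANES = 4

def test_lanes(lane_map: list[int], defective: set[int]) -> list[int]:
    """Return the logical lane positions whose physical lane fails test."""
    return [pos for pos, phys in enumerate(lane_map) if phys in defective]

def repair_link(defective: set[int]):
    lane_map = list(range(ACTIVE_LANES))               # logical -> physical lane
    spares = list(range(ACTIVE_LANES, ACTIVE_LANES + SPARE_LANES))
    while True:
        failures = test_lanes(lane_map, defective)     # post-bond test
        if not failures:
            return lane_map                            # retest passed: link is good
        if len(failures) > len(spares):
            return None                                # not repairable: reject part
        for pos in failures:                           # repair: swap in spare lanes
            lane_map[pos] = spares.pop()

# Example: physical lanes 5 and 41 were damaged during bonding
result = repair_link(defective={5, 41})
print("link repaired" if result else "link not repairable")
```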

For multi-die systems in particular, IEEE Std 1838-2019 is the standard addressing mandatory as well as optional on-chip hardware components for multi-die test access, allowing for individual testing of dies and the interconnect layers between adjacent dies. According to IEEE, the standard applies primarily to TSVs but can also cover other 2.5D interconnect technologies, including wire bonding. 3D ICs pose unique test challenges, and access mechanisms for each die's embedded test instruments are needed from bonded pads at the stack level.

DFT teams have traditionally used test access mechanisms inherited from the board level, such as boundary scan, to mimic the die-to-die interconnect and perform their test generation. This approach is rather manual, as the teams would have to extract the netlist, build everything themselves, and create the verification environment. What's needed to drive greater productivity in the testing phase is an automated die-to-die testing solution.


Figure 7: Automation in the silicon testing process can result in a more exhaustive, productive process.


How Silicon Lifecycle Management Influences System Operation

Silicon health can also be evaluated via silicon lifecycle management (SLM) technology. SLM involves integrating monitors into components of the design to extract data throughout a device's lifecycle, even while it's in the field. The deep, actionable insights gathered from silicon to system allow for continuous analysis and optimization.

With multi-die systems, the monitoring infrastructure should be unified across multiple dies. The idea is to capture a profile of environmental, structural, and functional conditions throughout the lifecycle of the chip. The challenges lie in complexity-driven reliability, power management, and interconnect concerns.

Given the system interdependencies, design teams will need to know, for instance, where to place two dies with very different thermal characteristics so that the heat dissipating from one die won't negatively impact the operation of the other, or that of the system. Once in the field, chips are affected by aging and temperature, making continuous monitoring a valuable function. Access to the individual dies once they've been packaged is also more challenging in the disaggregated world. If dies are stacked vertically, for example, there needs to be an efficient way to access them for in-field characterization.


Figure 8: Silicon lifecycle management delivers actionable insights for chips throughout their lifecycles, including while they are in the field.

The emergence of the cloud for EDA workloads adds the benefit of predictive analysis to the mix. Being able to predict, for example, in-field chip degradation or failure can trigger corrective action to prevent these outcomes.

Chips designed at advanced nodes typically have on-chip monitors, but this isn't always the case for those on older processes. Also, not all vendors provide their customers access to this data. When using dies from multiple sources and on multiple technology nodes, design teams will need to determine their optimal cost and coverage tradeoff for testing their complex modules. Incorporating traceability and analytics mechanisms across a module of multi-source dies can help improve cost, quality, and reliability. There is not yet a standardized approach for how to monitor and share data, but vendors in the semiconductor industry are pushing for this.


Comprehensive Approach to Integration of Heterogeneous Dies

The large scale and scope of multi-die systems calls for validated, unified, and comprehensive solutions developed with a deep understanding of all the interdependencies in these designs. Synopsys offers the industry's most comprehensive, trusted, and scalable Multi-Die System Solution, facilitating the fastest path to successful multi-die system design. The solution, consisting of comprehensive EDA tools and IP, enables early architecture exploration, rapid software development and validation, efficient die/package co-design, robust and secure die-to-die connectivity, and improved health and reliability. Production-hardened design engines and golden signoff and verification technologies minimize risk and accelerate the path toward an optimal system.

A broad portfolio of high-quality IP aligned with industry standards, including UCIe for high-bandwidth, low-latency die-to-die connectivity and secure interfaces to protect against tampering and physical attacks, also reduces integration risks while speeding time to market.


Figure 9: Synopsys' Multi-Die System Solution was built from the ground up to scale to support increasingly demanding systems and applications.


Summary

With compute demands rising and our smart everything world getting even smarter, monolithic chips are no longer enough for certain types of applications. AI, hyperscale data centers, networking, mobile, and automotive are changing the silicon landscape, pushing multi-die systems to the forefront. Compared to their monolithic counterparts, these disaggregated dies reaggregated in a single package support massive performance requirements without a penalty on power, area, or yield. The ability to mix and match dies from different process technologies to support different functions provides designers with a new way to derive more from Moore's law.

Because they are complex systems with a myriad of interdependencies, multi-die systems require a comprehensive approach every step of the way, from design to verification, power management, testing, SLM, and more. Co-design and analysis from a system standpoint helps ensure the design can achieve the PPA promise of this architecture. EDA solutions that leverage the cloud and AI contribute to a streamlined design and verification process with better outcomes.

Engineers have never shied away from tough challenges. Moore's law will wane while compute and connectivity demands soar. The emergence of multi-die systems presents a way forward for the electronics industry's continued push to create the products that are transforming our lives.

At Synopsys, we understand the tremendous potential of multi-die systems in the semiconductor industry and are leading the way with a comprehensive solution for fast heterogeneous integration. Our solution, which includes EDA tools and IP, provides a path for designers to efficiently and cost-effectively deliver innovative products with unprecedented functionality. By reusing proven dies, our solution helps reduce risk, accelerate time to market, and rapidly create new product variants with optimized system power and performance. If you're looking to stay ahead of the curve and benefit from the many advantages of multi-die systems, we invite you to get started with our solution today. Join us in embracing the future of chip design and discover the possibilities of multi-die systems.

