Die-to-Die Interface

Definition

A die-to-die interface is a functional block that provides the data interface between two silicon dies assembled in the same package. Die-to-die interfaces take advantage of the very short channels between dies inside the package to achieve power efficiency and very high bandwidth efficiency, beyond what traditional chip-to-chip interfaces can deliver.

A die-to-die interface is typically made up of a PHY and a controller block that provide a seamless connection between the internal interconnect fabrics of the two dies. The die-to-die PHY is implemented using a high-speed SerDes architecture or a high-density parallel architecture, both optimized to support multiple advanced 2D, 2.5D, and 3D packaging technologies.

A die-to-die interface is a key enabler of the industry trend away from monolithic SoC designs toward multi-die SoC assemblies in the same package. This approach mitigates growing concerns around high cost/low yield of small process nodes and provides additional product modularity and flexibility.


How Do Die-to-Die Interfaces Work?

A die-to-die interface, just like any other chip-to-chip interface, creates a reliable data link between two dies.

The interface is logically divided into a physical layer, link layer, and transaction layer. It establishes and maintains the link during chip operation, while presenting to the application a standardized parallel interface that connects to the internal interconnect fabric.

Link reliability is guaranteed by the addition of error detection and correction mechanisms such as forward error correction (FEC) and/or a cyclic redundancy check (CRC) with retry.
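
To make this concrete, the following Python sketch models a link-layer retry loop built on a CRC check: the transmitter appends a CRC to each flit, the receiver verifies it, and a mismatch triggers a retransmission. The flit contents, the use of CRC-32, the toy error-injecting channel, and the retry limit are illustrative assumptions, not details of any specific die-to-die standard.

    import random
    import zlib

    MAX_RETRIES = 3  # assumed retry limit for the example

    def send_flit(payload: bytes) -> tuple[bytes, int]:
        # Transmitter side: compute a CRC-32 over the flit payload.
        return payload, zlib.crc32(payload)

    def noisy_channel(payload: bytes, error_rate: float = 0.2) -> bytes:
        # Toy channel model that occasionally flips one bit in transit.
        if payload and random.random() < error_rate:
            data = bytearray(payload)
            data[random.randrange(len(data))] ^= 0x01
            return bytes(data)
        return payload

    def receive_with_retry(payload: bytes) -> bytes:
        # Receiver side: accept the flit only when the CRC matches,
        # otherwise request a retransmission, up to MAX_RETRIES times.
        for attempt in range(1, MAX_RETRIES + 1):
            sent, crc = send_flit(payload)
            received = noisy_channel(sent)
            if zlib.crc32(received) == crc:
                return received
            print(f"CRC mismatch on attempt {attempt}, requesting retry")
        raise RuntimeError("link error: retry limit exceeded")

    print(receive_with_retry(b"die-to-die flit payload"))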

The physical layer architecture can be SerDes-based or parallel-based.

  • A SerDes-based architecture includes parallel-to-serial and serial-to-parallel data conversion, impedance-matching circuitry, and clock data recovery or clock-forwarding functionality (see the serializer sketch after this list). It can support NRZ signaling or PAM-4 signaling for higher bandwidth, up to 112 Gbps. The primary role of a SerDes architecture is to minimize the number of I/O interconnects in simple 2D-type packaging, like organic substrates.
  • A parallel-based architecture includes many low-speed, simple transceivers in parallel, each made of a driver and a receiver, with clock-forwarding techniques to further simplify the architecture. It supports DDR-type signaling. The primary role of a parallel architecture is to minimize power in dense 2.5D-type packaging, like silicon interposers.
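
For intuition, the short sketch below models the serializer/deserializer conversion at the core of a SerDes-based PHY: a wide parallel word is shifted out bit by bit over a single lane and reassembled into a parallel word on the receiving die. The 16-bit word width and LSB-first bit ordering are arbitrary assumptions chosen for the example.

    WORD_WIDTH = 16  # assumed parallel word width in bits

    def serialize(word: int, width: int = WORD_WIDTH) -> list[int]:
        # Parallel-to-serial: shift the word out LSB first over one lane.
        return [(word >> i) & 1 for i in range(width)]

    def deserialize(bits: list[int]) -> int:
        # Serial-to-parallel: reassemble the bitstream into a word.
        word = 0
        for i, bit in enumerate(bits):
            word |= bit << i
        return word

    tx_word = 0xBEEF
    line_bits = serialize(tx_word)    # what travels over the single serial lane
    rx_word = deserialize(line_bits)  # recovered on the receiving die
    assert rx_word == tx_word
    print(f"sent 0x{tx_word:04X}, received 0x{rx_word:04X} in {len(line_bits)} bit times")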

The Benefits of Die-to-Die Interfaces

Modern chip implementations are trending towards solutions based on assembling multiple dies in the package to increase modularity and flexibility. Such a multi-die approach also facilitates more cost-effective solutions by splitting functionality into several dies to improve yield as (monolithic) chip size approaches full reticle size.

The interface between the dies must address all the critical requirements for such a system:

  • Power Efficiency. Multi-die system implementation should be as power efficient as the equivalent monolithic implementation. Die-to-die links use short-reach, low-loss channels with no significant discontinuities. The PHY architecture takes advantage of the good channel characteristics to reduce PHY complexity and save power.
  • Low Latency. Partitioning a server or accelerator SoC into multiple dies should not result in a non-uniform memory architecture in which accesses to memory on different dies incur significantly different latencies. Die-to-die interfaces implement simplified protocols and connect directly to the chip interconnect fabric to minimize latency.
  • High Bandwidth Efficiency. Advanced servers, accelerators, and network switches require transfers of massive amounts of data between dies. The die-to-die interface must be able to support all the required bandwidth with reduced die-edge occupancy. Two alternatives are commonly used to achieve this goal: minimize the number of required lanes by running the PHY at a very high data rate per lane (up to 112 Gbps), or increase the density of the PHY by using a finer bump pitch (micro-bumps) with low-data-rate lanes (up to 8 Gbps/lane) parallelized in large numbers to reach the required bandwidth (a lane-count comparison follows this list).
  • Robust Link. The die-to-die link must be error free. The interface must implement error detection and correction mechanisms robust enough to detect and correct all errors with low latency. These mechanisms typically include FEC and a retry protocol.
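
As a back-of-the-envelope illustration of the bandwidth-efficiency trade-off above, the sketch below compares how many lanes each PHY style would need to carry a given aggregate bandwidth. The 8 Tbps target and the raw per-lane rates are assumed values chosen only to make the comparison concrete; real designs must also account for coding and protocol overhead, bump pitch, and die-edge constraints.

    import math

    TARGET_BW_GBPS = 8_000    # assumed aggregate bandwidth target: 8 Tbps
    SERDES_LANE_GBPS = 112    # high-speed SerDes lane (up to 112 Gbps)
    PARALLEL_LANE_GBPS = 8    # high-density parallel lane (up to 8 Gbps)

    serdes_lanes = math.ceil(TARGET_BW_GBPS / SERDES_LANE_GBPS)
    parallel_lanes = math.ceil(TARGET_BW_GBPS / PARALLEL_LANE_GBPS)

    print(f"SerDes-based PHY:   {serdes_lanes} lanes at {SERDES_LANE_GBPS} Gbps each")
    print(f"Parallel-based PHY: {parallel_lanes} lanes at {PARALLEL_LANE_GBPS} Gbps each "
          f"(practical only with fine micro-bump pitch)")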



Die-to-Die Interface Use Cases

By combining multiple dies into one package, chiplets provide another way to extend Moore's law while enabling product modularity and process node optimization. Chiplets are used in compute-intensive, workload-heavy applications like high-performance computing (HPC).

There are four major use cases for die-to-die interfaces targeting applications like HPC, networking, hyperscale data center, and artificial intelligence (AI), among others:

Scale SoC

The objective is to increase compute power and create multiple SKUs for servers and AI accelerators by connecting dies through virtual (die-to-die) connections, achieving tightly coupled performance across dies.


Split SoC

The objective is to enable very large SoCs. Large compute and network switch dies are approaching the reticle limit. Splitting them into several dies keeps the design technically feasible, improves yield, lowers cost, and extends Moore's law.


Aggregate

The objective is to aggregate multiple disparate functions implemented in different dies to leverage the optimal process node for each function, reduce power, and improve form factor in applications such as FPGAs, automotive, and 5G base stations.


Disaggregate

The objective is to separate the central chip from the I/O chip to enable easy migration of the central chip to advanced processes, while keeping I/O chips in conservative nodes to lower risk/cost of product evolution, enable reuse, and improve time to market in server, FPGA, network switch, and other applications.


Die-to-Die Interface and Synopsys

Synopsys combines a broad portfolio of die-to-die 112G USR/XSR and HBI PHY IP, controller IP, and interposer expertise to provide a comprehensive die-to-die IP solution that supports die splitting, die disaggregation, compute scaling, and aggregation of functions. The SerDes-based 112G USR/XSR PHY and parallel-based 8G OpenHBI PHY are available in advanced FinFET processes. The configurable controller uses error detection and correction mechanisms with retry (replay) and optional forward error correction (FEC) to minimize the bit error rate for reliable die-to-die links. It supports Arm-specific interfaces for coherent and non-coherent data communication.
