How many transistors can you pack into the space of a handheld device? The number increases by orders of magnitude with each passing decade. Portable music players illustrate the point. In 1979, a transistor radio required roughly 200 transistors. In 1984, the CD player needed around 1,500 transistors. In 1990, an MP3 player needed approximately 10,000 transistors. For the digital audio player of 2015, it was around 1,000,000 transistors. Advances in process technology have enabled these increases, and today multi-die systems, largely driven by advanced AI and 5G applications, are helping to enable another gigantic leap forward. In fact, in 2023, a typical smartphone had over 10 billion transistors! The user advantages are clear. In the case of music, this translates to better fidelity; more space to play, store, and stream your favorite tunes; and more features for sharing, playback, and interoperability with other devices. But sophistication is not without its design challenges.
The increased need for compute resources is not in practical alignment with the capital expense of on-premise servers or the time it takes to install them. In the face of cost pressures, shrinking market windows, and market demands for better performance and more features, on-premise infrastructure is a burden that many businesses can no longer bear. The need for the elastic scaling of compute resources for IC design in the cloud has arrived.
The first broad-scale SaaS solution that lets you leverage the cloud for IC design is Synopsys Cloud. Synopsys Cloud combines the availability of advanced compute and storage infrastructure with unlimited access to EDA software licenses on demand. Synopsys recently collaborated with TSMC and Microsoft to conduct a test case for performing design rule checks (DRCs) in the cloud on the TSMC N3E process using Synopsys IC Validator™ physical verification, a Synopsys Cloud offering.
The results? DRC in the cloud can help get your next big, complex IC designs to sign-off faster. Here's how.
So why did we choose DRCs for our test case? DRCs ensure that designs operate correctly and can be manufactured in the foundry. Performing them with traditional on-premise compute resources can take precious time, especially as designs get larger and more complex.
Because today's designs are larger, the number of process rules has increased. In fact, the process rules for many of today's designs can number in the thousands, and the increased design complexity can result in hundreds of steps. For multi-die systems with billions of transistors, a DRC or layout-versus-schematic (LVS) job can run for multiple days and utilize hundreds of CPU cores.
The increased compute power that is needed within smaller time-to-market (TTM) windows causes physical verification challenges. This is especially true as process nodes advance from 7nm, to 5nm, to 3nm, and beyond. For instance, at 3nm a runset can contain over 15,000 complex rules and require 10x this number of DRC computational operations to execute the rules. As a result, full-chip DRC sign-off can consume tens of thousands of CPU hours for just a single iteration. While physical verification has always been compute intensive, the size and complexity of today's designs take this challenge to an entirely new level.
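To make that scale concrete, here is a minimal back-of-the-envelope sketch in Python. The rule count and the 10x operations ratio come from the paragraph above; the per-operation cost is a hypothetical placeholder, not a measured figure.

```python
# Rough, illustrative estimate of full-chip DRC compute demand at an advanced node.
rules = 15_000                    # complex rules in a 3nm runset (from the text)
ops = rules * 10                  # ~10x computational operations to execute the rules
core_seconds_per_op = 600         # hypothetical average cost per operation (assumption)

cpu_hours = ops * core_seconds_per_op / 3600
print(f"Estimated CPU hours per full-chip iteration: {cpu_hours:,.0f}")
# With these assumed numbers: ~25,000 CPU hours, i.e. "tens of thousands" per iteration
```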
Serial dependencies in DRC and LVS jobs mean that purchasing more compute power does not necessarily equate to faster run times. IC validation that requires computational scale means some of that compute power sits idle during the serial operations. If you don't find a way to optimize your computational resources for this kind of scenario, it will impact your bottom line: you will be paying for those unused resources.
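A simple Amdahl's-law calculation illustrates why throwing cores at a partly serial flow leaves much of that hardware idle; the 80/20 parallel/serial split below is purely an assumed figure for illustration.

```python
# Speedup and average core utilization for a workload with a fixed serial portion.
def speedup(parallel_fraction: float, cores: int) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

for cores in (100, 500, 1000, 5000):
    s = speedup(parallel_fraction=0.8, cores=cores)   # assumed 80% parallelizable work
    utilization = s / cores                           # useful work per purchased core
    print(f"{cores:5d} cores -> {s:5.2f}x speedup, {utilization:6.2%} average utilization")
```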
Using cloud computing for your IC verification can help you eliminate this waste. With cloud verification, you can scale from hundreds of on-premise CPU cores up to thousands of CPU cores in the cloud, and back down again. This elasticity gives you flexibility, agility, and scale, using only the compute resources you need, when you need them. The DRCs inside your runset can be distributed to run on multiple cores in parallel, optimizing compute resources and saving you time and money.
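As a sketch of the idea (not IC Validator's actual API), the snippet below fans a runset of independent checks out over a configurable pool of workers; the rule names and the run_check stub are hypothetical.

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

def run_check(rule_name: str) -> tuple[str, int]:
    """Stand-in for executing one design-rule check; returns (rule, violation count)."""
    # A real flow would invoke the DRC engine here.
    return rule_name, sum(map(ord, rule_name)) % 3    # fake violation count

if __name__ == "__main__":
    runset = [f"RULE_{i:05d}" for i in range(1_000)]  # hypothetical rule names
    workers = 64                                      # scale up or down elastically

    violations = {}
    with ProcessPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(run_check, rule): rule for rule in runset}
        for fut in as_completed(futures):
            rule, count = fut.result()
            if count:
                violations[rule] = count

    print(f"{len(violations)} of {len(runset)} rules reported violations")
```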
In a collaboration between Synopsys, TSMC, and Microsoft, we evaluated cloud verification against on-premise verification. To kick off the test, process design kits (PDKs) and DRC runsets from TSMC were uploaded to the Synopsys Cloud environment. Within Synopsys IC Validator, a separate app for physical verification inside the Synopsys Cloud environment, we selected resources based on design type, and compute options came pre-selected for those resources. After we uploaded the required scripts to run the test case and chose the Microsoft Azure instances for compute and for the shared storage, we created clusters of virtual machines (VMs) containing hundreds of CPU cores in a matter of clicks.
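For a sense of how cluster sizing works out, here is a toy sizing calculation; the instance catalog and the core target are hypothetical stand-ins for the pre-selected compute options in Synopsys Cloud.

```python
import math

cores_needed = 800                 # hypothetical core target for a large DRC job
catalog = {                        # hypothetical VM sizes: name -> cores per VM
    "compute_16": 16,
    "compute_64": 64,
    "compute_120": 120,
}

for name, cores_per_vm in catalog.items():
    vms = math.ceil(cores_needed / cores_per_vm)
    print(f"{name}: {vms} VMs -> {vms * cores_per_vm} cores provisioned")
```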
Our flows for the experiment were ready to be executed within a few hours, and we quickly ran a large test case to compare the results of a TSMC N3E process job run in the cloud versus on premise at TSMC. The results of the two runs (cloud and on premise) were compared with an XOR operation saved to a GDSII file, and the errors from the two runs had to fully match for a clean result.
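The comparison logic can be pictured with a simplified sketch: the real flow XORs GDSII layout data, but reducing each run to a set of error records (an assumption made here for brevity) shows the same pass/fail criterion.

```python
# Each record is a hypothetical (rule, x, y) error location reported by a run.
cloud_errors = {("M1.S.1", 120, 450), ("VIA2.EN.3", 88, 910)}
onprem_errors = {("M1.S.1", 120, 450), ("VIA2.EN.3", 88, 910)}

mismatch = cloud_errors ^ onprem_errors   # symmetric difference, i.e. a set-level XOR
if not mismatch:
    print("Clean: cloud and on-premise runs report identical errors.")
else:
    print(f"Mismatch in {len(mismatch)} records: {sorted(mismatch)}")
```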
The cloud job reduced wall-clock runtime from approximately 50 hours to under 20 hours, a 65% improvement over the on-premise job. In addition, CPU hours and cost were 25% lower for the cloud run than for the on-premise run.
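The arithmetic behind the headline number is straightforward; the 17.5-hour figure below is an assumed value consistent with "under 20 hours" and the quoted 65%.

```python
onprem_hours, cloud_hours = 50, 17.5      # ~50 h on premise vs. under 20 h in the cloud
reduction = (onprem_hours - cloud_hours) / onprem_hours
print(f"Wall-clock reduction: {reduction:.0%}")   # -> 65%
```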
IC Design in the Cloud Reduces Runtime by 65% in TSMC N3E Process
Image credit: TSMC
Synopsys IC Validator is a physical verification tool that can distribute jobs across thousands of CPU cores. At the heart of this technology is its scheduler, which queues the commands for each core and optimizes file locations along the DRC sequence. It also estimates and balances memory needs across the cores and minimizes peak disk usage, dynamically monitoring the load on each core and adjusting the system to improve core and memory utilization. Because it is built to work in heterogeneous configurations with real-world latency, its fault-tolerance capabilities can detect and recover from host reboots, network and socket failures, machine crashes, and disk space limitations.
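The scheduling idea can be sketched with a toy greedy scheduler: queue the commands, place each on the least-loaded core, and track per-core memory. This illustrates the concept only, not the IC Validator scheduler itself, and the job list with its memory estimates is hypothetical.

```python
import heapq

def schedule(commands: list[tuple[str, float]], cores: int) -> dict[int, list[str]]:
    """Assign (command, estimated_memory_gb) pairs to cores, balancing estimated memory."""
    heap = [(0.0, core_id) for core_id in range(cores)]   # (memory load, core id)
    heapq.heapify(heap)
    assignment: dict[int, list[str]] = {c: [] for c in range(cores)}

    # Placing the largest estimates first gives a better balance for a greedy policy.
    for cmd, mem in sorted(commands, key=lambda c: c[1], reverse=True):
        load, core = heapq.heappop(heap)
        assignment[core].append(cmd)
        heapq.heappush(heap, (load + mem, core))
    return assignment

jobs = [("DRC_M1_SPACING", 12.0), ("DRC_VIA_ENCLOSURE", 8.5),
        ("LVS_DEVICE_EXTRACT", 30.0), ("DRC_DENSITY", 22.0)]
for core, cmds in schedule(jobs, cores=2).items():
    print(core, cmds)
```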
IC Validator's dynamic elastic CPU management works seamlessly with popular job queuing systems such as Load Sharing Facility (LSF) and Sun Grid Engine (SGE), and it can be used across different types of compute networks, both on premise and in the cloud. Its resource and cost optimization happens while it accelerates timing closure to meet tape-out schedules, reducing compute resources by up to 40% while maintaining similar performance compared to traditional DRC and LVS jobs. This translates directly into cost savings in the cloud, where compute and storage are billed by time.
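A rough cost model shows why that reduction matters in the cloud, where you pay per core-hour; the rate and the baseline/optimized figures below are hypothetical placeholders, with the optimized run assuming roughly 40% fewer cores at similar runtime.

```python
rate_per_core_hour = 0.05                    # hypothetical $/core-hour
baseline  = {"cores": 1000, "hours": 50}     # hypothetical traditional run
optimized = {"cores": 600,  "hours": 50}     # ~40% fewer cores, similar performance

def cost(job: dict) -> float:
    return job["cores"] * job["hours"] * rate_per_core_hour

print(f"Baseline:  ${cost(baseline):,.0f}")  # $2,500
print(f"Optimized: ${cost(optimized):,.0f}") # $1,500
```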
In addition to these benefits, IC Validator doesn't need to wait until all resources are available to begin a job. It can start immediately with minimal resources and take on more as they become available. Azure's virtual machine scaling capabilities and accelerated networking (single root I/O virtualization, or SR-IOV, which offloads much of Azure's software-defined networking stack from CPUs to FPGA-based smart network interface cards, or NICs) help ensure scaling that is optimized for virtual machines and increased data throughput, respectively.
In addition to all of the time and cost benefits, you can keep your EDA-in-the-cloud deployment secure by taking steps to ensure your systems are properly protected. Stay up to date with the latest standards and keep your cybersecurity systems current. Having a well-managed and segregated virtual network (VNET) is key.
For more details, check out the TSMC best practices guide on easily performing physical verification in the cloud, or read about Synopsys IC Validator in the Cloud.