
Leveraging High-Performance Compute for Cloud-Based EDA

Sridhar Panchapakesan

Aug 17, 2022 / 5 min read


High-performance computing (HPC) uses supercomputers and computer clusters to process data and perform complex calculations. A typical computer with a 3 GHz processor can make approximately 3 billion calculations per second. This number may seem like a lot, but in reality, it pales in comparison to HPC's ability to perform quadrillions of complex calculations per second.

High-performance compute has a plethora of uses, from streaming sports to tracking weather to analyzing stocks. Below, we discuss how high-performance compute works. We also examine one of its most crucial use cases: electronic design automation (EDA) and chip design.

High-Performance Compute in a Nutshell

High-performance compute is the ability to process data and perform intensely complex calculations at high speeds. It is accomplished through multiple methods, including parallel programming, systems integration and administration, digital electronics, advanced computer architecture, system software, and algorithms.

HPC aggregates computing power so that data can be processed faster and more efficiently than with traditional computing. Large numbers of compute servers and storage devices work in unison to process data at high speeds. Through high-performance compute, we have been able to solve major challenges in science, engineering, and business.

High-performance computers are now more accessible than ever through the cloud. Cloud-based technology is changing the way we develop products and conduct research, as it allows for fewer prototypes and decreases time to market.  

Nowadays, high-performance computing relies on computing clusters and grids more than on standalone supercomputers. An HPC cluster consists of hundreds or thousands of computer servers, known as nodes, that are networked together. Each node works in parallel, boosting the overall processing capability of the cluster.

How HPC Works

HPC steps in for workloads that are too great for a single computer to process, such as DNA sequencing or large simulations. Instead of relying on one machine, HPC and other supercomputing environments break a complex challenge into smaller pieces that individual nodes process in parallel, working coherently as a single unit in a cluster.

Software programs and algorithms run simultaneously on the servers within the cluster, which is set up to capture the output. When each component operates as intended, the whole functions seamlessly, splitting up the total calculation workload and allowing you to process massive amounts of data and calculations in a short period.

High-performance compute typically uses one of two workload types (a short sketch of the first type follows this list):

  • Embarrassingly parallel workloads are computational tasks divided into smaller, simpler tasks. There is little or no dependency between the tasks, so they can run at the same time. Molecular modeling, contextual search, logistics simulations, and large-scale data processing all use embarrassingly parallel workloads.
  • Tightly coupled workloads also break down a large job, but the pieces must keep communicating during operation. Each node within the cluster communicates with the others as it processes, which requires every component to keep pace for maximum performance, and storage must move data to and from the compute servers as they operate.
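
To make the first category concrete, here is a minimal sketch of an embarrassingly parallel workload in Python, using multiprocessing on a single machine as a stand-in for the nodes of an HPC cluster. The corner names and the simulate_corner function are hypothetical placeholders, not part of any real EDA tool.

# Minimal sketch: independent tasks with no communication between them,
# run concurrently. On an HPC cluster, a scheduler would spread these
# tasks across nodes; here a local process pool plays that role.
from multiprocessing import Pool

def simulate_corner(corner):
    """Placeholder for one independent simulation job (e.g., a process corner)."""
    result = sum(i * i for i in range(100_000))  # stand-in for real computation
    return corner, result

if __name__ == "__main__":
    corners = ["tt_25c", "ss_125c", "ff_m40c", "sf_85c"]  # hypothetical job list
    with Pool() as pool:                  # one worker process per CPU core by default
        outputs = pool.map(simulate_corner, corners)
    for corner, value in outputs:
        print(corner, value)

Because the tasks never exchange data, adding nodes (or worker processes) scales throughput almost linearly; a tightly coupled workload would instead need message passing between the workers at every step.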

HPC in the Cloud

In the past, enterprises enabled HPC primarily through on-premises infrastructure. With the rise of accessible cloud computing, however, HPC resources are now more available to the commercial sector than ever, without the need for large investments in equipment.

Past on-premises HPC deployments required organizations to build clusters of servers, storage, and supporting infrastructure and to manage them over time. Since cloud providers manage that infrastructure in cloud HPC deployments, businesses can use a pay-as-you-go model instead.

Hybrid deployments are also possible, particularly for companies with pre-existing on-premises infrastructure.  

Applications in EDA

You can apply HPC to many fields, including computational fluid dynamics, building, transaction processing, and virtual prototype testing.  

HPC is especially useful in the last of these examples, where it reduces the need for physical tests by enabling advanced, complex simulations in place of real-world experiments. In the automotive industry, for example, simulation setups are cheaper and faster than physical crash tests. The same convenience applies to circuit boards and EDA.

HPC drives EDA-based innovation. It is the groundbreaking force behind scientific discoveries that improve the quality of life for individuals around the world. HPC is also often cheaper because it delivers answers faster, and it is a practical choice for small businesses and startups, which can run HPC workloads in the cloud and scale up and down as needed.

All of these benefits make HPC a key element in the future of chip design and EDA. Chip design is both computationally and memory intensive, and it spans multiple design phases. Frontend tools may be bound to single threads and CPUs, while backend tools may rely on optimized storage and abundant memory. EDA simulations may also involve 3D modeling, fluid dynamics, and other computationally intensive processes that require high-performance compute and data solutions.

Compute farms with a fixed-size model can leave jobs waiting in a queue for a license or for a right-sized computational node (the sketch following this list illustrates that bottleneck). Key performance improvements that HPC brings to EDA include:

  • Efficient license utilization. EDA tool licenses are often among the most expensive line items, so more efficient utilization reduces cost and accelerates time-to-market for new chips.
  • Higher productivity. Rather than hiring and training more engineers, HPC increases current engineers' productivity by reducing job wait times and run times, bringing products to market faster.
  • Lower infrastructure cost. With lower infrastructure costs, you can allot more resources to research and development, driving innovation.
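
As a rough illustration of the fixed-capacity bottleneck mentioned above, the following Python sketch dispatches a job only when both a tool license and a large-enough node are free. The job names, license count, and node sizes are invented for illustration and do not reflect any particular scheduler or EDA tool.

# Hedged sketch of a fixed-size compute farm: a job starts only when both a
# license and a node with enough cores are available; otherwise it waits.
from collections import deque

licenses_free = 2                                   # hypothetical license pool
nodes_free = {"node-16c": 1, "node-64c": 1}         # hypothetical fixed cluster
node_cores = {"node-16c": 16, "node-64c": 64}

# Each entry is (job name, cores needed); names are purely illustrative.
queue = deque([("synthesis_blk_a", 16), ("sta_top", 64), ("drc_full", 64)])

while queue:
    name, cores = queue[0]
    node = next((n for n, free in nodes_free.items()
                 if free > 0 and node_cores[n] >= cores), None)
    if licenses_free > 0 and node is not None:
        licenses_free -= 1
        nodes_free[node] -= 1
        queue.popleft()
        print("dispatch", name, "on", node)
    else:
        print(name, "waits: no free license or right-sized node")
        break  # in a fixed-size farm the job simply sits in the queue

An elastic, cloud-based cluster relaxes that last branch: rather than leaving the job in the queue, the scheduler can provision another node, which is the scenario described next.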

Since organizations often run multiple IC projects at the same time, resource allocation can be complicated. An on-premises cluster that is too small can result in slower time-to-market, while oversizing can lead to wasted resources. Many cloud HPC services offer an elastic cluster that grows and shrinks to fit an organization's workload. The ability to scale compute resources at any time enables chip design companies to reduce the risk of unavailable licenses. Furthermore, when tasks are complete and instances are idle, elastic scaling can terminate them to optimize costs.
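
A minimal sketch of such an elastic-scaling policy might look like the following; desired_nodes is a hypothetical sizing rule, not any cloud provider's API, and it assumes one job per node and one license per running job.

# Size the cluster to the runnable workload, bounded by free licenses and a
# budget cap; instances beyond the target are idle and can be terminated.
def desired_nodes(pending_jobs, running_jobs, free_licenses, max_nodes):
    # Only jobs that can actually obtain a license are worth provisioning for.
    runnable = running_jobs + min(pending_jobs, free_licenses)
    return min(runnable, max_nodes)

# Example: 3 jobs running, 10 waiting, 4 spare licenses, 12-node budget.
# The target is 7 nodes; any instances above that count are scaled back down.
print(desired_nodes(pending_jobs=10, running_jobs=3, free_licenses=4, max_nodes=12))  # -> 7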

As more industries, including chip design, are turning to HPC, the global HPC market continues to expand. With cloud performance becoming more reliable, secure, and powerful, companies can optimize their chip design process. Through elastic clusters in cloud-based HPC, IC enterprises can optimize license utilization, engineering productivity, and costs, allowing for efficient innovation in the next generation of integrated circuits.  

Synopsys, EDA, and the Cloud

Synopsys is the industry's largest provider of electronic design automation (EDA) technology used in the design and verification of semiconductor devices, or chips. With Synopsys Cloud, we're taking EDA to new heights, combining the availability of advanced compute and storage infrastructure with unlimited access to EDA software licenses on-demand so you can focus on what you do best: designing chips, faster. Delivering cloud-native EDA tools and pre-optimized hardware platforms, an extremely flexible business model, and a modern customer experience, Synopsys has reimagined the future of chip design on the cloud, without disrupting proven workflows.

 

Take a Test Drive!

Synopsys technology drives innovations that change how people work and play using high-performance silicon chips. Let Synopsys power your innovation journey with cloud-based EDA tools. Sign up to try Synopsys Cloud for free!


About The Author

Sridhar Panchapakesan is the Senior Director, Cloud Engagements at Synopsys, responsible for enabling customers to successfully adopt cloud solutions for their EDA workflows. He drives cloud-centric initiatives, marketing, and collaboration efforts with foundry partners, cloud vendors, and strategic customers at Synopsys. He has 25+ years' experience in the EDA industry and is especially skilled in managing and driving business-critical engagements at top-tier customers. He has an MBA from the Haas School of Business, UC Berkeley, and an MSEE from the University of Houston.
