
Cloud-Based Chip Design: The Outlook for Cloud EDA Tools

Mike Gianfagna

Feb 28, 2022 / 3 min read


Migration to the cloud is in full swing. Whether you realize it or not, a growing share of your life is hosted in the cloud. Your personal finances, your photos, much of the work you do for your employer, and your entire social media footprint are all hosted on public computing platforms. The trend is undeniable. Some still resist the move to the cloud for various reasons; more on that in a moment. Regardless of the current state, virtually everyone agrees cloud computing will take over as the dominant IT infrastructure. If only we could agree on when that will happen.

There are many analogies that apply here. The easiest one is computing as a utility. Most private individuals and large corporations don't construct their own power plants. It's far more cost-effective to buy electric power off the grid from utility companies that can deliver serious economies of scale. Solar power is changing this dynamic, but it's a supplement strategy rather than a replacement strategy. I do know folks who bought land in the middle of nowhere, far from the grid, and set out on a path to become energy self-sufficient. The project quickly became exponentially complex and expensive. Yet many companies still run on-premises data centers, the equivalent of running your own private power grid.

Security is another "buy vs. build" decision. All corporate enterprises possess very sensitive data housed in compute farms. Putting it all behind a corporate firewall gives a sense of security and control. Look a bit closer, however, and there is another dynamic at play. The larger and more prominent a corporate entity becomes, the more of a target it becomes for hackers, which means an ever-escalating investment in firewall security. Here we hit another economy-of-scale issue. Public cloud providers are also huge hacking targets, and a typical provider of that scale invests vast resources in securing its interface to the outside world. That investment is usually far greater than any one corporate entity can shoulder, no matter its size. In this case, the scale tips in favor of the public cloud providers for robust security. The public cloud is where most of your credit card transactions are processed. It tends to be safe.


The Case for Chip Design in the Cloud

The dynamics above are shared by many markets and applications. The design of complex chips brings another set of challenges to cloud migration. Chip design workloads are unusual: different parts of the design process demand vastly different profiles for CPU power, CPU parallelism, memory utilization, and disk space. EDA design flows also assume a shared NFS file system, something early cloud environments struggled to provide.
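To make that contrast concrete, here is a minimal sketch of per-stage resource profiles expressed as a simple Python table. The stage names and all of the numbers are purely illustrative assumptions, not measurements from any real design or tool.

```python
# Hypothetical per-stage resource profiles for an EDA flow. The stage names
# and figures are illustrative assumptions only, not data from a real design.
STAGE_PROFILES = {
    "synthesis":       {"vcpus": 16, "mem_gb": 64,  "scratch_gb": 200,  "parallel_jobs": 4},
    "place_and_route": {"vcpus": 32, "mem_gb": 256, "scratch_gb": 1000, "parallel_jobs": 1},
    "static_timing":   {"vcpus": 8,  "mem_gb": 512, "scratch_gb": 300,  "parallel_jobs": 2},
    "physical_verif":  {"vcpus": 64, "mem_gb": 128, "scratch_gb": 500,  "parallel_jobs": 8},
    "regression_sim":  {"vcpus": 4,  "mem_gb": 16,  "scratch_gb": 50,   "parallel_jobs": 500},
}

def request_for_stage(stage: str) -> dict:
    """Translate one stage's profile into a generic cloud resource request."""
    p = STAGE_PROFILES[stage]
    return {
        "cpu": p["vcpus"],
        "memory_gb": p["mem_gb"],
        "disk_gb": p["scratch_gb"],
        "array_size": p["parallel_jobs"],  # how many copies of the job to fan out
    }

if __name__ == "__main__":
    for stage in STAGE_PROFILES:
        print(stage, request_for_stage(stage))
```

The point is the spread: a signoff timing run wants memory, a regression farm wants hundreds of small slots, and place-and-route wants big single machines with fast scratch disk. No single fixed machine shape serves all of them well.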

The dynamic range of compute infrastructure requirements for chip design workloads also presents a challenge. Request too little and the job will fail; request too much and you will burn through your budget quickly. Without help, this is a very difficult balance to strike.
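One common way to strike that balance is to right-size each job from its observed peak usage plus a safety margin, then pick the cheapest machine shape that still fits. This is a minimal sketch of that heuristic; the shapes, prices, and headroom factor are made-up assumptions, not any vendor's actual catalog.

```python
# Right-sizing sketch: pick the cheapest machine shape that covers a job's
# observed peak usage plus headroom. Shapes and prices are invented for
# illustration and do not correspond to any real cloud offering.
SHAPES = [
    {"name": "small",  "vcpus": 8,  "mem_gb": 64,  "usd_per_hour": 0.40},
    {"name": "medium", "vcpus": 32, "mem_gb": 256, "usd_per_hour": 1.60},
    {"name": "large",  "vcpus": 96, "mem_gb": 768, "usd_per_hour": 4.80},
]

def pick_shape(peak_vcpus: float, peak_mem_gb: float, headroom: float = 1.25) -> dict:
    """Return the cheapest shape with headroom over the peaks seen on prior runs."""
    need_cpu = peak_vcpus * headroom
    need_mem = peak_mem_gb * headroom
    fits = [s for s in SHAPES if s["vcpus"] >= need_cpu and s["mem_gb"] >= need_mem]
    if not fits:
        raise ValueError("no shape is big enough; split the job or re-profile it")
    return min(fits, key=lambda s: s["usd_per_hour"])

# Example: a run that previously peaked at 20 vCPUs and 180 GB of memory
# lands on the "medium" shape rather than the oversized "large" one.
print(pick_shape(peak_vcpus=20, peak_mem_gb=180))
```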

A Case Study for Chip Design in the Cloud

While all the above paints a daunting picture, I'm here to tell you that designing chips in the cloud reliably, predictably, and cost-effectively is indeed possible. In a prior life, I ran marketing and IT for an ASIC and IP company. How those two responsibilities found me is a story for another day.

Our engineering infrastructure ran in a privately hosted data center. A typical complex ASIC design flow could balloon by 5 to 10X in compute requirements for parts of the flow, and a fixed compute footprint doesn't handle that well. So, we embarked on an ambitious project to move our entire FinFET-class design flow to Google Cloud Platform. These were early days for cloud migration, and Google was interested in a test case for chip design on its cloud. Good news for us.

One at a time, we tamed things like the NFS file system problem, the tiered storage problem (solid-state disks are very expensive, so use them wisely), the massive data transmission problem, and the interactive latency problem. That's just a short list of the challenges we faced. In the end, we were successful. On one sunny Friday afternoon, we transferred *all* engineering workloads from our private data center to two Google Cloud instances in Iowa and Singapore. Our private data center went dark.
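To give a flavor of the tiered storage problem, one simple policy is to keep recently touched design data on the expensive SSD tier and demote files nobody has opened in a while to cheaper storage. The sketch below assumes hypothetical paths, a mounted archive location, and a two-week idle threshold; it illustrates the policy, not the mechanism we actually deployed.

```python
# Tiered-storage sketch: demote files that have sat untouched on the fast,
# expensive SSD tier to a cheaper archive tier. Paths and the idle threshold
# are illustrative assumptions, not a description of a real deployment.
import os
import shutil
import time

HOT_DIR = "/ssd_scratch/project"   # fast, expensive tier (assumed path)
COLD_DIR = "/archive/project"      # cheap tier, e.g. a mounted bucket (assumed path)
MAX_IDLE_DAYS = 14                 # demote after two weeks without access

def demote_cold_files(hot_dir: str = HOT_DIR, cold_dir: str = COLD_DIR,
                      max_idle_days: int = MAX_IDLE_DAYS) -> int:
    """Move files whose last access time is older than the threshold; return the count."""
    cutoff = time.time() - max_idle_days * 86400
    moved = 0
    for root, _dirs, files in os.walk(hot_dir):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getatime(src) < cutoff:
                dst = os.path.join(cold_dir, os.path.relpath(src, hot_dir))
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)
                moved += 1
    return moved
```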

The best part was that no one noticed. Without missing a beat, ASIC and IP design work continued worldwide. The lack of impact was the best reward we could hope for. We began this effort before complete cloud offerings existed from EDA companies and others. Today, the world is a different place: support for chip design in the cloud is far better developed and improving all the time. Indeed, the outlook for chip design in the cloud is clear skies with a warming trend. There will be a lot more to say about this trend going forward.

In the meantime, if you want to learn the whole story of my adventure in cloud computing, .
