
Powering the AI Chip Design Process with AI

Mike Gianfagna

Jul 14, 2021 / 5 min read

Everyone is familiar with the riddle, "What came first, the chicken or the egg?" Many of us had fun with that one as kids. This kind of riddle is an exercise in causality: what caused what. Putting aside the chicken and egg version (which can make your head hurt), I'd like to examine a more contemporary and high-tech version, "What came first: AI or chips that accelerate AI?" At first glance, this seems like a trick question. AI algorithms have been around a long time. The field of study started in the 1950s, clearly way before there were chips to accelerate AI algorithms. Before there were chips at all, for that matter.

There is another twist to the riddle, however. When did AI become real? One could argue that happened once there were chips powerful enough to run AI algorithms at speeds that matched human needs. When someone asks Alexa a question, the answer needs to come back in under a second, not next week. So, AI accelerator chips make AI real, but without AI algorithms there's nothing to accelerate. We can now embark on a circular process to determine what came first.


Who Cares?

It is quite possible you're ready to dismiss this line of reasoning as potentially interesting but irrelevant. Before you do, consider this.

As AI becomes more ubiquitous, there is an ever-growing need for faster, smaller, and lower power accelerators. Designing these accelerators is quite difficult. The need to do it faster to meet time-to-market demands just makes it harder. What if AI could be applied to designing AI chips? That could very well be a game changer. It turns out that's exactly what's happening. Let's take a closer look.

AI for AI Chips: The Buzz

It turns out this is real, and a lot of people are talking about it. If you want a thought-provoking take on the topic, look for the keynote being presented this year by Aart de Geus, chairman and co-CEO of Synopsys: Does Artificial Intelligence Require Artificial Architects?

If you want to better understand the forces at play between chip design and AI, I highly recommend this blog post from Stelios Diamantidis. Stelios leads strategy and product management for Synopsys AI and is a founder of the Machine Learning Center of Excellence, where he looks at applying machine learning (ML) technology to key disruptions in the design and manufacturing of integrated computational systems. He's thought a lot about the interactions between AI and chip design.

He's also done a few things about it. One high-profile application Stelios worked on is the use of AI to learn from prior chip design efforts and consistently achieve better results on every project. The technology is called design space optimization, or DSO. I'll take a few words from Stelios' blog post to explain what it does. He's way better at explaining it than I am.

One disruptive application of AI in chip design is design space optimization (DSO), a generative optimization paradigm that uses reinforcement-learning technology to autonomously search design spaces for optimal solutions. By applying AI to chip design workflows, DSO facilitates a massive scaling in the exploration of choices while also automating a large volume of less consequential decisions. The approach creates an opportunity for the technology to continuously build on its training data and apply what it has learned to, ultimately, accelerate tapeouts and achieve power, performance, and area (PPA) targets. And one of the key advantages of AI is its support of reuse: the retained learnings gained for one project can be utilized for future projects, bringing greater efficiency into the design process.
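To make the shape of that idea a bit more concrete, here is a deliberately tiny sketch in Python. It is not DSO.ai and it does not use reinforcement learning; it stands in plain random search over a handful of made-up tool knobs, with a reward that minimizes power while holding total negative slack (TNS) above a floor, mirroring the case study discussed next. Every knob name, formula, and number below is a fabricated assumption for illustration only.

# Illustrative toy only: random search over a few hypothetical tool knobs.
# DSO.ai uses reinforcement learning over real tool runs; everything here
# (knob names, formulas, numbers) is fabricated to show the shape of the loop.
import hashlib
import random

SEARCH_SPACE = {
    "target_clock_ns":  [0.9, 1.0, 1.1],   # hypothetical timing targets
    "vt_mix_pct_lvt":   [10, 20, 30],      # hypothetical low-Vt cell budget (%)
    "placement_effort": [1, 2, 3],         # hypothetical effort knob
}

def pseudo_noise(cfg):
    """Deterministic, config-dependent value in [0, 1], standing in for run-to-run variation."""
    digest = hashlib.md5(repr(sorted(cfg.items())).encode()).hexdigest()
    return int(digest[:8], 16) / 0xFFFFFFFF

def evaluate_design(cfg):
    """Stand-in for a full synthesis/place-and-route run (hours in real life).
    Returns (power_mw, tns_ns) from made-up formulas."""
    noise = pseudo_noise(cfg)
    power_mw = (100.0 - 0.6 * cfg["vt_mix_pct_lvt"]
                + 25.0 * (1.2 - cfg["target_clock_ns"])
                + 2.0 * cfg["placement_effort"] + 5.0 * noise)
    tns_ns = (-8.0 * (1.2 - cfg["target_clock_ns"])
              + 0.4 * cfg["placement_effort"] - 0.5 * noise)
    return power_mw, tns_ns

def reward(power_mw, tns_ns, tns_floor_ns=-2.0):
    """Minimize power while keeping total negative slack above a floor."""
    if tns_ns < tns_floor_ns:
        return float("-inf")                # timing degraded too far: reject
    return -power_mw                        # lower power -> higher reward

best_cfg, best_reward = None, float("-inf")
for _ in range(200):                        # explore many candidate configurations
    cfg = {knob: random.choice(vals) for knob, vals in SEARCH_SPACE.items()}
    r = reward(*evaluate_design(cfg))
    if r > best_reward:
        best_cfg, best_reward = cfg, r

print("best configuration found:", best_cfg, "reward:", best_reward)

Even at this toy scale, the constrained-search pattern is visible. The production technology replaces random sampling with reinforcement learning over real tool runs and, as the quote above notes, retains what it learns so one project's exploration can seed the next.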

The impact of a tool like this is best shown graphically. The figure below depicts a case study to find the lowest power while maintaining total negative slack (TNS). In this case, there was no prior learning. You can see that an automated, AI-guided system can examine many, many data points to converge on a solution that is superior to what an expert human can achieve manually.

Timing vs. power DSO.ai case study, no prior learning

Now, see what happens when the system has prior learning to draw on: an even better result, in less time.

Timing vs. power DSO.ai case study, with prior learning

Now that blurs the line between AI and chip design. Starting to wonder what came first? Given the significant influence of AI, the product name became DSO.ai™. Synopsys first launched the product over a year ago, and it has had quite an impact; I'll talk more about that in a moment. The World Electronics Achievement Awards honored DSO.ai with the Innovative Product of the Year award in 2020. Award-winning AI assisting with the design of AI chips. Nice.

So, Who Uses AI to Design AI Chips?

I can give you some perspective on this question. At the recent SNUG World, there was an executive panel discussion entitled How Is AI Changing the Way We Approach Chip Design? The panel was led by our own Stelios Diamantidis. Panelists included:

  • Artour Levin, VP of engineering, Visual and Machine Learning IP, Intel
  • Paul Penzes, VP of engineering, Design Technology, Qualcomm
  • Sangyun Kim, VP of engineering, Chip Design Methodology, Samsung Foundry
  • Thomas Andersen, VP of engineering, AI & Machine Learning, Design Group, Synopsys

This group clearly understands chip design, AI, and how they intersect. The panel was one of the most highly rated at SNUG World and in the top 10 in terms of views. Since there were over 200 sessions, this is a meaningful accomplishment. SNUG World content is open to registered Synopsys users. If you're registered and haven't yet watched this one, I highly recommend it.

For those without access, I'll give you a sense of what the conversation was like. Here are some AI technologies mentioned and what was said about them by the customer members of the panel. I'm sorry I can't connect names and companies to this data. You'll need to access the SNUG World content for that.

DSO.ai

  • Achieve superior PPA scaling for new process nodes
  • "Allows more global optimization that is not practical to achieve using traditional approaches"
  • Total power reduction of 7% to 14% (combined dynamic and leakage power)
  • 200 MHz higher Fmax, one week, one engineer
  • 15% floorplan shrink, 2.5 weeks, one engineer
  • Improved power delivery network optimization
  • PPA target convergence, from six weeks to 1.5 weeks

Intelligent Test Selection

  • Achieve the same coverage with 60% fewer seeds
  • Significantly more bugs were found overall
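To give a feel for why fewer seeds can deliver the same coverage, here is a minimal sketch. It is not the Synopsys ML technology; it simply replays recorded per-seed coverage from earlier regressions and greedily drops seeds that add nothing new. The seed names and coverage bins are invented for illustration.

# Illustrative sketch only: keep a smaller set of regression seeds that still hits
# the same coverage, based on recorded per-seed coverage from earlier runs.
# The real intelligent test selection technology uses ML models; this toy uses a
# plain greedy set-cover pass just to show why fewer seeds can be enough.

# Hypothetical history: seed id -> coverage bins that seed hit last time.
coverage_history = {
    "seed_01": {"binA", "binB", "binC"},
    "seed_02": {"binB", "binC"},            # fully redundant with seed_01
    "seed_03": {"binD"},
    "seed_04": {"binA", "binD"},            # redundant with seed_01 + seed_03
    "seed_05": {"binE", "binF"},
}

def select_seeds(history):
    """Greedily keep only seeds that add coverage bins not already covered."""
    covered, selected = set(), []
    # Consider the most productive seeds first.
    for seed, bins in sorted(history.items(), key=lambda kv: -len(kv[1])):
        new_bins = bins - covered
        if new_bins:                         # keep the seed only if it adds something
            selected.append(seed)
            covered |= new_bins
    return selected, covered

selected, covered = select_seeds(coverage_history)
print(f"run {len(selected)} of {len(coverage_history)} seeds, "
      f"covering {len(covered)} bins: {selected}")

An ML-driven flow presumably goes further, predicting which seeds are worth running before spending simulation cycles on them, but the payoff is the same idea: equivalent coverage from a much smaller regression.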

Machine Learning-Assisted Library Characterization

  • 10X to 100X faster than SPICE
  • ~1.5X throughput improvement with higher accuracy
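As a rough illustration of the idea behind ML-assisted characterization, here is a small sketch: fit a cheap surrogate model to a few simulated anchor points, then use it to fill in the rest of a timing table without additional simulator runs. The data points and the simple linear delay model are fabricated assumptions; production flows are far more sophisticated and are validated against SPICE.

# Illustrative sketch only: fit a cheap surrogate to a handful of "SPICE"
# characterization points and use it to predict the rest of a timing table.
# The numbers and the linear delay model are fabricated for illustration.
import numpy as np

# Hypothetical anchor points that were actually simulated:
# (input slew in ns, output load in fF) -> measured cell delay in ns
spice_points = {
    (0.02, 5.0):  0.041,
    (0.02, 20.0): 0.068,
    (0.10, 5.0):  0.055,
    (0.10, 20.0): 0.083,
    (0.20, 10.0): 0.079,
}

# Fit delay ~ a + b*slew + c*load with least squares.
X = np.array([[1.0, slew, load] for slew, load in spice_points])
y = np.array(list(spice_points.values()))
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

def predicted_delay(slew_ns, load_ff):
    """Surrogate model: predicts delay without another simulator run."""
    return float(coeffs @ np.array([1.0, slew_ns, load_ff]))

# Fill in a table entry that was never simulated.
print(f"predicted delay at slew=0.05 ns, load=12 fF: {predicted_delay(0.05, 12.0):.4f} ns")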

A few other bits of information are worth sharing. Recently, Google and Nvidia also weighed in on the benefits of using AI and reinforcement learning to optimize the chip design process.

The other tools referenced above are described generically. If you're wondering where they come from, remember that SNUG World is the worldwide Synopsys user forum.

So, there's some background on the interplay between AI and AI chip design. Maybe it doesn't matter which came first. A more interesting question to ponder is, "How far will AI take us for AI chip design?" I will certainly be watching with great interest.
