
AI Chip Design Tools Deliver Autonomous Insights

Mike Gianfagna

Sep 28, 2021 / 4 min read

Everywhere you look, folks are talking about artificial intelligence (AI). It's impacting almost every product we buy and every service we use. The conjectures about what it all means seem limitless. Meanwhile, closer to home, we talk about AI being used to design chips (AI or otherwise). The topic borders on science fiction, but it's very real. You can find out about this reality in my prior blog post, where I explored whether autonomous chip design will take your job away. Spoiler alert: it won't.

So, what exactly is the long-term impact of AI on chip design? I recently attended a very enlightening presentation on the topic of AI from one of the members of our incredibly talented architect team. These folks all bring formidable knowledge to hard-to-solve problems. I'd like to summarize some of the eye-popping information shared by Stelios Diamantidis, senior director of Synopsys AI Solutions. The perspectives offered by Stelios will help you predict the future, at least with respect to AI.

Artificial intelligence competing in the game of Go

AI – How Big Is Big?

OpenAI is an AI research laboratory. Its rather far-reaching mission is to ensure that artificial general intelligence benefits all of humanity. The lab also studies and publishes information about the way AI is growing and how it's being used. If you have followed Moore's law over the years, some of these facts will get your attention.

OpenAI points out that since around 2012, the amount of compute used in the largest AI training runs has been increasing exponentially, with a 3.4-month doubling time. Remember, Moore's law had a roughly two-year doubling period. This means that since 2012, the increase is in excess of 300,000x. Note that a two-year doubling would yield only about a 7x increase over the same period. In my view, this defines explosive growth. In the words of Stelios, "Software is eating the world, and AI is eating software." Let's look at what this means with a specific example – a program that learns to master the game of Go, which is significantly more complex than chess. For perspective, chess has about 10^123 states and Go has about 10^360 states.
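
To make the comparison concrete, here is a quick back-of-envelope calculation. It is only a sketch using the figures above (a 3.4-month doubling time, a two-year Moore's-law doubling, and the 300,000x growth number); the exact window of years is implied by those figures rather than stated in the post.

```python
import math

AI_DOUBLING_MONTHS = 3.4     # OpenAI's observed doubling time for AI training compute
MOORE_DOUBLING_MONTHS = 24   # classic Moore's-law doubling period

# How many doublings does a 300,000x increase represent?
growth = 300_000
doublings = math.log2(growth)                    # ~18.2 doublings
months_elapsed = doublings * AI_DOUBLING_MONTHS  # ~62 months, roughly 5 years

# What would Moore's-law scaling have delivered over the same wall-clock time?
moore_growth = 2 ** (months_elapsed / MOORE_DOUBLING_MONTHS)  # ~6x

print(f"{doublings:.1f} doublings over ~{months_elapsed / 12:.1f} years")
print(f"Moore's-law growth over the same period: ~{moore_growth:.0f}x")
```

Since the actual increase was "in excess of" 300,000x, plugging in a somewhat larger growth figure stretches the window slightly and lands Moore's-law scaling at roughly the 7x cited above.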

According to DeepMind, AlphaGo Zero uses a novel form of reinforcement learning in which AlphaGo Zero becomes its own teacher. If the algorithm were able to compute at a rate of one petaflop, the process would take about three years. Note that one petaflop is equal to one thousand million million (10^15) floating-point operations per second, so this isn't slow, just not fast enough. The throughput of new AI hardware is mind-boggling.
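
To put that three-year figure in perspective, here is a simple conversion of the numbers above into a total operation count. This is just my own unit arithmetic based on the post's figures (10^15 operations per second sustained for about three years), not a number from DeepMind or OpenAI.

```python
PFLOPS = 1e15                      # one petaflop/s: 10^15 floating-point ops per second
SECONDS_PER_YEAR = 365 * 24 * 3600

# Total operations implied by running at one petaflop/s for ~3 years
total_ops = PFLOPS * 3 * SECONDS_PER_YEAR    # ~9.5e22 floating-point operations

# Expressed in petaflop/s-days, the unit OpenAI uses for training compute
pfs_days = total_ops / (PFLOPS * 24 * 3600)  # ~1,095 petaflop/s-days

print(f"{total_ops:.2e} operations, or about {pfs_days:.0f} petaflop/s-days")
```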

Some notable quotes are worth repeating here. This will provide a broader perspective on what's ahead:

  • "NOR flash enables 50x denser weight storage, resetting Moore's Law." - Michael B. Henry, Mythic
  • "46,225 mm² of silicon, 400,000 cores, 9 PByte/s memory bandwidth – training on 45,000 years of human intelligence." - Andrew Feldman, Cerebras
  • "Data center power consumption doubling every year (1/5 of all energy produced by 2022)." - Zaid Khan, Qualcomm

That last one should give you pause. We have a long, long way to go regarding energy efficiency. Low-latency speech recognition can demand the equivalent energy consumption of ~41 U.S. households. By contrast, the human brain is two to three orders of magnitude more efficient than today's silicon-equivalent processing power, delivering 10^16 FLOPS with about 20 watts of power consumption. And it does all this in the face of component variability and multiple failure modes. So, be excited, but realistic about what is left to conquer.
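
To see where that "two to three orders of magnitude" figure comes from, here is a rough efficiency comparison. The brain numbers are the ones quoted above; the accelerator numbers are my own round-figure assumptions for illustration, not figures from the post.

```python
# Human brain, per the figures above
brain_flops = 1e16         # ~10^16 FLOPS
brain_watts = 20           # ~20 W
brain_efficiency = brain_flops / brain_watts   # ~5e14 FLOPS per watt

# A modern AI accelerator, using assumed round numbers for illustration
# (on the order of a few hundred TFLOPS at a few hundred watts)
chip_flops = 3e14          # assumption: ~300 TFLOPS
chip_watts = 400           # assumption: ~400 W
chip_efficiency = chip_flops / chip_watts      # ~7.5e11 FLOPS per watt

ratio = brain_efficiency / chip_efficiency     # ~670x, i.e. two to three orders of magnitude
print(f"Brain is roughly {ratio:.0f}x more energy efficient")
```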

Back to Chip Design

OK, let's bring all this back to the problem at hand – designing impossibly complex chips against impossibly short schedules. The key points to remember about AI are:

  • AI is evolving fast, with no signs of slowing down
  • Hardware is fueling differentiation, and will continue to do so
  • We are still >1,000x less efficient than the human brain
  • New computing paradigms are around the corner (2023)

That last bullet deserves some discussion. The National Institute of Standards and Technology (NIST) dates back to 1901. It is the oldest physical science lab in the U.S. Today, NIST tracks innovations ranging from advanced semiconductors to earthquake-resistant skyscrapers and global communication networks. One of the items it is tracking is neuromorphic computing.

This technology promises to dramatically improve the efficiency of computational tasks such as perception and decision making. In other words, it's a turbo-boost for AI. I invite you to explore further. You will learn about things like spin torque oscillators and magnetic Josephson junction devices. No, this isn't like the flux capacitor from "Back to the Future"; it's real. These concepts are "bio-inspired," so it's only a matter of time before we reproduce the human brain. The time involved for that may be astronomical in scale, so don't quit your day job any time soon.

Recall that I mentioned a technology called DSO.ai in my last post. This innovative technology uses reinforcement learning to explore the solution space for chip design, finding physical implementation configurations that are superior to what humans can find, and doing so in far less time. This is exciting and liberating for designers, who have better things to do than search solution spaces.
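
To give a feel for what "exploring the solution space" means in practice, here is a toy sketch of reinforcement-style search over a made-up design parameter space. This is not DSO.ai or any Synopsys API; the knobs, the cost model, and the search loop are all hypothetical, purely to illustrate the idea of an agent iteratively trying configurations and learning from a reward signal.

```python
import random

# Hypothetical knobs a physical-implementation flow might expose (illustration only)
SEARCH_SPACE = {
    "utilization":  [0.60, 0.70, 0.80, 0.90],
    "aspect_ratio": [0.5, 1.0, 2.0],
    "clock_target": [1.0, 1.2, 1.5],   # GHz
}

def evaluate(config):
    """Stand-in for a real flow run: returns a made-up PPA-style reward.
    In reality this step is hours of synthesis and place-and-route."""
    quality = (config["utilization"] * 2.0
               - abs(config["aspect_ratio"] - 1.0) * 0.5
               + config["clock_target"])
    return quality + random.gauss(0, 0.05)   # noise mimics run-to-run variation

def random_config():
    return {knob: random.choice(values) for knob, values in SEARCH_SPACE.items()}

def search(iterations=50, epsilon=0.3):
    """Epsilon-greedy exploration: mostly perturb the best-known configuration,
    occasionally try something completely new."""
    best_config, best_reward = random_config(), float("-inf")
    for _ in range(iterations):
        if random.random() < epsilon or best_reward == float("-inf"):
            candidate = random_config()              # explore
        else:
            candidate = dict(best_config)            # exploit: tweak one knob
            knob = random.choice(list(SEARCH_SPACE))
            candidate[knob] = random.choice(SEARCH_SPACE[knob])
        reward = evaluate(candidate)
        if reward > best_reward:
            best_config, best_reward = candidate, reward
    return best_config, best_reward

if __name__ == "__main__":
    config, reward = search()
    print("Best configuration found:", config, "reward:", round(reward, 3))
```

A production system replaces the toy evaluate() with real implementation runs and uses far more sophisticated learning than epsilon-greedy, but the shape of the loop is the same: propose, measure, learn, repeat.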

But consider the full breadth of the solution space involved in a complex chip design. When you think about the interaction of form, function, and physics, the number of possibilities becomes vast, far greater than human comprehension can handle. Just looking at floorplanning options for a complex design can present 10^90,000 possibilities. Considering all the variables of chip design yields a far greater number. I would point out that, given the fast evolution of AI hardware and the promising new approaches to AI implementation, searching these spaces will be within reach much faster than any of us expect. The result will be a designer who has infinite visibility into the possibilities available. Autonomous design is really all about autonomous insight. With infinite insight, imagine the possibilities for innovation. This may be closer than any of us think.
