Artificial Intelligence (AI) has become pervasive in recent years and has rapidly established itself as a transformative technology. AI is powered by machine learning (ML) algorithms, which require massive computational power. Designers have traditionally relied on graphics processing units (GPUs) to execute these ML algorithms. Originally developed for graphics rendering, GPUs have proven well suited for performing the matrix and vector operations essential to machine learning. However, the AI hardware landscape is undergoing dramatic changes. The increasing complexity of computational requirements and the need for improved energy efficiency are driving the emergence of startups specializing in domain-specific AI processors. These startups are developing specialized AI processors with architectures optimized for ML algorithms, delivering significantly improved performance per watt compared to general-purpose GPUs.
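To make the point above concrete, a dense neural-network layer boils down to a matrix-vector product plus a bias, exactly the kind of operation GPUs and domain-specific AI processors are built to accelerate. The following is a minimal illustrative sketch (values chosen arbitrarily), not production ML code:

```python
# A dense layer computes y = W @ x + b, written out here as explicit
# multiply-accumulate loops to show the underlying matrix-vector work.

def dense_layer(weights, inputs, bias):
    """Return W @ x + b for a list-of-rows weight matrix."""
    return [
        sum(w * x for w, x in zip(row, inputs)) + b
        for row, b in zip(weights, bias)
    ]

# Illustrative 2x2 layer.
W = [[0.5, -1.0], [2.0, 0.25]]
x = [1.0, 2.0]
b = [0.1, -0.1]
print(dense_layer(W, x, b))  # [-1.4, 2.4]
```

Every output element is a chain of multiply-accumulate operations; hardware that parallelizes these chains is what makes ML workloads tractable.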
As AI technology continues to advance, the demand for greater computational power and energy efficiency will continue to increase. According to an analysis by SemiAnalysis, AI data center power needs are projected to surpass non-AI data center power needs by 2028, accounting for more than half of global data center power consumption, compared with less than 20% today (Figure 1).
Figure 1: Power need trends for AI data centers and non-AI data centers
The data center industry is attempting to alleviate the power demand by moving away from traditional air-cooled systems and turning to more expensive but highly effective liquid cooling solutions. However, relying solely on advancements in external cooling is not enough. To manage these increasing power demands, AI hardware developers must also innovate within the system design itself, exploring more comprehensive avenues for power optimization.
While developing system-on-chips (SoCs), designers can perform power optimization at various stages of the design, including the architecture level, the implementation level, and the underlying process technology level. Synopsys Foundation IP can help designers address these target areas (Figure 2). Power dissipation in an SoC is mainly attributed to dynamic power from circuit switching and leakage (or static) power. Dynamic power is dissipated when the processors are executing instruction workloads and is proportional to CV^2f, where C is the switching capacitance, V is the operating voltage, and f is the clock frequency of the circuit. Leakage power is dissipated whether the processor is idle or active, and scales with threshold voltage, transistor size, and temperature. Various power management techniques, such as power gating and dynamic voltage and frequency scaling (DVFS), are used at the architectural level to reduce total power. At the implementation and process technology levels, design optimization and careful management of the operating conditions of logic cells and embedded memories directly impact power consumption. Enabling logic cells and memories to operate at the lowest possible voltage while still maintaining the required performance, along with minimizing the capacitance on active nodes by using specialized cells, can contribute significantly to power savings.
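The relationships above can be sketched numerically. The model below applies the dynamic-power proportionality CV^2f plus a fixed leakage term, and shows why DVFS pays off: because dynamic power scales with V^2 and f together, modest reductions in both compound. All numbers are illustrative assumptions, not silicon data:

```python
# Back-of-the-envelope power model: P_total = C * V^2 * f + P_leak.
# Values are illustrative assumptions only.

def total_power(c_farads, v_volts, f_hz, p_leak_watts):
    """Dynamic switching power (C * V^2 * f) plus a fixed leakage term."""
    p_dyn = c_farads * v_volts ** 2 * f_hz
    return p_dyn + p_leak_watts

# Nominal operating point: 1 nF effective switched capacitance, 0.9 V, 2 GHz.
nominal = total_power(c_farads=1e-9, v_volts=0.9, f_hz=2e9, p_leak_watts=0.2)

# DVFS for a light workload: drop voltage 10% and frequency 20% together.
# Since P_dyn ~ V^2 * f, this cuts dynamic power by roughly 35%.
scaled = total_power(c_farads=1e-9, v_volts=0.81, f_hz=1.6e9, p_leak_watts=0.2)

print(f"nominal: {nominal:.2f} W, DVFS-scaled: {scaled:.2f} W")
```

The leakage term is unchanged by DVFS in this sketch; in practice it also falls with voltage, which is why operating cells and memories at the lowest viable voltage is such an effective lever.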
Leveraging deep capability built over multiple generations of Foundation IP optimization, Synopsys can play a crucial role in enabling power optimization for AI SoCs. The advanced solutions offered by Synopsys Foundation IP include highly optimized, silicon-proven Logic Libraries, General Purpose IOs (GPIOs), and Embedded Memories. With the richest cell set in the industry, Synopsys Logic Libraries and IOs are co-optimized with Synopsys electronic design automation (EDA) tools to fully exploit process technology benefits and deliver optimal power, performance, and area (PPA) trade-offs. Synopsys memories incorporate key ML algorithm-specific features that translate to significant area and power savings for AI chips.
Figure 2: End-to-end energy efficient design flow
Let us dive deeper into how Synopsys Foundation IP helps reduce power dissipation, specifically for AI processors.
Figure 3: (a) MAC unit block diagram (b) Memory read and write for a MAC unit (source: https://iopscience.iop.org/article/10.1088/1674-4926/42/1/013104)
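The multiply-accumulate (MAC) flow of Figure 3(b) can be sketched as follows: each MAC step reads a weight and an activation from memory, multiplies them, and adds the product to an accumulator, with the result written back once at the end. This is illustrative Python, not an RTL model; the per-step read count is a simplifying assumption (no operand reuse or caching):

```python
# Sketch of the MAC flow in Figure 3(b): two memory reads per MAC step,
# one multiply-accumulate, and a single write-back of the final result.

def mac_dot_product(weights, activations):
    """Accumulate w*a over all pairs, counting memory reads along the way."""
    acc = 0.0
    reads = 0
    for w, a in zip(weights, activations):
        reads += 2        # one weight read + one activation read
        acc += w * a      # multiply-accumulate
    return acc, reads     # followed by one write of the accumulated result

result, mem_reads = mac_dot_product([1.0, 2.0, 3.0], [0.5, 0.5, 0.5])
print(result, mem_reads)  # 3.0 and 6 reads for 3 MAC operations
```

The two-reads-per-MAC pattern is what makes embedded-memory access such a large share of an AI processor's energy budget, and why memory-level optimization matters as much as logic optimization.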
Figure 4: The increasing complexity of on-chip variation for low supply voltage
As application requirements and AI technology evolve, the demand for computationally powerful and energy-efficient AI processors continues to grow. Both traditional GPU-based architectures and some of the evolving optimized AI architectures are pushing the power efficiency curve to its limits. Traditional library and memory offerings optimized for CPUs and previous generations of GPUs can fall short of meeting the specialized needs of today's demanding AI SoC designs. As the leader in Foundation IP, Synopsys has been innovating for optimal PPA for more than 20 years, consistently delivering specialized solutions to meet the demanding and changing design needs of the semiconductor industry. Supported by a robust R&D team and skilled application engineers, Synopsys leverages its expertise in logic libraries, IOs, and embedded memories to deliver uniquely tunable solutions that enhance the full spectrum of AI chip capabilities.