As system-on-chip (SoC) architectures grow more complex, modern chip design and verification methodologies have evolved significantly to meet demanding and ever-changing power requirements.
As high-performance processor architectures, software, and artificial intelligence (AI) continue to enable the era of smart everything, faster computation to fuel data-intensive applications has emerged as the need of the moment. Be it image recognition, virtual reality, mobile applications, next-gen networking, or autonomous driving, the success of the silicon driving these advanced technologies relies heavily on the hardware's ability to process large amounts of data. But given the constraints of silicon technologies, not to mention bigger-picture environmental concerns in some cases, this must be done with optimal power consumption.
The push for greater intelligence at the edge has made power even more difficult to manage and sustain.
For a long time, emulation has proven to be essential for hardware verification, performance validation, and early software bring-up. Today, as the need for power optimization moves up the priority chain, bringing new silicon successfully to the market is no longer enough. Comprehensive solutions for power reduction and power management verification are critical for design teams to gain actionable power insights and verify the entire silicon to software ecosystem.
Read on to learn more about what makes the world of power so challenging and why semiconductor companies need to leverage a "shift left" approach to accelerate power verification using emulation.
While chipmakers have always factored power into the larger design equation, it did not hold as much importance as it does today.
A decade ago, the primary driver for chip development was frequency and the ability to deliver faster processor speeds measured in gigahertz (GHz). While some obsessed about how a 2-GHz processor would have 2 billion opportunities per second to perform core operations, others competed with 2.6 GHz that powered 2.6 billion chances, making it roughly 30% faster. Chip designers leveraged this emphasis on higher frequencies to evaluate how fast arithmetic logic units (ALUs) could process data to perform all computations, and to obtain important performance benchmarks to market their products.
However, this focus on computational performance has changed the power-to-performance equation.
Today's enormous compute-driven market demands mean that frequency and compute power can no longer be compared in percentages, but in multiples.
Additionally, power analysis with realistic software workloads could traditionally only be performed post-silicon. The design would go through its phases of architectural exploration, register-transfer level (RTL) design, physical design, and verification, all the way to when decisions are finalized and the design is shipped to the foundry, with no certainty around power consumption under real-life conditions. Searching this vast space for optimization opportunities is a labor-intensive effort that typically requires many weeks of experimentation, often relying on past experience and repeated trial and error. This arduous process introduces a high amount of risk in mission-critical, high-power situations, exposing companies to significant cost and power-performance tradeoff hazards.
Today, there are three underlying power metrics that design and verification teams need to pay close attention to. To give you a sense of the complexity of these metrics, consider the following comparison to a modern-day cellphone:
These implications extend to other fast-growing application areas as well, such as high-performance computing (HPC), 5G, AI, and automotive chips.
With the market emphasis on power analysis increasing significantly in recent years, power discussions have moved all the way up to the architectural exploration stage. This shift reflects how critical power management and the prediction of power-critical components have become to the larger silicon story.
As transistor dimensions shrink and the tradeoff between computing power and performance widens, emulation is becoming the modus operandi for power verification. While there is an increasing push for chip designers to build optimal designs that reduce power consumption, this is easier said than done.
Let's say a chip consumes 1 watt of power and you want to reduce its power consumption at the implementation level; this approach would allow for about 5-7% power reduction. However, doing that alone does not solve the problem. For greater power efficiency, identifying and reducing activity flow in the system, be it during the packet generation process or by finding redundant data in the flow, can cut power consumption by as much as 2x. As power becomes a bigger piece of the silicon puzzle, this approach of leveraging multi-stage activity analysis is seeing growing interest from chipmakers.
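To make the scale of that difference concrete, here is a minimal sketch using the standard CMOS dynamic power relation (P ~ activity factor x capacitance x voltage squared x frequency). The function and all numbers are purely illustrative assumptions, not output from any Synopsys tool.

```python
# Illustrative sketch: compare an implementation-level power trim against an
# activity-level reduction using the CMOS dynamic power relation
# P_dyn ~ alpha * C * V^2 * f. All values below are made up for illustration.

def dynamic_power(alpha, capacitance_f, voltage_v, freq_hz):
    """Dynamic switching power in watts: activity factor x C x V^2 x f."""
    return alpha * capacitance_f * voltage_v ** 2 * freq_hz

baseline = dynamic_power(alpha=0.20, capacitance_f=1.0e-9, voltage_v=0.9, freq_hz=2.0e9)

# Implementation-level tuning (cell sizing, clock-gating refinements) might
# shave a few percent off the effective switched capacitance.
impl_tuned = dynamic_power(alpha=0.20, capacitance_f=0.94e-9, voltage_v=0.9, freq_hz=2.0e9)

# Removing redundant activity (e.g., suppressing needless packet traffic)
# halves the activity factor and roughly halves dynamic power.
activity_tuned = dynamic_power(alpha=0.10, capacitance_f=1.0e-9, voltage_v=0.9, freq_hz=2.0e9)

print(f"baseline:        {baseline:.3f} W")
print(f"implementation:  {impl_tuned:.3f} W  ({(1 - impl_tuned / baseline):.0%} saved)")
print(f"activity-driven: {activity_tuned:.3f} W  ({(1 - activity_tuned / baseline):.0%} saved)")
```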
How does emulation fit into the picture?
Essentially, emulation mimics the behavioral characteristics of the actual hardware, runs the mapped design, and accurately reproduces the activity flow in the system. While dynamic power and peak power are a function of the streams of data flowing through the system, it is important to evaluate the average power demands of the system for accurate power profiling.
Compared to the days when simulation was the powerhouse for verification, emulation provides a 1000x speedup. This equips chipmakers with the ability to run realistic workloads across an exhaustive number of cycles to accurately diagnose the average power consumed and identify components where peaks are high, a critical step in power verification.
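As a rough illustration of what that kind of profiling looks like, below is a minimal sketch that computes window-by-window average power over a long cycle trace and flags windows above a peak threshold. The per-cycle trace here is randomly generated and purely hypothetical; it stands in for data a real power calculation engine would produce.

```python
# Illustrative sketch of power profiling over an emulation-length trace
# (hypothetical data, not a real emulator API): compute average power per
# window of cycles and flag the windows where consumption peaks.

import random

CYCLES = 1_000_000          # emulation makes traces of this length practical
WINDOW = 10_000             # cycles per profiling window

# Stand-in for a per-cycle power trace produced by a power calculation engine.
trace_mw = [random.uniform(200, 400) for _ in range(CYCLES)]

windows = [trace_mw[i:i + WINDOW] for i in range(0, CYCLES, WINDOW)]
window_avgs = [sum(w) / len(w) for w in windows]

overall_avg = sum(window_avgs) / len(window_avgs)
peak_threshold = 1.1 * overall_avg   # flag windows 10% above the overall average

hotspots = [i for i, avg in enumerate(window_avgs) if avg > peak_threshold]
print(f"average power: {overall_avg:.1f} mW")
print(f"windows exceeding threshold: {hotspots[:10]} ...")
```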
Adopting a "shift left" approach becomes imperative to ensure timely detection of power-hungry activities early in the software development life cycle (SDLC). The idea is that it is faster and more cost-effective to detect vulnerabilities and resolve power issues early on than post-silicon. This requires fast and comprehensive verification tools to ensure teams can complete successful SoC power analysis and optimization within the tight schedule constraints of a design cycle.
The effectiveness of compiling complex designs on emulation systems depends on several factors, such as capacity, the ability to run in specific functional modes to determine pass or fail, and debugging capabilities. To meet such formidable challenges, it is critical for next-gen emulation systems to leverage fast emulation hardware technologies that deliver short turnaround times.
Synopsys has always been a leader in responding to the industry's most pressing requirements, with verification being an important focus. As multi-billion-gate SoC workloads and high-power designs continue to grow in complexity, we have been closely tracking the mounting design challenges that come with power and have developed a novel technology to address them.
Delivering maximum compute performance, ZeBu Empower is the industry's first and fastest power-aware emulation system for multi-billion-gate designs, enabling power verification within hours using real-life workloads. Leveraging the best emulation and power calculation engines, it allows designers to quickly identify power-hungry areas early on to improve the system's dynamic and leakage power, in addition to meeting mission-critical compute needs at high speed.
Synopsys has built unique market leadership by creating a highly scalable and efficient power calculation engine that allows parallelization across both design size and emulation cycles. From an architectural standpoint, this breakthrough allows teams to scale design cycles and tie them to emulation capabilities like never before. The performance of ZeBu Empower enables multiple iterations per day with actionable power profiling in the context of the full design and its software workload.
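A conceptual sketch of that kind of parallelization is shown below. It is hypothetical and not representative of the ZeBu Empower implementation: the emulation cycle range is simply split into independent windows, and a placeholder power calculation runs for each window on a separate worker.

```python
# Illustrative sketch (hypothetical, not the ZeBu Empower implementation):
# parallelize power calculation by splitting the emulation cycle range into
# independent windows and computing each window's power on a separate worker.

from concurrent.futures import ProcessPoolExecutor

def power_for_window(window):
    """Stand-in power calculation for one slice of emulation cycles."""
    start, end = window
    # A real engine would read switching activity for cycles [start, end)
    # and evaluate power on the mapped design; here we return a dummy value.
    return start, end, float(end - start) * 0.001  # placeholder mW

def split_cycles(total_cycles, num_windows):
    step = total_cycles // num_windows
    return [(i * step, (i + 1) * step) for i in range(num_windows)]

if __name__ == "__main__":
    windows = split_cycles(total_cycles=1_000_000, num_windows=8)
    with ProcessPoolExecutor(max_workers=8) as pool:
        for start, end, mw in pool.map(power_for_window, windows):
            print(f"cycles {start:>8}-{end:>8}: {mw:.1f} mW (placeholder)")
```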
The industry's need to shift verification left from post-silicon to pre-silicon has already driven wide adoption of ZeBu Empower. Our customers are now empowered to meet the demanding needs of hardware-software power verification and develop a new generation of power-optimized SoCs. This unmatched capability allows verification teams to take advantage of ZeBu Empower's fast emulation to optimize operating systems and chip designs, while dramatically reducing heat sink complexities and missed SoC power targets.
Power is the next frontier for SoC designs. As the future of AI and its impact on our everyday lives continues to unfold, the need for power analysis and power-efficient computation will grow across every vertical that depends on chip design. The avenues of multi-stage activity analysis and of using IP (intellectual property) blocks with known power consumption models promise exciting opportunities for advancements in power optimization. Ultimately, this will result in more convenience, safety, automation, and seamless communication across just about every aspect of our lives.
Power verification will be the fulcrum that ties together modern areas of chip design, now and going forward.