In the world of processor development, flexibility is becoming a distinct advantage. As an open-standard instruction set architecture (ISA), the fifth iteration of reduced instruction set computing (RISC-V) embodies this direction. It is rapidly changing the industry by opening new possibilities for collaboration, innovation, and design autonomy.
Already widely used in embedded applications and microcontrollers, RISC-V is also likely to play an important role in the future of high-performance computing and data centers. Within this context, energy efficiency is a central theme. The most advanced technology has a voracious appetite for energy, and finding ways to support innovation while reducing power consumption is a priority for businesses everywhere. It's no exaggeration to say that in today's world, every fraction of a percent of power reduction is critical.
RISC-V designs leverage a hardware description language to describe the processor microarchitecture. This description, commonly referred to as Register Transfer Level (RTL) code, underpins the development of energy-efficient RISC-V designs. Read on to learn more about how it works and how Synopsys supports the design and implementation of systems that save valuable power.
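To make the idea concrete, here is a minimal, hypothetical RTL fragment in SystemVerilog. It is not taken from any real RISC-V core; the module name, parameter, and enable signal are illustrative assumptions. It describes how an accumulator register updates on each clock edge, which is exactly the kind of register-transfer description that synthesis tools turn into gates.

// Minimal RTL sketch, illustrative only (not from any real RISC-V core):
// a registered multiply-accumulate stage. The always_ff block states how
// the accumulator register updates on each clock edge -- the "register
// transfer" that gives RTL its name. The enable signal lets synthesis
// infer clock gating, a common power-saving structure.
module mac_stage #(
    parameter WIDTH = 32
) (
    input  logic               clk,
    input  logic               rst_n,
    input  logic               en,    // update only when new operands arrive
    input  logic [WIDTH-1:0]   a,
    input  logic [WIDTH-1:0]   b,
    output logic [2*WIDTH-1:0] acc
);
    always_ff @(posedge clk or negedge rst_n) begin
        if (!rst_n)
            acc <= '0;
        else if (en)
            acc <= acc + a * b;   // next state of the acc register
    end
endmodule

Everything that follows in this article, from early power analysis to physical implementation, ultimately operates on descriptions like this one.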
RISC-V's flexibility comes from its unique set of features that enable users to optimize both software and hardware for specific use cases, ideally improving both performance and energy efficiency.
Beyond flexibility, RISC-V offers developers and designers greater control over their computing environments and allows them to fine-tune the system without relying on third parties. As an open-standard platform, RISC-V also allows users to avoid the license fees otherwise associated with proprietary architectures.
Finally, developers gain added visibility into the code base, its evolution, and potential security risks.
Across almost every design form, application, and market segment, energy efficiency has become a key consideration. On one side are the companies that design hardware for portable applications such as mobile devices and wireless connectivity. In this context, battery life is as important a concern as ever.
With the end-user experience in mind, driving down power while optimizing performance is critical; it means the difference between being able to view high-definition video on a smartphone for the entire duration of a film or only five minutes. In the medical field, energy efficiency dictates the effectiveness and reliability of implantable devices such as pacemakers.
Then there is the Internet of Things (IoT), where devices spend most of their time in sleep mode, intermittently waking up to process and send information. These devices, too, depend on long battery life.
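As a rough, simplified model (not from the original article), the average power of such a duty-cycled device is approximately P_avg ≈ D × P_active + (1 − D) × P_sleep, where D is the fraction of time the device spends active. Because D is small for most IoT nodes, sleep-mode power and the energy cost of each wake-up end up dominating battery life.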
On the flip side, power can become a constraint: it sets thermal limits and raises reliability concerns, leading companies to look for ways to optimize performance per watt. In this context, reducing power becomes the basis for increasing performance and pushing the power-performance envelope.
The flexible nature of RISC-V is a valuable asset in terms of the customization potential it affords developers. But it also raises questions about how a software algorithm should be implemented. Should it run on the general-purpose base instruction set architecture? Or are there benefits to be had from extending the instruction set architecture with new, custom instructions? Such decisions determine how efficiently the algorithm will run from both a performance and an energy-efficiency standpoint.
Of course, there is no such thing as a free lunch. Extending the instruction set architecture might enhance performance and energy efficiency, but the cost is a more complex processor. Implementing new instructions translates into additional logic gates, and with them an area penalty. There is an inevitable trade-off between power, performance, and area (PPA), and a developer or designer making these choices must be clear on what that trade-off is and on what function the processor is ultimately going to serve.
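As a sketch of what an ISA extension costs in hardware, consider a hypothetical packed "dot-product" instruction placed on the RISC-V custom-0 opcode. The module, field checks, and datapath below are illustrative assumptions, not part of any shipping core, but every line becomes real gates: decode comparators, multipliers, and an extra input on the writeback mux.

// Hypothetical custom-instruction unit, for illustration only.
module custom_dotp_unit (
    input  logic [31:0] instr,         // fetched instruction word
    input  logic [31:0] rs1_data,
    input  logic [31:0] rs2_data,
    output logic        custom_sel,    // tells the writeback mux to pick this result
    output logic [31:0] custom_result
);
    // custom-0 major opcode, reserved in the RISC-V base spec for extensions
    localparam logic [6:0] OPC_CUSTOM0 = 7'b0001011;

    // Decode: each new instruction adds comparators to the decoder
    assign custom_sel = (instr[6:0] == OPC_CUSTOM0) && (instr[14:12] == 3'b000);

    // Datapath: two 16x16 multiplies plus an add in a single instruction --
    // work that would otherwise take several base-ISA instructions
    assign custom_result =
          ($signed(rs1_data[15:0])  * $signed(rs2_data[15:0]))
        + ($signed(rs1_data[31:16]) * $signed(rs2_data[31:16]));
endmodule

Whether the speedup justifies the extra gates depends on the workload the processor will actually run.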
But how can the designer assess these PPA trade-offs?
Once the design architecture is defined as RTL, power analysis tools such as PrimePower RTL can be used to assess the power consumption of the design early in the process. PrimePower RTL delivers accuracy consistent with final power signoff results, allowing the user to make informed, confident decisions to optimize their design.
The other crucial component for this process is emulation. With the Synopsys ZeBu emulation system and ZeBu Empower, a massively parallel power analysis tool, power analysis is performed with a realistic workload, avoiding the pitfalls associated with synthetic simulation vectors that don't represent real-world use cases. With the aid of these Synopsys tools, the designer can explore the PPA trade-off and, after running analysis on different architectures for comparison, choose the most appropriate one for the task at hand.
It bears emphasizing that RTL is a technology-independent language that requires physical implementation, which is the role of the Synopsys Fusion Compiler solution. Fusion Compiler is a hyper-convergent implementation solution where unified RTL-to-GDSII optimization engines unlock new opportunities to identify the best performance, power, and area. When used in conjunction with Synopsys DSO.ai, it becomes possible to streamline the process to produce lower-power, higher-performance, smaller-area designs automatically within a compressed timeframe.
Data centers are among the most power-intensive applications. In fact, the Electric Power Research Institute expects data centers' share of U.S. electricity consumption to rise significantly in the coming years. While AI is also in the spotlight for its sizeable energy requirements, the technology has an important role to play in driving down overall energy usage.
In one example of an energy-efficient processor design, a Synopsys DSO.ai-driven reference flow delivered surprising results for a RISC-V-based high-performance CPU core targeted for use in data centers. The design, implemented in a 5nm process technology, started at 1.75GHz and 29.8mW of power, with a target of 1.95GHz at 30mW. The estimated timeline for two expert engineers was a month. Synopsys' AI-driven reference flow achieved 1.95GHz at 27.9mW within just two days, also hitting the expected area target.
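To put those numbers in perspective: the frequency target represents roughly an 11% speedup over the 1.75GHz starting point, while the final power came in about 6% below the baseline and 7% under the 30mW budget. In performance-per-watt terms, that is a move from roughly 59MHz per milliwatt to about 70MHz per milliwatt, an improvement of close to 19%, delivered in days rather than weeks.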
As demands for power optimization become more rigorous, we can expect AI¡¯s presence in design to continue to expand, offering energy-efficient SoC solutions that would be impossible to achieve manually. See how Synopsys supports the future of energy-efficient processors developed with RISC-V, via our ARC-V processor IP and RISC-V solutions.