Archimedes famously said, "Give me a place to stand, and a lever long enough, and I will move the world."
Well, would you rather move the earth by brute force alone or find yourself a magic flying fulcrum and lever combo to aid you in your efforts? Would you rather spend your time and effort tweaking your design tools to hit your power, performance, and area (PPA) targets? Or would your expertise be better used differentiating your chip designs?
The answer (to the latter questions, at least) is pretty obvious. Who wouldn't want to innovate? But whether you have the means to spend more of your time doing so comes down to the difference between using disparate design and implementation tools and using a highly convergent and correlated tool flow.
In this blog post, we'll take a closer look at what it means when your tool flow can actually be a force multiplier for your organization.
Everyone is in a race to get to market first with a chip that has the best PPA. Engineering ingenuity goes a long way toward this goal. After all, smart engineers can always make things work, but often at a huge hidden cost. Time spent tinkering with disparate tools to increase correlation of results is money and design cycles wasted, and time not spent further differentiating your design. Some designers find themselves adding margins and essentially making the implementation tool work harder to achieve similar numbers at timing signoff. Unfortunately, this "forced correlation and convergence" approach can significantly "overcook" the system, manifesting as extra power or area in the resulting design.
Say you have a year to complete a project. If you could cut the time it takes to hit your initial PPA targets to six months, you would have another half a year to architect and further enhance your PPA metrics, perhaps well beyond what you imagined possible. Freed up to do certain things earlier in the cycle, you can enjoy the efficiencies and positive outcomes of a "shift left" approach. With today's market pressures and distributed design teams, anything that brings greater efficiency can be turned into a competitive advantage.
The changing landscape of the hardware design world also calls for a reinvented tool flow. These days, traditional chip design houses are joined by hyperscalers who are designing their own high-performing chips for the massive data centers that support their core businesses, such as social media, search, and e-commerce platforms. These companies need their design engineering teams to ramp up and become productive quickly. A common platform of chip design tools simplifies the effort and fosters better outcomes, eliminating the need to spend time making the tools work together.
Tightly correlated tools also share metadata that can prove useful for optimization later in the design flow. By contrast, if you're using point tools with typical standardized database or ASCII handoffs, that metadata gets lost. For example, when you synthesize an adder, you create the netlist, and often even the fact that it's an adder gets lost in the broad sea of elemental logic gates. A converged tool flow, however, remembers that you have a 32-bit input adder. Later in the flow, you can easily change the adder's structure if you find it sits on a critical data path that needs to be faster. Power intent is another useful parameter when shared across tools (and in a common data model). When you read the RTL in, you can understand the power intent. You can see that a particular register set you'll infer later belongs together, so you can treat it as a bank of multi-bit registers, keeping that information available in the data model. This way, you can optimize the bank of registers accordingly, something that wouldn't be possible if you had no knowledge of the power intent across the registers or that they were meant to exist as a common structure.
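To make the idea concrete, here is a minimal Python sketch of what such a shared data model might look like. It is purely illustrative, not the internals of any Synopsys tool: the class and method names (Adder, RegisterBank, DesignModel, retime_for_speed, merge_to_multibit) are hypothetical, and the point is simply that high-level structure and power intent stay attached to the design objects rather than being flattened away at each handoff.

```python
# Illustrative sketch only (not actual EDA tool internals): a shared data model
# that keeps high-level structure alongside the gate-level view, so a downstream
# step can still see "this is a 32-bit adder" or "these registers form one bank
# with common power intent."
from dataclasses import dataclass, field
from typing import List


@dataclass
class Adder:
    """Datapath operator whose identity survives synthesis in the data model."""
    name: str
    width: int                    # e.g., 32-bit input adder
    architecture: str = "ripple"  # can be restructured later in the flow

    def retime_for_speed(self) -> None:
        # If this adder lands on a critical path, switch to a faster structure.
        self.architecture = "carry-lookahead"


@dataclass
class RegisterBank:
    """Registers inferred from RTL, tagged with shared power intent."""
    name: str
    bits: int
    power_domain: str  # power intent carried in the data model

    def merge_to_multibit(self) -> str:
        # Because the bank and its power domain are known, the flow can map the
        # whole group to multi-bit flops instead of optimizing bit by bit.
        return f"{self.bits}-bit multi-bit register bank in domain {self.power_domain}"


@dataclass
class DesignModel:
    """Common data model shared by synthesis and implementation steps."""
    adders: List[Adder] = field(default_factory=list)
    reg_banks: List[RegisterBank] = field(default_factory=list)


if __name__ == "__main__":
    model = DesignModel(
        adders=[Adder(name="u_alu_add", width=32)],
        reg_banks=[RegisterBank(name="u_pipe_regs", bits=64, power_domain="PD_CORE")],
    )

    # Later in the flow: the adder turns out to be on a critical path.
    model.adders[0].retime_for_speed()
    print(model.adders[0].architecture)           # carry-lookahead
    print(model.reg_banks[0].merge_to_multibit())
```

With a netlist-only handoff, the downstream tool would see anonymous gates and individual flops; here, because the structural intent rides along in the model, the late-stage restructuring and multi-bit banking decisions are straightforward.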
Chip designs are getting bigger and more complex as engineers aim to extract more from Moore's law. Hyper-convergent designs that integrate multiple architectures, technologies, and protocols into one massive, interdependent, and highly complex design are driving the need for hyper-convergent chip design flows. Parameters that were previously analyzed independently, such as signal and power integrity, performance, and heat dissipation, now benefit from a more holistic approach because everything in the design is so intertwined. In short, the traditional, highly iterative approach to digital chip design no longer provides sufficient agility for today's demands. It is no longer enough to be innovative; to thrive, you must innovate faster.
Envisioning the advantages that a scalable, convergent, and correlated design flow would bring to the industry, Synopsys embarked on a path to build an integrated platform of design tools with a common data model and shared engines. The innovative Synopsys Fusion Design Platform, featuring the industry's only integrated, golden-signoff-enabled RTL-to-GDSII design flow, accelerates semiconductor innovation by delivering unprecedented full-flow quality of results and time to results. Its common data model ensures that nothing gets lost in translation as the design moves through its phases. Machine-learning capabilities in the platform mean that choices and learnings from early in the flow can be transferred downstream to speed up optimizations. Using Synopsys' DSO.ai technology, the industry's first AI application for chip design, together with the Fusion Design Platform further enhances productivity and extends achievable PPA metrics.
So far, the customer feedback has been inspiring.
As chip designers pack more into their SoCs, and the physics effects associated with shrinking geometries become ever more pernicious, it's clear that using a highly correlated, convergent chip design flow is a good direction. This way, nothing gets lost in translation through the design cycle, and engineers gain additional, valuable time to further differentiate their designs. Compared to using disparate tools, having a common data model and a singular technology framework draws the best from each tool in the integrated chain, so that the sum of all of these parts yields something much, much greater.