As you embark on creating or updating your next SoC design, you're undoubtedly driven by customer demands and predictions of future market requirements. Nobody sets out to create a product that resembles their competitors', so how your chip will be differentiated is likely also a top-of-mind consideration.
Simultaneously, you have performance, power, and area (PPA) targets to hit. Especially for SoCs going into edge and battery-operated devices, where area and power budgets come at a premium, the efficiency of the processor is critical. What if you could customize the processor IP to trade off PPA for your specific use case?
As the "brains" of your SoC, the processor is a good, and sometimes overlooked, place to start when it comes to making your chip stand out from the competition and achieving optimum PPA. In this blog post, we'll take a look at how and why companies are deploying customized processor IP and examine the technologies and tools that streamline the process. (Spoiler alert: you don't have to be a processor architect to implement a customized processor.)
Back in the day, it wasn't uncommon for large semiconductor companies to maintain their own proprietary processors that were designed with their specific applications in mind. Eventually, however, the expense of doing this, along with the need for a software development ecosystem that went beyond what a single company could pull off, changed the landscape. Use of processor IP based on standardized instruction-set architectures (ISAs) became the norm in SoC design.
However, not all standard ISAs are created equal. Those that accommodate extensibility (the ability for users to extend the instruction set with their own instructions) enable chip designers to effectively create customized processors that meet the specific requirements of their application. This flexibility is especially important for embedded designs, where power and area are often constrained and custom instructions can significantly reduce cycle counts for commonly used software algorithms.
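To make that concrete, here's a rough sketch of what a custom instruction can do for a common bit-manipulation workload. The kernel and the __crc8_step() intrinsic below are purely illustrative, not tied to any particular ISA or toolchain, but they show how a single extension instruction can replace an eight-iteration inner loop:

```c
#include <stdint.h>

/* Baseline: bit-serial CRC-8 over a buffer. The inner loop costs roughly
 * eight shift/XOR/branch iterations per input byte on a general-purpose
 * integer pipeline. */
uint8_t crc8_sw(const uint8_t *buf, int len)
{
    uint8_t crc = 0xFF;
    for (int i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x31) : (uint8_t)(crc << 1);
    }
    return crc;
}

/* With an ISA extension: assume the core now has a single-cycle CRC step
 * instruction, exposed to C as a compiler intrinsic. The name below is
 * hypothetical; the point is that one instruction replaces the whole
 * inner loop, cutting cycles per byte by roughly an order of magnitude. */
extern uint8_t __crc8_step(uint8_t crc, uint8_t data);  /* hypothetical intrinsic */

uint8_t crc8_ext(const uint8_t *buf, int len)
{
    uint8_t crc = 0xFF;
    for (int i = 0; i < len; i++)
        crc = __crc8_step(crc, buf[i]);
    return crc;
}
```

Multiply that kind of saving across every byte a device processes and the impact on cycle count, and therefore energy, adds up quickly.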
As Moore's law slows, design teams need different ways to achieve their PPA goals. While a general-purpose processor can do a lot of things, it may not perform important or frequently repeated functions as efficiently as needed. So, a team might opt to start with an extensible ISA and customize it for its product's unique needs. Taking it a step further, if no off-the-shelf processor IP meets the PPA requirements, a team may opt to build its own specialized processor or accelerator. That can be a daunting but necessary task, and the effort depends on whether the design approach is manual coding or automated in some way. The tradeoffs in choosing the best implementation path are the usual ones: level of hardware optimization versus time to market versus ease of programming, among other considerations. There is no one-size-fits-all solution for every processor socket.
Of course, processor IP succeeds or fails by the ease of programming it. No matter how good the processor implementation, without software it doesn't accomplish anything. It takes a village to build a robust ecosystem for software development, a shared investment across commercial and open-source parties. Standardized ISAs help aggregate the software investments made by application developers, tools vendors, OS providers, etc. However, extensible ISAs come with the risk of fragmenting that investment if software compatibility isn't enforced across the various implementations of the ISA.
Many of today's SoCs handle a lot of specialized software workloads. The notion of one big applications processor that can do it all is now outdated. Instead, SoC architectures that instantiate a heterogeneous set of processor cores, each working on specific software workloads, are common. For embedded devices in particular, where every gate and picojoule matters, the efficiencies of specialized processors (e.g., CPUs, DSPs, GPUs, ISPs, NPUs, and custom accelerators) are absolutely critical to make the design feasible.
The profile of adopters of processor IP is also changing. It's no longer just traditional semiconductor companies that are in the chip design business. OEMs, such as smartphone and automotive companies, are increasingly embracing vertical integration in their design practices, including implementing their own SoCs and even their own processors. From small startups to large systems companies, designers are seeing silicon and processor customization as a means toward product differentiation.
Processor customization covers a wide spectrum, and it's not necessarily an either/or proposition. Consider two examples:
Licensable processor IP such as Synopsys DesignWare® ARC® Processor IP provides designers with significant flexibility for matching hardware resources to power and area budgets. ARC cores are highly configurable, so each instance on the chip can be customized for the best possible PPA. Designers are in control of what logic and memories get instantiated, and they decide for themselves what's required to meet their application requirements, without carrying unnecessary gates.
ARC processors are also extensible, enabling users to add not only their own instructions, but also registers, conditions, and status codes, and even their own hardware design (Verilog RTL). These customizations can significantly speed up software execution, reducing code size and cycle counts, which in turn reduces energy consumption. Most ARC licensees take advantage of this patented ARC Processor EXtension (APEX) technology.
Figure 1: Energy and cycle count reduction running sensor application software with APEX accelerators.
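To give a feel for how such an extension surfaces to software, the sketch below assumes a hypothetical APEX-style extension that adds a wide accumulator register and a multiply-accumulate instruction. The intrinsic names are invented for this illustration; the actual C interface comes from the user's own extension definition and toolchain:

```c
#include <stdint.h>

/* Hypothetical intrinsics for an APEX-style extension: a wide extension
 * accumulator register, a multiply-accumulate instruction that targets it,
 * and a saturating read-out. These names are illustrative only. */
extern void    _ext_acc_clear(void);            /* zero the accumulator    */
extern void    _ext_mac(int16_t a, int16_t b);  /* acc += a * b            */
extern int32_t _ext_acc_read_sat(void);         /* saturating read of acc  */

/* Dot product of two Q15 sample buffers. The inner loop collapses to one
 * custom instruction per sample pair, and the wide accumulator removes the
 * overflow handling a plain-C version would need. */
int32_t dot_q15(const int16_t *x, const int16_t *y, int n)
{
    _ext_acc_clear();
    for (int i = 0; i < n; i++)
        _ext_mac(x[i], y[i]);
    return _ext_acc_read_sat();
}
```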
All ARC processors, including CPUs, DSPs, and AI-based processors, are built on a common ISA with a common programming environment and toolchain to ease software migration across the portfolio. A broad ecosystem of OSs, compilers, debuggers, middleware, etc., provided by Synopsys, commercial partners, and open-source initiatives, enables ARC programmers to preserve and leverage their software investment across multiple designs and multiple generations of devices.
Synopsys' ASIP Designer tool takes processor customization a step further. Using a model-based approach that enables rapid architectural exploration and implementation, it automates the creation of application-specific instruction set processors (ASIPs) and the corresponding SDKs (compiler, debugger, profiler, simulator). One of the primary values of this unique tool is the ability for designers to rapidly iterate on the processor architecture. ASIP Designer automatically generates an SDK based on the processor model, enabling users to run their actual software on the design and adjust the architecture, then repeat as necessary to meet PPA targets. When requirements are met, the tool automatically generates synthesizable RTL. If off-the-shelf processor IP just can't achieve PPA requirements, ASIP Designer gives design teams the ultimate flexibility to design a programmable processor or accelerator tailored to their specific use case.
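To see what "running their actual software" looks like in practice, consider an ordinary C kernel like the FIR filter sketched below (illustrative code, not from any particular ASIP Designer project). The source stays the same from one architecture iteration to the next; what changes is how the generated compiler maps the multiply-accumulates and memory accesses onto the datapath described in the current processor model, and the generated profiler reports the resulting cycle counts so the architecture can be tuned and the loop repeated:

```c
#include <stdint.h>

/* Illustrative FIR kernel: the kind of hot loop a team would compile and
 * profile through the auto-generated SDK while exploring architectures. */
#define NTAPS 16

int32_t fir16(const int16_t *samples, const int16_t *coeffs)
{
    int32_t acc = 0;
    for (int k = 0; k < NTAPS; k++)
        acc += (int32_t)samples[k] * (int32_t)coeffs[k];
    return acc;
}
```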
The chip industry is fast approaching the stage when it will no longer be feasible to hit PPA goals through a process node shrink alone. At the same time, design teams need to explore every means possible to differentiate their chips from the competition. Customizing your SoC's processor is a way to address both of these goals. While processor customization covers a spectrum of methods, from fine-tuning the configuration to incorporating extensions to building your own processor from the ground up, it's an increasingly popular approach that chip developers have at their disposal to put their unique stamp on their designs.