
SoC Design and Verification Solutions for a New Era of AI Chips

Kiran Vittal

Mar 08, 2024 / 9 min read

"Hey Google, who's my next meeting with today?"

It's great to have Google keep track of your meetings, play songs, or update you with the latest weather conditions, but wouldn't it be catastrophic if a hacker had access to all your data and transactions? In today's era of pervasive intelligence, artificial intelligence (AI) and security have become pivotal differentiators for leaping over the boundaries of conventional chip design.

A majority of the applications driving the semiconductor industry's burgeoning growth incorporate AI techniques such as deep learning (DL) and machine learning (ML) that are compute-intensive and require dedicated chips and robust designs to power intelligent functions. From speech and text recognition to high-performance computing (HPC), data centers, AI-based PCs, and autonomous vehicles, the underlying silicon driving such computation-heavy workloads relies on sophisticated architectures that not only pack enormous compute power but can also be customized to improve decision-making capabilities over time. New levels of horsepower need to be unlocked to perform effective data analysis and number-crunching across market segments like scientific and medical research, weather forecasting, finance, and oil/gas exploration.

The AI momentum is building. With more and more smart devices connecting to the cloud, the potential for AI to evolve exponentially has created new market opportunities. The speed required to make decisions based on real-world conditions means that key portions of AI-related computation must be done in hardware. Specialized "AI chips" are essential for implementing AI at scale cost-effectively, bringing about new, disruptive solutions designed for specific applications.

However, the current generation of chips for AI/ML/DL applications contains custom processor architectures and complex datapaths that must perform the necessary arithmetic analysis accurately. As the industry's desire to process more data, automate more functions, and integrate intelligence into every application continues to grow, chip designers and verification teams need to be equipped with modern verification techniques to fuel AI's next act.

Read on to learn more about how the recent boom of AI chips has changed the silicon engineering landscape; key power, performance, and area (PPA) challenges; opportunities to expand the use of AI chips across applications; the need for advanced verification; and why hardware security will be vital going forward.


It¡¯s Not Just Semiconductor Companies Designing Chips

As Moore's law approaches saturation, it's becoming more difficult to achieve the desired performance gains from general-purpose processors. As a result, more companies outside of the traditional semiconductor space are designing chips in-house for very specific applications.

Companies like NVIDIA, Intel, AMD, Qualcomm, Meta, Amazon, Alibaba, Microsoft, and Google are now heavily investing in the development of their own custom ASIC (application-specific integrated circuit) chips to support their AI software and fit specific application requirements. Ten years ago, no industry expert would have predicted a social media firm like Meta would venture down this path.

Building dedicated hardware architectures in-house has also extended to system and software companies in markets such as automotive, HPC, and cloud computing. As the silicon engineering landscape opens to more industry players, the subsequent market growth provides an opportunity for a new arsenal of design tools and solutions for today¡¯s demanding chip design environment.

RISC-V Processor Architecture Adoption for AI Designs

Initial RISC-V adoption was primarily in embedded applications and microcontrollers. Over the years, the open-source standard has continued to gain traction in a broad array of application areas such as automotive, data centers, and high-performance computing, with growing promise for AI workloads. Here's a look at key application areas where we're seeing strong adoption of the RISC-V architecture:

  • AI: AI chips tend to be heterogeneous, with designers opting for off-the-shelf processors where they can (with RISC-V being one of the choices) and focusing their expertise on developing high-performance, energy-efficient AI accelerators for tasks such as neural network processing and natural language processing.
  • Automotive: For automotive SoCs, RISC-V processors can help meet requirements for performance, power, cost, and security for systems including infotainment, advanced driver assistance, and communications.
  • High-performance computing (HPC) and data centers: RISC-V cores are well suited to handling complex computational tasks with customized ISAs, and RISC-V extensions can support development of simple, secure, and flexible cores that deliver the energy efficiency these applications need.

What Makes AI Chip Designs Different?

From AI startups to the world's largest cloud providers, some of the industry's most talked-about AI chips (GroqChip, Nvidia H100 GPU, Ambarella CV52S, Atlazo AZ-N1, AWS Trainium, and Google TPU v4, to name a few) have made waves in accelerating the race to faster and more efficient AI silicon.

Today, we're seeing how data-centric computing is transforming the PC itself. AI-based PCs are poised to bring powerful intelligence capabilities to the masses, with Intel, for one, setting ambitious AI PC shipment goals by 2025. The chipmaking giant is teaming with Microsoft to define the AI PC, and the resulting machines are expected to feature a neural processing unit for AI workloads and Microsoft's Copilot AI chatbot.

One of the key characteristics driving new AI system-on-chip (SoC) investments is the capacity to perform many calculations as a distributed operation, rather than relying on the limited parallelism offered by traditional CPUs. For AI/ML-based hardware, the design entails data-heavy blocks consisting of a control path, where a state machine produces outputs based on specific inputs, and a compute block, where arithmetic logic crunches the data (think adders, subtractors, multipliers, and dividers). These features dramatically accelerate the identical, predictable, and independent calculations required by AI algorithms.
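
To make this control-path/compute-block split concrete, here is a minimal C++ sketch of a multiply-accumulate (MAC) unit, the workhorse of neural-network arithmetic, written as a cycle-based model. All names here (MacUnit, step, and so on) are illustrative inventions, not taken from any real design:

#include <cstdint>
#include <iostream>

// Toy cycle-based model: a control path (IDLE -> BUSY -> DONE state machine)
// drives a compute block (the multiply-accumulate arithmetic).
class MacUnit {
public:
    enum class State { IDLE, BUSY, DONE };

    void start(const int8_t* a, const int8_t* b, int len) {
        a_ = a; b_ = b; len_ = len; idx_ = 0; acc_ = 0;
        state_ = State::BUSY;
    }

    // One "clock cycle": the control path decides what the compute block does.
    void step() {
        if (state_ != State::BUSY) return;
        acc_ += int32_t(a_[idx_]) * int32_t(b_[idx_]);   // compute block
        if (++idx_ == len_) state_ = State::DONE;        // control path
    }

    bool done() const { return state_ == State::DONE; }
    int32_t result() const { return acc_; }

private:
    State state_ = State::IDLE;
    const int8_t* a_ = nullptr;
    const int8_t* b_ = nullptr;
    int len_ = 0, idx_ = 0;
    int32_t acc_ = 0;
};

int main() {
    int8_t a[] = {1, 2, 3, 4}, b[] = {5, 6, 7, 8};
    MacUnit mac;
    mac.start(a, b, 4);
    while (!mac.done()) mac.step();
    std::cout << "dot product = " << mac.result() << "\n";  // prints 70
}

A real accelerator replicates thousands of such units, but this FSM-plus-datapath shape is exactly what verification tools must reason about.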

While the arithmetic compute block may not be extremely challenging for most design teams, implementation complexity increases significantly as the number of arithmetic blocks and bits grows, putting additional strain on verification teams.

Consider the case of a simple 4-bit multiplier. To verify its complete functionality, test vectors need to be written for every value of each 4-bit input, i.e., 2^4 = 16 per operand. The challenge? When it comes to verifying realistic scenarios in today's AI chips, teams need to verify adders and multipliers with 64-bit inputs, owing to the sheer amount of data processing. Each 64-bit operand alone spans 2^64 values, a space that would take years to cover using classical simulation-based approaches.
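
The scaling problem is easy to demonstrate. In the hypothetical sketch below, both the "design under test" and the reference are plain C++ functions standing in for RTL and a golden model; the exhaustive 4-bit check finishes in microseconds, while the closing comment notes why the same loop is hopeless at 64 bits:

#include <cassert>
#include <cstdint>
#include <iostream>

// Golden reference model of a 4-bit multiplier (8-bit product).
uint8_t mul4_ref(uint8_t a, uint8_t b) { return uint8_t(a * b); }

// Stand-in for the design under test; imagine this is the RTL's behavior.
uint8_t mul4_dut(uint8_t a, uint8_t b) { return uint8_t(a * b); }

int main() {
    // Exhaustive check: 2^4 values per operand, so 16 x 16 = 256 vectors.
    for (unsigned a = 0; a < 16; ++a)
        for (unsigned b = 0; b < 16; ++b)
            assert(mul4_dut(uint8_t(a), uint8_t(b)) ==
                   mul4_ref(uint8_t(a), uint8_t(b)));
    std::cout << "4-bit multiplier: all 256 input combinations verified\n";
    // Two 64-bit operands would require 2^128 iterations, far beyond what
    // any simulation farm could ever cover; hence formal methods.
}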

This is just the case for one multiplier or divider in a design. Compounding these concerns, as the adoption of AI chips quickly expands and the amount of data generated continues to explode, time-consuming challenges associated with hardware verification make the need for modern, secure, and flexible verification solutions critical.

Key Chip Verification Challenges

When teams design AI chips, the design algorithm is written in C/C++, which is fast and widely used by engineers across teams. Once the functional code is written, it must be translated into a more hardware-oriented representation at the register transfer level (RTL) for the design to be implemented. This requires teams to either develop test vectors for all possible input combinations or check that the RTL matches the original C/C++ architectural model; both are daunting challenges.

When comprehensive verification is desired but a continuously iterative approach is impractical, formal verification techniques come into play. With formal verification, mathematical analysis considers the entire hardware design at once. Test vectors don't need to be written for every input combination; instead, by leveraging model checkers, the design is verified against a set of assertions specifying the intended behavior.
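
To illustrate the idea (this is not how commercial model checkers are built; they work symbolically rather than by explicit enumeration), the toy C++ program below visits every reachable state of a small two-client arbiter and checks one assertion, "grants are mutually exclusive," in each. The arbiter and its state encoding are hypothetical:

#include <cstdint>
#include <iostream>
#include <queue>
#include <set>

// State of a toy 2-client arbiter: two grant bits plus a priority token.
struct State { bool g0, g1, token; };

uint8_t encode(State s) { return uint8_t(s.g0 | (s.g1 << 1) | (s.token << 2)); }

// Next-state function for one input combination (req0, req1).
State next(State s, bool req0, bool req1) {
    State n{false, false, s.token};
    if (req0 && (!req1 || !s.token)) n.g0 = true;   // client 0 wins
    else if (req1)                   n.g1 = true;   // client 1 wins
    if (n.g0 || n.g1) n.token = !s.token;           // rotate priority
    return n;
}

int main() {
    std::set<uint8_t> seen;
    std::queue<State> work;
    State init{false, false, false};
    work.push(init);
    seen.insert(encode(init));
    while (!work.empty()) {                          // breadth-first search
        State s = work.front(); work.pop();
        // The assertion, checked in EVERY reachable state.
        if (s.g0 && s.g1) { std::cout << "assertion violated!\n"; return 1; }
        for (int r = 0; r < 4; ++r) {                // all input combinations
            State n = next(s, (r & 1) != 0, (r & 2) != 0);
            if (seen.insert(encode(n)).second) work.push(n);
        }
    }
    std::cout << "assertion holds in all " << seen.size()
              << " reachable states\n";
}

Because every reachable state is visited, the result is a proof rather than a sampled test; production model checkers achieve the same guarantee symbolically so they can scale far beyond a handful of states.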

A decade ago, formal verification was considered a technique that only experts could perform because of the high-level assertions involved. That notion has completely reversed. Today, any RTL designer or verification engineer can quickly learn the tricks of the trade and apply them to a design, making it necessary for modern verification tools to be easy to use. Moreover, better debug capabilities in tools are critical for comprehending complex and unfamiliar design behavior and unifying diverse, complicated design environments.

However, the sheer size, scale, and complexity of today's AI chips mean that they cannot be fully proven by model checking. Verifying these mathematical functions using traditional methods is inefficient, time-consuming, and impractical in the long run. Flexible and customizable RISC-V architectures add another new challenge: making sure that all configurations are exhaustively verified whenever a new custom instruction is added.

AI and ML Applications Require Advanced Datapath Verification

Other forms of formal verification, such as equivalence checking, give verification engineers a powerful method to verify even the most complex AI datapaths. Through this technique, two representations of the design are compared and either proven equivalent, or the specific differences between them are identified. With sufficiently powerful formal engines, the two representations can be at vastly different levels of abstraction and even written in different languages, a massive advantage. This method is commonly used to check the RTL input against the gate-level netlist produced by logic synthesis.

For instance, a chip design's detailed RTL implementation can be compared to a high-level C/C++ architectural model. The comparison confirms that the same set of inputs produces the same outputs for both representations. This powerful technique is a natural fit for many AI projects given that most already have C/C++ models available for results checking in simulation or as part of a virtual platform to support early software development and test.
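
Shrunk to a size where a plain loop can do the proof, the following hypothetical sketch shows what a datapath equivalence check establishes: a low-level shift-and-add implementation (standing in for RTL) is compared against a one-line architectural model, exhaustively over the full 8-bit input space:

#include <cassert>
#include <cstdint>
#include <iostream>

// High-level "architectural model": just use the * operator.
uint16_t mul_spec(uint8_t a, uint8_t b) { return uint16_t(unsigned(a) * b); }

// Low-level "implementation": shift-and-add, the way simple RTL might do it.
uint16_t mul_impl(uint8_t a, uint8_t b) {
    uint16_t acc = 0;
    for (int i = 0; i < 8; ++i)                 // one partial product per bit
        if (b & (1u << i)) acc = uint16_t(acc + (uint16_t(a) << i));
    return acc;
}

int main() {
    // Exhaustive equivalence check over all 256 x 256 = 65,536 input pairs.
    for (unsigned a = 0; a < 256; ++a)
        for (unsigned b = 0; b < 256; ++b)
            assert(mul_impl(uint8_t(a), uint8_t(b)) ==
                   mul_spec(uint8_t(a), uint8_t(b)));
    std::cout << "implementation matches architectural model on all inputs\n";
}

A formal equivalence checker delivers the same verdict without enumerating inputs, which is what makes 64-bit-wide (and wider) datapaths tractable.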

Formal equivalence checking continues to be the only technology that can provide exhaustive verification of design datapaths against a proven reference model. To fuel the undeterred growth of AI and verify the complex functional units of even the most demanding AI applications going forward, verification tools and solutions need to be easy to use, scale to bigger designs, and possess advanced debug capabilities that detect bugs quickly.

On the implementation side, there are the usual challenges in achieving the desired PPA. The latest gate-all-around (GAA) technology nodes can help with this, as can multi-die design architectures. The Synopsys.ai full-stack, AI-driven EDA suite handles repetitive tasks such as design space exploration, verification coverage, and regression analytics, providing a faster path to optimized PPA.


From Today¡¯s AI Accelerators to Tomorrow¡¯s Cognitive Systems

Hardware design has become a core enabler of AI innovation. As modern compute workloads evolve, the relentless push for reduced design and verification cycle times will only intensify. Today, Synopsys is the only player in the industry with proven verification solutions that enable designers to verify complex AI architectures across application segments.

With next-generation formal verification solutions like Synopsys VC Formal®, teams have the capacity, speed, and flexibility to verify some of the most complex SoC designs. The solution includes comprehensive analysis and debug techniques to quickly identify root causes by leveraging the Synopsys Verdi® debug platform.

The VC Formal solution provides an extensive set of formal applications, including the VC Formal Datapath Validation (DPV) app with integrated HECTOR™ technology, which has a long history of successful deployment on the most demanding AI chip projects. With custom optimizations and engines for datapath verification (ALU, FPU, DSP, etc.), the solution reports any differences between the results of the RTL and C/C++ models for diagnosis in the Verdi SoC debug platform and proves equivalence once all differences have been resolved. The solution has delivered stellar results for several innovative chip developers, as well as emerging AI/ML chip companies.

Our solutions also harness the beauty of parallelism, allowing runs to spread across multiple cores simultaneously and to take advantage of the cloud. This means that companies that need a large number of processors for only a day can still use our tools to design AI hardware. As the AI market expands to previously unseen horizons, we are excited to support the industry with advanced verification solutions and help usher in a new era of AI chips and software.
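
As a hypothetical illustration of why such workloads parallelize so well, the sketch below splits the exhaustive 8-bit equivalence check from earlier into independent slices with std::async; each slice could just as easily run on a separate cloud instance:

#include <cstdint>
#include <future>
#include <iostream>
#include <thread>
#include <vector>

uint16_t mul_spec(uint8_t a, uint8_t b) { return uint16_t(unsigned(a) * b); }
uint16_t mul_impl(uint8_t a, uint8_t b) { return uint16_t(unsigned(a) * b); }

// Check one slice of the input space. Slices share nothing, so they can run
// on separate cores (or separate machines) with no coordination.
bool check_slice(unsigned a_begin, unsigned a_end) {
    for (unsigned a = a_begin; a < a_end; ++a)
        for (unsigned b = 0; b < 256; ++b)
            if (mul_impl(uint8_t(a), uint8_t(b)) !=
                mul_spec(uint8_t(a), uint8_t(b)))
                return false;
    return true;
}

int main() {
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 4;                                  // fallback core count
    std::vector<std::future<bool>> jobs;
    for (unsigned i = 0; i < n; ++i)                    // one slice per core
        jobs.push_back(std::async(std::launch::async, check_slice,
                                  256 * i / n, 256 * (i + 1) / n));
    bool ok = true;
    for (auto& j : jobs) ok = j.get() && ok;
    std::cout << (ok ? "all slices passed" : "mismatch found") << "\n";
}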

In the Near Future: Homomorphic Encryption for AI Chips

With the industry continuing to churn out trillions of bytes of data and requiring high-performance chips to sustain this computational paradigm, ever-wider operands are inevitable. Universities and research organizations across the globe are exploring much larger input widths (for example, 4,096 bits) and building contingencies to design chips that can support this influx, a great application for the VC Formal Datapath Validation and Formal Security Verification (FSV) apps.

With this influx of data comes the need for hardware security. High-profile attacks, including the biggest theft ever in the world of decentralized finance, expose the looming threats and vulnerabilities that cybercriminals can exploit, making end-to-end security critical. Homomorphic encryption will be integral to the growth of AI/ML chips. Simply put, you can encrypt data and perform the same arithmetic computations required by the AI system without decrypting it, reducing the risk of data breaches. Widescale adoption will require next-generation tools, a promising direction for boosting the productivity and quality-of-results of AI chip designs.
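
To make "computing on encrypted data" concrete, here is a toy C++ implementation of Paillier-style additively homomorphic encryption. The parameters are deliberately tiny and utterly insecure; the point is only that multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts, so hardware could add encrypted values without ever seeing them:

#include <cstdint>
#include <iostream>

using u64 = uint64_t;

// Toy Paillier parameters: p and q are far too small for real security.
const u64 p = 17, q = 19;
const u64 n  = p * q;      // 323
const u64 n2 = n * n;      // 104329; all arithmetic below fits in 64 bits
const u64 lambda = 144;    // lcm(p-1, q-1)

u64 modpow(u64 b, u64 e, u64 m) {
    u64 r = 1;
    for (b %= m; e; e >>= 1) {
        if (e & 1) r = r * b % m;
        b = b * b % m;
    }
    return r;
}

u64 modinv(u64 a, u64 m) {                  // extended Euclid, gcd(a, m) = 1
    int64_t t = 0, nt = 1, r = int64_t(m), nr = int64_t(a);
    while (nr) {
        int64_t q2 = r / nr;
        int64_t tmp = t - q2 * nt; t = nt; nt = tmp;
        tmp = r - q2 * nr; r = nr; nr = tmp;
    }
    return u64(t < 0 ? t + int64_t(m) : t);
}

u64 encrypt(u64 m, u64 rnd) {               // c = (n+1)^m * rnd^n mod n^2
    return modpow(n + 1, m, n2) * modpow(rnd, n, n2) % n2;
}

u64 decrypt(u64 c) {                        // m = L(c^lambda mod n^2) * mu mod n
    u64 u  = modpow(c, lambda, n2);
    u64 L  = (u - 1) / n;
    u64 mu = modinv(lambda % n, n);
    return L * mu % n;
}

int main() {
    u64 c1 = encrypt(20, 5), c2 = encrypt(22, 7);   // rnd values coprime to n
    u64 csum = c1 * c2 % n2;                        // multiply ciphertexts...
    std::cout << decrypt(csum) << "\n";             // ...to add plaintexts: 42
}

Real homomorphic encryption schemes for AI workloads operate on operands thousands of bits wide, which is precisely why the wide-datapath verification challenges described above come along for the ride.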

Summary

As AI becomes pervasive in computing applications, the success of AI chips in any market segment will require fully verified designs; no one wants their self-driving car to collide with an obstacle overlooked by image recognition analysis. New edge AI devices will drive an explosion of real-time abundant-data computing and transform how chip designers approach semiconductor design, leading to higher productivity, faster turnaround times, and better verification solutions.

The dawn of an AI-first world is nearer than it has ever been before. Can the everyday prompts to our virtual assistants be taken over by a real-life version of Tony Stark's J.A.R.V.I.S.? Only time will tell.
