
Definition

An AI accelerator is a high-performance parallel computation machine that is specifically designed for the efficient processing of AI workloads like neural networks.

Traditionally, in software design, computer scientists focused on developing algorithmic approaches that matched specific problems and implemented them in a high-level procedural language. To take advantage of available hardware, some algorithms could be threaded; however, massive parallelism was difficult to achieve because of the implications of Amdahl's law.

Figure: Effect of Amdahl's Law (Synopsys)
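To make that limit concrete, here is a minimal sketch of Amdahl's law in Python; the 95%-parallelizable fraction and the core counts are illustrative assumptions, not values from the article.

```python
def amdahl_speedup(parallel_fraction: float, n_workers: int) -> float:
    """Theoretical speedup when only part of the runtime can be parallelized."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_workers)

# Even with 95% of the work parallelized, speedup saturates near 20x,
# no matter how many cores are available.
for cores in (8, 64, 1024):
    print(f"{cores:>5} cores -> {amdahl_speedup(0.95, cores):5.1f}x speedup")
```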


How Does an AI Accelerator Work?

There are currently two distinct AI accelerator spaces: the data center and the edge.

Data centers, particularly hyperscale data centers, require massively scalable compute architectures. For this space, the chip industry is going big. Cerebras Systems, for example, has pioneered the Wafer-Scale Engine (WSE), the biggest chip ever built, for deep-learning systems. By delivering more compute, memory, and communication bandwidth, the WSE can support AI research at dramatically faster speeds and scalability compared with traditional architectures.

The edge represents the other end of the spectrum. Here, energy efficiency is key and real estate is limited, since the intelligence is distributed at the edge of the network rather than in a more centralized location. AI accelerator IP is integrated into edge SoC devices that, no matter how small, deliver the near-instantaneous results needed for, say, interactive programs running on smartphones or for industrial robotics.


The Different Types of Hardware AI Accelerators

While the WSE is one approach for accelerating AI applications, there are a variety of other types of hardware AI accelerators for applications that don't require one large chip. Examples include:

  • Graphics processing units (GPUs)
  • Massively multicore scalar processors
  • Spatial accelerators, such as Google's tensor processing unit (TPU)

Each of these is a separate chip that can be combined by the tens to hundreds into larger systems capable of processing large neural networks. Coarse-grained reconfigurable architectures (CGRAs) are gaining significant momentum in this space because they offer attractive tradeoffs between performance and energy efficiency on one side and the flexibility to program different networks on the other.

Figure: Broad range of AI accelerator architectures (Synopsys)

For example, consider Megatron, one of the world's largest transformer-based language models for natural language processing (NLP). Created by the Applied Deep Learning Research team at NVIDIA, Megatron is an 8.3-billion-parameter transformer language model trained with 8-way model parallelism and 64-way data parallelism, according to NVIDIA. To execute this model, which is generally pre-trained on a dataset of 3.3 billion words, the company developed the NVIDIA A100 GPU, which delivers 312 teraFLOPS of FP16 compute power. Google's TPU provides another example; it can be combined in pod configurations that deliver more than 100 petaFLOPS of processing power for training neural network models.
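To put those parallelism figures in perspective, here is some back-of-the-envelope arithmetic. It simply multiplies the numbers quoted above and treats every device as an A100 running at peak FP16 throughput, so it is illustrative only and not NVIDIA's published training setup.

```python
# Illustrative arithmetic based on the figures quoted above.
model_parallel = 8          # ways each copy of the model is split across GPUs
data_parallel = 64          # model replicas processing different data shards
gpus = model_parallel * data_parallel
print(f"GPUs in the configuration: {gpus}")          # 512

a100_fp16_tflops = 312      # peak FP16 throughput of one A100, per the article
aggregate_pflops = gpus * a100_fp16_tflops / 1_000
print(f"Aggregate peak FP16 compute: ~{aggregate_pflops:.0f} petaFLOPS")  # ~160
```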

Figure: AlexNet to AlphaGo Zero: A 300,000x Increase in Compute (Source: OpenAI)

Different AI accelerator architectures may offer different performance tradeoffs, but they all require an associated software stack to enable system-level performance; otherwise, the hardware could be underutilized. To facilitate connectivity between high-level software frameworks, such as TensorFlow or PyTorch, and different AI accelerators, machine learning compilers are emerging to enable interoperability.
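As a minimal sketch of that framework-to-compiler handoff, the snippet below uses PyTorch's torch.compile with its stock "inductor" backend as a stand-in; a vendor would plug an accelerator-specific backend into the same point, and the model itself is an arbitrary example.

```python
import torch
import torch.nn as nn

# A small model defined in a high-level framework (PyTorch).
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

# torch.compile hands the captured graph to a compiler backend; accelerator
# vendors can register their own backend in place of the default "inductor".
compiled = torch.compile(model, backend="inductor")

x = torch.randn(32, 128)
print(compiled(x).shape)  # torch.Size([32, 10])
```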

Measuring the performance of AI accelerators has been a contentious topic. For an independent assessment of the training and inference performance of machine learning hardware, software, and services, teams can consult MLPerf, an independent benchmarking effort formed by a group of engineers and researchers from industry and academia.

As intelligence moves to the edge in many applications, AI accelerators are becoming increasingly differentiated. The edge offers a tremendous variety of applications that require AI accelerators to be specifically optimized for different characteristics, such as latency, energy efficiency, and memory, based on the needs of the end application. For example, while autonomous navigation demands a computational response latency limit of 20 μs, voice and video assistants must understand spoken keywords in less than 10 μs and hand gestures in a few hundred milliseconds.
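A rough sketch of how such a latency budget might be checked in practice is shown below; the matrix-multiply "model", the input sizes, and the 20 ms budget are arbitrary placeholders, and timing on a host CPU is only a stand-in for profiling on the target edge device.

```python
import time
import numpy as np

# Placeholder "inference" workload: two matrix multiplies standing in for a
# small edge model. Real measurements would run on the target hardware.
w1 = np.random.rand(256, 256).astype(np.float32)
w2 = np.random.rand(256, 64).astype(np.float32)

def infer(x: np.ndarray) -> np.ndarray:
    return np.maximum(x @ w1, 0.0) @ w2   # linear -> ReLU -> linear

budget_s = 20e-3                          # arbitrary 20 ms budget for illustration
x = np.random.rand(1, 256).astype(np.float32)
infer(x)                                  # warm-up run

runs = 100
start = time.perf_counter()
for _ in range(runs):
    infer(x)
mean_latency = (time.perf_counter() - start) / runs
status = "within budget" if mean_latency <= budget_s else "over budget"
print(f"mean latency: {mean_latency * 1e3:.3f} ms -> {status}")
```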

In the future, cognitive systems, which aim to simulate human thought processes, will emerge with greater prominence. Compared with today's neural networks, cognitive systems have a deeper understanding of how to interpret data at a different level of abstraction.


The Benefits of an AI Accelerator

Given that processing speed and scalability are two key demands from AI applications, AI accelerators play a critical role in delivering the near-instantaneous results that make these applications valuable. Let's dive into the top benefits of AI accelerators in some more detail:

  • Energy efficiency. AI accelerators can be 100-1,000x more efficient than general-purpose compute machines. Whether they're used in a data center environment that needs to be kept cool or in an edge application with a low power budget, AI accelerators can't afford to draw too much power or dissipate too much heat while performing voluminous amounts of calculations.
  • Latency and computational speed. Thanks to their speed, AI accelerators reduce the time it takes to arrive at an answer. This low latency is especially important in safety-critical applications like advanced driver assistance systems (ADAS), where every second counts.
  • Scalability. Writing an algorithm to process a problem is challenging. Parallelizing that algorithm across multiple cores for more processing capability is even more challenging. In the neural network world, however, AI accelerators can deliver speedups that scale almost linearly with the number of cores involved (see the sketch after this list).
  • Heterogeneous architecture. This approach allows a single system to accommodate multiple specialized processors that support specific tasks, providing the computational performance that AI applications demand. It can also take advantage of different device physics, for example, the magnetic and capacitive properties of different silicon structures, memory, and even light for computations.
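As a minimal illustration of that scaling point, the sketch below splits an embarrassingly parallel workload across CPU worker processes; the workload and sizes are placeholders, and real neural-network scaling depends on the accelerator and the parallelism strategy.

```python
import os
import time
from multiprocessing import Pool

def work(chunk):
    # CPU-bound stand-in for processing one shard of a batch.
    return sum(i * i for i in chunk)

if __name__ == "__main__":
    data = list(range(2_000_000))
    n = os.cpu_count() or 4
    shards = [data[i::n] for i in range(n)]

    t0 = time.perf_counter()
    serial = [work(s) for s in shards]
    t_serial = time.perf_counter() - t0

    t0 = time.perf_counter()
    with Pool(n) as pool:
        parallel = pool.map(work, shards)
    t_parallel = time.perf_counter() - t0

    assert serial == parallel
    print(f"{n} workers -> {t_serial / t_parallel:.1f}x speedup (near-linear at best)")
```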


AI Accelerator and Synopsys

Hardware design has become a core enabler of innovation for the age of AI. At the same time, it is presenting a unique set of challenges to its pioneers, with both cloud and edge segments pushing the limits of existing silicon technologies for performance, power, and area.

Data center AI designs are characterized by massive dimensions, multiple levels of physical hierarchy, locally synchronous and globally asynchronous architectures, and very fragmented floorplans. Edge AI designs need to handle hundreds of design corners, extreme variability, ultra-low power requirements, and heterogeneous integration (e.g. sensors).

Synopsys delivers the industry's most comprehensive AI design portfolio, from IP for edge devices to the ZeBu Server 4 emulation system for fast bring-up of complex workloads to the Fusion Design Platform for full-flow, AI-enhanced quality-of-results (QoR) and time-to-results (TTR) for IC design.

Synopsys has introduced the first autonomous AI application for chip design: DSO.ai (Design Space Optimization AI). DSO.ai can search for optimization targets in very large solution spaces of chip design. By massively scaling exploration of options in design workflows and automating less consequential decisions, DSO.ai can dramatically accelerate the delivery of specialized AI accelerators to market.
