
Enabling Cars to See with Efficient Vision Processors

By Michael Thompson, Sr. Product Marketing Manager, Synopsys

You can already buy a car that can drive itself on the highway, and in a few years you will be able to buy a fully autonomous car that will drive you anywhere. Few applications make the value of embedded vision more apparent than the car. Automotive vision is advancing rapidly because of its potential to enhance safety and simplify driving. The newest capabilities are enabled by a number of technological advances, but one of the biggest enablers is the embedded vision processor, which gives cars the ability to see. Vision processors support HD resolutions, multiple camera inputs, and fusion of vision with other sensors (Figure 1). As the capabilities of self-driving cars increase, the performance of vision processors will have to increase as well, with little change to the power and cost budgets. This is a huge challenge for vision processor designers and users alike.

Figure 1: Number and use of cameras in cars today

Uses of Vision Processing in Automobiles

Vision in cars is already being used for more than just backup cameras, and its use will only continue to grow. Giving vision to the various systems in a car, and thereby to the system designers, provides a tremendous amount of information about the situation in and around the vehicle that can be used for decision making. Other sensors can be employed (radar, LIDAR, infrared, etc.), but none has the versatility of vision processing.

Vision brings a dimension to car electronics that other technologies cannot. For example, a camera in the passenger mirror can show you what is in the lane next to you. The same camera input can also be examined by a vision processor to determine whether anything is in the lane and warn you if a vehicle is there before you change lanes. This may seem unnecessary if you can see the full blind spot with the camera, but drivers can be distracted, while the vehicle's electronics cannot be. By looking at the same camera input that the driver sees, the vehicle can help the driver prevent an accident, increasing safety. Of course, not all of the camera inputs in the vehicle will be viewable by the driver, nor is that even desirable. Cameras provide so much information in real time that human analysis of all of the data is not practical. Current estimates predict that within a few years the average car will have 15 or more cameras. Viewing the input from all of these, while difficult for the driver, will be easy and useful for the car's electronic systems, and will enable automotive designers to create systems that make real-time decisions based on the conditions around the car: assisting the driver, warning the driver, and taking control if necessary.

Camera Input Requirements & Effects on Vision Processors

A large percentage of the cameras used in today's vehicles support VGA resolution, but vehicles currently being designed are quickly moving to 1 megapixel (MP) and 2 MP cameras. Higher resolutions are important where smaller portions of the visual field have to be examined. A car traveling at 70 MPH covers more than 300 feet in three seconds. At VGA resolution, a pedestrian 300 feet away is not easily distinguished from the background. At the much higher resolution of a 2 MP camera, the pedestrian can be recognized, and the vehicle can warn the driver or take evasive action while there is still enough time to respond effectively.
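To make the resolution argument concrete, here is a back-of-the-envelope sketch. The 50-degree field of view and 0.5 m pedestrian width are illustrative assumptions, not figures from this article, but they show why VGA leaves only a handful of pixels on a distant pedestrian:

    #include <math.h>
    #include <stdio.h>

    /* Back-of-the-envelope: how many horizontal pixels does a pedestrian
     * occupy at 300 feet?  The 50-degree field of view and 0.5 m target
     * width are illustrative assumptions, not figures from the article. */
    int main(void) {
        const double PI = 3.14159265358979;
        const double fov_deg  = 50.0;   /* assumed horizontal field of view */
        const double target_m = 0.5;    /* assumed pedestrian width */
        const double dist_m   = 91.4;   /* 300 feet */

        /* Width of the scene spanned by the camera at that distance */
        double scene_m = 2.0 * dist_m * tan((fov_deg / 2.0) * PI / 180.0);

        const int widths[] = { 640, 1920 };   /* VGA vs. 2 MP (1920x1080) */
        for (int i = 0; i < 2; i++)
            printf("%4d-pixel-wide sensor: pedestrian spans ~%.1f pixels\n",
                   widths[i], widths[i] * target_m / scene_m);
        return 0;
    }

Under these assumptions, a VGA sensor puts roughly 4 pixels on the pedestrian, while a 1920-pixel-wide 2 MP sensor puts roughly 11, a far better basis for recognition.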

The use of higher resolution cameras comes with added cost and higher power consumption, due to the increase in memory and bus bandwidth as well as the processing power needed to evaluate the camera output in real time. While it is not difficult to design a vision processor that can handle the input from a 2 MP camera, the real challenge is controlling the increase in cost and power consumption. This requires specialized, power-efficient vision processors that minimize memory bandwidth and the power needed to process the video stream. In addition to managing the input from 2 MP cameras, vision processors must also evaluate input from other sensors (radar, LIDAR, infrared, etc.) and combine it with the vision input to make decisions. Interpreting data from multiple inputs significantly increases the capability and accuracy of automotive systems, but it places additional load on the vision processor. While this processing could be offloaded to other processors in the car, most car designers are keeping the processing and analysis of sensor input close to the source. This design decision reduces the potential for problems, the need for memory buffering, and the power consumed by moving large amounts of data around the vehicle. However, it also puts greater demands on the vision processor to analyze the sensor input, refine it, and send the results on to the vehicle's systems. And all of this has to be done with little to no increase in the power consumption of the camera sensor module, which includes the vision processor.
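The bandwidth at stake is easy to estimate. The sketch below assumes 30 frames per second and 2 bytes per pixel (as for YUV 4:2:2); both are illustrative assumptions, but they show why moving raw video around the vehicle is costly:

    #include <stdio.h>

    /* Rough raw data rate of one 2 MP camera stream.  The frame rate
     * and bytes-per-pixel figures (30 fps, 2 bytes/pixel as for
     * YUV 4:2:2) are illustrative assumptions. */
    int main(void) {
        const double width = 1920.0, height = 1080.0;   /* ~2 MP */
        const double fps = 30.0;
        const double bytes_per_pixel = 2.0;

        double bytes_per_sec = width * height * fps * bytes_per_pixel;
        printf("One 2 MP camera:  %.0f MB/s raw\n", bytes_per_sec / 1e6);
        printf("Fifteen cameras:  %.1f GB/s raw\n", 15.0 * bytes_per_sec / 1e9);
        return 0;
    }

At roughly 124 MB/s per camera, a 15-camera vehicle would generate nearly 2 GB/s of raw video, which is why analyzing data near the sensor and forwarding only the results is so attractive.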

In recent years, automotive vision applications have started using convolutional neural network (CNN) technology, which operates much like our brains do to identify objects and conditions in visual images. CNN graphs are trained to recognize and classify one or more types of objects, and the trained graphs are then programmed into a vision processor. CNN-based vision is more accurate than other vision algorithms and is, in fact, approaching the recognition accuracy of humans. This is very desirable in vehicles, where recognition and accuracy are critical for deciding which objects to avoid and which to ignore.
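At its core, a CNN layer is a multiply-accumulate (MAC) loop repeated across the image. The single-channel convolution below is a minimal sketch (shapes are illustrative) of the operation a dedicated CNN engine parallelizes across many channels and layers:

    /* Minimal single-channel 2D convolution with "valid" padding: the
     * multiply-accumulate kernel at the heart of every CNN layer.
     * Output size is (ih-kh+1) x (iw-kw+1).  Shapes are illustrative. */
    void conv2d(const float *in, int ih, int iw,
                const float *kern, int kh, int kw,
                float *out)
    {
        int oh = ih - kh + 1, ow = iw - kw + 1;
        for (int y = 0; y < oh; y++) {
            for (int x = 0; x < ow; x++) {
                float acc = 0.0f;
                for (int ky = 0; ky < kh; ky++)
                    for (int kx = 0; kx < kw; kx++)   /* one MAC per tap */
                        acc += in[(y + ky) * iw + (x + kx)]
                             * kern[ky * kw + kx];
                out[y * ow + x] = acc;
            }
        }
    }

A real network stacks dozens of such layers across many input and output channels, which is why fixed-function hardware that retires hundreds of MACs per cycle is so valuable.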

New Vision Processors for Automotive Applications

Synopsys' investments in vision processing have led to the introduction of the DesignWare® EV6x family of vision processors, which is designed to meet the high performance requirements of vision applications such as autonomous vehicles, drones, virtual reality, and surveillance (Figure 2). The processors support HD video streams up to 4K while maintaining a power and cost envelope that is realistic for automotive and other embedded applications.

Figure 2: Synopsys DesignWare EV6x vision processor family

The EV6x processors are ideal for automotive vision applications, offering the efficiency of hardware with the programmability of software. These are the most integrated vision processors available, and they can be used for either host offload or standalone operation. The EV6x combines a high-performance, specialized vision CPU with a dedicated CNN engine. The CNN engine has the performance needed to accurately and efficiently detect objects or perform scene segmentation on HD video streams up to 4K. The vision CPU operates in parallel with the CNN engine to increase throughput and efficiency, and can fuse vision data with data from other sensor inputs. Each vision CPU core includes a 32-bit scalar RISC core and a 512-bit-wide vector DSP, and the EV6x family supports up to four vision CPU cores for scalable performance.
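The 512-bit vector width matters because most pixel operations are identical and independent: a 512-bit vector unit can in principle process 64 8-bit pixels per instruction (512 / 8 = 64). The scalar loop below is a conceptual sketch of such an operation; in practice, the vectorization would be expected to come from the vendor toolchain rather than hand-written code:

    #include <stdint.h>
    #include <stddef.h>

    /* Scalar per-pixel brightness adjustment with saturation.  Each
     * iteration is an independent 8-bit operation, so a 512-bit vector
     * DSP can apply it to 64 pixels per instruction (512 / 8 = 64).
     * Scalar form shown for illustration only. */
    void brighten(uint8_t *px, size_t n, uint8_t gain)
    {
        for (size_t i = 0; i < n; i++) {
            unsigned v = (unsigned)px[i] + gain;
            px[i] = (uint8_t)(v > 255u ? 255u : v);   /* saturate */
        }
    }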

The dedicated CNN engine is programmable and, with up to 880 MACs per cycle, offers accurate and efficient support for tasks such as object detection and semantic segmentation. Training of the CNN is done offline, and the resulting graph is then programmed into the engine with easy-to-use graph mapping tools.
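To put 880 MACs per cycle in perspective, here is a rough throughput estimate. Only the 880 MACs-per-cycle figure comes from this article; the 800 MHz clock and the example layer dimensions are assumptions made for illustration:

    #include <stdio.h>

    /* Rough CNN-engine throughput estimate.  880 MACs/cycle is from the
     * article; the 800 MHz clock and the example layer (3x3 kernel,
     * 3 input and 16 output channels on a 1080p frame) are assumptions
     * made for illustration. */
    int main(void) {
        const double macs_per_cycle = 880.0;
        const double clock_hz = 800e6;                     /* assumed */
        double macs_per_sec = macs_per_cycle * clock_hz;   /* ~704 GMAC/s */

        double layer_macs = 1920.0 * 1080 * 3 * 16 * 3 * 3;
        printf("Peak throughput:    %.0f GMAC/s\n", macs_per_sec / 1e9);
        printf("Example conv layer: %.2f ms at peak\n",
               1e3 * layer_macs / macs_per_sec);
        return 0;
    }

Under these assumptions the engine peaks at roughly 704 GMAC/s, so the example layer's ~0.9 billion MACs take on the order of a millisecond, leaving headroom for the many layers of a full network at video frame rates.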

To facilitate the development of automotive vision applications, the EV6x family is supported by a high-productivity toolset that includes OpenVX, OpenCV, an OpenCL C compiler, and the MetaWare development toolkit. This complete tool suite gives designers flexibility in implementing current automotive vision algorithms and the ability to address future requirements.

  • The OpenVX runtime eases the implementation of the vision graph so that utilization of the processor resources is straightforward for multiple cameras and functions (a minimal graph-construction sketch follows this list).
  • The standard OpenVX kernels have been ported and optimized for use with the processors, and can be combined with the large range of functions that are available in the OpenCV library to build vision applications.
  • The MetaWare C/C++ compiler can be used with the OpenCL C vectorizing compiler for easy programming of the EV6x processors.
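As a minimal sketch of what OpenVX graph construction looks like, the standard OpenVX 1.x calls below build and run a two-node pipeline (Gaussian blur feeding Sobel edge detection). The pipeline itself is an illustrative example, not one taken from this article:

    #include <VX/vx.h>
    #include <stdio.h>

    /* Minimal OpenVX graph: Gaussian blur followed by Sobel edges.
     * Standard OpenVX 1.x calls; the pipeline is illustrative. */
    int main(void) {
        vx_context ctx = vxCreateContext();
        vx_graph graph = vxCreateGraph(ctx);

        vx_image input   = vxCreateImage(ctx, 1920, 1080, VX_DF_IMAGE_U8);
        vx_image blurred = vxCreateVirtualImage(graph, 1920, 1080, VX_DF_IMAGE_U8);
        vx_image grad_x  = vxCreateImage(ctx, 1920, 1080, VX_DF_IMAGE_S16);
        vx_image grad_y  = vxCreateImage(ctx, 1920, 1080, VX_DF_IMAGE_S16);

        /* Nodes declare their data dependencies; the runtime schedules them */
        vxGaussian3x3Node(graph, input, blurred);
        vxSobel3x3Node(graph, blurred, grad_x, grad_y);

        if (vxVerifyGraph(graph) == VX_SUCCESS)
            vxProcessGraph(graph);     /* execute once per frame */
        else
            printf("graph verification failed\n");

        vxReleaseGraph(&graph);
        vxReleaseContext(&ctx);
        return 0;
    }

Because the nodes declare their data dependencies up front, the runtime can schedule them across the available processing resources, which is the kind of resource utilization the first bullet describes.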

Summary

Automotive vision capabilities are advancing rapidly because of their potential to enhance safety and simplify driving. Advancements in vision processors like Synopsys' EV6x family are enabling the use of HD resolutions with multiple camera and sensor inputs, while staying within automotive designers' cost and power consumption limits.

If you don't already own one, your first experience with a self-driving car is not far away. Embedded vision and autonomous vehicles will become ubiquitous and invisible, and with advanced vision processors like Synopsys' EV6x family, we will take them for granted in just a few years.

For more information, visit the ARC EV Processors page.