
How Virtual Sensors Streamline Complex System Development

Emilie Viasnoff

Feb 16, 2024 / 4 min read

Systems keep getting smarter: a car can now interact with its environment, smartphone cameras can correct image quality by taking environmental conditions into account, and augmented reality goggles can track your head, eyes, and gestures to display specific content. This trend of pervasive intelligence is only possible because sensors are everywhere. But when it comes to sensors, the physical variety has its limitations: physical sensors can be prohibitively expensive and laborious to implement in increasingly complex systems.

Take the field of autonomous cars as a simple example. To operate effectively, a vehicle's autonomous driving system requires millions of miles of driving to build an "understanding" of all the possible situations it may encounter, involving an extensive mapping of the environment captured by embedded sensors.

One way to do this is to deploy a large fleet of cars equipped with physical sensors to drive the roads of a city such as San Francisco and record what they see, amounting to millions of images. This approach demands both significant time and financial outlay, from fitting out the cars to hiring drivers and covering vast stretches of road.

Alternatively, employing virtual sensors and leveraging high-performance computing technology minimizes the heavy lifting as well as the hazards associated with extensive real-world driving. Read on to discover the benefits of virtual sensors, their potential applications, and the tools that support them.


What is a Virtual Sensor?

A virtual sensor has its roots in the concept of digital twins. As the name suggests, where a physical sensor generates data based on what it "sees" in its immediate environment, a virtual sensor computes, or extrapolates, its output from third-party information. This information serves as a representation of the environment in one, two, or three dimensions, which the virtual sensor processes and, when the sensor is a camera, converts into a digital image.
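To make the concept concrete, here is a minimal sketch in Python (all names are illustrative, not drawn from any particular product): a virtual depth camera that never touches hardware, computing its "image" purely from a supplied 3D description of the environment.

```python
import numpy as np

# Hypothetical sketch: a "virtual camera" computes an image from a
# third-party scene description instead of sensing the real world.

def pinhole_project(points_3d, focal_px, width, height):
    """Project 3D points (camera frame, z > 0) onto a pixel grid."""
    z = points_3d[:, 2]
    u = focal_px * points_3d[:, 0] / z + width / 2
    v = focal_px * points_3d[:, 1] / z + height / 2
    return np.stack([u, v], axis=1)

def render_depth_image(points_3d, focal_px, width=64, height=48):
    """Rasterize projected points into a crude depth image."""
    image = np.full((height, width), np.inf)
    pixels = pinhole_project(points_3d, focal_px, width, height)
    for (u, v), z in zip(pixels, points_3d[:, 2]):
        ui, vi = int(round(u)), int(round(v))
        if 0 <= ui < width and 0 <= vi < height:
            image[vi, ui] = min(image[vi, ui], z)  # keep nearest surface
    return image

# A toy "environment": a wall of points 10 m in front of the camera.
xs, ys = np.meshgrid(np.linspace(-2, 2, 40), np.linspace(-1, 1, 20))
wall = np.stack([xs.ravel(), ys.ravel(), np.full(xs.size, 10.0)], axis=1)
depth = render_depth_image(wall, focal_px=50)
print("pixels hit:", np.isfinite(depth).sum())
```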

There are multiple scenarios in which a virtual sensor can act as a friend to both the chip designer and the end-product designer. Let's take the variables of a camera as an example; the same principles apply to any physical sensor, such as those found in LiDAR and radar systems.

In one instance, a virtual camera can help determine the specifications of the physical camera for a particular autonomous vehicle model: Should it be black and white or color? How many pixels should it have? What would be the optimum depth of field? Where should this camera be positioned on the car to get the best information? Experimenting with a virtual camera (adjusting the pixel count, color balance, signal processing, and so forth) can help answer these questions and more without spending money on expensive prototypes.
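A rough sketch of what that experimentation can look like in code follows. The configuration fields and the scoring function are hypothetical stand-ins for a real optical simulation:

```python
import itertools

# Hypothetical sweep over virtual camera configurations. In practice,
# score_config() would run a full optical/ISP simulation; here it is a
# stand-in that favors more pixels and a mid-range mounting height.
def score_config(megapixels, color, mount_height_m):
    detail = min(megapixels / 8.0, 1.0)          # diminishing returns
    chroma = 0.1 if color else 0.0               # color aids classification
    placement = 1.0 - abs(mount_height_m - 1.4)  # prefer ~1.4 m mounting
    return detail + chroma + max(placement, 0.0)

configs = itertools.product(
    [2, 8, 12],          # megapixels
    [False, True],       # color vs. black and white
    [0.8, 1.4, 2.0],     # mounting height on the vehicle (meters)
)
best = max(configs, key=lambda c: score_config(*c))
print("best candidate (MP, color, height):", best)
```

The point of the sketch is the workflow, not the numbers: because every candidate is virtual, sweeping dozens of configurations costs compute time rather than prototype hardware.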

In the design phase, a camera consists of at least three components: the lens, the sensor, and the image signal processor (ISP). A virtual camera can simulate the resulting image quality to test the interplay of these components and establish the right mix.
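One way to picture that interplay is as a three-stage pipeline. The sketch below uses deliberately simplified stand-ins for each component (a Gaussian blur for the lens, photon shot noise for the sensor, and gamma correction for the ISP); a production simulator models each stage in far more physical detail:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def lens(scene, blur_sigma=1.0):
    """Stand-in lens model: optical blur as a Gaussian filter."""
    return gaussian_filter(scene, sigma=blur_sigma)

def sensor(irradiance, full_well=1000.0):
    """Stand-in sensor model: photon shot noise via Poisson sampling."""
    electrons = rng.poisson(np.clip(irradiance, 0, 1) * full_well)
    return electrons / full_well

def isp(raw, gamma=2.2):
    """Stand-in ISP: gamma correction to a display-ready image."""
    return np.clip(raw, 0, 1) ** (1.0 / gamma)

# A toy scene: a bright square on a dark background.
scene = np.zeros((48, 64))
scene[16:32, 24:40] = 0.9
image = isp(sensor(lens(scene, blur_sigma=1.5)))
print("simulated image contrast:", image.max() - image.min())
```

Swapping in a different lens blur, noise model, or ISP curve and re-running the pipeline is exactly the kind of "establish the right mix" experiment the paragraph describes.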

Virtual Sensors in Testing and AI Model Development

Post-design, it's time to test the camera in situ. A virtual installation behind the windscreen or on a side mirror provides an accurate understanding of how the final product will view the environment in which it operates, and ultimately how well it will function when integrated into a vehicle.

A further area where virtual sensors come into play is the development of the AI models that interpret the environments the camera senses. Virtual images of elements such as road signs and pedestrians enable the model's training and development, removing the need for extensive physical data collection. Once fully trained, the AI model can be used by the physical camera, allowing the autonomous vehicle to operate as it should.
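In outline, training on virtual imagery looks much like any supervised-learning loop. The sketch below (using PyTorch, with a hypothetical render_sign() generator standing in for a real rendering engine) trains a tiny classifier purely on synthetic road-sign images:

```python
import torch
import torch.nn as nn

def render_sign(label, n, size=32):
    """Hypothetical stand-in for a rendering engine: synthesizes noisy
    'images' whose mean brightness encodes the sign class."""
    return torch.rand(n, 1, size, size) * 0.3 + 0.5 * label

classes = 2  # e.g., stop sign vs. speed-limit sign
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, classes))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    # Every batch is rendered on the fly; no physical data collection.
    labels = torch.randint(0, classes, (64,))
    images = torch.cat([render_sign(int(l), 1) for l in labels])
    loss = loss_fn(model(images), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final synthetic-data training loss:", float(loss))
```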

In short, a virtual sensor can generate the scenarios that its real equivalent will need to respond to, as opposed to seeking them out or waiting for them to happen: how to react to a child darting into the road in pursuit of a ball, for example, or how to give right of way to an emergency vehicle. In the field of automotive chip design, we can expect digital twin models and virtual sensors to play an increasing role in architecture exploration to meet the growing performance demands of OEM workloads; in software development, hardware and software integration, and system-on-chip (SoC) validation before silicon availability; and in the testing and validation of semiconductor models.
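Scenario generation of this kind can be as simple as sampling parameterized rare events on demand. The sketch below is a hypothetical illustration of the idea, not the API of any particular simulation tool:

```python
import random

random.seed(42)

# Hypothetical scenario generator: rare events are sampled on demand
# instead of waiting for them to occur on real roads.
SCENARIO_TEMPLATES = {
    "child_chasing_ball": {"entry_speed_mps": (1.0, 3.5), "offset_m": (2, 15)},
    "emergency_vehicle": {"approach_speed_mps": (15, 30), "offset_m": (20, 120)},
}

def sample_scenario(name):
    """Draw one concrete test case from a named scenario template."""
    params = {key: round(random.uniform(*bounds), 2)
              for key, bounds in SCENARIO_TEMPLATES[name].items()}
    return {"scenario": name, **params}

# Generate a batch of rare-event test cases for the virtual sensor stack.
batch = [sample_scenario(random.choice(list(SCENARIO_TEMPLATES))) for _ in range(3)]
for case in batch:
    print(case)
```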

At the same time, it bears noting that autonomous vehicles have yet to become prevalent on our roads despite the presence of highly advanced chips, cameras, and automotive technology. Among other concerns, the volume of data that the system as a whole must process remains a barrier. Currently, autonomous cars are designed on a component-by-component basis rather than as a complete system that can streamline that data. Virtualization of cameras, electronic control units (ECUs), and environments can help enable better teamwork and system optimization by providing a view of the components' interdependencies and breaking down the silos in which they take shape.

Virtual Sensor Applications and Tools

While the development of autonomous cars is still a work in progress, we are seeing the field for virtual sensors, and digital twins more broadly, extend from automotive to consumer applications such as smartphones and augmented reality, as well as to areas such as aerospace and defense. In this context, digital twins can deliver virtual renderings of a semiconductor subsystem and demonstrate how an integrated hardware and software system would work.

The key to any digital twin-based development, and its biggest challenge, is of course the trustworthiness of the model and the accuracy of a given virtual sensor. Advancing this is a priority, as studies have shown that virtual development and testing has the power to dramatically reduce development time and cost.

To this end, Synopsys offers developers a range of tools, including the Synopsys Optical Platform, featuring CODE V, LightTools, LucidShape, and RSoft, for optical sensor design and testing; Synopsys Platform Architect™ for analysis and optimization of multicore SoC and multi-die SoC architectures; Synopsys Virtualizer™ to enable pre-RTL software development using virtual hardware; and Synopsys Silver to support system design and validation by providing software developers with instantaneous feedback.

From back-end system development to the front-end technology that will enable the autonomous driving revolution, the capacity for accurate virtualization is likely to be an increasingly important quality among the organizations that succeed in turning visions of the future into reality.
