
Implementing MIPI Camera and Display Interfaces in New Applications Beyond Mobile

By: Hezi Saar, Staff Product Marketing Manager, Synopsys

 

The use of cameras and displays in automotive, IoT and multimedia applications is increasing, and designers need image and display interface solutions that meet stringent power and performance requirements. Traditionally, designers have leveraged the MIPI Camera Serial Interface (CSI-2) and Display Serial Interface (DSI) to connect image sensors or displays to application processors or SoCs in mobile applications such as smartphones. Because of the MIPI interfaces' proven advantages and successful track record, they are now being implemented in new applications such as Advanced Driver Assistance Systems (ADAS), infotainment, wearables, and augmented/virtual reality head-mounted devices. This article describes how designers can implement MIPI DSI and CSI-2 in automotive, IoT and multimedia applications to support multiple camera and display inputs and outputs while meeting bandwidth and power requirements.

 

MIPI CSI-2 and DSI: Starting in Mobile Applications

The mobile market, specifically smartphones, has grown immensely over the past 10 years, and MIPI CSI-2 and DSI have been the interfaces of choice for connecting the multiple cameras and displays in mobile devices. The interfaces provide low-power, low-latency and low-cost chip-to-chip connectivity between hosts and devices, allowing designers to connect both low-resolution and high-resolution cameras and displays. Both interfaces utilize the same physical layer, MIPI D-PHY, to transmit data to the application processor or SoC (Figure 1).

Figure 1: Implementation of MIPI DSI and CSI-2 in mobile applications

 

MIPI Interfaces in Automotive Applications 

The MIPI camera and display interfaces are implemented in ADAS and infotainment applications as shown in Figure 2. In today's cars, multiple cameras (front, rear and both sides) are installed to create a 360-degree view of the driver's surroundings. In such an implementation, the MIPI CSI-2 image sensor is connected to an image signal processor, which is then connected to a bridge that allows the entire module to connect to the car's main system. In some cases, in-vehicle infotainment systems use DSI to enable a display interface with the same implementation.

Figure 2: Example of an ADAS application utilizing the MIPI DSI and CSI-2 specifications

 

MIPI offers a complete portfolio of specifications for automotive applications (Figure 3):

  • DSI: for driver information, mirrorless display and infotainment.
  • CSI-2: for ADAS applications, backup camera, collision avoidance, mirrorless vehicle and in-cabin passenger capture.
  • Other interfaces: MIPI I3C for sensor connectivity, JEDEC UFS for embedded and removable card storage, SoundWire and RFFE.

Figure 3: Use of MIPI Alliance specifications in automotive applications (Image courtesy of the MIPI Alliance)

 

MIPI Interfaces in IoT Applications

While there are many kinds of IoT SoCs, let's describe a superset of components and interfaces, including MIPI, typically found in an IoT SoC. CSI-2, potentially DSI, and a processor comprise the vision processing component of the SoC. The memory component consists of LPDDR for low-power DRAM and embedded Multi-Media Card (eMMC) for embedded flash. For wired and wireless communication, specifications like Bluetooth Low Energy, Secure Digital Input Output (SDIO) and USB are leveraged depending on the target application. To secure the data that travels through the cloud and is stored in the device, security becomes an essential component, mainly consisting of engines such as true random number generators and cryptographic accelerators.

The sensor and control subsystem, where dozens or more sensors are connected, uses I2C or I3C along with the serial peripheral interface (SPI). I3C is a new MIPI specification that incorporates and unifies key attributes of I2C and SPI while preserving the two-wire serial interface. System designers can connect a larger number of sensors in a device while minimizing power consumption and reducing component and implementation costs. At the same time, utilizing a single I3C bus enables manufacturers to combine a variety of sensors from different vendors to enable new functionalities while supporting longer battery life and cost-effective systems.
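To make the single-bus idea concrete, the sketch below models in plain C what I3C dynamic address assignment accomplishes: after the broadcast ENTDAA command, every sensor sharing the two wires ends up with its own dynamic address. The sensor list, the starting address 0x08 and all function names are illustrative only; this is a conceptual model, not controller driver code.

```c
/* Conceptual model (not driver code) of what I3C dynamic address assignment
 * achieves: the controller hands every sensor on the single two-wire bus its
 * own dynamic address. Sensor names and the starting address are illustrative. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    const char *name;   /* e.g. accelerometer, gyroscope, ambient-light sensor */
    uint8_t dyn_addr;   /* filled in during dynamic address assignment */
} i3c_sensor_t;

/* Assign consecutive dynamic addresses, mimicking the result of ENTDAA. */
static void assign_dynamic_addresses(i3c_sensor_t *sensors, int count)
{
    uint8_t next_addr = 0x08;          /* illustrative first dynamic address */
    for (int i = 0; i < count; i++)
        sensors[i].dyn_addr = next_addr++;
}

int main(void)
{
    i3c_sensor_t sensors[] = {
        { "accelerometer", 0 }, { "gyroscope", 0 }, { "ambient-light", 0 },
    };
    int n = (int)(sizeof(sensors) / sizeof(sensors[0]));

    assign_dynamic_addresses(sensors, n);
    for (int i = 0; i < n; i++)
        printf("%-14s -> dynamic address 0x%02X\n", sensors[i].name, sensors[i].dyn_addr);
    return 0;
}
```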

 

MIPI Interfaces in Multimedia Applications

A new use case for the MIPI camera and display interfaces is multimedia applications such as virtual/augmented reality devices with high-resolution cameras and displays. In such devices, the interfaces transmit and receive multiple images from various sources, which are then processed and sent to the user at the highest possible quality. Below are three examples of multimedia application implementations:

  • High-end multimedia processor: In this implementation, multiple display and camera inputs (typically coming from another application processor that has already received and processed the image but is not yet ready for transmission) come into the image signal processor via CSI-2 and DSI. The image signal processor then transmits the image via CSI-2 or DSI to either the camera or display.
  • Multimedia processor: This implementation is mainly for gesture or movement recognition or a human-machine interface. Two image sensors interface with the processor via the CSI-2 protocol, where the movement or gesture is recognized and processed for further analysis and manipulation. The processed movement or gesture data is then transmitted to the application processor via the CSI-2 protocol.
  • Bridge IC: Since there are multiple image inputs and outputs, as explained in the automotive section, there is a need for bridge ICs. The bridge IC allows the output of one application processor to be split into two display streams.

 

Advantages of the MIPI Interfaces

MIPI CSI-2 leverages the MIPI D-PHY physical layer to communicate with the application processor or SoC. The image sensor, or CSI-2 device, captures and transmits an image to the CSI-2 host where the SoC resides. Before the image is transmitted, it is placed in memory in individual frames. Each frame is then transmitted through the CSI-2 interface via virtual channels. Virtual channels are required when multiple image sensors are used; the sensors can produce different pixel streams, sometimes with multiple exposures, and a virtual channel identifier is assigned to each frame. Each virtual channel is divided into lines that are transmitted one at a time, allowing transmission of a complete image from the same image sensor, even with multiple pixel streams.
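As an illustration of how virtual channels are carried, the sketch below assumes the CSI-2 v1.x packet header used over D-PHY: an 8-bit data identifier combining a 2-bit virtual channel with a 6-bit data type, a 16-bit word count and an 8-bit ECC. The struct and function names are ours, not from any particular controller implementation.

```c
/* Sketch of a CSI-2 long-packet header as used over D-PHY (CSI-2 v1.x format):
 * an 8-bit data identifier (2-bit virtual channel + 6-bit data type), a 16-bit
 * word count and an 8-bit ECC protecting the header. Names are illustrative. */
#include <stdint.h>

typedef struct {
    uint8_t  virtual_channel;   /* 0..3: identifies the pixel stream / sensor */
    uint8_t  data_type;         /* e.g. 0x2B = RAW10, 0x2C = RAW12 */
    uint16_t word_count;        /* payload length in bytes for a long packet */
    uint8_t  ecc;               /* error correction code over the header */
} csi2_packet_header_t;

/* Pack the virtual channel (bits 7:6) and data type (bits 5:0) into the
 * data identifier byte that leads every CSI-2 packet. */
static inline uint8_t csi2_data_identifier(const csi2_packet_header_t *h)
{
    return (uint8_t)(((h->virtual_channel & 0x3u) << 6) | (h->data_type & 0x3Fu));
}
```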

MIPI CSI-2 uses packets for communication; each packet carries a data format identifier as well as an error correction code (ECC) that protects the header and a CRC that protects the payload. This applies to every packet transmitted from the image sensor to the SoC. A single packet travels through the CSI-2 device controller to the D-PHY, where it is split across the required number of data lanes. The D-PHY distributes the data to several data lanes operating in high-speed mode and transmits the packet to the receiver over the channel. The CSI-2 receiver, using its D-PHY physical layer, extracts and decodes the packet, which is ultimately delivered to the CSI-2 host controller. This process is repeated frame by frame from the CSI-2 device to the host in an efficient, low-power and low-cost implementation.
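The lane-distribution step can be modeled simply: the D-PHY byte-wise distributor sends byte 0 to lane 0, byte 1 to lane 1, and so on, wrapping around the available lanes. The short C model below illustrates this round-robin behavior; the packet contents, buffer sizes and names are illustrative, as real hardware performs this on the fly rather than in buffers.

```c
/* Simplified software model of the D-PHY byte-wise lane distributor:
 * packet byte i goes to lane (i % num_lanes). */
#include <stdint.h>
#include <stdio.h>

#define MAX_LANES 4

static void distribute_to_lanes(const uint8_t *packet, size_t len,
                                uint8_t lane_buf[][64], size_t lane_len[],
                                unsigned num_lanes)
{
    for (unsigned l = 0; l < num_lanes; l++)
        lane_len[l] = 0;
    for (size_t i = 0; i < len; i++) {
        unsigned lane = (unsigned)(i % num_lanes);
        lane_buf[lane][lane_len[lane]++] = packet[i];
    }
}

int main(void)
{
    const uint8_t packet[] = { 0x2B, 0x08, 0x00, 0xAA,   /* header bytes (illustrative) */
                               1, 2, 3, 4, 5, 6, 7, 8 }; /* payload bytes */
    uint8_t lane_buf[MAX_LANES][64];
    size_t lane_len[MAX_LANES];

    distribute_to_lanes(packet, sizeof(packet), lane_buf, lane_len, MAX_LANES);
    for (unsigned l = 0; l < MAX_LANES; l++)
        printf("lane %u carries %zu bytes\n", l, lane_len[l]);
    return 0;
}
```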

In a typical system with multiple cameras and displays, the CSI-2 and DSI protocols share the same physical layer (D-PHY). Depending on the target application, there are many considerations to account for during the discovery phase, such as required bandwidth and device type. Working through these considerations helps designers determine the D-PHY version with the required number of lanes and speed per lane, which in turn determines the number of pins required in the system. Ultimately, designers can determine the required interface and memory for their target applications. For example, some implementations run CSI-2 over D-PHY at 1.5 Gbps per lane, while others operate at up to 2.5 Gbps per lane. Operating at the lower speed has implications for power and area but, most importantly, it is not future-proof against newer image sensors and display designs that support the faster speed.
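As a rough sizing aid, the sketch below estimates how many D-PHY lanes a given video stream needs at 1.5 Gbps versus 2.5 Gbps per lane. The 20 percent margin for packet headers, blanking and low-power transitions is an assumption for illustration, not a figure from the CSI-2 specification, and the example stream parameters are ours.

```c
/* Rough lane-count estimate for a CSI-2 stream over D-PHY. The 20% overhead
 * margin is an assumption for illustration only. Compile with -lm. */
#include <math.h>
#include <stdio.h>

static unsigned lanes_needed(double width, double height, double fps,
                             double bits_per_pixel, double lane_gbps)
{
    double payload_gbps  = width * height * fps * bits_per_pixel / 1e9;
    double required_gbps = payload_gbps * 1.20;   /* assumed overhead margin */
    return (unsigned)ceil(required_gbps / lane_gbps);
}

int main(void)
{
    /* Example: a 3840x2160 RAW12 stream at 30 frames per second (~3 Gbps raw). */
    printf("Lanes needed at 1.5 Gbps/lane: %u\n", lanes_needed(3840, 2160, 30, 12, 1.5));
    printf("Lanes needed at 2.5 Gbps/lane: %u\n", lanes_needed(3840, 2160, 30, 12, 2.5));
    return 0;
}
```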

 

Summary

Multiple cameras and displays are now used in applications beyond mobile, such as automotive, IoT and multimedia, including augmented/virtual reality devices. All of these applications demand high-speed, low-power camera and display interface solutions that meet the demands of today's high-resolution image processing. MIPI CSI-2 and DSI are proven interfaces in the mobile market, mainly smartphones, and because of their successful implementation, they are being utilized in new applications. Synopsys' broad portfolio of MIPI IP solutions, consisting of controllers, PHYs, verification IP and IP Prototyping Kits, is compatible with the latest MIPI specifications, allowing designers to incorporate the required functionalities in their mobile, automotive and IoT SoCs while meeting their power, performance and time-to-market requirements.