Technology

New Era of CMOS Image Sensors… From Multi Camera to AI


The new era of the CMOS Image Sensor (CIS), the so-called ‘eye’ of the smartphone, has arrived. At IFA 2019 (Internationale Funkausstellung; International Radio Exhibition), LG Electronics unveiled its new 5G phone, the LG V50S ThinQ, while Samsung displayed its first foldable, the Galaxy Fold. Other major smartphone players were active too: Sony introduced the Xperia 5, which comes with a 6.1-inch OLED panel and an impressive cinema-widescreen 21:9 aspect ratio, and HMD Global unveiled the new Nokia 6.2 and Nokia 7.2 models. Later, in September, Apple announced the iPhone 11 Pro and iPhone 11 Pro Max with new triple-camera systems, the first smartphones able to record 4K video at up to 60 fps, along with 120 fps slow-motion, on each camera. In keeping with the rapidly growing demand for CISs, the competition to secure the most cutting-edge technology is fiercer than ever.

The Golden Era of Multi Cameras Beyond Physical Limitations

One of the specifications customers weigh most heavily when purchasing a new smartphone is the camera. New smartphone designs that maximize screen size, such as ‘hole-in’ displays and ‘notch’ displays, are emerging as powerful industry trends. Minimizing the size of camera modules is therefore more important than ever, and to accomplish this, the size of the image sensor must be decreased along with the pixel size.

An image sensor’s performance is defined by the number of image signals it can capture without defects or noise, so one of the industry’s most important challenges lies in boosting the number of signals received for the same pixel size. As the front screen gets bigger, as many pixels as possible must be packed into a smaller module. However, shrinking pixels to cut down a smartphone’s camera module size results in poorer image quality, because each pixel absorbs less light. At the same time, it is impossible to enlarge the chip to increase the number of pixels, which is why technology that goes beyond these physical limitations is essential: pixel size must be decreased while retaining the high-quality camera performance that typically only large pixels can deliver.
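
As a rough illustration of the trade-off described above, the light a pixel can gather scales approximately with its photosensitive area, so halving the pixel pitch cuts the collected signal to about a quarter. The short sketch below uses assumed, illustrative pitch values and is not a model of any particular sensor.

```python
# A minimal sketch (illustrative numbers assumed, not any specific sensor):
# the light a pixel gathers scales roughly with its photosensitive area,
# so shrinking the pitch reduces the signal quadratically.

def relative_light(pixel_pitch_um: float, reference_pitch_um: float = 1.4) -> float:
    """Light-gathering area of a pixel relative to a reference pitch."""
    return (pixel_pitch_um / reference_pitch_um) ** 2

for pitch in (1.4, 1.0, 0.8, 0.7):
    print(f"A {pitch:.1f} um pixel collects ~{relative_light(pitch):.0%} "
          f"of the light of a 1.4 um pixel")
```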

In this sense, ultra-high-resolution technology is even more crucial for improving image quality than shrinking pixels alone. It is now common to see smartphones boasting ultra-high-resolution cameras of over 20 million pixels at the front and over 40 million pixels at the back. For instance, the Mi MIX Alpha 5G smartphone that Xiaomi released in September this year carries a 108-million-pixel image sensor, the highest resolution among today’s commercially available smartphone cameras.

As it stands, the minimum pixel size of a mass-produced image sensor is 0.7 to 0.8μm (micrometers), with the maximum pixel count at 108 million. Image sensors with 0.7μm pixels are expected to move into broader mass production from 2020.


As a core component applied to numerous devices including smartphones and vehicle cameras, an image sensor is a semiconductor that converts light absorbed through a lens into digital signals. It can be compared to the film of a camera, or to the human body, where light absorbed by the eyes is instantaneously delivered to the brain. A pixel is the unit cell that makes up a CIS. Each pixel sends out an electrical signal proportional to the intensity of the light it receives, and this signal is converted into Red, Green and Blue values. A digital image is then formed by combining R, G and B.
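
To make the pixel-to-image step concrete, the sketch below assumes a common RGGB Bayer color filter layout, in which each pixel records only one color channel, and reconstructs an RGB image with a crude 2x2-block average. It illustrates the principle only and is not an actual sensor pipeline.

```python
# A minimal sketch, not an actual sensor pipeline: an RGGB Bayer colour filter
# layout is assumed, so each pixel records a single colour intensity and the
# RGB image is reconstructed afterwards with a crude 2x2-block average.
import numpy as np

def demosaic_rggb(raw: np.ndarray) -> np.ndarray:
    """Turn an RGGB Bayer mosaic (H x W, even dimensions) into an H/2 x W/2 RGB image."""
    r = raw[0::2, 0::2]                            # red-filtered pixels
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0  # average of the two green pixels
    b = raw[1::2, 1::2]                            # blue-filtered pixels
    return np.stack([r, g, b], axis=-1)

raw_frame = np.random.randint(0, 1024, size=(8, 8)).astype(float)  # fake 10-bit readout
rgb_image = demosaic_rggb(raw_frame)
print(rgb_image.shape)  # (4, 4, 3)
```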

Competition to Secure Cutting-Edge Technologies is Fierce

Technological innovation for ever smaller cutting-edge cameras is well underway. High-speed cameras not only have a great impact on our daily lives; they can also be put to heavy use in the automotive sector thanks to their ability to meticulously measure the fast motion of moving cars. It is said that autonomous vehicles of Level 4 and above, expected to be realized by 2023, will be equipped with more than 10 cameras each, and demand for image sensors in the automotive electronics sector is accordingly expected to rise rapidly.

Competition in the advanced sensor market is fiercer than ever before. Global smartphone manufacturers are now applying intelligent 3D sensors to their flagship smartphones. Their major strength is that 3D sensors can execute commands by recognizing the shapes and motions of the human body, such as a person’s face or hand, allowing the smartphone to be controlled without even touching the screen.

In particular, the Time of Flight (ToF) sensor, which measures distance from the time light emitted by the camera takes to travel to and from an object and uses it for stereoscopic recognition such as distance analysis, is now among the biggest trends in the industry. ToF is a cutting-edge 3D sensor that calculates spatial information, movement, and the three-dimensional shape of an object through comprehensive distance analysis. While the Structured Light (SL) method was applied to the 3D-sensing TrueDepth camera found in Apple’s 2017 iPhone X, its distance calculations were inferior to ToF’s. Currently, most smartphone manufacturers are applying the superior ToF technology to their latest models, taking advantage of ToF’s diverse functions that allow highly regarded technologies such as biometric authentication, movement recognition, augmented reality (AR) and virtual reality (VR) to be realized. AR and VR content will benefit greatly from the advanced facial recognition technology, giving users the ability to create the perfect personal avatar or dance with their favorite game characters, who can magically appear on their desk when the area is selected via the camera. The technology can even recommend the perfect furniture for any living room by analyzing the room’s three-dimensional structure with 3D CIS cameras.
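
The underlying distance calculation is simple: light travels to the object and back, so the depth is half the round-trip path. The sketch below, using assumed example values, shows both the direct form based on the measured travel time and the phase-based (indirect) form commonly used in continuous-wave ToF sensors.

```python
# A minimal sketch of the time-of-flight principle with assumed example values:
# the emitted light covers the camera-object distance twice, so depth is half
# the round-trip path; indirect ToF infers the travel time from the phase shift
# of a continuous wave modulated at a known frequency.
import math

C = 299_792_458.0  # speed of light in m/s

def depth_from_round_trip(travel_time_s: float) -> float:
    """Direct ToF: depth from the measured round-trip travel time."""
    return C * travel_time_s / 2.0

def depth_from_phase(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Indirect ToF: depth from the phase shift of a modulated continuous wave."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

print(depth_from_round_trip(6.7e-9))         # ~1.0 m round trip of ~6.7 ns
print(depth_from_phase(math.pi / 2, 100e6))  # ~0.37 m at 100 MHz modulation
```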

Furthermore, the development of RGB + Infrared (IR) sensors, which add an IR channel for better object recognition even in poor lighting, as well as research into vision sensors capable of pinpoint movement detection, is now at full throttle. For example, the Dynamic Vision Sensor (DVS) is an ultra-fast motion sensor that reacts to changes in light intensity, making it remarkably effective at tracking positional changes. Because it solely captures movement, with faces not recorded, the innovation’s greatest strength lies in privacy protection. For this reason, the technology can be used in various sectors including VR, autonomous driving, movement recognition and danger detection, as well as in critical rescue and surveillance cameras.
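
As a rough illustration of the DVS principle described above, a pixel reports an event only when its log-intensity changes by more than a contrast threshold, so a static scene produces no data at all. The sketch below is a simplified frame-based simulation with assumed values, not any vendor’s API.

```python
# A minimal, frame-based simulation of the DVS idea (assumed threshold and
# intensities, not any vendor's API): a pixel emits an ON/OFF event only when
# its log-intensity changes by more than a contrast threshold, so static
# regions produce no data and moving edges produce sparse events.
import numpy as np

def dvs_events(prev_frame: np.ndarray, new_frame: np.ndarray, threshold: float = 0.2):
    """Return (row, col, polarity) tuples where log-intensity changed enough."""
    eps = 1e-6                                      # avoid log(0)
    diff = np.log(new_frame + eps) - np.log(prev_frame + eps)
    rows, cols = np.nonzero(np.abs(diff) >= threshold)
    return [(int(r), int(c), 1 if diff[r, c] > 0 else -1) for r, c in zip(rows, cols)]

prev = np.full((4, 4), 100.0)
curr = prev.copy()
curr[1, 2] = 150.0                                  # one pixel became brighter
print(dvs_events(prev, curr))                       # [(1, 2, 1)]
```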


The semiconductor industry’s ultimate goal relates to the successful development of an effective AI sensor. This fast-paced sector aims to create technology that surpasses the capability of the human eye, often estimated at an eye-watering 576 million pixels. The industry is now paying special attention to technology that can sense and process the movement of an obstacle or person ahead, even in complete darkness, and that can recognize positional changes and Regions of Interest (ROI) in real time for more accurate and reliable autonomous driving. The development of image sensors that precisely measure high-speed movements in poorly lit environments is enabling groundbreaking studies and technical progress, especially in the autonomous driving sector.

In conjunction with industrial trends relating to AI and 5G, the potential of CIS technology and its application in future innovations, such as 3D Vision, fully autonomous driving and Smart Security, is limitless.