Image Sensor Roadmap

Trends and Future Direction of CMOS Image Sensors
This article introduces the direction Sony Semiconductor Solutions is considering for the evolution of image sensors for applications in the industrial sector and also gives case examples from the application perspective.
Figure 5: Because of the new stacking technology the wafer processing for the pixel section can be separated from that for the circuit section. (Image: Sony Semiconductor Solutions Corporation)

As a leading image sensor company, Sony Semiconductor Solutions has been responding to a wide range of demands with a broad product portfolio spanning compact to large optical formats, low to high speeds, low to high pixel counts, and ultraviolet to infrared light. In addition, the package pin assignment has been standardized per series to make it easier to expand the lineup in machine vision camera development. Image 1 shows the future evolution of the image sensors along three defined axes. As representative examples of the performance improvement that forms the first of these axes, this article introduces the simultaneous achievement of higher resolution and smaller size, as well as technology for larger image sensor formats and high-speed readout. The second axis is the expansion of performance from imaging to sensing; the new sensing technology introduced here is the SWIR image sensor. The third axis is functional expansion optimized for edge systems.

Higher Resolution & Smaller Size

In visual inspection processes that detect minute scratches and foreign particles, the demand for ever higher inspection accuracy can in principle be met simply by increasing the number of pixels in the image sensor. However, this increases the chip size and therefore the camera size. Conversely, if the pixel size is reduced to keep the camera size constant, the light collection area per pixel shrinks, and the resulting deterioration in image quality makes a loss of recognition and inspection performance unavoidable. To address these issues, Sony developed a stacked CMOS image sensor technology with a global shutter function and a back-illuminated pixel structure (Pregius S) that is suited to the industrial sector and achieves both higher resolution and a smaller size at the same time. The back-illuminated structure offers superior light collection efficiency compared with the conventional front-illuminated structure. In addition, newly developed technology to block the light entering the memory area adjacent to the pixel has enabled the pixel area to be miniaturized to 2.74µm square, about 63% of that of the conventional models. Furthermore, stacking the signal processing circuits that were previously arranged around the pixel array has simultaneously achieved about 1.7 times higher resolution (12.37MP to 20.35MP) and a smaller package (91% of the conventional model*1) with the same optical system.
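As a rough sanity check on those figures, the sketch below relates the quoted 2.74µm pitch to the pixel-area and resolution gains. The 3.45µm pitch used for the conventional generation is an assumption for illustration, not a value stated in this article.

```python
# Rough arithmetic behind the "about 63% pixel area" and "about 1.7x resolution" figures.
# The 3.45 um conventional pitch is an assumed comparison value; 2.74 um is quoted above.
CONVENTIONAL_PITCH_UM = 3.45   # assumed pitch of the earlier front-illuminated generation
PREGIUS_S_PITCH_UM = 2.74      # back-illuminated stacked pixel pitch (quoted)

area_ratio = (PREGIUS_S_PITCH_UM / CONVENTIONAL_PITCH_UM) ** 2
print(f"pixel area vs. conventional: {area_ratio:.0%}")          # ~63%

# Smaller pixels in roughly the same optical format leave room for more of them.
pixels_conventional_mp = 12.37   # conventional-generation resolution (quoted)
pixels_new_mp = 20.35            # Pregius S resolution (quoted)
print(f"resolution gain: {pixels_new_mp / pixels_conventional_mp:.2f}x")  # ~1.65x
```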

The stacked structure has enabled the inclusion of two AD converters, and the sensors are equipped with a function to internally combine data captured with different gains. (Image: Sony Semiconductor Solutions Corporation)

Larger Sizes and Faster Readout

C-mount lens compatible image sensors are a standard option for machine vision cameras, but further productivity improvements are expected from image sensors with larger optical formats that expand the imaging area. For example, comparing images taken with the IMX253 (12.37MP, 17.6mm diagonal) with those of the large-format IMX661 (127.68MP, 56.7mm diagonal), the latter allows a lower capture frequency and, thanks to its high-resolution imaging, also improves recognition accuracy. In inspections of flat panel displays, moiré occurs when the resolution of the image sensor is insufficient for the resolution of the panel, so oversampling inspections using an ultra-high-resolution image sensor are highly effective. In addition, the IMX661 uses the SLVS-EC (Scalable Low Voltage Signaling with Embedded Clock) high-speed interface standard to realize an image readout that is four times faster than on the conventional models.
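The effect of the larger imaging area on capture frequency can be estimated with simple arithmetic, sketched below. It assumes both sensors share the same aspect ratio and a comparable sampling density, which is an approximation rather than a statement from the article.

```python
# Back-of-the-envelope estimate of how a larger imaging area reduces the number
# of captures needed to scan a given surface (assuming the same aspect ratio
# and comparable sampling density for both sensors).
imx253 = {"megapixels": 12.37, "diagonal_mm": 17.6}
imx661 = {"megapixels": 127.68, "diagonal_mm": 56.7}

area_gain = (imx661["diagonal_mm"] / imx253["diagonal_mm"]) ** 2
pixel_gain = imx661["megapixels"] / imx253["megapixels"]

print(f"imaging area gain: ~{area_gain:.1f}x")   # ~10.4x field coverage per shot
print(f"pixel count gain:  ~{pixel_gain:.1f}x")  # ~10.3x, so sampling density stays similar
print(f"captures needed for the same coverage: roughly 1/{area_gain:.0f} of before")
```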

SWIR Image Sensor

The IMX990/IMX991 image sensors released in 2020 made it possible to use a single image sensor to take images over a wide range of wavelengths, from 0.4µm to 1.7µm, including visible light. In the development of these SWIR sensors, the use of a Cu-Cu connection realized a smaller pixel pitch and wideband imaging, creating a new type of image sensor supporting SWIR. Imaging in the SWIR wavelength region makes it possible to see indentations beneath the surface layer of fruit (by making differences in moisture density visible) and to detect plastic or metal fragments in food by exploiting the light absorption and reflection characteristics in the SWIR range. The technology is also used in inspections that exploit other properties of SWIR light, such as its ability to pass through silicon materials. The device expands the possibilities in inspection: tasks that previously required separate cameras for visible and SWIR imaging can now be performed with a single camera, and throughput can be increased through faster image processing.

Optimized for Edge Systems

When image data is processed by AI or machines, an effective way to shorten inspection time and improve data processing efficiency is to cut out only the necessary areas, narrowing down the information and reducing the processing time. Our image sensors are therefore equipped with an ROI function to read out only the necessary areas and a self-trigger function to output only the data from the necessary instant. The stacked structure has also enabled the inclusion of two AD converters, and the sensors are equipped with a function to internally combine data captured with different gains. HDR processing, which is normally achieved by overlaying multiple images, can thus be performed without the occurrence of artifacts. In addition, because the combination is completed inside the sensor, the volume of data output remains the same as on the conventional models, realizing a highly robust sensor. This kind of functional expansion is made possible by stacking technology. As shown in Image 5, the key point of stacking technology is that the wafer processing for the pixel section can be separated from that for the circuit section, providing scalability for both image quality improvement and functional expansion.
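The dual-gain combination can be pictured with the minimal sketch below. It is a conceptual model only, not Sony's on-chip implementation: it merges two readouts of the same exposure, digitized at an assumed low and high gain, preferring the high-gain sample wherever it is not clipped.

```python
import numpy as np

# Minimal conceptual sketch of dual-gain HDR combination (not Sony's on-chip algorithm).
# Two AD conversions of the SAME exposure are merged into one linear HDR frame.
LOW_GAIN = 1.0     # preserves highlights
HIGH_GAIN = 16.0   # lifts shadows above the noise floor
FULL_SCALE = 4095  # assumed 12-bit ADC output

def combine_dual_gain(low_gain_raw: np.ndarray, high_gain_raw: np.ndarray) -> np.ndarray:
    """Merge low-gain and high-gain readouts of one exposure into a linear HDR frame."""
    # Use the high-gain sample where it is not clipped, otherwise fall back
    # to the low-gain sample rescaled to the same signal level.
    use_high = high_gain_raw < FULL_SCALE
    return np.where(use_high,
                    high_gain_raw / HIGH_GAIN,
                    low_gain_raw / LOW_GAIN)

# Example: a bright pixel clips in the high-gain path but survives in the low-gain path.
low = np.array([3000.0, 40.0])
high = np.array([4095.0, 640.0])
print(combine_dual_gain(low, high))  # [3000. 40.] in linear low-gain units
```

Because both samples come from a single exposure, there is no need to overlay separately captured frames, which is what allows the combination to stay inside the sensor while keeping the output data volume unchanged.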

www.sony.net
