Pixel Perfect

Panel Discussion: Image Sensors - What's next?
Closing the first day of our inVISION Days, representatives of Gpixel, Onsemi, Photolitics and Teledyne e2v joined Editor-in-Chief Dr.-Ing. Peter Ebert for a panel discussion on the day's main topic 'Cameras'. Together they tried to answer the question: What's next for image sensors?
Peter Ebert (inVISION). Images: TeDo Verlag GmbH

To start things off: What are the current trends in image sensors?

Wim Wuyts (Gpixel): First of all, we see that the bandwidth race is still ongoing and that customers still require higher resolutions and frame rates. You would think it might saturate at a certain level because people have enough data, but that does not seem to be the case. The faster sensors we bring to market are always welcome, and sometimes it takes a while before the industry catches up. But it does catch up over time and there is always a market afterwards. So the race is definitely not over.

Martin Wäny (Photolitics): I agree that the race for more data is still going on, but what we also see, and have been seeing for the last 20 years, is the requirement for smaller and smaller pixels in spite of the optical limitations. That is a trend that industrial vision is definitely following: nowadays we have global shutter sensors with pixel sizes of 2 to 3µm. So I think we now have pixels that are close to their physical limits. Thanks to technologies such as deep trench isolation we were able to reduce crosstalk, which allowed us to make even better and smaller pixels.
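
(An editorial aside on those optical limits.) A useful back-of-the-envelope check is the diffraction-limited Airy disk of an ideal lens, whose first-zero diameter is about 2.44·λ·N for wavelength λ and f-number N; at f/2.8 in green light the spot is already larger than a 2 to 3µm pixel:

```python
def airy_disk_diameter_um(wavelength_um: float, f_number: float) -> float:
    """Diffraction-limited spot size (first-zero diameter) of an ideal lens."""
    return 2.44 * wavelength_um * f_number

# Green light (550nm) at f/2.8, a common machine vision working point:
print(f"{airy_disk_diameter_um(0.55, 2.8):.2f} um")  # ~3.76 um
```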

Vincent Hector (Teledyne e2v): Market-wise we are even facing a kind of first consolidation: similar standard sensors with high resolution and small pixels, whose power and performance are not always needed by all applications. On the digital feature side we are starting to see real standardization, which makes it not so easy for camera makers to differentiate themselves on the market.

Ganesh Narayanaswamy (Onsemi): It's highly dependent on the application. Of course we have smartphones, which are bleeding edge and always looking for more pixels and more bandwidth to handle; but not every application has these demands to keep pushing the boundaries upwards. Many applications dictate that you get the best image quality within a certain (silicon) real estate, and that does not have to come only through more pixels. That said, the bandwidth problem definitely still exists, just in a different way: if you have smaller resolutions but increase the frame rate of these sensors, you face an equivalent issue. This is where you have to take your bandwidth and your interfaces into consideration. Current image sensors try to address this in a variety of ways, through serialized traffic, lower power consumption and a smaller footprint, innovations that seek optimal solutions for the needs of the use case. AIoT applications particularly benefit from these approaches.
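
(An editorial aside.) The equivalence Narayanaswamy describes is simple arithmetic: the raw output rate is roughly width × height × frame rate × bit depth, so a small, fast sensor can saturate an interface just as quickly as a large, slow one. The configurations below are hypothetical:

```python
def raw_data_rate_gbps(width, height, fps, bits_per_pixel):
    """Approximate uncompressed sensor output in Gbit/s."""
    return width * height * fps * bits_per_pixel / 1e9

# Hypothetical configurations, not tied to any specific sensor:
big_slow = raw_data_rate_gbps(8192, 6144, 30, 12)     # 50MP at 30fps
small_fast = raw_data_rate_gbps(2048, 1536, 480, 12)  # 3MP at 480fps

print(f"50MP @ 30fps:  {big_slow:.1f} Gbit/s")   # ~18.1 Gbit/s
print(f"3MP @ 480fps: {small_fast:.1f} Gbit/s")  # ~18.1 Gbit/s
```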

Vincent Hector (Teledyne e2v). Image: TeDo Verlag GmbH

Now we are going beyond the visible range. Considering SWIR, VIS-SWIR or UV, what’s next for wavelengths?

Narayanaswamy: Over the last few years, we have moved into the NIR spectrum, and I think the next big milestone the industry will be looking at is how to use the same sensor to capture the shortwave IR region as well. You have the choice to put in two different kinds of sensors, but that brings its own complications. So I guess the next monumental step is a single sensor capable of covering the entire spectrum all the way through SWIR. A big question will be whether this is possible in an economical way.

Wäny: We definitely see this trend as well. Industrial machine vision applications were at first focused on the visible range, simply because that's how we humans see, and also because most sensors are still based on silicon. So you have this extra NIR band up to maybe 900 or 1,000nm, which can technically be covered very easily with silicon sensors, and there we have seen quite a big growth in applications that exploit this near-infrared band. Now there are some technologies that allow for actual SWIR sensitivity with very similar sensors. But I think for the next couple of years, applications that need these combined VIS-SWIR/NIR capabilities will remain very niche.

Wuyts: There is also an economy to it. I mean, you need to have the volume. Typically the innovations have always been driven by needs in the market, for example cellphone innovations, because the semiconductor industry we are part of is very capital-intensive. So it's hardly ever feasible to develop something specifically for machine vision. It always has to come from other high-volume applications, and then the machine vision industry can benefit from it.

Hector: That is exactly the issue. We all agree that NIR and other technologies are progressing. The question afterwards is whether you want to go into something a little more specific. It's very difficult to define one single product that would be okay for the whole industry. So it's again about customization, and there I agree that it's difficult to get the return on investment. The point is that we need to have the filters, and then the customization is done for one application and one market. It's impossible to have one single product that covers everything.

Ganesh Narayanaswamy (Onsemi). Image: TeDo Verlag GmbH

What will the standard sensor interface be for the high data rate sensors in the coming years?

Wuyts: I think the industry is craving a standard. I don't think all our customers would like it if every sensor vendor made their own interface. MIPI is actually becoming faster and faster, so I think it might be quite a good contender. I do expect more and more efforts to standardize interfaces. But again, it also depends on the exact total bandwidth that is needed.

Hector: For small resolutions, MIPI is really getting more and more popular. But moving to the very big arrays, our customers love using LVDS: it is much more flexible when choosing the surrounding hardware.

Narayanaswamy: MIPI's progression from where it started to where it's moving has been phenomenal. The advent of C-PHY shows its proliferation as a standard, and it has definitely taken hold in the industry.

Increased bandwidth and resolution are impressive, but what about power consumption?

Wäny: We see more and more that the total system power is approaching thermal limits. But on the processing side we really take advantage of the progress in mainstream electronics technology, with supply voltages going down. We have much more power-efficient vision processing modules that perform many more operations than five years ago. The interfaces are definitely more efficient in terms of power per bit. But all in all, power still increases as you increase resolution and speed.

Hector: I think the paradigm is changing a little bit: before, we used to have this one sensor, very often big and fast, to fulfill all needs, and then we adapted at the system level. Today, having sensors that are dedicated to a specific application opens a real space for innovation.

Wuyts: Tying into that, another interesting trend is wafer stacking, which is becoming more and more available for lower-volume applications. It allows you to do some processing on the image sensor itself, and as a consequence you don't need to transfer all the data to other systems. There are already some examples in the industry, for instance 3D profile applications, where local processing is done on the images. The question is of course: what do you do on the image sensor, and what do you do more efficiently on an FPGA? The answer is always very application-specific.
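
(An editorial aside to make the data reduction concrete.) In a laser-triangulation 3D profile application, on-sensor processing can collapse every frame into one line position per column before the data ever leaves the chip. A minimal sketch of that kind of reduction, with all sizes and values chosen purely for illustration:

```python
import numpy as np

def laser_line_centroids(frame: np.ndarray) -> np.ndarray:
    """Reduce a full frame to one sub-pixel laser-line position per column.

    Instead of shipping rows x cols pixels off chip, only cols centroid
    values leave the sensor, the kind of local processing used in
    3D profile applications.
    """
    rows = np.arange(frame.shape[0])[:, None]   # row indices as weights
    weights = frame.astype(np.float64)
    total = weights.sum(axis=0)
    total[total == 0] = 1.0                     # avoid division by zero in dark columns
    return (rows * weights).sum(axis=0) / total # intensity-weighted row position

# Hypothetical 1,024 x 2,048 frame with a bright laser line near row 500:
frame = np.zeros((1024, 2048), dtype=np.uint16)
frame[498:503, :] = 4000
profile = laser_line_centroids(frame)           # 2,048 values instead of ~2M pixels
print(profile.shape, profile[0])                # (2048,) 500.0
```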

Martin Wäny (Photolitics). Image: TeDo Verlag GmbH

Changing the topic: In the future, if I need a special image sensor, will I buy it off the shelf, or will I have to develop it myself or have a company develop it for me?

Wäny: Well, if you can buy it off the shelf, it's not that special anymore. So indeed, our business is to develop these special image sensors for people whose applications justify specific developments. There are attractive ways to do really special sensors. One sensor for one application has always been part of what drives technology, especially if we think about things such as combining AI with the sensor. I think we do need these very large applications which are quite specific but can afford the specific developments. So there will always be custom-made sensors that target one application. Of course, sometimes from those developments you will have technology trickling down into standard sensors and to a wider market.

Hector: This is of course from the viewpoint of a market leader and takes us back to our earlier discussion about customization. If you can make the step to invest in a custom sensor, then you will be able to make a real differentiation for your application, your product, on your market.

How small can pixel sizes still get and which sensor formats can be expected in the future?

Wuyts: Of course mobile phones prove that pixel sizes around 1µm are possible. Global shutter pixels will probably shrink down as well. But the first question in those discussions today is typically whether optics are available at viable pricing, rather than whether it is technically possible. Again, you need to look at what you are gaining from a total system perspective.

Hector: In reality, reducing the size of the pixel was mostly about reducing the size of the silicon. So the smallest was the cheapest, and that has been true for 20 years. But now the issue is: if your pixel is too small, the problem lies in the optics. The savings you make on the silicon, you lose on the surrounding system.

Wim Wuyts (Gpixel). Image: TeDo Verlag GmbH

From small pixel sizes to big sensors: 200MP sensors have already been announced, and my question is: How high can you fly? Where’s the limit?

Hector: We used to say that the sky is the limit. The design of big sensors is quite easy, but when you go really big, the issue is manufacturing them with acceptable yields. That is a part of the discussion that is sometimes forgotten. It's not only about design, it's also about the supply chain, test and quality.
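
(An editorial aside on why yield dominates at large die sizes.) Under the simple Poisson defect model, yield falls off exponentially with die area: doubling the area squares the yield fraction. A sketch with an assumed, purely illustrative defect density:

```python
import math

def poisson_yield(die_area_cm2: float, defects_per_cm2: float) -> float:
    """Simple Poisson defect model: Y = exp(-D * A)."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

D = 0.1  # assumed defect density in defects/cm^2, purely illustrative
for area in (1.0, 5.0, 10.0, 20.0):  # die area in cm^2
    print(f"{area:5.1f} cm^2 -> yield {poisson_yield(area, D):.1%}")
# 1.0 cm^2 -> ~90.5%, 20.0 cm^2 -> ~13.5% at the same defect density
```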

Wuyts: Physically, the wafer size is always limited. These days it's 300mm in diameter and you need to get a rectangular die out of that. But I agree, it has to be manufacturable and you have to take the optics into consideration, again. So these big sizes are nice to show off, but if you want repeating volume business, you need to stay within reason.

Wäny: I agree completely. I think we will have gigapixel image sensors in the not too distant future, but they will be limited to very specific applications. These will obviously be very expensive sensors with extremely expensive optics, so all in all expensive systems for astronomy purposes, where you can afford this sort of development. I expect mainstream resolution growth to flatten out, simply because there are fewer applications where you really benefit from it. Making smaller pixels just drives up the data rate if you are not actually getting additional information from it.

Changing the topic: There are new special image sensors such as event-based, neuromorphic or curved sensors. What else can we expect?

Wäny: I think one of the most interesting new technologies is probably color steering. The current standard for RGB image sensors is to put a matrix of absorbing color filters on top of the pixels. As pixels get smaller, you can instead use diffraction-based color splitters that steer the incoming red light of a photosite to the red pixel and the green light of that same photosite to the green pixels. This way you don't actually throw away two thirds of the light, as we do with absorbing RGB color filters. I think that is a technology which is on the brink of becoming mature for mass manufacturing.
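
(An editorial aside on the 'two thirds' figure.) An absorbing Bayer filter passes roughly one of three spectral bands at each photosite and absorbs the rest, while an ideal color-steering layer would redirect all bands to matching pixels. A deliberately simplified photon budget:

```python
# Idealized budget per photosite: incoming light split into three color bands.
bands_per_photosite = 3   # each photosite receives all three bands
kept_absorbing = 1        # an absorbing filter keeps ~1 band, absorbs ~2
kept_steering = 3         # ideal steering redirects every band to a matching pixel

print(f"absorbing filter:     {kept_absorbing / bands_per_photosite:.0%} of light used")  # ~33%
print(f"ideal color steering: {kept_steering / bands_per_photosite:.0%} of light used")   # 100%
```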

Wuyts: Typically, the way I see it, the common denominator of all those more exotic technologies is that they solve one particular problem for typically a few applications. But most of the time they are not really mainstream enough to justify a breakthrough in all sensors. I may be wrong, but I think that's the case with curved sensors. For event-based imaging I have a similar feeling. The question is always: is it a technology looking for a business, or is a business really looking for that technology?

Narayanaswamy: I think some of these technologies are just starting to move from ideation to research papers. From there, it is still a long way to early prototypes. However, in the case of event-based or neuromorphic sensors there is potential, because at the end of the day we are trying to get the image sensor as close as possible to the human eye. And there is value behind that when you look at it from the point of view of bandwidth, power consumption or the necessity to become super efficient. There are some characteristics that definitely are value additions for the mainstream. But are they going to come up as a complete sensor, or will they trickle down in some way into the existing sensors? (bfi)

www.gpixel.com

www.onsemi.com

www.photolitics.com

www.teledyne-e2v.com
