Pixel Perfect

Peter Ebert (inVISION)

Hector: In reality, reducing the pixel size was more about reducing the size of the silicon. So the smallest was the cheapest, and that has been true for 20 years. But now the issue is: if your pixel is too small, the problem lies in the optics. The savings you made on the silicon, you lose on the surrounding system.
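A rough way to see the optics limit Hector describes: the diffraction-limited spot (Airy disk) of a lens depends only on the f-number and the wavelength, so once the pixel pitch drops below roughly half that spot, smaller pixels no longer resolve additional detail. A minimal sketch of that back-of-the-envelope check (the f-numbers and wavelength are illustrative assumptions, not values from the interview):

```python
# Illustrative sketch: diffraction-limited spot size vs. pixel pitch.
# Uses a simple Airy-disk model; f-numbers and wavelength are example values.
def airy_disk_diameter_um(f_number: float, wavelength_nm: float = 550.0) -> float:
    """Diameter of the Airy disk (to the first minimum) in micrometres."""
    return 2.44 * (wavelength_nm / 1000.0) * f_number

if __name__ == "__main__":
    for f_number in (1.8, 2.8, 5.6):
        spot = airy_disk_diameter_um(f_number)
        # Rule of thumb: pixels much smaller than ~spot/2 add little optical resolution.
        print(f"f/{f_number}: Airy disk ~{spot:.2f} µm -> pixel pitch below "
              f"~{spot / 2:.2f} µm gains little extra detail")
```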

From small pixel sizes to big sensors: 200MP sensors have already been announced, and my question is: How high can you fly? Where’s the limit?

Hector: We used to say that the sky is the limit. The design of big sensors is quite easy, but when you go really big, the issue is to manufacture them with acceptable yields. That is a part of the discussion that sometimes one forgets. It’s not only design, it’s also about the supply chain, test, and quality.
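Hector's yield argument can be made concrete with the classic first-order Poisson defect model, in which yield falls exponentially with die area, so a very large sensor die becomes disproportionately expensive to manufacture. A small sketch under assumed defect densities and die sizes (illustrative numbers only, not vendor data):

```python
import math

# First-order Poisson yield model: yield = exp(-defect_density * die_area).
# The defect density and die areas below are illustrative assumptions only.
def poisson_yield(die_area_cm2: float, defects_per_cm2: float) -> float:
    return math.exp(-defects_per_cm2 * die_area_cm2)

if __name__ == "__main__":
    defects_per_cm2 = 0.1  # assumed defect density
    for name, area_cm2 in [("small mobile sensor", 0.3),
                           ("APS-C class", 3.7),
                           ("full frame", 8.6),
                           ("very large die", 20.0)]:
        y = poisson_yield(area_cm2, defects_per_cm2)
        print(f"{name:18s} ({area_cm2:5.1f} cm^2): yield ~{y:.1%}")
```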

Wuyts: Physically, the wafer size is always limited. These days it's 300mm in diameter, and you need to cut a rectangular die out of that. But I agree, it has to be manufacturable, and you have to take optics into consideration, again. So these big sizes are nice to show off, but if you want a repeat volume business, you need to stay within reason.

Wäny: I agree completely. I think we will have gigapixel image sensors in the not too distant future, but they will be limited to very specific applications. They will obviously be very expensive sensors with extremely expensive optics, so all in all, expensive systems for purposes such as astronomy, where you can afford this sort of development. I expect mainstream resolution growth to flatten out, simply because there are fewer applications where you really get the benefit of it. Making smaller pixels just drives up the data rate if you're not actually getting additional information from them.
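Wäny's data-rate point is straightforward to quantify: raw readout bandwidth scales directly with pixel count, bit depth, and frame rate. A short sketch (the resolutions, bit depth, and frame rate are example figures, not specifications from the interview):

```python
# Rough raw-bandwidth estimate: pixels * bit depth * frame rate.
# Resolutions, bit depth and frame rate below are illustrative assumptions.
def raw_gbit_per_s(width: int, height: int, bits_per_pixel: int, fps: float) -> float:
    return width * height * bits_per_pixel * fps / 1e9

if __name__ == "__main__":
    fps, bits = 60, 12
    for name, (w, h) in {"12 MP": (4000, 3000),
                         "50 MP": (8192, 6144),
                         "200 MP": (16320, 12240)}.items():
        print(f"{name}: ~{raw_gbit_per_s(w, h, bits, fps):.1f} Gbit/s "
              f"at {fps} fps, {bits}-bit")
```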

Changing the topic: there are new special image sensors such as event-based, neuromorphic, or curved sensors. What else can we expect?

Wäny: I think one of the most interesting new technologies is probably color steering. The current standard for RGB image sensors is to put a matrix of absorbing color filters on top of the pixels. As pixels get smaller, you can instead use diffraction-based color splitters that steer the red light arriving over a group of pixels onto the red pixel and the green light from that same photosite onto the green pixels. This way you don't throw away two thirds of the light, as we do with absorbing RGB color filters. I think that is a technology which is on the brink of becoming mature enough for mass manufacturing.
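The "two thirds of the light" figure follows from an absorbing Bayer-type filter passing roughly one of three spectral bands per pixel, whereas a diffractive color-splitting layer redirects the light to the matching pixels instead of absorbing it. A simplified back-of-the-envelope comparison (the transmission and splitting-efficiency values are assumptions for illustration):

```python
# Idealised comparison of photon throughput: absorbing Bayer filter vs. color splitting.
# Transmission fractions are simplified assumptions, not measured filter data.
def bayer_throughput() -> float:
    # Each pixel keeps roughly one of three spectral bands; the rest is absorbed.
    return 1.0 / 3.0

def color_splitting_throughput(splitting_efficiency: float = 0.9) -> float:
    # Diffractive splitting redirects most of the light to the matching pixels
    # instead of absorbing it; efficiency < 1 accounts for losses (assumed value).
    return splitting_efficiency

if __name__ == "__main__":
    gain = color_splitting_throughput() / bayer_throughput()
    print(f"Bayer: ~{bayer_throughput():.0%} of light used, "
          f"splitting: ~{color_splitting_throughput():.0%} "
          f"-> roughly {gain:.1f}x more photons per pixel")
```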

Wuyts: The common denominator of all those more exotic technologies, as I see it, is that they solve one particular problem for a few applications. But most of the time they are not mainstream enough to justify a breakthrough in all sensors. I may be wrong, but I think that's the case with curved sensors. For event-based imaging I have a similar feeling. The question is always: is it a technology looking for a business, or is a business really looking for that technology?

Narayanaswamy: I think some of these technologies are just moving from ideation to research papers. From there, it is still a long way to early prototypes. However, in the case of event-based or neuromorphic sensors there is potential, because at the end of the day we are trying to get the image sensor as close as possible to the human eye. And there is value behind that when you look at it from the point of view of bandwidth, power consumption, or the need to become highly efficient. Some of these characteristics are definitely valuable additions to the mainstream. But are they going to come up as a complete sensor, or will they trickle down in some way into existing sensors? (bfi)
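The bandwidth argument for event-based sensors can be illustrated with a toy model: instead of reading out every pixel every frame, the sensor only emits (x, y, timestamp, polarity) events where the brightness changes, so a sparse scene produces far less data. A rough sketch comparing the two readout styles (resolution, frame rate, event rate, and bytes per event are assumed values):

```python
# Toy comparison: frame-based readout vs. event-based readout bandwidth.
# Resolution, frame rate, activity level and bytes-per-event are assumed values.
def frame_based_mbyte_per_s(width: int, height: int, fps: float,
                            bytes_per_pixel: int = 2) -> float:
    return width * height * bytes_per_pixel * fps / 1e6

def event_based_mbyte_per_s(width: int, height: int, events_per_pixel_per_s: float,
                            bytes_per_event: int = 8) -> float:
    # Each event carries (x, y, timestamp, polarity); only changing pixels report.
    return width * height * events_per_pixel_per_s * bytes_per_event / 1e6

if __name__ == "__main__":
    w, h = 1280, 720
    frames = frame_based_mbyte_per_s(w, h, fps=60)
    events = event_based_mbyte_per_s(w, h, events_per_pixel_per_s=0.5)  # sparse scene
    print(f"Frame-based: ~{frames:.0f} MB/s, event-based (sparse scene): ~{events:.1f} MB/s")
```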

www.gpixel.com

www.onsemi.com

www.photolitics.com

www.teledyne-e2v.com
