
A new process technology boosts the performance of CMOS image sensors while reducing their size (Courtesy of Sony Semiconductors)

This extra sensitivity means the number of LEDs required can be cut to 10%, or more coverage obtained over a longer distance using the same number of LEDs.

The Nyxel approach also uses a dual conversion technique to increase the dynamic range. A capacitor underneath each pixel stores the value of the first exposure, which is then combined with the value of a second exposure. Another short exposure is combined with in-pixel processing and post-processing to boost the high dynamic range (HDR) performance further. This is combined with LED flicker mitigation (LFM) to form a combined HDR-LFM engine algorithm that will emerge in image sensors in vehicles in 2024 and 2025.

The size, and therefore the sensitivity, of the pixel is at odds with the overall cost: the larger the pixel array, the higher the cost. Pixels are currently typically 2.2 µm for an 8 MP array, which is being used for automated ground vehicles and UAVs, while automotive designers are looking at arrays of 12 and 15 MP for Level 4 and Level 5 driverless car designs.

LFM

The other issue that is increasingly important for unmanned system sensor designs is LFM. Flicker is a consequence of the switched power supplies that drive the LEDs, which operate at particular frequencies to avoid the LEDs burning out. The frequencies can vary from 50 kHz up to 2 MHz: the higher frequencies allow smaller power supplies to be used, but cause the LEDs to flicker. This is not visible to the human eye but is picked up by image sensors.

When LEDs are used to illuminate a scene around a vehicle, the flicker can easily be filtered out: because the frequency of the power supply to the LED in the vehicle is known, the image sensor can use it to synchronise the detection of the image.

However, LEDs are increasingly used outside the vehicle, for example in road signs and car headlights, and these have a wide range of power supply frequencies that are unknown to the image sensor. Without mitigation, this can lead to the sensor picking up only part of a road sign, being saturated by oncoming headlights, or missing the brake lights of the car in front. That is because the LED light source is repeatedly switched on and off, so if the timing of the shutter does not overlap the light emission, the light appears to be off. If a car could not detect a red light and crashed into other cars or pedestrians, that would of course be a serious problem.

To eliminate flickering, the exposure must be prolonged. However, a longer exposure can easily cause a halo of light around objects in the image, which can confuse the machine learning frameworks used for object detection.

Global versus rolling shutter

A global shutter captures one image at a time from the full pixel array, and is used in sensors with higher pixel counts to detect fast movements inside the vehicle, for example to monitor the driver and make sure their eyelids aren't drooping as a result of fatigue. The global shutter approach requires a lot of processing power, however, as the data from all the pixels are read out at the same time and have to be processed.

Outside a vehicle, a rolling shutter gives the best performance. This scans the pixel array one row at a time, giving more time to process the data from each pixel. For detecting fast-moving objects, though, this approach needs more sophisticated processing, as the same object can show up in consecutive lines.
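As a rough illustration of the dual-exposure HDR merge described above, the sketch below combines a long and a short exposure of the same scene. It is a minimal sketch under assumed conditions, not OmniVision's actual pipeline: the function name, the linear-response assumption, the 12-bit full scale and the 16x exposure ratio are all illustrative.

```python
import numpy as np

def combine_hdr(long_exp, short_exp, ratio, full_scale=4095, sat_frac=0.95):
    """Merge two exposures into one HDR frame.

    A minimal sketch of dual-exposure HDR, not a vendor algorithm:
    - long_exp, short_exp: raw frames of the same scene, with exposure
      times differing by `ratio` (here 16x).
    - Where the long exposure is near saturation, substitute the short
      exposure scaled back up by the exposure ratio.
    """
    long_exp = long_exp.astype(np.float64)
    short_exp = short_exp.astype(np.float64)
    saturated = long_exp >= sat_frac * full_scale
    return np.where(saturated, short_exp * ratio, long_exp)

# Hypothetical 12-bit frames: a bright lamp saturates the long exposure
long_exp  = np.array([[ 300, 4095], [1200, 4095]])   # 4095 = clipped
short_exp = np.array([[  19,  310], [  75,  240]])   # 1/16th exposure
print(combine_hdr(long_exp, short_exp, ratio=16))
# Saturated pixels are replaced by 16x the short exposure, extending
# the dynamic range by the exposure ratio (here roughly 4 bits).
```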
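The LFM requirement that the exposure must be prolonged reduces to simple timing arithmetic: if the exposure window spans at least one full period of the LED drive waveform, the on-phase is guaranteed to fall inside it whatever the unknown phase. A minimal sketch, using drive frequencies from the range quoted above plus an assumed slower drive for comparison; the function name is illustrative.

```python
def min_flicker_free_exposure(pwm_freq_hz: float) -> float:
    """Shortest exposure guaranteed to overlap the LED on-phase.

    If the exposure covers at least one full PWM period, the LED's
    on-time falls inside it whatever the (unknown) phase, so the
    source can never appear completely off. Sketch only - a real
    LFM engine must also manage saturation and motion blur.
    """
    return 1.0 / pwm_freq_hz

# Frequencies from the quoted 50 kHz to 2 MHz range, plus an assumed
# slower external drive for comparison.
for f in (50e3, 2e6, 500.0):
    t = min_flicker_free_exposure(f)
    print(f"{f/1e3:8.1f} kHz PWM -> exposure >= {t*1e6:8.1f} us")
```

The tension with image quality is visible in the numbers: slow drives push the required exposure into the millisecond range, which is exactly the prolonged exposure that causes the halo artefacts mentioned above.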

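The rolling-shutter caveat, that the same fast-moving object can show up in consecutive lines, is also a timing effect: each row is sampled one line time later than the row above it, so the top-to-bottom skew is the line time multiplied by the row count. A minimal sketch; the row count, line time and closing speed are assumptions, not the specifications of any particular sensor.

```python
def rolling_shutter_skew(rows: int, line_time_s: float) -> float:
    """Top-to-bottom readout skew of a rolling shutter, in seconds.

    Each row starts its readout one line time after the previous row,
    so the last row is sampled rows * line_time_s later than the
    first. The figures below are illustrative assumptions.
    """
    return rows * line_time_s

rows = 2160           # assumed row count for an 8 MP-class array
line_time = 10e-6     # assumed 10 us per-line readout
skew = rolling_shutter_skew(rows, line_time)
print(f"frame skew: {skew*1e3:.1f} ms")   # 21.6 ms

# A vehicle closing at 30 m/s moves this far during readout - the
# displacement the detection software must reconcile between lines:
print(f"object shift during readout: {30 * skew:.2f} m")  # ~0.65 m
```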