
… then added back to the compressed stream. When a complete stream is being compressed, there are hooks within the encoders to maintain that synchronisation. However, the new techniques using machine learning (ML) raise significant challenges in ensuring the metadata stays accurately synchronised with an edited video stream.

Onboard processing

System developers are using a number of techniques to identify areas of interest to send as video, rather than sending the whole video stream. These vary from convolutional neural networks (CNNs) trained to identify people, vehicles or other UAVs, to sophisticated edge-detection algorithms.

Once an area is identified, there are several ways to proceed. One is to send only that area, linking with the control of the gimbal to ensure that the object of interest stays in the centre of the frame. The frame can then be cropped to fit within the available bandwidth.

Another is to use image processing algorithms to subtract the background from the video. This can be challenging when a UAV is circling over a particular region, but by identifying the same stationary points in each frame, the background can be removed. The background is then sent as a single still image, and the moving objects in the frame are superimposed on that image at the ground station. This allows hundreds of moving objects to be tracked, and reduces the bandwidth requirement dramatically.
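To make the background-subtraction idea more concrete, the sketch below uses OpenCV's standard MOG2 background subtractor in Python to pull out only the moving regions of each frame. It is an illustration under stated assumptions, not any particular vendor's method: the test clip name, thresholds and minimum object size are hypothetical, and it assumes OpenCV 4 and a camera view that is roughly stationary or already motion-stabilised.

```python
import cv2

# Minimal sketch: separate moving objects from a largely static background so
# that only small image patches, rather than full frames, need to be sent.
# Thresholds and sizes are illustrative assumptions.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

def moving_regions(frame, min_area=200):
    """Return bounding boxes (x, y, w, h) of moving objects in a BGR frame."""
    mask = subtractor.apply(frame)                  # foreground/shadow mask
    mask = cv2.medianBlur(mask, 5)                  # suppress speckle noise
    _, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)  # drop shadows
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]

cap = cv2.VideoCapture("downlink_test.mp4")         # hypothetical test clip
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for (x, y, w, h) in moving_regions(frame):
        patch = frame[y:y + h, x:x + w]
        # In a real system only these patches, plus an occasional still of the
        # background, would be compressed and transmitted over the link.
cap.release()
```

At the ground station the received patches would then be composited back onto the most recent background still, as described above.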
Using ML for video analysis and compression over a satellite link is driving the need for multi-core processors. This is a combination of CNNs and image processing algorithms from the OpenCV library of functions, and the combination can be used to …

Connection standards

There are a number of different ways of connecting an image sensor to a video compression system.

CameraLink

CameraLink is a legacy serial protocol that is widely used in machine vision systems. It supports up to three transceiver chips, using 28 bits to represent up to 24 bits of pixel data and three bits of video sync signals. The data is serialised seven-to-one (7:1), and the four resulting data streams plus a dedicated clock are driven over five low-voltage differential signalling (LVDS) pairs.

Universal Serial Bus (USB)

The USB3 protocol provides a data rate of 5 Gbit/s using USB Type-A or Type-C connectors. The latest USB protocol, USB4, is currently being released and will support speeds from 10 to 40 Gbit/s for connecting a camera to the encoder.

MIPI

Chips have been developed to extend the range of the latest standard from the MIPI Alliance. MIPI was originally set up to provide a serial interface for smartphones, but over the past few years it has evolved to address the needs of autonomous systems, particularly driverless cars.

MIPI CSI-2 (Camera Serial Interface-2) supports 1080p, 4K and 8K video from single or multiple cameras. Version 3 supports 24-bit colour for each pixel, which helps a vision system decide whether a dark area in an image is a harmless shadow or a pothole in the roadway to be avoided. The v3.0 standard also adds support for Smart Region of Interest (SROI) for handling data from specific areas of an image with CNN algorithms, and a Unified Serial Link (USL) that encapsulates the connections between an image sensor module and the application processor to reduce the number of connecting wires.

MIPI CSI-2 can be implemented on either of two physical layers from the MIPI Alliance: MIPI C-PHY v2.0 or MIPI D-PHY v2.5. Under MIPI CSI-2 v2.1 it supports speeds of up to 41.1 Gbit/s using a three-lane (nine-wire) MIPI C-PHY v2.0 interface, or 18 Gbit/s using a four-lane (10-wire) MIPI D-PHY v2.5 interface. RAW-16 and RAW-24 colour depths optimise intra-scene high dynamic range and signal-to-noise ratio (SNR) to bring ‘advanced vision’ capabilities to autonomous vehicles and systems. Latency-reduction and transport-efficiency features provide image sensor aggregation without adding to system cost, facilitate real-time perception, processing and decision-making, and optimise transport to reduce the number of wires, the toggle rate and power consumption.

Differential pulse code modulation (DPCM) 12-10-12 compression reduces bandwidth while delivering images with superior SNR and no compression artefacts for mission-critical vision applications. Scrambling reduces power spectral density emissions, minimises radio interference and extends the reach of longer channels. The ability of the Camera Command Interface (CCI) to work with the MIPI I3C v1.0 sensor interface supports advanced imaging performance requirements for autofocus and optical image stabilisation.
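As a rough worked example of why these link rates matter, the short calculation below estimates the uncompressed data rate of a high-resolution sensor and compares it with the figures quoted above. The 4K/60 resolution and frame rate are illustrative assumptions, and protocol overhead is ignored.

```python
# Back-of-the-envelope check of raw sensor bandwidth against the link rates
# quoted in the text. Resolution, frame rate and bit depth are assumptions.
width, height = 3840, 2160        # 4K UHD frame (assumed)
bits_per_pixel = 24               # 24-bit colour, as supported by CSI-2 v3
frames_per_second = 60            # assumed frame rate

raw_gbit_s = width * height * bits_per_pixel * frames_per_second / 1e9
print(f"Uncompressed 4K/60 at 24 bit: {raw_gbit_s:.1f} Gbit/s")  # ~11.9 Gbit/s

# Link rates from the text: 41.1 Gbit/s (3-lane C-PHY v2.0),
# 18 Gbit/s (4-lane D-PHY v2.5) and 5 Gbit/s (USB3).
for name, capacity in [("C-PHY v2.0 (3 lanes)", 41.1),
                       ("D-PHY v2.5 (4 lanes)", 18.0),
                       ("USB3", 5.0)]:
    verdict = "fits" if raw_gbit_s <= capacity else "needs compression"
    print(f"{name}: {capacity} Gbit/s -> {verdict}")
```

An 8K stream at the same frame rate and colour depth would be roughly four times higher again, which is why onboard compression and region-of-interest techniques remain essential even with these faster interfaces.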
