
for the landers and the orbiter to oversee camera operation, image processing and additional processing, including hosting a landing algorithm. Using a separate video encoder and AI chip allows a single frame buffer to supply both the encoder in the video chip for transmission and the AI chip for image identification (this arrangement is sketched in code below). This architecture also opens up the possibility of transcoding, by passing the decoded video to the AI chip and re-encoding it at a lower bandwidth using AI techniques. This is key for using NASA's low-bitrate Deep Space Network, which is used to send signals from the Moon.

Analogue in-memory AI

Using an analogue AI inference technique has reduced power consumption from 8-10 W to 3-4 W, allowing such devices to be used on UAVs. It uses a memory technology to store analogue values for the weights of the AI framework, avoiding the need to keep going off-chip to external memory and thereby cutting power consumption (modelled in a second sketch below).

One reference design for a UAV uses an analogue AI chip that measures 19 x 15.5 mm in a ball grid array package. This has been mounted on an M.2-format board measuring 22 x 80 mm for image processing, and it provides a four-lane PCIe 2.1 interface with up to 2 Gbit/s of bandwidth to the system CPU, as well as 10 general-purpose I/Os, I2Cs and UARTs to link to other parts of the UAV.

The analogue AI chip has 76 analogue tiles that handle the ML frameworks in a more power-efficient way than a digital GPU. Because the ML frameworks are executed in a different way from digital chips, the software compiler is key. Deep neural network models developed in standard frameworks such as PyTorch, Caffe and TensorFlow are optimised, reduced in size and then retrained for the analogue AI chip before being processed through a specific compiler (the generic flow is outlined below). Pre-qualified models such as the YOLOv3 image recognition framework are also available for developers to get started.

The resulting binary code and model weights, with around 80 million parameters, are then programmed into the analogue AI chip to analyse the images coming from the camera on the reference platform and identify key elements. This combination of hardware and software allows models to be executed at higher resolution and lower latency for better results, and even allows multiple frameworks to be used on the chip at the same time.

Encoding standards

(Image: NXP's i.MX8M processor includes blocks for H.264 and H.265 video processing)

H.264

The Advanced Video Coding (AVC) standard, also called H.264 or MPEG-4 Part 10, was drafted in 2003 and is based on block-oriented, motion-compensated coding. It supports resolutions up to and including 8K Ultra HD (UHD) by using a reduced-complexity integer discrete cosine transform (integer DCT), variable block-size segmentation and prediction of motion between frames (the 4x4 core transform is shown in code below).

It works by comparing different parts of a video frame to find areas that are redundant, both within a single frame and between consecutive frames. These areas are then replaced with a short description instead of the original pixels.

The aim of the standard was to provide enough flexibility for applications on a wide variety of networks and systems, including low and high bit rates, low- and high-resolution video, and broadcast and IP packet networks, hence its popularity.

H.265

High Efficiency Video Coding (HEVC), also known as H.265 or MPEG-H Part 2,
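
As a rough sketch of the shared single-frame-buffer architecture described above, the Python below shows one buffer feeding both an encoder path and an AI path. encode_frame() and detect_objects() are hypothetical stand-ins for the video-chip and AI-chip drivers, which the article does not name.

```python
import queue

# A minimal sketch, not the actual flight software: encode_frame() and
# detect_objects() are hypothetical stand-ins for the video-encoder and
# AI-chip interfaces described in the article.

frame_buffer = queue.Queue(maxsize=1)    # the single-frame buffer

def encode_frame(frame):
    """Stand-in for the hardware H.264/H.265 encoder (downlink path)."""
    return bytes(frame)                  # placeholder bitstream

def detect_objects(frame):
    """Stand-in for the AI chip's image-identification path."""
    return ["landing-site feature"]      # placeholder labels

def on_new_frame(frame):
    """Overwrite the buffer so consumers always see the latest frame."""
    if frame_buffer.full():
        frame_buffer.get_nowait()        # drop the stale frame
    frame_buffer.put_nowait(frame)

def process_latest():
    """One buffer feeds both the transmission and identification paths."""
    frame = frame_buffer.get()
    return encode_frame(frame), detect_objects(frame)

on_new_frame(bytearray(16))              # pretend camera frame
bitstream, labels = process_latest()
```

Dropping the stale frame keeps both consumers working on the latest image, trading completeness for bounded latency.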
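
The power saving from analogue in-memory computing comes from performing the matrix-vector multiplies where the weights are stored, instead of shuttling weights to and from external memory. A minimal numerical model, with an illustrative (not datasheet) tile size and read-noise level:

```python
import numpy as np

# Conceptual model only: weights live in memory as analogue conductances,
# and a whole matrix-vector multiply happens in place as currents summing
# on the bit lines. Tile size and noise level here are illustrative.

rng = np.random.default_rng(0)

def analogue_mvm(weights, activations, noise_std=0.01):
    """y = W @ x computed 'in memory', with analogue read noise added."""
    ideal = weights @ activations                    # bit-line current sums
    return ideal + rng.normal(0.0, noise_std, ideal.shape)

W = rng.standard_normal((64, 128)) * 0.1   # one tile's stored weights
x = rng.standard_normal(128)               # input activations
y = analogue_mvm(W, x)                     # no off-chip weight traffic
```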
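
The deployment flow for the analogue AI chip (optimise, shrink, retrain, then compile) can be outlined generically in PyTorch. The vendor's retraining tools and compiler are proprietary, so compile_for_chip() below is a hypothetical placeholder and the model is a toy stand-in rather than YOLOv3:

```python
import torch
import torch.nn as nn

# Generic sketch of the flow described in the article; compile_for_chip()
# is a hypothetical placeholder for the vendor's proprietary compiler.

model = nn.Sequential(                 # toy stand-in for a detector backbone
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)

# Step 1: optimise / reduce the trained model (pruning omitted here).
# Step 2: retrain so accuracy survives the analogue weight representation.
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
for _ in range(3):                     # tiny illustrative fine-tune loop
    x = torch.randn(8, 3, 32, 32)      # dummy data standing in for images
    loss = model(x).square().mean()    # placeholder loss
    opt.zero_grad(); loss.backward(); opt.step()

# Step 3: compile, emitting binary code plus weights for the analogue tiles.
def compile_for_chip(m):               # hypothetical compiler entry point
    return {name: p.detach().cpu().numpy() for name, p in m.named_parameters()}

binary_weights = compile_for_chip(model)
```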
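
Returning to the H.264 sidebar: the standard's reduced-complexity integer DCT can be made concrete with its 4x4 core transform, which uses a small integer matrix so the transform needs only adds and shifts; the scaling that makes it approximate a true DCT is folded into the quantisation step, omitted here.

```python
import numpy as np

# H.264's 4x4 integer core transform matrix (quantisation/scaling omitted).
C = np.array([[1,  1,  1,  1],
              [2,  1, -1, -2],
              [1, -1, -1,  1],
              [1, -2,  2, -1]])

def forward_core_transform(block):
    """Y = C @ X @ C.T for one 4x4 residual block, integer maths only."""
    return C @ block @ C.T

residual = np.arange(16).reshape(4, 4) - 8   # toy residual block
coeffs = forward_core_transform(residual)    # energy compacts to low frequencies
```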
