Unmanned Systems Technology 038 | Skyeton Raybird-3 | Data storage | Sea-Kit X-Class USV | USVs insight | Spectronik PEM fuel cells | Blue White Robotics UVIO | Antennas | AUVSI Xponential Virtual 2021 report

In conversation | Richard Bernard

…but make use of all the ‘tricks’ available within them to keep both latency and bandwidth consumption to a minimum. “We are able to reach a latency of 16 ms from capture to display,” he says, adding that such a figure is close to the limit of what can be done and is “almost as good as a cable”. At the same time, the bandwidth the codec requires for an HD video stream is between 4 and 5 Mbit/s.

As with the first moving pictures, a video stream is a series of still images presented in such rapid succession that they appear to the human visual system as smooth, continuous motion. Compression algorithms look for spatial and temporal redundancies – elements that are repeated within and between frames – and encode them for transmission only a minimum number of times.

“The encoding technology is based on [an understanding of] human perception of images,” Bernard explains. “The whole magic of the compression technology is to match what human vision looks for – edges, differences between images, moving objects and so on. All those things are taken into account when our engineers design the codecs.”

He adds that the compression also anticipates the expected behaviour of human perception and adjusts itself accordingly, encoding only those parts of the image it deems most relevant, to save time and bandwidth. One technique for this is temporal encoding, in which the algorithm analyses movement between frames – those before and after a given frame – to ensure that it encodes only the relevant part of the image at any time.

The core hardware that embodies the codec tends to be either an ASIC or an FPGA. While an ASIC provides the ultimate in SWaP efficiency, it cannot be modified once the logic is hard-wired into its circuitry.
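To put the 4-5 Mbit/s figure in context, a rough back-of-the-envelope calculation (mine, not from the article) shows the compression ratio involved. The resolution, frame rate and chroma subsampling below are illustrative assumptions, not specifics of VITEC's codec:

```python
# Rough bitrate arithmetic (illustrative assumptions, not article specifics):
# 1080p HD at 30 frame/s, 8-bit 4:2:0 chroma subsampling = 12 bits per pixel.
width, height = 1920, 1080
fps = 30
bits_per_pixel = 12          # 8-bit luma plus subsampled chroma (4:2:0)

raw_bps = width * height * fps * bits_per_pixel
compressed_bps = 5_000_000   # upper end of the 4-5 Mbit/s quoted above

print(f"raw stream:        {raw_bps / 1e6:.0f} Mbit/s")        # ~746 Mbit/s
print(f"compressed stream: {compressed_bps / 1e6:.0f} Mbit/s")
print(f"compression ratio: ~{raw_bps / compressed_bps:.0f}:1")  # ~149:1
```

A ratio on the order of 150:1 is only achievable because so much of each frame is redundant, which is what the spatial and temporal techniques described above exploit.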
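The temporal-redundancy idea can be illustrated with a toy sketch: after a full keyframe, transmit only the pixels that changed beyond a threshold in the next frame. This is a minimal illustration of inter-frame delta encoding in general, not a description of VITEC's codec; the function names and the 8x8 "frames" are invented for the example:

```python
import numpy as np

def delta_encode(prev: np.ndarray, curr: np.ndarray, threshold: int = 2):
    """Return the indices and new values of pixels that changed between frames."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    changed = np.argwhere(diff > threshold)   # (row, col) of each changed pixel
    values = curr[diff > threshold]           # same row-major order as argwhere
    return changed, values

def delta_decode(prev: np.ndarray, changed: np.ndarray, values: np.ndarray):
    """Reconstruct the current frame from the previous frame plus the deltas."""
    out = prev.copy()
    out[tuple(changed.T)] = values
    return out

# Two tiny 8x8 greyscale "frames" in which a bright 2x2 block moves one pixel.
prev = np.zeros((8, 8), dtype=np.uint8)
prev[2:4, 2:4] = 200
curr = np.zeros((8, 8), dtype=np.uint8)
curr[2:4, 3:5] = 200

changed, values = delta_encode(prev, curr)
reconstructed = delta_decode(prev, changed, values)

assert np.array_equal(reconstructed, curr)
print(f"pixels sent: {len(values)} of {curr.size}")  # 4 of 64
```

Because the block's trailing edge overlaps its previous position, only four pixels actually differ between the frames, so the "transmission" carries 4 values instead of 64. Real codecs go much further (motion vectors, transform coding, entropy coding), but the principle of encoding only what changed is the same.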
Because ASICs use less power for a given task than other devices, they are well-suited to rugged units built to high ingress protection standards, where they do not cause cooling problems. Using an FPGA, however, allows the codec to be deployed and then adjusted in response to customer feedback, as the logic gates in an FPGA can be reconfigured throughout the chip’s service life. “That is very important for us,” Bernard says. “Sometimes an ASIC is the right choice, but sometimes the customer will require additional features that we can provide through an FPGA.”

Whether an ASIC or an FPGA is chosen to host the encoding logic, the chip will live on a board or card that is also home to supporting devices, including a CPU comparable in power to those found in early smartphones.

In the ROV project, for which VITEC is applying its technology for an undisclosed customer in the offshore industry, the idea is to allow safe and effective control of underwater vehicles via satellite links, potentially from the

June/July 2021 | Unmanned Systems Technology

“If you were to do the encoding in software you’d be faced with threads, delays and user requests that could interfere with the whole process”

(Image caption: In the future, integrating hardware codecs into ROVs will allow imagery to be transmitted over Internet Protocol links, allowing wider use of much thinner tethers, exemplified by this Fathom Slim Tether. Courtesy of Blue Robotics)