Uncrewed Systems Technology 052 l Keybotic Keyper l Video encoding l Dufour Aero2 l Subsea SeaCAT l Space vehicles l CUAV 2023 report l SkyPower SP engine l Cable harnesses l Paris Air Show 2023 report l Nauticus Aquanaut

Read all back issues online www.ust-media.com UST 52 : OCT/NOV 2023 UK £15, USA $30, EUROPE €22 Code-makers Focus on video encoding technology The cable guys How to get the best from wiring harness suppliers Four-legged friend Keybotic’s quadruped robot for dirty and dangerous jobs in heavy industry


3 October/November 2023 | Contents

04 Intro
AI is making its way into wireless systems, and cutting road deaths, but traditional technologies will still hold sway for a while

06 Platform one: Mission-critical info
Simulator projects to improve sensor and data models, a robot that can take eight different forms to suit its environment, a study showing how to make drivers take manual control, and much more

20 In conversation: Davide Scaramuzza
This robotics and perception professor explains his work on developing vision-based navigation for UAVs

24 Dossier: Keybotic Keyper
How having four legs allows this UGV to overcome the challenges of working in dirty and dangerous industrial environments

38 Focus: Video encoding
Don't turn to AI for compressing UAV video feeds just yet; there's still plenty of life left in existing techniques, as we explain

48 Digest: Dufour Aerospace Aero2
Hybrid power and a tilt-wing layout help this UAV to carry heavy payloads over long distances.
Here's how it was developed

58 In operation: Subsea Tech SeaCAT
This USV-ROV-UAV combination is aimed at marine clean-ups; we explain how the system was used in a major project

64 Insight: Uncrewed space vehicles
Autonomy is making further inroads into space exploration, and is even spawning a separate industry supplying small satellites

74 Show report: CUAV Expo 2023
The sheer number and range of new uncrewed systems on display has led to this, our most in-depth report on the expo

88 Insight: SkyPower's SP engines
How this engine developer uses commonality and modularity of components to allow integrators to choose the optimum UAV powertrain, or reconfigure it for different applications

94 Focus: Cable harnesses
We explain the lengths harness suppliers go to in order to deliver the best designs and components for a given use case

104 Report: Paris Air Show 2023
Uncrewed and autonomous systems were just as much in evidence as crewed aircraft, as we highlight here

108 Digest: Nauticus Robotics Aquanaut
The development of this offshore survey AUV, which is part of a system that includes a small, optionally crewed topside vessel

114 PS: Security of UAV data links
The various ways UAV data links can be subjected to cyber attacks, and how they can be countered

4 October/November 2023 | Uncrewed Systems Technology
Intro

There is an air of inevitability about AI overtaking people's capabilities. The AI behind the Swift UAV, for example, has out-competed champion fliers, and on page 20 we talk to Prof Davide Scaramuzza, the leader of the Swift team, about the development of its machine learning algorithms, detailed in Platform one on page 6. Also, driverless cars are operating in commercial services in California, Arizona, Beijing and Shanghai, with better safety records than human drivers. Highlighting this, the latest figures from US regulator the National Highway Traffic Safety Administration (NHTSA) show that over 42,000 people died on US roads in 2022. The NHTSA sees its emergency braking proposals, which rely on AI, as cutting that figure by half. The industry has been working for decades on the technology with safety at its heart. Where it is heading is illustrated in our reports from the Commercial UAV Expo, on page 74, and the Paris Air Show, on page 104. AI is also making its way into wireless systems to reduce the bandwidth required for video for remotely operated systems.
The quality is not comparable yet, as we discuss on page 38, but as UAV racing has shown, it is only a matter of time before it overtakes the traditional technologies.

Nick Flaherty | Technology Editor
AI in the fast lane

Editorial Director Ian Bamsey
Deputy Editor Rory Jackson
Technology Editor Nick Flaherty
Production Editor Guy Richards
Contributor Peter Donaldson
Technical Consultants Paul Weighell, Ian Williams-Wynn, Dr Donough Wilson, Prof James Scanlan, Dr David Barrett
Design Andrew Metcalfe andrew@highpowermedia.com
UST Ad Sales Please direct all enquiries to Simon Moss simon@ust-media.com
Subscriptions Frankie Robins frankie@ust-media.com
Publishing Director Simon Moss simon@ust-media.com
General Manager Chris Perry

The USE network
Having now provided several enterprises around the world with the support and connections they need to implement efficient and sustainable technological solutions, we're keen to continue expanding this free service. If the uncrewed vehicle and/or system you're working on could benefit from some independent advice, from engineers specialising in the appropriate field, then please do get in touch. Email your question/challenge/dilemma/predicament to thenetwork@uncrewedsystemsengineering.com or visit www.uncrewedsystemsengineering.com and raise a case with us. All questions will be treated in the strictest confidence, and there's no obligation whatsoever to follow any recommendations made.
Volume Nine | Issue Six | October/November 2023
High Power Media Limited, Whitfield House, Cheddar Road, Wedmore, Somerset, BS28 4EJ, England
Tel: +44 1934 713957 | www.highpowermedia.com
ISSN 2753-6513 | Printed in Great Britain | © High Power Media

All rights reserved. Reproduction (in whole or in part) of any article or illustration without the written permission of the publisher is strictly prohibited. While care is taken to ensure the accuracy of information herein, the publisher can accept no liability for errors or omissions. Nor can responsibility be accepted for the content of any advertisement.

SUBSCRIPTIONS
Subscriptions are available from High Power Media at the address above or directly from our website. Overseas copies are sent via air mail.
1-year subscription (15% discount): UK £75; Europe £90; USA £93.75; ROW £97.50
2-year subscription (25% discount): UK £135; Europe £162; USA £168.75; ROW £175.50
Make cheques payable to High Power Media. Visa, Mastercard, Amex and UK Maestro accepted. Quote card number and expiry date (also issue/start date for Maestro).


Platform one
Mission-critical info for uncrewed systems professionals

Researchers have developed an AI system for UAVs that can beat the best human operators in competitive racing (writes Nick Flaherty). Reaching the level of professional pilots with an autonomous UAV is challenging, because the robot needs to fly at its physical limits while estimating its speed and location in the circuit exclusively from onboard sensors.

A team at the University of Zurich developed the AI system for its quadrotor, called the Swift, which can race physical vehicles at the level of human world champions (see the interview with the team's leader on page 20). The AI system, detailed in a recent peer-reviewed paper, combines deep reinforcement learning (RL) in simulation with data collected in the physical world. The Swift competed against three human champions, including the world champions of two international leagues, in real-world head-to-head races. It won several races against each of them and posted the fastest recorded race time.

The Swift consists of two key modules. A perception system translates visual and inertial information into a representation that is then processed in the second module to produce control commands. The control policy is represented by a feedforward neural network and is trained in simulation using model-free, on-policy deep RL. To bridge discrepancies in sensing and dynamics between simulation and the physical world, the team used noise models estimated from data collected on the physical system. These empirical noise models proved instrumental in the successful transfer of the control policy from simulation to reality. A combination of learning-based and traditional algorithms is used to map onboard sensory readings to control commands. Optimising a policy purely in simulation yields poor performance on physical hardware if the discrepancies between simulation and reality are not addressed.
These discrepancies are caused by the difference between simulated and real dynamics, and by the noise in the UAV's estimation of its state compared to the real sensory data. To mitigate them, the team collected a small amount of data in the real world and used it to increase the realism of the simulator. The team recorded onboard sensory observations from the UAV, together with highly accurate pose estimates from a motion-capture system, while the UAV was racing through the track. During this data-collection phase, the robot was controlled by a model trained in simulation that operated on the pose estimates provided by the motion-capture system. The recorded data allows the characteristic failure modes of the perception system, and the unmodelled dynamics that come from the environment, platform, track and sensors, to be identified. It is then fed back into the augmented simulation.

The Swift was tested on a physical track designed by a professional UAV racing pilot. The track has seven square gates arranged in a 30 x 30 x 8 m volume, forming a lap 75 m in length. The Swift raced this track against three human champions, all using quadcopters with the same weight, shape and propulsion. The human pilots were given a week of practice on the race track. After that, each one competed against the Swift in several head-to-head races.

Airborne vehicles
AI wins out in UAV racing
The Swift can race physical vehicles at the level of human world champions
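The sim-to-real step described above, augmenting a clean simulator with noise models fitted to real flight logs, can be sketched as follows. This is an illustrative reconstruction, not the Zurich team's code: the per-channel Gaussian residual model and its parameter values are assumptions.

```python
import random

# Hypothetical empirical noise model: per-channel (mean, std dev) of the
# residuals between motion-capture ground truth and the onboard estimate.
# The values here are placeholders, not the published ones.
NOISE_MODEL = {
    "position": (0.0, 0.05),  # metres
    "velocity": (0.0, 0.10),  # m/s
}

def augment_observation(sim_obs, noise_model=NOISE_MODEL, rng=None):
    """Perturb a clean simulated observation with empirically fitted noise,
    so the RL policy trains on inputs resembling real onboard estimates."""
    rng = rng or random.Random()
    noisy = {}
    for channel, values in sim_obs.items():
        mean, std = noise_model.get(channel, (0.0, 0.0))
        noisy[channel] = [v + rng.gauss(mean, std) for v in values]
    return noisy

clean = {"position": [1.0, 2.0, 3.0], "velocity": [4.0, 0.0, 0.0]}
noisy = augment_observation(clean, rng=random.Random(42))
```

In training, every observation fed to the policy would pass through a perturbation of this kind, so the network never sees the simulator's perfect state.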

Two UK projects are aiming to improve the sensor and data models used in simulations to develop autonomous systems (writes Nick Flaherty). Both projects include rFpro, which makes simulation hardware and software systems to test equipment as part of systems development.

The first project, SIM4CamSense, aims to improve the sensor models in the simulations, and is working in collaboration with the UK's Compound Semiconductor Applications Catapult and National Physical Laboratory to produce models of the physical camera, radar and Lidar sensors. The second project, DeepSafe, is developing a machine learning system to generate test data to train autonomous vehicles to handle the rare, unexpected edge cases that vehicles must be prepared to encounter on the road.

The SIM4CamSense project builds on rFpro's work with Sony to characterise the Sony image sensor to create a more accurate model that can be used in a simulator. "This can be the chip or full sensors, and our job is to improve the models to correlate to the measurement data from the lab in controlled field trials for all the sensor modalities: the radar, Lidar and the camera," said Matt Daley, operations director at rFpro.

The company aims to include the sensor manufacturers in the project's steering committee so that the models can be used across the industry. "We already have the Sony CMOS chip model in rFpro, but we will be actively encouraging chip makers, image signal processor providers, and Lidar and radar companies to participate," he said. "The best thing we can do is encourage them to provide a black box model.

"Sony describes it as the 'inside out' model for us to deliver our data.
We want that level of detail from the radar and Lidar providers for physics-based models for all the important sensors and chipsets."

This will be used to improve the modelling of how sensors detect particular materials, which is different for image sensors, radar and Lidar. "We have a physical material ID in rFpro so we no longer try to make up an electromagnetic reflectance value; we let the sensor model hit every pixel to create a material ID," Daley said. "The objective is to integrate the whole process of physical material IDs into the simulation assets."

The DeepSafe project aims to use the models to allow developers to test the sensor systems more thoroughly in simulations. It is led by dRISK.ai, with Imperial College London, Claytex and rFpro, to develop the simulation-based training data. "dRISK are leading the project on developing the right training data," Daley said. "They have a knowledge graph with a database of edge cases, with on-road data from sensors, and this is about improving the models and software to simulate as many of the edge cases as possible. Part of the project is to expand the capability of the simulation assets."

One way for a simulator to produce the synthetic data is to have procedures that generate many different use cases to cover the edge conditions, rather than having to store all the data in a huge database. "This will be as procedural as possible, but we are also working to improve the animation APIs for more motion-capture input," Daley said.

Simulation
Better models projects
Developing sensor models for synthetic data
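The procedural approach mentioned above, generating scenario variants on demand rather than storing every case, can be illustrated with a toy sketch. The parameter lists and scenario fields below are invented for illustration; the real DeepSafe knowledge graph of edge cases is far richer.

```python
import itertools

# Hypothetical scenario parameters. Generating combinations procedurally
# means no database of pre-built cases needs to be stored.
WEATHER = ["clear", "rain", "fog"]
ACTORS = ["pedestrian", "cyclist", "debris"]
LIGHT = ["day", "dusk", "night"]

def generate_scenarios():
    """Yield one scenario dict per parameter combination, on demand."""
    for weather, actor, light in itertools.product(WEATHER, ACTORS, LIGHT):
        yield {"weather": weather, "actor": actor, "light": light}

scenarios = list(generate_scenarios())
```

Because the generator is lazy, a test harness can stream millions of combinations without holding them all in memory, which is the point of doing this procedurally.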

A team of engineers in the US has developed a robot that can reconfigure itself into eight different modes, including a UAV (writes Nick Flaherty). The Multi-Modal Mobility Morphobot (M4) robot, developed at the Caltech Centre for Autonomous Systems and Technologies (CAST), can autonomously assess the environment it faces to choose the most effective configuration.

The 6 kg robot can roll on four wheels, turn its wheels into rotors and fly, balance on two wheels to peer over obstacles, 'walk' by using its wheels like feet, or use two rotors to help it roll up steep slopes on two wheels or even tumble over rough terrain. The 3D-printed carbon fibre articulated body has four legs, each one having two actuated hip joints for frontal and side leg movements, and a shrouded propeller that acts as a wheel and thruster simultaneously. The front joints permit the legs to move in the sideways direction, while the sagittal joints accommodate forward and backward swing movements in each leg. This allows the various transformations. To achieve a UAS configuration, the legs first swing forward and backward. They then turn sideways with the front actuators so the shrouded propellers can be used as rotors.

In the M4, the propeller's shroud acts as a wheel, which is actuated by a motor that drives through gears attached to the shroud's rim. The propulsion is generated by the propeller and motor inside the shroud, aligned with the wheel axis. If the motion of the propellers and shrouds is included, the robot possesses a total of 16 actuated degrees of freedom (DOF). As a result, the total number of DOFs in the M4, including actuated coordinates, body positions and orientations, is 22.

The autonomous operation is aimed at space applications, allowing the M4 to assess off-world terrain and change mode in response. One of the M4's key features is the ability to repurpose its appendages to form wheels, legs, or thrusters.
When it needs to stand up on two wheels, two of its four wheels fold up and the inset propellers spin upwards, providing balance for the robot. When it needs to fly, all four wheels fold up, and the propellers lift the robot off the ground.

"Our aim was to push the boundaries of robot locomotion by designing a system that showcases extraordinary mobility capabilities with a wide range of distinct locomotion modes," said Alireza Ramezani, assistant professor of electrical and computer engineering at Northeastern University, who worked on the design. "The M4 project successfully achieved these objectives."

The M4 uses two microcontrollers for low-level locomotion control: one, based on an ARM Cortex-M7 core, handles posture and wheel motion control, while the other, an ARM Cortex-M3, regulates the thrusters. There is also a high-level decision-making computer for autonomous multi-modal path planning, using an Nvidia Jetson Nano GPU module.

Multi-mode vehicles
Eight robots from one
The M4 can autonomously assess the environment it faces to choose its most effective configuration
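The DOF bookkeeping above can be made explicit with a few lines of arithmetic. The grouping into hip joints, wheels and thrusters follows the description in the article; only the variable names are our own.

```python
# DOF bookkeeping for the M4 as described: four legs, each with two actuated
# hip joints, a driven wheel/shroud and a thruster propeller, plus the
# six-DOF body pose (position and orientation).
LEGS = 4
HIP_JOINTS_PER_LEG = 2
WHEEL_PER_LEG = 1       # shroud rim driven as a wheel
PROPELLER_PER_LEG = 1   # thruster motor inside the shroud
BODY_POSE_DOF = 6       # x, y, z, roll, pitch, yaw

actuated_dof = LEGS * (HIP_JOINTS_PER_LEG + WHEEL_PER_LEG + PROPELLER_PER_LEG)
total_dof = actuated_dof + BODY_POSE_DOF
```

This recovers the article's figures: 16 actuated DOF, and 22 in total once the body pose is included.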


Driverless cars
Can you take the wheel?

Researchers have announced a method to determine whether a driver can take back control of a vehicle in self-driving mode (writes Nick Flaherty). The research found that people's attention levels, and how engrossed they are in onscreen activities, can be detected from their eye movements. This provides a new way to determine the readiness of drivers using the self-driving mode to respond to real-world signals, such as takeover requests from the car.

When using the self-driving mode, drivers are able to take their hands off the wheel and participate in other activities. However, current systems can require the driver to take back control of the car at certain points. For example, they can use the self-driving mode during a traffic jam on a motorway, but once the jam has cleared and the motorway allows speeds faster than 40 mph, the AI will send a 'takeover' signal to the driver, indicating that they must return to full driving control.

The researchers tested whether it was possible to detect if a person was too engrossed in another task to respond swiftly to such a takeover signal. The team, at University College London, tested 42 participants across two experiments, using a procedure that mimicked a takeover scenario in a driverless car. Participants were required to search a computer screen with many coloured shapes for some target items, and let their gaze linger on targets to show they had found them. They found that when a task demanded more attention, participants took longer to stop watching the screen and respond to a tone.

The tests showed it was possible to detect participants' attention levels from their eye movements. An eye-movement pattern involving longer gazes and shorter distances of eye travel between the target items indicated that the task was demanding more attention. The researchers also trained a machine learning model on this data, and found they could predict whether the participants were engaged in an easy or demanding task based on their eye-movement patterns.

"Our findings show it is possible to detect the attention levels of a driver and their readiness to respond to a warning signal, just from monitoring their gaze pattern," said Prof Nilli Lavie at UCL's Institute of Cognitive Neuroscience. "Even when they are aware that they should be ready to stop their task and respond to tones as quickly as they can, they take longer to do so when their attention is held by the screen. Our research shows that warning signals might not be noticed quickly enough in such cases."

The eye-tracking method determines the ability of drivers to revert to manual control in a self-driving car

Security
Vehicle fleets guardian

Angoka, in Belfast, has developed a security system specifically for fleets of autonomous vehicles (writes Nick Flaherty). The system starts with a physically unclonable function (PUF), a unique ID assigned to a particular vehicle. This is generated from a set of parameters that cannot be recreated, and means that even if a comms module is moved from one vehicle to another it will not work, avoiding the problem of spoofing.

This PUF is used to generate a symmetric quantum key using a random number generator. Angoka has developed a technique to share this key over Ethernet or wireless links; previously a fibre-optic connection was necessary. The key then allows data to be sent securely from an autonomous vehicle to a secure service on Amazon Web Services. The key length is optimised for the network, as a low-bandwidth network such as LoRaWAN would be overwhelmed by a long key length. The key can be changed on a schedule set by the operator, ranging from every minute to hours or days, depending on the level of security required.
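The kind of gaze features the UCL study describes, longer dwells on targets and shorter eye travel between them under high load, can be sketched as inputs to a simple classifier. This is an illustrative sketch only: the feature definitions, thresholds and sample values are all invented, not the study's model.

```python
def gaze_features(fixations):
    """fixations: list of (duration_s, travel_deg) pairs - dwell time on a
    target and the angular distance the eye travelled to reach it."""
    n = len(fixations)
    mean_dwell = sum(d for d, _ in fixations) / n
    mean_travel = sum(t for _, t in fixations) / n
    return mean_dwell, mean_travel

def is_high_load(fixations, dwell_thresh=0.4, travel_thresh=5.0):
    """Flag a demanding task: longer gazes AND shorter eye travel.
    The threshold values are hypothetical."""
    dwell, travel = gaze_features(fixations)
    return dwell > dwell_thresh and travel < travel_thresh

demanding = [(0.6, 3.0), (0.7, 2.5), (0.5, 4.0)]
easy = [(0.2, 8.0), (0.3, 9.5), (0.25, 7.0)]
```

A production system would feed features like these into a trained model rather than fixed thresholds, but the signal being exploited is the same.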

Comms
Low-latency telematics

Beam Connectivity has developed a low-latency telematics system for all kinds of autonomous systems (writes Nick Flaherty). The system is based around a comms module from Quectel, with Beam's own telematics protocols and cloud service. This enables a round-trip latency of 300 ms on a 4G network and around 270 ms on 5G.

The vehicle telematics modules support multiple wired and wireless protocols in a single integrated unit with a high-performance antenna. The software runs on spare processing capacity in the Quectel module, eliminating the extra time that would be needed for an external telematics microcontroller. It can run rules for the sample rates and CAN bus monitoring, as well as rules on what to do with old data.

Beam also has its own SIM card for the module, to connect to different cellular operators around the world, and its own cloud service on Amazon Web Services to ensure the low latency. The cloud service ensures vehicle-to-cloud message integrity, and can be deployed in any region.

The system can be used on any autonomous system

Geofencing
'No entry' UAV software

UAV Navigation has developed software that can keep UAVs away from restricted or excluded areas even if comms links fail (writes Nick Flaherty). Restricted areas are limited to authorisation by the competent agency, and include airports, heliports, national parks, military installations, hospitals and nuclear power plants. Excluded areas are zones where flight is prohibited, and are normally associated with military installations or war zones.

UAV Navigation's Visionair ground control software includes a No-Fly Zones Editor to draw the zones on the pre-flight mission planning map, to improve operator situational awareness and prevent the aircraft from entering unauthorised areas. The geofenced areas this creates, whether circular or polygonal, are stored in the UAV's autopilot. This means that if the comms fail, the flight control system (FCS) will still know where the no-fly zone is.

Once the no-fly zone is defined, Visionair allows the operator to configure the execution of different actions if the aircraft strays into the zone. These can be an automatic parachute release, self-activating a hover mode in the case of rotary-wing platforms, or automatic mission replanning.

An internal algorithm allows the FCS to detect when the aircraft is heading to a no-fly zone and reprogram the flight to keep the UAV away from it. The algorithm will also create a new flight path without the operator's intervention. This automatic manoeuvre will reduce the workload of operators and increase flight safety. That means the UAS will be safe in the event of an incorrect flight plan where one of the legs of the plan intercepts a no-fly zone. The autopilot will autonomously execute the required manoeuvres to reach the next waypoint without entering the area, allowing it to continue its mission. In the same way, if the designated UAS operator commands an erroneous heading, the system will recalculate the route, avoiding the geofenced area.

The no-fly zone can be configured to trigger different UAV actions if the aircraft strays into the zone
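A minimal sketch of the containment tests such an FCS might run on a planned leg, assuming zones are given as circles or 2-D polygons in local metres, is shown below. This is illustrative only, not UAV Navigation's algorithm; a real autopilot would work on geodetic coordinates and exact segment-polygon intersection.

```python
def in_circle(point, centre, radius_m):
    """True if a 2-D point (x, y in metres) lies inside a circular zone."""
    dx, dy = point[0] - centre[0], point[1] - centre[1]
    return dx * dx + dy * dy <= radius_m * radius_m

def in_polygon(point, vertices):
    """Ray-casting point-in-polygon test for a polygonal no-fly zone."""
    x, y = point
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def leg_violates(start, end, zone_vertices, steps=100):
    """Sample points along a straight flight leg and test each against the
    polygon - a simple way to flag a leg that intercepts a no-fly zone."""
    for i in range(steps + 1):
        t = i / steps
        p = (start[0] + t * (end[0] - start[0]),
             start[1] + t * (end[1] - start[1]))
        if in_polygon(p, zone_vertices):
            return True
    return False

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
```

When `leg_violates` fires for a leg, the planner would insert waypoints routing around the zone instead.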

A radio receiver system that can detect all kinds of agile signals across 40 GHz of spectrum to detect rogue UAVs has been unveiled (writes Nick Flaherty). The 953 Communications Intelligence (COMINT) radio frequency (RF) receiver, from SPX CommTech, identifies and tracks RF signals coming from UAVs.

The receiver performs continuous, autonomous, remote and real-time signal collection with up to 80 MHz of bandwidth across frequencies up to 40 GHz for signal monitoring, collection and direction-finding. This bandwidth delivers a sweet spot between monitoring sufficient signal breadth and amplitude to identify threats, including RF signals that are moving around. It is powered by removable, hot-swappable batteries, is IP67-rated to withstand temperatures of up to 50 °C, and has removable storage of up to 2 Tbytes.

It runs the Blackbird software, which detects, identifies, direction-finds and tracks signals of interest, as well as mitigating threats from electronic warfare. It also tracks the RF emissions of UAVs and their controllers or data links to support counter-responses. The software can also record the signal environment for look-back analysis without interrupting the current mission. This simplifies the collection task, and can trigger automated actions and support unattended operations.
Blackbird also uses geolocation to enable defence teams to visualise the location of the frequencies for better intelligence gathering and threat management. The system is backwards-compatible with previous COMINT versions and other technologies from SPX, which was formed from the combination of TCI and ECS. It can also be integrated with open-source or customer-supplied mapping.

Comms
Spectrum of threats
The 953 COMINT counter-UAS RF detector system
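The sweep geometry described above, an 80 MHz instantaneous window stepped across a 40 GHz span, can be sketched as a simple energy-detection loop. This is a conceptual illustration only: the window plan, power values and detection threshold are all assumptions, not SPX's processing chain.

```python
# Illustrative sweep plan: an 80 MHz instantaneous window stepped across a
# 40 GHz span, flagging any window whose measured power exceeds a noise
# threshold. Power readings and the threshold value are made up.
SPAN_HZ = 40e9
WINDOW_HZ = 80e6

def sweep_windows(span_hz=SPAN_HZ, window_hz=WINDOW_HZ):
    """Yield (start, stop) frequency tuples covering the span in steps."""
    start = 0.0
    while start < span_hz:
        yield (start, min(start + window_hz, span_hz))
        start += window_hz

def flag_threats(power_dbm_by_window, threshold_dbm=-90.0):
    """Return indices of windows whose power exceeds the threshold."""
    return [i for i, p in enumerate(power_dbm_by_window) if p > threshold_dbm]

n_windows = sum(1 for _ in sweep_windows())
```

Stepping 80 MHz windows across 40 GHz gives 500 dwells per sweep, which is why agile, frequency-hopping signals need the kind of fast revisit and recording the article describes.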

Comms
Long-range subsea array

A low-power underwater comms system that can transmit signals across kilometres has been announced (writes Nick Flaherty). Researchers at MIT in the US used piezoelectric transducers and a backscatter technique to communicate over several kilometres. Underwater backscatter enables low-power comms by encoding data in sound waves that a node reflects, or scatters, back toward a receiver. This enables reflected signals to be more precisely directed at their source, and uses the reflected energy to reduce the overall power consumption.

A Van Atta array, in which symmetrical pairs of antennas are connected in such a way that the array reflects energy back in the direction it came from, was used to boost the efficiency of the link and reduce the power consumption. However, connecting piezoelectric nodes to make a Van Atta array reduces their efficiency. The researchers avoided this problem by placing a transformer between pairs of connected nodes, so that the nodes reflect the maximum amount of energy back to the source.

When building the array, the researchers found that if the connected nodes were too close, they would block each other's signals. They therefore produced a new design with staggered nodes that enables signals to reach the array from any direction. With this design, which is scalable, the more nodes in an array, the greater its comms range.

The team tested the array in more than 1500 experimental trials in the Charles River in Cambridge, Massachusetts, and in the Atlantic Ocean, off the coast of Falmouth, Massachusetts, in collaboration with the Woods Hole Oceanographic Institution. The device achieved comms ranges of 300 m at 500 bit/s, more than 15 times longer than previously demonstrated, with a bit error rate of 10⁻³ and an input power of 1.8 W.

"What started as an exciting intellectual idea a few years ago — underwater communication with a million times lower power — is now practical and realistic," said Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science, and director of the Signal Kinetics group in the MIT Media Lab. "There are still a few technical challenges to address, but there is a clear path from where we are now to deployment."

To better understand the limits of underwater backscatter, the team also developed an analytical model to predict the technology's maximum range. The model, which they validated using experimental data, showed that the system could communicate across kilometre-scale distances. One of the biggest challenges in developing the model was deriving a function that captures the amount of signal reflected out of an underwater piezoelectric node of a specific size.

The underwater comms system uses piezoelectric transducers to send signals over several kilometres

Sensors
First integrated imager

SCD has developed the first short-wavelength IR (SWIR) detector that integrates event-based IR and optical imaging (writes Nick Flaherty). The SWIFT-EI uses an InGaAs detector with a resolution of 640 x 512 with 10 µm pixels. It integrates a read-out integrated circuit that provides two parallel video channels in one sensor: a standard SWIR imaging video channel, and a frame-event imaging channel capable of up to 1500 frames/second.

Rather than outputting the whole video frame, an event-driven sensor just outputs any changes in the frame. That reduces the amount of data that needs to be processed and sent by a UAV. The event-based imaging channel can be used for laser event spot detection and asynchronous multi-laser spot pulse detection, while the SWIR event-based imaging increases the scope of target detection and classification for autonomous UAVs by showing thermal images.

The detector is based on InGaAs technology
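The data saving from event-driven output can be illustrated with a toy frame-differencing sketch: only pixels whose intensity changes beyond a threshold are emitted, instead of the full frame. This illustrates the concept only; the SWIFT-EI's actual read-out circuit works very differently from software differencing, and the threshold and frame values here are invented.

```python
def frame_events(prev, curr, threshold=10):
    """Return (x, y, delta) for pixels whose intensity changed by more than
    the threshold - the event stream an event-driven channel would output
    instead of the full frame. Toy 2-D lists stand in for a 640 x 512 array."""
    events = []
    for y, (row_p, row_c) in enumerate(zip(prev, curr)):
        for x, (p, c) in enumerate(zip(row_p, row_c)):
            delta = c - p
            if abs(delta) > threshold:
                events.append((x, y, delta))
    return events

prev = [[100, 100], [100, 100]]
curr = [[100, 180], [95, 100]]  # one bright, laser-spot-like change
events = frame_events(prev, curr)
```

A single bright spot in a 640 x 512 frame becomes one event instead of 327,680 pixel values, which is what makes 1500 frames/second of event data tractable on a UAV's data link.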


Driverless cars: Virtually real-world

Researchers in the US have developed software to test the operation of a complete driverless car using synthetic data (writes Nick Flaherty). The team, at Ohio State University, has applied for a US patent for its Vehicle-in-Virtual-Environment (VVE) method.

"With our software, we're able to make the vehicle think it's driving on actual roads while actually operating on a large, open, safe test area," said Prof Bilin Aksun-Guvenc, co-director of the university's Automated Driving Lab. "This saves time and money, and there is no risk of fatal traffic accidents."

The VVE method replaces the output of the high-resolution sensors in a real vehicle with simulated data to connect its controls to a highly realistic 3D environment. After feeding the data into the autonomous driving system's computers and synchronising the car's real motion with the simulation, the researchers were able to show that it behaves in real time as if the virtual environment were its true surroundings.

Because the method can be calibrated to maintain the properties of the real world while modelling edge events in the virtual environment, it can easily simulate extreme traffic scenarios. It can also use Bluetooth to communicate between a pedestrian carrying a mobile phone and a phone in the test vehicle. The researchers had a pedestrian dart across a simulated road a safe distance from the test vehicle, but the Bluetooth signal told the car that the person was right in front of it.

"The method allows road users to share the same environment at the same time without being in the same location," said Prof Aksun-Guvenc.

Dr Donough Wilson
Dr Wilson is innovation lead at aviation, defence and homeland security innovation consultants VIVID/futureVision. His defence innovations include the cockpit vision system that protects military aircrew from asymmetric high-energy laser attack.
He was first to propose the automatic tracking and satellite download of airliner black box and cockpit voice recorder data in the event of an airliner's unplanned excursion from its assigned flight level or track. For his 'outstanding and practical contribution to the safer operation of aircraft' he was awarded The Sir James Martin Award 2018/19 by the Honourable Company of Air Pilots.

Paul Weighell
Paul has been involved with electronics, computer design and programming since 1966. He has worked in the real-time and failsafe data acquisition and automation industry using mainframes, minis, micros and cloud-based hardware on applications as diverse as defence, Siberian gas pipeline control, UK nuclear power, robotics, the Thames Barrier, Formula One and automated financial trading systems.

Ian Williams-Wynn
Ian has been involved with uncrewed and autonomous systems for more than 20 years. He started his career in the military, working with early prototype uncrewed systems and exploiting imagery from a range of systems from global suppliers. He has also been involved in ground-breaking research including novel power and propulsion systems, sensor technologies, communications, avionics and physical platforms. His experience covers a broad spectrum of domains, from space, air, maritime and ground, in both defence and civil applications including, more recently, connected autonomous cars.

Professor James Scanlan
Professor Scanlan is the director of the Strategic Research Centre in Autonomous Systems at the University of Southampton, in the UK. He also co-directs the Rolls-Royce University Technical Centre in design at Southampton. He has an interest in design research, and in particular how complex systems (especially aerospace systems) can be optimised. More recently, he established a group at Southampton that undertakes research into uncrewed aircraft systems.
He produced the world's first 'printed aircraft', the SULSA, which was flown by the Royal Navy in the Antarctic in 2016. He also led the team that developed the ULTRA platform, the largest UK commercial UAV, which has flown BVLOS extensively in the UK. He is a qualified full-size aircraft pilot and also has UAV flight qualifications.

Dr David Barrett
Dr David Barrett's career includes senior positions with companies such as iRobot and Walt Disney Imagineering. He has also held posts with research institutions including the Charles Stark Draper Laboratory, MIT and Olin College, where he is now Professor of Mechanical Engineering and Robotics, and Principal Investigator for the Olin Intelligent Vehicle Laboratory. He also serves in an advisory capacity on the boards of several robotics companies.

Uncrewed Systems Technology's consultants

The software makes a driverless car think it's driving on actual roads
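The core idea of the VVE method described above – feeding a real vehicle synthetic sensor data from a virtual 3D world while it drives on an empty test area – can be sketched in simplified form. Everything here is an illustrative invention, not Ohio State's actual software: a toy ray-cast "lidar" reads obstacles from the virtual environment instead of from hardware.

```python
import math

def simulate_lidar(pose, virtual_obstacles, n_beams=8, max_range=50.0):
    """Toy stand-in for a high-resolution sensor: ray-cast against
    circular obstacles in a virtual environment instead of reading
    real hardware. pose is (x, y, heading); obstacles are (x, y, radius)."""
    x, y, heading = pose
    ranges = []
    for i in range(n_beams):
        angle = heading + 2 * math.pi * i / n_beams
        best = max_range
        for ox, oy, radius in virtual_obstacles:
            dx, dy = ox - x, oy - y
            # distance along and across the beam to the obstacle centre
            along = dx * math.cos(angle) + dy * math.sin(angle)
            across = -dx * math.sin(angle) + dy * math.cos(angle)
            if along > 0 and abs(across) <= radius:
                best = min(best, along)   # approximate hit distance
        ranges.append(best)
    return ranges

# The autonomy stack would consume these ranges exactly as if they came
# from the car's own lidar, while the real car moves on open ground.
obstacles = [(10.0, 0.0, 1.0)]            # virtual obstacle 10 m ahead
scan = simulate_lidar((0.0, 0.0, 0.0), obstacles)
print(scan[0])                            # forward beam sees the obstacle
```

Because the obstacle exists only in software, an edge event such as a pedestrian darting into the road can be injected with no physical risk, which is the safety argument the researchers make.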

Image processing: Imaging for any system

Recogni has developed an imaging card for its Scorpio AI chip for any autonomous system (writes Nick Flaherty).

Scorpio is an AI inference chip with up to 1000 TOPS of performance for a range of autonomous mobility applications. The 25 W chip is mounted on a Pegasus PCI Express (PCIe) card, which has eight lanes of the fourth-generation standard. Each lane runs at 16 gigatransfers per second, giving the card an overall bandwidth of around 16 Gbyte/s to the rest of the system.

The card uses the MIPI CSI-2 protocol, which supports four image data lanes, each capable of transferring data at up to 2.5 Gbit/s. These connect via an image signal processor and a deserialiser to a Fakra (Fachkreis Automobil, the German automotive expert group) telematics connector that provides 6 Gbit/s of bandwidth.

Recogni's software development kit is used to convert, compile, profile and deploy models that can detect objects at distances of up to 300 m. Perception output includes 3D object detection, vulnerable road user detection, lane detection, free space detection, and traffic light/sign detection.

Unlike traditional solutions that are based on legacy technology and repurposed for the monumental task of AI perception processing for autonomous mobility, Recogni's system is purpose-built to provide the performance and low power required for autonomous mobility. From last pixel out to perception results takes less than 10 ms on the Scorpio chip, providing ample reaction time for the car to navigate safely.

The PCIe card can process up to four 8 MP cameras to provide frontal and surrounding views for a vehicle
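A back-of-envelope check shows how the quoted interface figures fit together. The 8 MP camera count, the four 2.5 Gbit/s CSI-2 lanes and the 6 Gbit/s Fakra figure come from the article; the frame rate and bit depth below are assumptions for illustration only.

```python
# Illustrative arithmetic on the camera link budget (assumed values noted)
pixels_per_camera = 8e6        # 8 MP cameras, per the article
bits_per_pixel = 12            # assumed RAW12 sensor output
fps = 30                       # assumed frame rate

per_camera_rate = pixels_per_camera * bits_per_pixel * fps   # bit/s
mipi_capacity = 4 * 2.5e9      # four CSI-2 lanes at 2.5 Gbit/s each
fakra_bandwidth = 6e9          # Fakra connector, per the article

print(f"per-camera raw video: {per_camera_rate / 1e9:.2f} Gbit/s")
# 2.88 Gbit/s fits comfortably inside both the 6 Gbit/s connector
# and the 10 Gbit/s of aggregate CSI-2 lane capacity
assert per_camera_rate < fakra_bandwidth < mipi_capacity
```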

Uncrewed Systems Technology diary

UAS Summit & Expo, Tuesday 10 October – Wednesday 11 October, Grand Forks, USA, www.uas.bbiconferences.com
Intergeo, Tuesday 10 October – Thursday 12 October, Berlin, Germany, www.intergeo.de
UAV Show, Tuesday 10 October – Thursday 12 October, Bordeaux, France, www.uavshow.com
Dronitaly, Wednesday 11 October – Friday 13 October, Bologna, Italy, www.dronitaly.it
Airborne ISR, Wednesday 18 October – Thursday 19 October, London, UK, www.smgconferences.com/defence/uk/conference/airborne-isr
ROBOBusiness, Wednesday 18 October – Thursday 19 October, Santa Clara, USA, www.robobusiness.com
UTAC, Tuesday 24 October – Thursday 26 October, Perry, USA, www.utacglobal.com/utac-2023/
Future C4ISR, Tuesday 31 October – Wednesday 1 November, Arlington, USA, www.americanconference.com/futurec4isr/
Marine Autonomy & Technology Showcase, Tuesday 7 November – Thursday 9 November, Southampton, UK, www.noc-events.co.uk/mats-2023
Worlds Unmanned Aerial Vehicle Conference, Sunday 12 November – Tuesday 14 November, Jerusalem, Israel, www.wuavconf.com
Egypt Defence Expo, Monday 4 December – Thursday 7 December, New Cairo, Egypt, www.egyptdefenceexpo.com
CES 2024, Tuesday 9 January – Friday 12 January, Las Vegas, USA, www.ces.tech
UMEX, Monday 22 January – Thursday 25 January, Abu Dhabi, UAE, www.umexabudhabi.ae
Geo Week, Sunday 11 February – Tuesday 13 February, Denver, USA, www.geo-week.com
UVS-Oman, Monday 12 February, Muscat, Oman, www.uvsc.om
Space-Comm Expo, Wednesday 6 March – Thursday 7 March, Farnborough, UK, www.space-comm.co.uk
Geo Connect Asia, Wednesday 6 March – Thursday 7 March, Singapore, www.geoconnectasia.com
Drones & Uncrewed Asia, Wednesday 6 March – Thursday 7 March, Singapore, www.dronesasia.com

Paris Space Week, Tuesday 12 March – Wednesday 13 March, Paris, France, www.paris-space-week.com
Oceanology International, Tuesday 12 March – Thursday 14 March, London, UK, www.oceanologyinternational.com/london
Emerging & Disruptive Technologies for Defence, Tuesday 19 March – Wednesday 20 March, Washington, USA, www.americanconference.com/emergingtechnology
Next Generation Combat Vehicles Conference, Thursday 21 March – Friday 22 March, Arlington, USA, www.americanconference.com/next-generation-combat-vehicles/
UDT, Tuesday 9 April – Thursday 11 April, London, UK, www.udt-global.com
C2ISR Global, Wednesday 17 April – Thursday 18 April, London, UK, www.defenceiq.com/events-c2isrweek
XPONENTIAL, Monday 22 April – Thursday 25 April, San Diego, USA, www.xponential.org
Mobility Live Middle East, Tuesday 30 April – Wednesday 1 May, Abu Dhabi, UAE, www.terrapinn.com/exhibition/mobility-live-me
Uncrewed Maritime Systems Technology, Wednesday 8 May – Thursday 9 May, London, UK, www.smgconferences.com/defence/uk/conference/Unmanned-Maritime-Systems
Future Mobility Asia, Wednesday 15 May – Friday 17 May, Bangkok, Thailand, www.future-mobility.asia
DroneShow Robotics, Tuesday 21 May – Thursday 23 May, Sao Paulo, Brazil, www.droneshowla.com/en
Japan Drone, Wednesday 5 June – Friday 7 June, Chiba, Japan, ssl.japan-drone.com/en_la
Energy Drone & Robotics Summit, Monday 10 June – Wednesday 12 June, Texas, USA, www.edrcoalition.com
Eurosatory, Monday 17 June – Friday 21 June, Paris, France, www.eurosatory.com
MOVE, Wednesday 19 June – Thursday 20 June, London, UK, www.terrapinn.com/exhibition/move
Drone International Expo, Thursday 4 July – Friday 5 July, New Delhi, India, www.droneinternationalexpo.com
Farnborough International Airshow, Monday 22 July – Friday 26 July, Farnborough, UK, www.farnboroughairshow.com
Commercial UAV Expo Americas, Tuesday 3 September – Thursday 5 September, Las Vegas, USA, www.expouav.com

A native of Terni in Umbria, central Italy, Dr Davide Scaramuzza leads a team of researchers in the Robotics and Perception Group at the University of Zurich, which is working to develop technology that enables autonomous UAVs to fly themselves as well as, or even better than, the best human pilots.

He and his team took a big step in that direction in June 2022, when their tiny Swift quadcopter, equipped with a camera, inertial sensors and a neural network controller, beat a group of world champion UAV racing pilots over a demanding course by half a second – a wide margin at this level of the sport, and an important milestone on a development path stretching back almost 15 years. Other milestones include enabling UAVs to perform aerobatic manoeuvres autonomously and to navigate in environments of which they have no prior knowledge. Together, these are steps towards a broader capability he calls agile navigation.

Proxies for usefulness

Dr Scaramuzza is the first to admit that performing stunts and winning races do not solve any of the weighty problems facing humanity that he really wants to tackle with the help of UAVs, but a lap time is a hard measure of technological progress, and the best human pilots are relatable comparators.

"These tasks are not very useful in practice, but they serve as proxies in developing algorithms that can one day be used for things that matter to society, such as cargo delivery, forest monitoring, infrastructure inspection, search & rescue and so on," he says.

Speed matters for small multi-copters in particular if they are to maximise their productivity in the 30 minutes or so of flight time that their limited battery capacities allow, and Dr Scaramuzza's team has shown that flying a UAV as close as possible to its best range speed makes the most of that endurance.

Setting his work in context, he places it where robotics, machine vision and neuroscience meet.
Field of vision

Peter Donaldson talks to this robotics and perception professor about his work on developing vision-based navigation for UAVs

The University of Zurich team supervises their Swift quadcopter as it finds its way around the course, over which it beat several world champion UAV racing pilots (Images courtesy of the University of Zurich)

The spark for his interest here was struck when, in his last year at university, he attended a match

in the RoboCup soccer tournament for robots. He recalls being surprised at the use of a camera on the ceiling of the 'stadium' to localise the robot footballers, which he regarded as cheating. There was also a small number of robots in their own league, each of which had an onboard camera and a mirror that gave it a panoramic view, an arrangement that he looked on with similar disapproval.

"Humans don't play like that," he says. "What would happen if you had a camera that could only look in the direction the robot was moving? That got me interested in trying to mimic human or animal behaviour, which in turn interested me in working on vision-based navigation."

He regards his PhD adviser, Prof Roland Siegwart, as a mentor, along with Prof Kostas Daniilidis and Prof Vijay Kumar, who were his advisers during his postdoctoral research work. One piece of advice from Prof Siegwart that has shaped his approach to his work was to start with a problem to solve – a problem that is important to industry and society.

"The research questions will come from trying to solve a difficult problem," he says. "This is different from researchers who start instead with a theoretical problem. They are both valid, but I like the pragmatic approach more."

V-SLAM pioneer

One such problem was enabling UAVs to fly autonomously without GPS, using sensors that are small, light and frugal enough with energy for vehicles with small battery capacities. In 2009, after completing his PhD the previous year, Dr Scaramuzza worked with Prof Siegwart and a team of PhD students at ETH Zurich on the first demonstration of a small UAV able to fly by itself using an onboard camera and an inertial measurement unit, running an early visual simultaneous localisation and mapping (V-SLAM) algorithm. The UAV flew for just 10 m in a straight line.
It was tethered to a fishing rod to comply with safety rules and was linked by a 20 m USB cable to a laptop running the V-SLAM, but it impressed the EU enough for it to fund what became the sFLY project, with Dr Scaramuzza as scientific coordinator. Running from 2009 to 2011, sFLY developed the algorithm further to enable multi-UAV operations. "The goal was to have a swarm of mini-UAVs that would fly themselves to explore an unknown environment, for example in mapping a building after an earthquake for search & rescue."

V-SLAM is now an established technique for UAV navigation in GPS-denied areas, with NASA's Ingenuity helicopter using it to explore Mars, for example, and the sFLY project was its first practical demonstration, he says.

The challenges centred on the fact that V-SLAM requires the algorithm to build a 3D map using images from the camera while working out the UAV's position on it. "These images are very rich in information, with a lot of pixels, and you cannot process all the pixels on board," Dr Scaramuzza says. "So we worked on a means of extracting only salient features from the images – specific points of interest. We would then triangulate those points to build a 3D map of the environment as the UAV moved through it."

Meeting the two high-level requirements of a SLAM task – building the map and working out the UAV's position within it – is what he calls a chicken-and-egg problem. "You cannot localise if you haven't built up a map in time, so you have to do both at the same time."

In V-SLAM, the overall task is divided into four component tasks: visual feature extraction and tracking, triangulation mapping, motion estimation and then map optimisation. The purpose of map optimisation is to correct the errors that build up while the UAV is navigating, he says.
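The "triangulate those points" step Dr Scaramuzza describes can be sketched with classical two-view linear (DLT) triangulation. This is a generic textbook method, not the team's specific implementation, and the camera poses and pixel coordinates below are made up for illustration (identity intrinsics, a 1 m baseline).

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one tracked feature seen in two
    views with 3x4 projection matrices P1, P2 and normalised image
    coordinates x1, x2. Returns the 3D point."""
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: x * (P_row3 . X) = P_row1 . X, etc.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                     # null-space solution
    return X[:3] / X[3]            # homogeneous -> Euclidean

# Camera 1 at the origin, camera 2 translated 1 m along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([0.5, 0.2, 4.0])            # true landmark position
x1 = point[:2] / point[2]                    # projection in view 1
x2 = (point - [1.0, 0.0, 0.0])[:2] / point[2]  # projection in view 2
print(triangulate(P1, P2, x1, x2))           # recovers ~[0.5, 0.2, 4.0]
```

Repeating this for every tracked feature as the UAV moves gives the growing 3D point map, while the map-optimisation stage later refines these estimates against accumulated drift.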
The Swift finds its way through woodland using a camera, inertial sensors and a neural network-based AI autopilot trained in simulation

"We are using information from the cameras and the inertial sensors, a gyroscope and an accelerometer, which provide angular velocities and accelerations in radians per second and metres per second squared," he explains. "This is very useful information, because a map built by cameras alone can drift over time, especially when the UAV

has moved a long way from its starting point, but the accelerometer will always sense gravity when the UAV is stationary, and you can use that information to correct the drift."

Event camera potential

Dr Scaramuzza and his team continue to develop and refine autonomous visual navigation for UAVs, with their current work concentrating on exploiting standard as well as developmental 'event' cameras.

"With a standard camera you get frames at constant time intervals," he says. "By contrast, an event camera does not output frames but has smart pixels. Every pixel monitors the environment individually and only outputs information whenever it detects motion."

Because they don't generate a signal unless they detect change, event cameras are much more economical with bandwidth and computing power than standard frame cameras. And because they don't accumulate photons over time, they are immune to motion blur, he explains.

Eliminating motion blur will enable small UAVs to use vision-based navigation at much higher speeds, making them more robust to failures in the propulsion system, for example. It can even allow them to dodge objects thrown at them, he says, and these last two benefits have been demonstrated experimentally by Dr Scaramuzza's team.

In the race against the champions, the Swift quadcopter used a conventional frame camera (an Intel RealSense D450) and an IMU, with a neural network running on an Nvidia Jetson TX2 GPU-based AI computer processing their inputs in real time and issuing the manoeuvre commands. The race took place on a course consisting of a series of gates through which the UAVs had to fly in the correct order.

Training the AI autopilot

In the Swift, the neural network takes the place of the perception, planning and control architecture that has dominated robotics for 40 years. In most UAVs that use vision-based navigation, the perception module runs the V-SLAM software.
"The problem is that it is a bit too slow, and it is very fragile to imperfect perception, especially when you have motion blur, high dynamic range and unmodelled turbulence, which you get at high speeds," he says. "So we took a neural network and tried to train it to do the same job as perception, planning and control. The challenge there is that you need to train a neural network."

Doing that in the real world with a team of expert pilots was deemed impractical because of the downtime for battery changes and crash repairs. So, using a reinforcement learning algorithm, the Swift was trained in virtual environments developed by the games industry.

"Reinforcement learning works by trial and error," he says. "We simulated 100 UAVs in parallel that are trying to learn to fly through the gates as fast as possible so that they reach the finish line in the minimum time. They took hundreds of thousands of iterations, which converged in 50 minutes. A year has passed since the race, and we can now do it in 10 minutes."

Cautious navigation through a gap in a damaged concrete wall, a manoeuvre typical of many urban search & rescue tasks after disasters

Although the AI proved much faster than the human champions, Dr Scaramuzza does not yet claim it is better than human pilots, as there is still a lot to learn. For example, the AI cannot