Issue 39 Unmanned Systems Technology August/September 2021 Maritime Robotics Mariner l Simulation tools focus l MRS MR-10 and MR-20 l UAVs insight l HFE International GenPod l Exotec Skypod l Autopilots focus l Aquaai Mazu

Hardware-software split

Viewed purely by its appearance, an autopilot could be taken for a simple hardware product. Indeed, investigations by aviation authorities into recent UAS crashes have brought great scrutiny to bear on the hardware of flight control systems, pushing the topic of software into the comparative background. That would be a simplistic assessment of what constitutes an autopilot, though. To understand why, consider the core functions an autopilot is meant to serve. These encompass navigation and localisation of the vehicle, using sensory inputs to estimate its state and position; guidance of the vehicle through an area, by creating a trajectory towards its objective; and flight (or traction) control, by driving actuation systems according to the guidance objectives.

Other key functions can include what could be called non-piloting tasks, such as fault management, internal and external comms, and data logging for predictive maintenance and other analytics.

These functions are enabled through the hardware. For example, accelerometers, gyroscopes and GNSS receivers supply the signal inputs for navigation; processors then turn those into commands for actuators such as control surfaces and motor throttles to follow, while hard drives store system-wide health and performance data, and so on. Without software to provide the algorithmic sequences that guide its behaviour, however, the hardware would be essentially empty silicon.

It can be concluded therefore that an autopilot system is hardware-enabled but software-implemented. To understand what defines and separates autopilots in terms of their quality and capabilities, the software should therefore be closely assessed.

The software in an autopilot will vary in terms of its components and in the architecture within which they are arranged and interact with each other. By and large, however, it can broadly be separated into four definable layers.
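The pipeline of core functions described above, navigation estimating the vehicle's state, guidance supplying a setpoint, and control driving the actuators, can be sketched for a single pitch axis. This is a minimal, hypothetical illustration rather than any vendor's implementation; the complementary-filter blend factor and the PID gains are arbitrary placeholder values.

```python
class ComplementaryFilter:
    """Navigation: fuse a gyro rate with an accelerometer-derived angle."""
    def __init__(self, alpha=0.98):
        self.alpha = alpha   # weight on the integrated gyro path
        self.pitch = 0.0     # estimated pitch, radians

    def update(self, gyro_rate, accel_pitch, dt):
        # High-pass the integrated gyro, low-pass the noisy accelerometer angle
        self.pitch = (self.alpha * (self.pitch + gyro_rate * dt)
                      + (1.0 - self.alpha) * accel_pitch)
        return self.pitch


class PitchPID:
    """Control: drive the elevator towards the guidance setpoint."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured, dt):
        err = setpoint - measured
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        out = self.kp * err + self.ki * self.integral + self.kd * deriv
        # Saturate to actuator limits (+/-1.0 = full elevator deflection)
        return max(-1.0, min(1.0, out))
```

In a real flight loop, guidance would compute the pitch setpoint from the planned trajectory each cycle, the filter would be replaced by a full state estimator (typically a Kalman filter over many sensors), and the controller output would be mapped to a servo command by the peripheral drivers.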
As with PC and automotive software, the top section of an autopilot's architecture is typically an application layer, through which the core functions are executed.

Supporting this layer from below is what is often called the middleware, which generally contains the chosen real-time operating system that schedules the necessary tasks, and access to shared data, for those functions to take place. The middleware also tends to be where the networking stack sits, which handles such jobs as managing Ethernet comms and FTP servers. Other modules in the middleware can include an embedded file system, allowing storage of autopilot system parameters on an EEPROM device (or of flight data records on an SD card), or a USB stack, which newer microcontrollers tend to enable. A USB stack can be very convenient for autopilot technicians wanting to interface securely via a laptop to run maintenance checks, firmware updates or in-the-loop tests of new code, processors or sensors.

Below the middleware is often an array of peripheral drivers, which are usually defined by the type of microcontroller being used. For example, the ARM Cortex-M7 has risen to something of an industry standard, in no small part because ARM provides CMSIS (Common Microcontroller Software Interface Standard) – a specification for how the

In addition to standard functions, a high-end autopilot is now expected to be capable of significant fault management, comms with extensive and expandable I/Os, customisable control logics, and certifications such as DO-178C and DO-254 (Courtesy of Embention)

GUIs for ground control and flight simulation systems are ideal for aiding the development and testing of unmanned systems (Courtesy of Applied Navigation)
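The task-scheduling role that the middleware's real-time operating system plays can be illustrated with a toy cooperative scheduler, in which periodic tasks are released at fixed rates from a priority queue. This is a hypothetical sketch only, with times in integer microseconds and illustrative task names and rates; a real RTOS adds preemption, priorities and inter-task synchronisation that this omits.

```python
import heapq

class RateScheduler:
    """Toy cooperative scheduler: earliest-release task runs next."""
    def __init__(self):
        self._queue = []  # heap of (next_release_us, period_us, name, fn)

    def add_task(self, name, fn, period_us):
        # Every task is first released at t = 0
        heapq.heappush(self._queue, (0, period_us, name, fn))

    def run(self, duration_us):
        """Execute all task releases falling before duration_us; return a log."""
        log = []
        while self._queue and self._queue[0][0] < duration_us:
            t, period, name, fn = heapq.heappop(self._queue)
            fn()                  # run the task body to completion
            log.append((t, name))
            # Re-queue the task for its next periodic release
            heapq.heappush(self._queue, (t + period, period, name, fn))
        return log
```

With a 400 Hz inertial-sensor task (2,500 µs period) and a 10 Hz telemetry task (100,000 µs period), running for one simulated second releases the first task 400 times and the second 10 times, mirroring how an RTOS lets fast control loops and slow housekeeping jobs share one processor.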
