Issue 53 | Uncrewed Systems Technology | Dec/Jan 2024

minimal set of memory, a serial port and a virtual network interface (NIC). An engineer using a separation kernel must allocate hardware resources to VMs statically at boot time, and any sharing must be arranged explicitly. It is common to dedicate one CPU core per VM, and useful to have extra NICs and serial ports for access to each VM, although that is changing in favour of multiple VMs on each core.

Degraded environments

One area where this separation, whether via a secure RTOS or a separation kernel, is key for uncrewed systems is the degraded visual environment (DVE). This is one of the most challenging conditions for rotorcraft, for example, particularly when some of it is created by rotor wash blowing up clouds of dust, sand or snow.

DVE conditions have many causes, including naturally occurring smoke, fog, smog, clouds, sand, dust, heavy rain, blowing snow, darkness and flat light. These can occur in combination, and some of the most challenging DVEs are induced by the aircraft itself, creating a brownout or whiteout from dust, sand or snow.

The primary problem with a DVE is the loss of visual references such as the horizon, the ground and any nearby obstacles. Situational awareness of the terrain and obstacles is required for safe operation during all phases of flight, and losing this situational awareness en route can result in a crash or a collision with human-made obstacles.

In an autonomous helicopter, for example, the loss of visual reference during take-off or landing can lead to undetected drift or bank, or even create a sensation of self-motion called vection. These effects significantly increase the risk of dynamic rollover and a hard landing, often resulting in the aircraft being damaged or even lost.

DVE mitigation solutions fall into a few broad categories: enhanced vision, synthetic vision or a combination of the two, and this is where an RTOS has become a major component.
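To make the boot-time allocation described above concrete: the static assignment of cores, memory and devices to VMs can be pictured as a fixed configuration table that the separation kernel consumes once at start-up. The sketch below is illustrative only; every struct name and field is hypothetical, not any vendor's separation-kernel API.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical boot-time partition table for a separation kernel.
 * All names and fields are illustrative, not a real vendor API.
 * Each VM gets a dedicated core and statically granted devices;
 * nothing is shared unless it appears here explicitly. */
typedef struct {
    const char *name;
    int cpu_core;       /* dedicated core (one-VM-per-core model)  */
    size_t mem_bytes;   /* fixed memory budget, set once at boot   */
    int serial_port;    /* console access to the VM; -1 if none    */
    int virtual_nic;    /* network access to the VM; -1 if none    */
} vm_config;

static const vm_config boot_table[] = {
    { "flight_control", 0,  64u * 1024 * 1024,  0, -1 },
    { "sensor_fusion",  1, 256u * 1024 * 1024, -1,  0 },
    { "telemetry",      2,  32u * 1024 * 1024,  1,  1 },
};

/* Reject a table that accidentally shares a core between VMs:
 * in this model, any sharing has to be arranged deliberately
 * rather than happening as a side effect of configuration. */
int table_is_valid(const vm_config *t, int n)
{
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (t[i].cpu_core == t[j].cpu_core)
                return 0;
    return 1;
}
```

Because the table is fixed at boot, a partition cannot acquire a device or memory region it was never granted, which is the property robust partitioning depends on.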
Robust partitioning of the functions in the sensor subsystem is needed to ensure that no hosted application has an unintended effect on any other. In a system with a multi-core processor, robust partitioning is enabled by meeting the objectives of safety standards, including the mitigation of multi-core interference. A multi-core partitioned RTOS or separation kernel allows functions to be added without re-testing and re-verifying the entire system as the operational flight programs evolve.

Mitigating DVE starts with sensors capable of penetrating the environmental conditions. US regulator the FAA refers to a system that provides real-time imagery of the external scene as an 'enhanced vision' system. Each sensor type involves trade-offs. Infrared (IR), for example, has a high frame rate but limited obscurant penetration. Millimetre-wave radar penetrates very well but has low resolution. Lidar has the high resolution needed to detect obstacles and find a flat area to land, but it doesn't penetrate obscurants well, takes several scans to form a complete image and has a shorter range than the other technologies.

Because no one sensor can handle all types of DVE, a combination of them is used and the data is fused to provide a real-time image of the external scene topography and obstacles, with a latency of under 100 ms. That requires a safety-critical RTOS.

An alternative to enhanced vision is synthetic vision: a computer-generated image of the external scene topography relative to the aircraft, derived from a database of terrain and obstacles coupled with aircraft attitude and a high-precision navigation solution. These databases can require a lot of memory, depending on the geographical area covered, and military synthetic vision systems often combine a civilian terrain database with a more specialised military database. To accommodate such large databases, the operating system should support 64-bit memory addressing in order to access more than 4 Gbytes.
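As a rough illustration of the fusion step, the sketch below blends range estimates from the three sensor classes and checks the 100 ms latency budget. Only the sensor trade-offs and the latency figure come from the text; the field names, weighting scheme and numbers are invented for illustration.

```c
#include <assert.h>

/* One fused measurement from the three sensor classes discussed
 * above. The struct layout and weights are illustrative only. */
typedef struct {
    double ir_range_m;        /* infrared: fast but poor penetration  */
    double radar_range_m;     /* mm-wave: penetrates well, low res    */
    double lidar_range_m;     /* lidar: high res, poor penetration    */
    double obscurant_density; /* 0.0 = clear air, 1.0 = full brownout */
} sensor_frame;

/* Shift trust towards radar as dust or snow thickens, since it is
 * the only sensor of the three that penetrates obscurants well. */
double fuse_range(const sensor_frame *f)
{
    double w_radar = 0.2 + 0.6 * f->obscurant_density;
    double w_each  = (1.0 - w_radar) / 2.0;   /* split rest IR/lidar */
    return w_each  * f->ir_range_m
         + w_radar * f->radar_range_m
         + w_each  * f->lidar_range_m;
}

/* The fused scene must be delivered within the sub-100 ms latency
 * budget cited above; a frame that misses the deadline is stale. */
int frame_on_time(long start_us, long end_us)
{
    return (end_us - start_us) <= 100000L;   /* 100 ms in microseconds */
}
```

A real system would fuse full images rather than single ranges, and a safety-critical RTOS would enforce the deadline through scheduling rather than an after-the-fact check, but the shape of the problem is the same.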
Compared to enhanced vision, synthetic vision does not provide a real-time view of the actual external scene, and it is only as accurate as the database and the GPS location. The database can contain errors, and GPS is subject to interference and jamming. However, synthetic vision is not limited in range or field of view, which provides a compelling reason to augment enhanced sensing with it.

There are several steps in the sensor fusion process that creates the combined sensing. The initial 3D terrain database is often augmented with specialised, higher-resolution data for the area around the target landing zone. As an autonomous helicopter approaches the target landing area, before there is any significant brownout or whiteout, a Lidar sensor captures high-resolution terrain data of the area. Because the Lidar data is real time, it also captures any vehicles, other obstacles

[Photo caption: S-plane uses the Integrity RTOS for control of its Skeldar rotorcraft (Courtesy of Green Hills Software)]
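The database-augmentation step described above can be sketched as overlaying real-time Lidar returns onto a coarse terrain grid around the landing zone. The grid size, point format and keep-the-highest rule below are all assumptions made for illustration, not details from any fielded system.

```c
#include <assert.h>

#define GRID 8   /* hypothetical 8 x 8 cells around the landing zone */

typedef struct { int x, y; double z_m; } lidar_point;

/* Coarse heights from the terrain database; cells are overwritten
 * with live Lidar returns, which also reveal vehicles and other
 * obstacles that a static database cannot contain. */
static double terrain_m[GRID][GRID];

void augment(const lidar_point *pts, int n)
{
    for (int i = 0; i < n; i++) {
        if (pts[i].x < 0 || pts[i].x >= GRID) continue;   /* outside zone */
        if (pts[i].y < 0 || pts[i].y >= GRID) continue;
        /* keep the highest return per cell so obstacles stand out */
        if (pts[i].z_m > terrain_m[pts[i].x][pts[i].y])
            terrain_m[pts[i].x][pts[i].y] = pts[i].z_m;
    }
}
```

Capturing this overlay before brownout sets in is the point: once the dust cloud forms, the aircraft can land against the augmented grid even with the Lidar blinded.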
