Uncrewed Systems Technology 044 l Xer Technologies X12 and X8 l Lidar sensors l Stan UGV l USVs insight l AUVSI Xponential 2022 l Cobra Aero A99H l Accession Class USV l Connectors l Oceanology International 2022

sensor units to enable more applications, including airborne designs.

A Lidar sensor has four main function blocks: transmitter, receiver, modulator and controller. The transmitter block includes the laser module, transmitter optics and a scanning system. This can be a micro-machined mirror, a diffraction grating or a metamaterial modulator to steer the beam. Sensor makers tend to combine technology from different suppliers with a key element of their own to provide a performance advantage.

The receiver block consists of receiver optics, a receiver array such as an image sensor from a camera, and receiver processing chips, which process multiple channels of the read-outs.

Flash Lidar techniques avoid the need for a modulator by pairing an array of lasers based on a vertical cavity surface emitting laser (VCSEL) with a receiver, and using complex digital signal processing to turn the returning data into a point cloud.

The controller can be implemented in a microcontroller or a custom ASIC, and the programming interface for this controller is becoming a key factor in the design of the sensor. Having an open API to the controller allows system developers to implement the higher-level system code more easily. This extends to pre-processing algorithms that allow developers to swap easily between different types of Lidar for different parts of an uncrewed platform, for example by using a long-range sensor at the front and shorter-range devices around the bottom of a vehicle.

Time-of-flight

One of the early techniques for Lidar was time-of-flight (ToF), where the time taken for the laser signal to bounce off an object determines the distance. However, this has significant drawbacks that make it difficult to use in many 3D vision applications. It requires detection of very weak reflected light signals, so other Lidar systems or even ambient sunlight can easily overwhelm the detector.
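The ToF principle described above reduces to a single conversion: the measured round-trip time of the pulse, multiplied by the speed of light and halved. A minimal sketch, with an illustrative timing value rather than figures from any particular sensor:

```python
# Minimal sketch of pulsed time-of-flight ranging.
C = 299_792_458.0  # speed of light in a vacuum, m/s

def tof_distance(round_trip_s: float) -> float:
    """Convert a measured round-trip time to a one-way distance in metres."""
    return C * round_trip_s / 2.0

# A return pulse detected 200 ns after emission corresponds to roughly 30 m.
d = tof_distance(200e-9)
```

The nanosecond timescales involved are one reason the receiver electronics, not the optics, often set the depth resolution of a pulsed ToF sensor.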
It also has limited depth resolution, and it can take a long time to densely scan a large area such as a highway or factory floor.

To tackle these challenges, researchers are turning to a form of Lidar called frequency-modulated continuous wave (FMCW), which builds on ToF principles. They have used techniques from medical imaging to improve the data throughput of FMCW Lidar by 25 times while still achieving sub-millimetre depth accuracy.

Optical coherence tomography (OCT) is the optical analogue of medical ultrasound, which works by sending sound waves into objects and measuring how long the signals take to come back, similar to ToF. To time the light waves' return, OCT devices measure how much the phase has shifted compared with identical light waves that have travelled the same distance but have not interacted with another object.

FMCW Lidar uses a similar technique. When the detector gathers light to measure its reflection time, it can distinguish the specific frequency pattern from any other light source, allowing it to work in all kinds of lighting conditions at very high speed. It then measures any phase shift against unimpeded beams.

A diffraction grating breaks the laser into a range of frequencies and allows the system to quickly cover a wide area without losing much depth or location accuracy. A technique called time-frequency multiplexed 3D coherent ranging looks only for the peak signal generated from the surfaces of objects. This costs the system a little resolution, but gives much greater imaging range and speed than traditional Lidar.

The technique generates 475 depth measurements along the axis of the grating within a single laser sweep, for an overall acquisition rate of 7.6 MHz. That enables real-time 3D imaging of everyday objects, including a living human hand, with a maximum imaging range of 32.8 cm and a 3D frame rate as high as 33.2 Hz.
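In a basic FMCW ranging scheme, the delay of the returning light turns into a beat frequency between the outgoing chirp and the echo, from which range follows as R = c · f_beat · T / (2 · B), where B is the sweep bandwidth and T the sweep time. A sketch with illustrative parameters, not figures from the system in the article:

```python
# Sketch of FMCW range recovery from the measured beat frequency.
C = 299_792_458.0  # speed of light, m/s

def fmcw_range(f_beat_hz: float, bandwidth_hz: float, sweep_s: float) -> float:
    """Range from beat frequency: R = c * f_beat / (2 * slope)."""
    slope = bandwidth_hz / sweep_s  # chirp slope in Hz/s
    return C * f_beat_hz / (2.0 * slope)

# A 1 GHz sweep over 10 us producing a 2 MHz beat puts the target near 3 m.
r = fmcw_range(2e6, 1e9, 10e-6)
```

Because range is encoded in frequency rather than in a faint pulse's arrival time, the detector can reject light that does not carry the expected chirp, which is the interference immunity the article describes.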
The Lidar controller can be implemented in an FPGA and a microcontroller. The system in the FPGA includes control of the laser pulse timing according to the mirror movements, ToF calculation based on the receiver signal, and functional safety to ensure safe operation of the Lidar, especially eye safety. The microcontroller has multiple cores inside, which are used for defining

A time-of-flight architecture for Lidar sensing (Courtesy of Lumotive)
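One part of the functional-safety logic mentioned above is keeping the laser's average optical power inside an eye-safe budget. The following is a hedged sketch of such an interlock; the 1 mW limit and pulse parameters are illustrative placeholders, not Class 1 limits from the laser safety standards:

```python
# Hypothetical eye-safety interlock of the kind a Lidar controller might run.
# Power limit and pulse figures are illustrative, not regulatory values.
def average_power_mw(pulse_energy_nj: float, rep_rate_hz: float) -> float:
    """Average optical power in mW from per-pulse energy and repetition rate."""
    return pulse_energy_nj * 1e-9 * rep_rate_hz * 1e3

def fire_allowed(pulse_energy_nj: float, rep_rate_hz: float,
                 limit_mw: float = 1.0) -> bool:
    """Inhibit laser firing when average power would exceed the budget."""
    return average_power_mw(pulse_energy_nj, rep_rate_hz) <= limit_mw

# 10 nJ pulses at 100 kHz average 1.0 mW, just inside a 1 mW budget.
ok = fire_allowed(10.0, 100_000.0)
```

In a real design this check would sit in the FPGA fabric alongside the pulse-timing logic, so a software fault in the microcontroller cannot defeat it.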