
areas in case the car rolls, and these are located mostly around the nose and the centre of the chassis.

“We did however move the Lidar around from where it was originally, in various locations around the front of the vehicle, to improve the aerodynamics based on what our CFD simulations recommended,” Valls notes.

Data links

In all, there are three links between the vehicle and its control station.

“The most important of these is the ‘safety’ link, which is a system supplied by the Formula Student competition authority,” Hendrikx explains. “It’s an EU-certified comms link that runs at a constant radio frequency, between 430 and 440 MHz, and if it cuts out, a safety stop in the racecar is triggered.”

Sensor data is currently delivered over a 5 GHz wi-fi link. The vehicle used to have an 800 MHz telemetry link that AMZ designed in-house, which would send information such as battery voltages, temperatures and motor performance over an RS-232 interface. Competition rules now dictate that all processing and calculations have to take place on board the car, however, so streaming telemetry back to the team during races is banned.

Perception sensor architecture

For perception, the vehicle can use Lidar, an optical vision camera, or both for redundancy. Although the two systems operate independently of each other, and each confers different modes of operation and advantages on the vehicle, Valls notes that the Lidar is the more accurate for this application.

“The Lidar is a Velodyne VLP-16, mounted on the front of the car, right under the nose,” Valls explains. “The most important thing was the vertical resolution – it’s not that we need to detect very dense objects close by; we need to detect very small objects far away.”

The Lidar’s algorithmic ‘pipeline’ for discerning the colour and position of the cones first subjects the incoming stream of point cloud data to a few pre-processing tasks. These are centred on estimating the ground plane, and identifying and removing ground-based points to leave only cone-based points.

The point cloud then passes through a cone detection algorithm. This recovers any cone-based points that have been accidentally removed, to make cone colourisation and detection easier, and then checks that the detected points do indeed correspond with those typical of cones in terms of height, width and angular resolution. Lastly, a convolutional neural network is used to estimate the cone colour pattern and thus the direction of the course.

A key advantage provided by the camera is the richer level of colourisation it captures, especially on the first lap. The more easily the sensor can tell the blue cones from the yellow, the better the car’s autonomy stack can plot its forward direction.

“Our chosen camera model was a Basler 1600-pixel, 60 Hz system,” Valls says. “The key thing here was having a camera with a global shutter, which would move and update very quickly – again because we need to perceive the cones at a significant distance while the car is moving very fast.”

Three cameras in total are integrated into a housing, with two connected and configured in a stereo arrangement and one working as a monocular system. The stereo cameras serve to detect cones at close range. To that end, they use lenses with focal lengths of 5.5 mm and horizontal FOVs of 64.5° for optical accuracy at close proximity.
The monocular camera is intended for detecting cones at medium-to-long range.
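The Lidar pipeline described above, ground-plane removal followed by geometric filtering of the remaining points against typical cone dimensions and then colour classification, can be illustrated with a much-simplified sketch. The plane fit, tolerances and cone dimensions below are illustrative assumptions rather than AMZ's actual parameters; a real implementation would estimate the ground piecewise and use a trained CNN for the colour step.

```python
import numpy as np

# Illustrative dimensions for small Formula Student track cones;
# not AMZ's actual thresholds.
CONE_HEIGHT_M = 0.325
CONE_WIDTH_M = 0.228
GROUND_TOLERANCE_M = 0.05

def remove_ground(points: np.ndarray) -> np.ndarray:
    """Fit a ground plane z = ax + by + c by least squares and drop points on it.

    points: (N, 3) array of x, y, z Lidar returns. A real pipeline would model
    the ground piecewise rather than as a single flat plane.
    """
    A = np.c_[points[:, :2], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    height_above = points[:, 2] - A @ coeffs
    return points[height_above > GROUND_TOLERANCE_M]

def looks_like_cone(cluster: np.ndarray) -> bool:
    """Accept a point cluster only if its bounding box matches cone geometry."""
    extent = cluster.max(axis=0) - cluster.min(axis=0)
    footprint = max(extent[0], extent[1])
    return footprint < 1.5 * CONE_WIDTH_M and extent[2] < 1.5 * CONE_HEIGHT_M

def detect_cones(clusters: list[np.ndarray]) -> list[np.ndarray]:
    """Geometric filter; colour classification (the CNN stage) would follow."""
    return [c for c in clusters if looks_like_cone(c)]
```

Grouping the non-ground points into clusters (for example with a simple Euclidean clustering step) and the CNN-based colour estimation are omitted for brevity.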
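The behaviour of the competition-supplied safety link described in the Data links section amounts to a heartbeat watchdog: if the radio link stops delivering valid frames for longer than some timeout, the car performs a safety stop. The sketch below is a generic illustration of that pattern, not the certified system itself; the timeout value, class and function names are assumptions.

```python
import time

# Illustrative value; the certified link's real timeout is not stated in the article.
HEARTBEAT_TIMEOUT_S = 0.5

class SafetyWatchdog:
    """Trigger a safety stop if the radio link goes quiet for too long."""

    def __init__(self, timeout_s: float = HEARTBEAT_TIMEOUT_S):
        self.timeout_s = timeout_s
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self) -> None:
        """Call whenever a valid frame arrives from the 430-440 MHz link."""
        self.last_heartbeat = time.monotonic()

    def link_alive(self) -> bool:
        return (time.monotonic() - self.last_heartbeat) < self.timeout_s

    def check(self, trigger_safety_stop) -> None:
        """Poll periodically; command a safety stop once the link is considered lost."""
        if not self.link_alive():
            trigger_safety_stop()
```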
