Unmanned Systems Technology 017 | AAC HAMR UAV | Autopilots | Airborne surveillance | Primoco 500 two-stroke | Faro ScanBot UGV | Transponders | Intergeo, CUAV Expo and CUAV Show reports

Platform one | Unmanned Systems Technology | December/January 2018

Sensors
Driving down the cost of Lidar

Velodyne has launched a replacement for its established high-end Lidar sensor for autonomous designs (writes Nick Flaherty). The VLS-128 uses 128 scanning laser channels to improve the overall perception system and so accelerate the development of driverless cars. It replaces the HDL-64, the 64-channel Lidar that marked the start of the company and is used by 250 customers on most prototype driverless cars.

The VLS-128 is 70% smaller than the previous version and generates four times as many data points. This enhances object detection and collision avoidance at speeds above 30 mph, with twice the resolution and twice the range, which is now 300 m.

A cost structure for the VLS-128 has yet to be announced, but Marta Hall at Velodyne said, "The cost is already dramatically reduced. In the future, Lidar will be affordable for cars worldwide."

The sensor is being built at Velodyne's plant in California using a proprietary, fully automatic laser alignment and manufacturing system. Future versions will also be produced as part of Velodyne's Tier 1 automotive programme for the mass production of driverless cars.

(Image caption: The VLS-128 is 70% smaller than its predecessor, and has twice the range)

Object detection
Voxel neural net system

Accurately detecting objects in the 3D point clouds generated by a Lidar sensor is a central problem in many unmanned applications, and is one reason cameras and radar sensors are needed alongside Lidar (writes Nick Flaherty). Most existing efforts have focused on hand-crafted feature representations, but researchers at Intel have now proposed VoxelNet, a generic 3D detection network that unifies feature extraction and bounding box prediction into a single end-to-end trainable deep neural network.

VoxelNet divides a point cloud into equally spaced 3D equivalents of pixels, called 'voxels', and transforms the group of points within each voxel into a single feature representation, using a voxel feature encoding layer developed by the researchers. That allows the point cloud to be encoded as a descriptive volumetric representation, which is then used to detect objects around a driverless vehicle or robot.

Experiments on the KITTI car detection benchmark show that VoxelNet outperforms state-of-the-art Lidar-based 3D detection methods by a wide margin. The approach also learns an effective discriminative representation of objects with various geometries, leading to encouraging results in the 3D detection of pedestrians and cyclists using Lidar alone.
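The grouping step described above — bucketing raw Lidar points into a regular 3D grid before any feature encoding — can be sketched in a few lines of numpy. This is only an illustration of the voxelisation idea, not VoxelNet's actual implementation; the function name, voxel size and the dictionary representation are all assumptions for the example.

```python
import numpy as np

def voxelize(points, voxel_size=(0.5, 0.5, 0.5)):
    """Group an (N, 3) point cloud into voxels of the given (x, y, z) size.

    Returns a dict mapping each occupied voxel's integer grid index to the
    array of points that fall inside it. (Illustrative sketch only: VoxelNet
    itself would next pass each voxel's points through its voxel feature
    encoding layer to produce one feature vector per voxel.)
    """
    # Integer voxel index for each point along each axis
    idx = np.floor(points / np.asarray(voxel_size)).astype(np.int64)
    voxels = {}
    for i, key in enumerate(map(tuple, idx)):
        voxels.setdefault(key, []).append(points[i])
    # Stack each voxel's point list into a (num_points, 3) array
    return {k: np.stack(v) for k, v in voxels.items()}

# Example: two nearby points share a voxel, a third lands in another
pts = np.array([[0.10, 0.10, 0.10],
                [0.15, 0.12, 0.10],
                [1.00, 1.00, 1.00]])
grid = voxelize(pts)  # two occupied voxels: (0, 0, 0) and (2, 2, 2)
```

Operating on voxels rather than individual points is what lets the rest of the network use dense 3D convolutions over an otherwise sparse, irregular point cloud.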
