
Platform one

Autonomous vehicles: Real-time 3D object detection
Navigating around obstacles and pedestrians

Researchers in Korea have adapted the YOLOv3 machine-learning framework for real-time 3D object detection (writes Nick Flaherty).

A critical requirement for the success of autonomous vehicles is their ability to detect and navigate around 3D obstacles, pedestrians and other vehicles across diverse environments. Current autonomous vehicles employ smart sensors such as Lidar for a 3D view of their surroundings and depth information, while radar is typically used for detecting objects at night and in cloudy weather, and a set of cameras often provides RGB images and a 360° view, collectively forming a comprehensive dataset known as a point cloud.

The researchers at the Department of Embedded Systems Engineering at Incheon National University (INU), Korea, have developed a deep learning-based, end-to-end 3D object-detection system. It is built on the YOLOv3 (You Only Look Once) deep-learning object-detection technique, one of the most active state-of-the-art methods for 2D visual detection, which the researchers modified to detect 3D objects. The technique uses point-cloud data and RGB images as input, and it generates bounding boxes with confidence scores and labels for visible obstacles as output.

To assess the system's performance, the team conducted experiments using the Lyft dataset, which consists of road information captured from 20 autonomous vehicles travelling a predetermined route in Palo Alto, California, over a four-month period. The results showed the modified YOLOv3 exhibits high accuracy, surpassing other state-of-the-art architectures. Notably, the overall accuracy figures for 2D and 3D object detection were 96% and 97%, respectively.

The work can also be used to improve the performance of sensors, robotics and artificial intelligence.
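The article describes the system only at the interface level: lidar point cloud plus RGB image in, 3D bounding boxes with confidence scores and class labels out. The Python sketch below illustrates what that input/output contract might look like; the data structures, array shapes, function name and the dummy detection are assumptions made for illustration, not the INU team's implementation, and the network itself is deliberately left as a placeholder.

```python
# Minimal sketch of the detector interface described in the article:
# a model that consumes a lidar point cloud plus an RGB image and emits
# 3D bounding boxes with confidence scores and class labels.
# The modified-YOLOv3 network itself is NOT reproduced here -- this is a
# placeholder so the input/output contract can be exercised.

from dataclasses import dataclass
import numpy as np


@dataclass
class Detection3D:
    center: np.ndarray   # (x, y, z) in metres, vehicle frame (assumed convention)
    size: np.ndarray     # (length, width, height) in metres
    yaw: float           # heading angle in radians
    score: float         # confidence in [0, 1]
    label: str           # e.g. "car", "pedestrian"


def detect_objects(point_cloud: np.ndarray, rgb_image: np.ndarray) -> list[Detection3D]:
    """Placeholder for an end-to-end 3D detector.

    point_cloud: (N, 4) array of lidar returns (x, y, z, intensity).
    rgb_image:   (H, W, 3) uint8 camera frame.
    A real implementation would fuse both modalities in a YOLO-style
    single-shot network; here we return one fixed dummy detection.
    """
    assert point_cloud.ndim == 2 and point_cloud.shape[1] == 4
    assert rgb_image.ndim == 3 and rgb_image.shape[2] == 3
    return [
        Detection3D(center=np.array([12.0, -1.5, 0.4]),
                    size=np.array([4.5, 1.9, 1.6]),
                    yaw=0.05, score=0.93, label="car"),
    ]


if __name__ == "__main__":
    cloud = np.random.rand(50_000, 4).astype(np.float32)   # synthetic lidar sweep
    frame = np.zeros((720, 1280, 3), dtype=np.uint8)        # synthetic camera frame
    for det in detect_objects(cloud, frame):
        print(f"{det.label}: score={det.score:.2f}, "
              f"centre={det.center}, yaw={det.yaw:.2f} rad")
```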
