The Robobus uses Lidar perception to cross-validate position estimates via 3D modelling and recognition of landmarks in its surroundings. Although WeRide used solely GNSS data at first, the need to traverse the tunnel linking its island facilities with the Guangzhou mainland quickly made non-GNSS-reliant navigation systems imperative. To ensure these estimates arrive consistently, a second MEMS IMU is installed onboard, a measure that also provides redundancy in inertial data.

"[The 3D modelling data] is stored onboard each Robobus, not on a cloud where connectivity issues might cause problems, but given the short, confined routes that Robobus traverses, the size of the data isn't too much," Liu notes.

Optimising the algorithmic aspects posed the biggest hurdle in Robobus' localisation, given WeRide's ambition to achieve safe Level 4 autonomous driving. It required resolving issues such as localising when surrounded by other vehicles, which the software handles by using even minimal glimpses of recognisable landmarks to triangulate position.

"Also, the MCU must arbitrate when GNSS gives you one position and the Lidar gives you another. To resolve this, an algorithm assigns an evaluation score to every position estimate, grading the accuracy of each sensor's readings based on the presence of obstacles or other problems that could make either the GNSS or the Lidar model inaccurate," Liu says.

"But if neither the GNSS nor the Lidar model readings are scored highly enough to be trustworthy, then the system will use visual camera information to check for features specific to the local area; for instance, road lane markings.

"So, even if Robobus can't tell exactly where it is in the world, at the very least it can ensure it is travelling safely on whatever road it is on. It can hence keep moving forwards at a safe pace, within the confines of its surrounding traffic, until one or both of the GNSS and Lidar localisation feeds are giving data with high confidence scores."

Sensor complement

WeRide has consistently used Lidars as an important part of its sensor suites. The Robobus initially adopted main perception Lidar units at the front, and blind-spot Lidar units at the front left and rear right.

"And between versions 2 and 2+ of Robobus, we moved to a 128-channel model for the main perception Lidar," Liu adds.

Both the main perception Lidar and the blind-spot Lidar are 360° horizontal FoV systems, the latter having a 104° vertical FoV (to the former's 40°) for prudent blind-spot coverage. The main Lidar operates with a range of 200 m at 10% target reflectivity (up to a maximum of 230 m), generates up to 6,912,000 points per second in dual-return mode, and consumes 29 W in standard operation.

The 12 cameras around the body, meanwhile, are proprietary designs created at WeRide. One camera has a 30° FoV (vertical and horizontal), nine have a 100° FoV, and the remaining two are fisheye-lens devices with 212° FoVs.

[Image caption: The main perception Lidars are 128-channel units with an operating range of 200 m. They generate up to 6.912 million points per second and typically consume up to 29 W.]

[Image caption: HD vision data from the cameras is fused to the Lidar point cloud via the main body of the sensor-fusion algorithmic model, which generates a unified output reconciling all of the raw data gathered.]
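Liu's description of the arbitration logic can be made concrete with a short sketch. This is a minimal, hypothetical rendering of the scheme as he describes it; the threshold, field names and simple highest-score rule are illustrative assumptions, not WeRide's actual algorithm or values.

```python
from dataclasses import dataclass

@dataclass
class PositionEstimate:
    source: str      # "gnss" or "lidar"
    x: float         # position in a local map frame, metres
    y: float
    score: float     # evaluation score, 0.0 (untrustworthy) to 1.0 (high confidence)

TRUST_THRESHOLD = 0.7  # assumed cut-off for a usable estimate

def arbitrate(gnss: PositionEstimate, lidar: PositionEstimate):
    """Return the best trusted estimate, or None to trigger the
    camera-based lane-keeping fallback Liu describes."""
    candidates = [e for e in (gnss, lidar) if e.score >= TRUST_THRESHOLD]
    if not candidates:
        return None  # neither source trusted: hold a safe pace in-lane
    return max(candidates, key=lambda e: e.score)

# Example: GNSS degraded in the tunnel, Lidar landmark match still good.
gnss = PositionEstimate("gnss", 103.2, 48.7, score=0.4)
lidar = PositionEstimate("lidar", 103.5, 48.9, score=0.9)
best = arbitrate(gnss, lidar)
print(best.source if best else "fallback: camera lane-keeping")  # -> "lidar"
```

When `arbitrate()` returns `None`, the vehicle would keep moving at a safe pace within camera-detected lane markings until one feed recovers a high confidence score, matching the fallback behaviour described above.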
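The quoted dual-return point rate can be sanity-checked with simple arithmetic. The spin rate and azimuth step below are assumptions chosen for illustration; the article states only the 128-channel count and the 6,912,000 points-per-second figure.

```python
# Back-of-envelope decomposition of the quoted point rate.
channels = 128           # stated in the article
rotation_hz = 10         # assumed spin rate
azimuth_steps = 2_700    # assumed firings per revolution (~0.133 deg resolution)
returns = 2              # dual-return mode, stated in the article

points_per_second = channels * azimuth_steps * rotation_hz * returns
print(f"{points_per_second:,} points/s")  # 6,912,000 points/s
```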
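The final caption mentions HD camera data being fused to the Lidar point cloud. One standard building block for that kind of fusion is projecting Lidar points into the camera image so each 3D point can pick up pixel information. The sketch below shows this with a pinhole model; the calibration matrices are made-up placeholders, and this is not a description of WeRide's actual pipeline.

```python
import numpy as np

K = np.array([[1000.0, 0.0, 960.0],   # assumed pinhole intrinsics (fx, fy, cx, cy)
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                          # assumed Lidar-to-camera rotation
t = np.array([0.0, -0.2, 0.1])         # assumed translation, metres

def project_points(points_lidar: np.ndarray) -> np.ndarray:
    """Project Nx3 Lidar points into Nx2 pixel coordinates,
    keeping only points in front of the camera."""
    cam = points_lidar @ R.T + t       # transform into the camera frame
    cam = cam[cam[:, 2] > 0.1]         # discard points behind the lens
    uvw = cam @ K.T                    # apply pinhole intrinsics
    return uvw[:, :2] / uvw[:, 2:3]    # perspective divide

# Points given in camera-axis convention (z forward) for simplicity,
# since R is identity in this toy calibration.
pts = np.array([[0.5, 0.2, 5.0], [-1.0, 0.4, 12.0]])
print(project_points(pts))             # in-image pixel coordinates
```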