“The top-down localisation view displays obstacles around Robobus as rectangular polygons per their bounding boxes, along with parameters like velocity, acceleration and jerk [the rate of change in vehicle acceleration over time, expressed mathematically as the first time derivative of acceleration].”

From the analytics view, engineers can switch into the IDE window, which presents a far more detail-heavy view for low-level debugging. They can toggle between telemetry flows from virtually every subsystem, stepping through each recording in fractions of a second, to isolate potential sources of incidents, errors or low confidence flagged by the self-driving algorithm. They can also edit portions of the onboard algorithms and then generate simulations (starting at a user-specified timestamp of their selected recording of fused sensor data) of how the scenario might have evolved differently with a potential fix in place. Each simulation is output as a file that multiple engineers can play back from the cloud, enabling team input towards optimising each part of every autonomy-critical algorithm.

Prediction model

The onboard prediction model also generates a large number of possible trajectories, based on predictions of how traffic and other objects around the Robobus ego-vehicle will move. From those, a ‘selector’ module within the prediction model shortlists the 12 most promising trajectories and then chooses whichever one achieves the highest evaluation score as the final output trajectory for the powertrain to execute. That score is based on anticipated safety, efficiency and comfort, the third parameter being a function of acceleration, jerk, and whether there are very elderly or disabled passengers onboard meriting extra-careful manoeuvres.

“You’ll notice we generate evaluation scores for both positioning and trajectory estimates; the former are generated by the AI-trained perception model itself, while the latter are produced by our rules-based selector module. Hence, we feel we’ve combined the best of both technologies to work together, and ensure Robobus’ final trajectories are safe enough from both intelligence- and rules-based perspectives,” Liu says.

The safety response to sensor failure modes (including sensor misalignments caused by vibration, or a leaf stuck over a sensor) typically consists of Robobus switching to a redundant sensor, reducing speed and increasing the distance maintained from the vehicle ahead. The remote monitoring technician is alerted to the issue and, if the problem is severe, Robobus automatically pulls over to a safe location (using trajectories from the prediction model), or it can even stop in its lane if pulling over is dangerous and there is space for passengers to evacuate.

If a battery malfunction occurs, standard procedure is to use some of the remaining energy to park in a safe place, while the remote monitoring centre issues an alert and an apology, asking passengers to disembark once the Robobus is parked and await a second vehicle to collect them and continue their journey (as normally happens when crewed buses break down).
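By way of illustration, the escalation just described might be structured along these lines. This is a minimal sketch only: all function names, signals, severity levels and actions are our own assumptions for readability, not code from the Robobus software.

```python
# Illustrative sketch of the failure-mode escalation described above.
# All names and severity levels are hypothetical assumptions.
from enum import Enum, auto

class Severity(Enum):
    MINOR = auto()    # e.g. slight misalignment from vibration, leaf over a lens
    SEVERE = auto()   # sensor unusable or redundancy compromised

def handle_sensor_fault(sensor_id: str, severity: Severity,
                        can_pull_over: bool, evacuation_space: bool) -> list[str]:
    """Return the ordered list of actions the vehicle would take."""
    actions = [
        f"switch to redundant sensor for {sensor_id}",
        "reduce target speed",
        "increase following distance",
        "alert remote monitoring technician",
    ]
    if severity is Severity.SEVERE:
        if can_pull_over:
            # The pull-over manoeuvre reuses trajectories from the
            # prediction model, as the article notes.
            actions.append("pull over to a safe location")
        elif evacuation_space:
            # Stopping in lane is the fallback when pulling over is
            # dangerous but passengers still have room to evacuate.
            actions.append("stop in lane")
    return actions

# Example: a severe fault where pulling over is unsafe.
print(handle_sensor_fault("front_lidar", Severity.SEVERE,
                          can_pull_over=False, evacuation_space=True))
```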
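For reference, the jerk term that appears both in the localisation view and in the comfort scoring is, as the bracketed note above states, the first time derivative of acceleration; equivalently, in terms of velocity \(v(t)\) and position \(x(t)\):

\[
j(t) \;=\; \frac{\mathrm{d}a(t)}{\mathrm{d}t} \;=\; \frac{\mathrm{d}^{2}v(t)}{\mathrm{d}t^{2}} \;=\; \frac{\mathrm{d}^{3}x(t)}{\mathrm{d}t^{3}}
\]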
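Building on that, the selector’s rules-based scoring might look roughly like the sketch below. The weights, the shortlist heuristic and every name here are hypothetical; only the shortlist size of 12 and the safety/efficiency/comfort breakdown (with comfort penalising acceleration and jerk, weighted harder for very elderly or disabled passengers) come from the article.

```python
# Hypothetical sketch of a rules-based trajectory selector; weights,
# names and the shortlist heuristic are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Trajectory:
    speeds: list[float]        # planned speed per step, m/s
    accels: list[float]        # planned acceleration per step, m/s^2
    min_obstacle_gap: float    # closest predicted approach to any obstacle, m
    progress: float            # distance covered along the route, m

def comfort_cost(traj: Trajectory, dt: float, sensitive_passengers: bool) -> float:
    """Penalise harsh acceleration and jerk, weighted harder when very
    elderly or disabled passengers are on board."""
    # Jerk approximated as a finite difference of consecutive accelerations.
    jerks = [(a2 - a1) / dt for a1, a2 in zip(traj.accels, traj.accels[1:])]
    cost = sum(a * a for a in traj.accels) + sum(j * j for j in jerks)
    return cost * (2.0 if sensitive_passengers else 1.0)

def evaluate(traj: Trajectory, dt: float, sensitive_passengers: bool) -> float:
    safety = traj.min_obstacle_gap      # larger clearance -> safer
    efficiency = traj.progress          # more route progress -> more efficient
    comfort = -comfort_cost(traj, dt, sensitive_passengers)
    # Illustrative weights; a production scorer would be tuned and far richer.
    return 1.0 * safety + 0.5 * efficiency + 0.2 * comfort

def select(candidates: list[Trajectory], dt: float,
           sensitive_passengers: bool) -> Trajectory:
    # Shortlist the 12 most promising candidates with a cheap heuristic
    # (here: route progress), then score the shortlist fully and return
    # the trajectory with the highest evaluation score.
    shortlist = sorted(candidates, key=lambda t: t.progress, reverse=True)[:12]
    return max(shortlist, key=lambda t: evaluate(t, dt, sensitive_passengers))
```

The two-stage shape (cheap shortlist, then full scoring of 12 finalists) mirrors the article’s description of picking the “most promising” candidates before choosing the highest evaluation score, though the actual shortlisting criterion is not disclosed.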
The first iterations of the trajectory prediction model were based largely on velocity data, and over time they…

Both the perception model and the prediction model use a system of evaluation scores to determine the most likely, and hence safest, estimates for position and traffic behaviour, respectively

In addition to Robobus’ main central computer, a backup is installed to take over during failure modes, typically to drive the vehicle to a safe location nearby
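The caption above mentions a backup central computer that takes over during failure modes. One common pattern for that kind of hot standby is a heartbeat watchdog; the sketch below is a minimal, purely hypothetical illustration of the idea, with assumed timings and names that are not drawn from the Robobus system.

```python
# Hypothetical heartbeat watchdog: the backup computer promotes itself
# if the primary stops signalling that it is healthy. Timings assumed.
import time

HEARTBEAT_TIMEOUT_S = 0.5  # assumed: take over after 500 ms of silence

class BackupComputer:
    def __init__(self) -> None:
        self.last_heartbeat = time.monotonic()
        self.active = False

    def on_heartbeat(self) -> None:
        # Called each time the primary computer reports it is healthy.
        self.last_heartbeat = time.monotonic()

    def tick(self) -> None:
        # Runs periodically; promotes the backup when the primary goes silent.
        if not self.active and time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S:
            self.active = True
            self.drive_to_safe_location()

    def drive_to_safe_location(self) -> None:
        # On takeover, the backup typically drives the vehicle to a safe
        # location nearby, as the caption describes.
        print("backup active: executing pull-over")

backup = BackupComputer()
backup.on_heartbeat()
backup.tick()  # no takeover while heartbeats are fresh
```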