
There is no one-size-fits-all solution, so now is a good time to look at the state of the art.

As well as the dual uncertainty about the position of the autonomous system and the surrounding landmarks, SLAM algorithms also have to cope with measurement errors inherent in devices such as wheel odometers, inertial measurement units, radars, sonars, laser rangefinders and so on. With an active range measurement device, such as a laser rangefinder, Lidar or sonar, a measurement might be caused by a beam reflecting off an obstacle, but it might also be an error caused by crosstalk in the circuits, the beam reaching its maximum measurement range, a moving obstacle, multi-path interference and so on, all of which must be allowed for.

There is also the knotty problem of data association – the assignment of observations to landmarks. In general, multiplying the number of observations by the number of landmarks gives the number of possible associations; choosing the wrong associations, though, can be disastrous. As a vehicle moves through its environment making observations, it will observe the same landmarks multiple times, so the computer must have a reliable means of telling whether it is seeing one landmark from two different angles, for example, or two different landmarks – with no map other than the one it makes as it goes along. This is hard enough in laboratory conditions, but real-world landmarks have a habit of looking different from different angles, compounding the problem.

Mathematics of uncertainty

SLAM relies on the mathematics of uncertainty pioneered in the 18th century by English clergyman Thomas Bayes, whose work was edited and published by his friend Richard Price and developed by French mathematician Pierre-Simon Laplace. Bayesian mathematics deals with probabilities, recursively refining the probability that observed effects have a particular cause. In SLAM, the unknown cause is the autonomous system’s real position in space, and the effect is an observation it makes of a landmark at a particular relative range/bearing combination at a particular time.

This echoes Bayes’ original thought experiment, in which he sought to deduce an unknown (the position of a ball on a table) from a series of observations – the relative positions of other balls rolled subsequently onto the table. Crucially, he started with the hypothesis that the original ball was equally likely to be anywhere on the table. In his thought experiment, Bayes could not see the table and so relied on an assistant to report whether each new ball landed to the left or right of the original. If all the balls landed to the left, he could be confident that the original was near the right-hand edge of the table, and vice versa. If there was an even left-right split, he could be pretty sure the first ball was in the middle. With each new report of a relative position he made a more educated guess about the position of the mystery ball, and used that as the starting point – the prior – for the next guess, using this recursive process to increase his confidence in the estimated position, which he called the posterior.

The mathematical tools that refine this process, and which all SLAM algorithms have at their heart, are known as filters. They include several varieties of Kalman filter (including information filters) and particle filters, all of which have their strengths and weaknesses. Kalman filters are used in an enormous variety of applications, particularly in the areas of guidance and control.
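As a concrete illustration of the recursive prior-to-posterior update described above, the short Python sketch below replays Bayes’ ball thought experiment numerically. The grid resolution, the number of rolls, the ball’s true position and the random seed are arbitrary choices made for the example, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Discretise one axis of the table into candidate positions for the original
# ball, expressed as a fraction of the table width.
grid = np.linspace(0.0, 1.0, 101)

# Bayes' starting hypothesis: the ball is equally likely to be anywhere.
prior = np.full_like(grid, 1.0 / len(grid))

true_position = 0.7    # unknown to the observer
n_rolls = 20           # subsequent balls rolled onto the table

posterior = prior.copy()
for _ in range(n_rolls):
    roll = rng.uniform(0.0, 1.0)
    landed_left = roll < true_position   # the assistant's report: left or right?

    # Likelihood of that report for every candidate position: if the original
    # ball sits at x, a uniformly rolled ball lands to its left with probability x.
    likelihood = grid if landed_left else (1.0 - grid)

    # Bayes' rule: posterior is proportional to likelihood times prior.
    posterior *= likelihood
    posterior /= posterior.sum()

estimate = grid[np.argmax(posterior)]
print(f"most probable position after {n_rolls} rolls: {estimate:.2f}")
```

With each report the posterior tightens around the true position. A Kalman filter carries out the same prior-to-posterior recursion in closed form when the distributions involved are Gaussian, which is what makes it such a convenient workhorse for SLAM.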
Crunching vectors

As the autonomous system moves and takes observations, the SLAM algorithm defines four key parameters for a given instant in time, which can be called ‘now’. The first is the state vector, meaning the vehicle’s location and orientation (also known as its pose). The second is the control vector, which is the command applied in the previous instant to drive the vehicle to where it is now. Third, it defines a vector that describes the position of a landmark, which is assumed not to move. Lastly, it defines an observation from the vehicle of where that same landmark is now.

It also builds up a history of vehicle locations and control inputs, the set of all landmarks and the set of all landmark observations. Although this requires a lot of number crunching, the estimated positions of all these landmarks correlate because of a common error in the estimated vehicle location.

(Image: With its ‘intelligent visual navigation’, which uses information from a laser sensor, the Roomba 980 brings SLAM technology to domestic appliances. Courtesy of iRobot)
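As a rough sketch of how those four quantities fit together in an EKF-style SLAM filter, the Python fragment below stacks the vehicle pose and a single landmark into one state vector and runs a single predict/update cycle. The motion model, range/bearing observation model, noise matrices and all numerical values are illustrative assumptions, not details taken from any system discussed in the article.

```python
import numpy as np

# State vector: vehicle pose (x, y, heading) followed by one landmark (lx, ly).
x = np.array([0.0, 0.0, 0.0, 5.0, 2.0])
P = np.diag([0.0, 0.0, 0.0, 1.0, 1.0])   # pose initially certain, landmark uncertain

Q = np.diag([0.05, 0.05, 0.01])          # assumed motion (process) noise
R = np.diag([0.1, 0.02])                 # assumed range/bearing measurement noise


def predict(x, P, u, dt=1.0):
    """Apply the control vector u = (speed, turn rate) to the vehicle pose.

    The landmark is assumed stationary, so only the first three states move.
    """
    v, w = u
    theta = x[2]
    x = x.copy()
    x[0] += v * dt * np.cos(theta)
    x[1] += v * dt * np.sin(theta)
    x[2] += w * dt

    # Jacobian of the motion model with respect to the full state.
    F = np.eye(len(x))
    F[0, 2] = -v * dt * np.sin(theta)
    F[1, 2] = v * dt * np.cos(theta)

    G = np.zeros((len(x), 3))            # injects motion noise into the pose states
    G[:3, :3] = np.eye(3)

    P = F @ P @ F.T + G @ Q @ G.T
    return x, P


def update(x, P, z):
    """Fuse a range/bearing observation z of the landmark."""
    dx, dy = x[3] - x[0], x[4] - x[1]
    q = dx**2 + dy**2
    r = np.sqrt(q)
    z_hat = np.array([r, np.arctan2(dy, dx) - x[2]])   # predicted observation

    # Jacobian of the observation model with respect to the state.
    H = np.array([
        [-dx / r, -dy / r,  0.0,  dx / r,  dy / r],
        [ dy / q, -dx / q, -1.0, -dy / q,  dx / q],
    ])

    y = z - z_hat                                      # innovation
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi        # wrap the bearing error
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P


# One drive-and-look cycle: apply a control, then fuse a landmark observation.
x, P = predict(x, P, u=(1.0, 0.1))
x, P = update(x, P, z=np.array([4.2, 0.45]))
print(np.round(x, 3))
```

The point made in the passage shows up in the covariance matrix P: as the filter runs, its off-diagonal terms grow, recording the correlation between the vehicle pose and the landmark estimates that arises from their shared dependence on the vehicle’s estimated location.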
