
75 combinations of those attributes. They include Stuart Russell, professor of computer science and engineering at the University of California at Berkeley; Martha E Pollack, professor of computer science and information at the University of Michigan; Apple co-founder Steve Wozniak; Cambridge University cosmologist Sir Martin Rees; renowned MIT computer language scientist and political activist Noam Chomsky; and many others. These are not ingénues primed to worry by the Terminator movies and panicked by a pizza-delivering quadcopter; their views carry considerable weight and cannot be easily dismissed.

What they wish to prevent is the development of autonomous weapons that select and engage targets without human intervention, citing hypothetical examples such as armed quadcopters that can search for and eliminate people who meet certain predefined criteria. However, they exclude missiles and armed UAVs for which humans make all the targeting decisions.

“AI technology has reached a point where the deployment of such systems is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms,” says the FLI’s letter.

Genie out of the bottle

The signatories are both right and wrong. They are right that such things are feasible, but wrong to suggest that they belong to the near future – capabilities very much like them have already been fielded.

A prime example here is a missile rather than a UAV but, just like the hypothetically weaponised quadcopter, it can search for and eliminate targets that meet predefined criteria. The weapon in question is the MBDA Brimstone that arms Royal Air Force Tornado strike aircraft and will soon also equip Typhoons. The latest version is known as Dual Mode Brimstone to reflect the addition of semi-active laser homing to its guidance system, but the autonomous search-and-destroy capability is inherent in its radar and has its roots deep in the Cold War.

Conceived to counter the threat of massed Soviet armour flooding across the North German Plain, Brimstone was given a millimetre-wave radar seeker able to create an accurate 3D profile of any ground vehicle that fell under its beam as it scanned for targets. Its computer would compare these profiles with radar signatures stored in its memory, select the highest-priority target in its field of view and attack. One concept of operation envisaged Brimstones being fired en masse from strike jets or helicopters in the direction of a Soviet armoured thrust.

Anti-ship missiles such as Exocet, Harpoon and Penguin can do much the same thing. While they can be locked on to an individual target selected by a human operator (as can Brimstone, incidentally), they can also be fired in the general direction of an enemy naval formation and choose the highest-value target they detect based on its radar signature, infrared signature or a combination of the two – another set of predefined criteria.

Some border surveillance and security systems already incorporate radars that can distinguish humans from animals and vehicles, and cue electro-optical sensors that can provide high-resolution images to human operators or to facial recognition software, hosted either locally or on a cloud server somewhere. Linking sensors and decision logic to weapons is a proven, fielded capability.
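Stripped of the engineering detail, the target-selection behaviour described above for Brimstone and the anti-ship missiles reduces to matching a sensed profile against a stored signature library and engaging the highest-priority match. The short Python sketch that follows illustrates that general idea only; the Signature class, the correlation matcher, the 0.9 threshold and all of the data are invented assumptions for illustration, not details of any real seeker.

# A minimal sketch of signature-based target selection. Everything
# here is an invented illustration, not any fielded implementation.
from dataclasses import dataclass

@dataclass
class Signature:
    name: str        # e.g. "main battle tank" (illustrative)
    profile: list    # stored reference radar profile (made-up numbers)
    priority: int    # higher number = higher-value target

def similarity(observed, reference):
    # Normalised correlation between two profiles, a crude stand-in
    # for whatever matching a real seeker actually performs.
    dot = sum(a * b for a, b in zip(observed, reference))
    norm = (sum(a * a for a in observed) * sum(b * b for b in reference)) ** 0.5
    return dot / norm if norm else 0.0

def select_target(detections, library, threshold=0.9):
    # Return the highest-priority detection whose profile matches a
    # stored signature above the confidence threshold, or None.
    best = None
    for det_id, profile in detections:
        for sig in library:
            score = similarity(profile, sig.profile)
            if score >= threshold:
                candidate = (sig.priority, score, det_id, sig.name)
                if best is None or candidate > best:
                    best = candidate
    return best

library = [
    Signature("main battle tank", [0.9, 0.8, 0.7, 0.9], 3),
    Signature("APC",              [0.6, 0.9, 0.5, 0.4], 2),
    Signature("soft-skin truck",  [0.3, 0.4, 0.8, 0.2], 1),
]
detections = [("det-1", [0.88, 0.79, 0.72, 0.91]),
              ("det-2", [0.31, 0.42, 0.79, 0.21])]
print(select_target(detections, library))  # picks the tank: highest priority

Note that nothing in the sketch asks whether a human has approved the engagement; the ‘predefined criteria’ do all the choosing, which is exactly the property the FLI letter objects to.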
One application of such sensor-to-weapon linking is the kind of remote weapon station (RWS) that is increasingly common on armoured vehicles and has been demonstrated on UGVs. Coupled to an acoustic or electro-optical hostile fire indicator, an RWS can be programmed to point automatically at the source of incoming rounds and present the operator with an image of the shooter centred in the crosshairs at the highest zoom setting, leaving the decision on whether to open fire to the operator. Weapon stations with these ‘slew to cue’ capabilities don’t return fire autonomously, but they could be programmed to do so, as the sketch at the end of this section suggests.

It is not difficult to imagine an autonomous system with lethal capabilities being deployed to guard an extremely sensitive facility. Sensors and associated weapons might cover a sterile area, in which the presence of any human interloper without hostile intent is deemed extremely unlikely, and an autonomous lethal response deemed justified. It is not many years since such areas would routinely have contained ‘dumb’ land mines.

Less than AI

It could legitimately be argued that these weapons have little in the way of AI, instead embodying a crude stimulus-response capability refined by a relatively simple decision aid, but it is hard to get away from the fact that, if allowed to, they can make choices among targets without a human in the immediate loop. The examples above are of relatively simple weapons – compared with AI systems under development –
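To make the slew-to-cue point concrete: the gap between the fielded behaviour described earlier and an autonomous lethal response can be as small as one branch in the control logic. The sketch below is a deliberately simplified illustration under that assumption; bearing_to, handle_hostile_fire and the autonomous_engagement flag are all hypothetical names invented here, not any real RWS interface.

# A hypothetical sketch of 'slew to cue' with and without a human in
# the loop. Names, geometry and messages are illustrative assumptions.
import math

def bearing_to(source_xy, station_xy=(0.0, 0.0)):
    # Compass bearing (degrees) from the weapon station to the shooter,
    # assuming +y is north; purely illustrative geometry.
    dx = source_xy[0] - station_xy[0]
    dy = source_xy[1] - station_xy[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

def handle_hostile_fire(detection, autonomous_engagement=False):
    # Slew to the cue and zoom in; then either wait for the operator
    # or, with one flag flipped, fire with no human in the immediate loop.
    az = bearing_to(detection["source_xy"])
    print(f"Slewing to azimuth {az:.1f} deg, shooter centred at max zoom")
    if autonomous_engagement:
        print("Engaging automatically - no human decision in the loop")
    else:
        print("Holding fire - awaiting operator authorisation")

handle_hostile_fire({"source_xy": (120.0, 85.0)})  # fielded behaviour
handle_hostile_fire({"source_xy": (120.0, 85.0)}, autonomous_engagement=True)

The sensing and slewing are identical in both calls; only the authorisation branch changes, which is the point made above about weapon stations that could be programmed to return fire autonomously.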
