
we take a multi-disciplinary approach.”

Run jointly by the Johns Hopkins Applied Physics Laboratory (APL) and the Johns Hopkins Whiting School of Engineering, the IAA works by engaging with academia, industry, governance bodies and the military to draw on expertise from different disciplines. The IAA extends this approach to uncrewed systems, an example being the partnership it formed with AUVSI in 2019 to bring together researchers, technology developers, users and regulators to address assurance issues.

A major focus of the IAA’s work, LaPointe adds, is directly helping engineers by developing assurance tools and methods that can be generalised across different types of systems and multiple domains, from uncrewed vehicle systems to medical robotics. “A big piece of that is starting to develop a comprehensive and robust systems engineering process for intelligent or learning-based autonomous systems,” she says.

‘Guardian’ tools

Ultimately, this could result in a set of standards analogous to those that make sure certified aircraft are safe, but LaPointe cautions that a lot of fundamental research must be done before such standards are possible. “The problem with learning-based systems is that, often, we can’t predict where the safe operating envelopes are going to be, as there are edge cases that we don’t always know exist,” she says.

“There are so many amazing potential benefits of autonomous systems and AI, and we want to realise all of them. At the same time, there are potential negative impacts that are either unintended consequences or the results of malicious actors manipulating the technology, so we need tools to provide guard rails.”

Those could be cyber security tools, or tools to make sure developers understand their datasets. If there is something wrong with the data, it might be because someone is ‘poisoning’ the dataset, or it could simply be that a system’s sensors aren’t working very well and are therefore providing poor data.

“There is no silver bullet, so we need a set of tools and processes to help us throughout the life cycle of the technology,” she says. “We need tools to help us better drive requirements for technology.

“That is where the human need for trust comes in, and you have to have a feedback loop between that and the requirements for how you develop a system. We also need tools for the design and development phases.”

One crucial decision to be made at the requirements stage is whether systems have to be ‘explainable’, LaPointe notes. In an explainable system, engineers can understand the reasoning that leads to decisions or predictions, while a learning-based AI system that is not explainable is essentially a ‘black box’ whose reasoning cannot be followed.

“In some cases they might have to be explainable; in others they might not. That makes a really big difference, and you have to understand that before you start to design an autonomous system. It’s an important feedback loop.”

In LaPointe’s view, there is no way to design out all the risk from an AI-enabled system, so we also need tools for use with operational systems, such as monitors and governors.
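To give a flavour of the idea, here is a minimal, hypothetical sketch in Python of the kind of runtime governor LaPointe describes. The Command fields, the speed limit and the Governor class are illustrative assumptions for this article, not details of any IAA tool: the governor simply passes a planner’s commands through while they stay inside a predefined safe operating envelope, and clamps them when they do not.

```python
# Hypothetical sketch of a runtime "governor": an independent safety
# layer between a learning-based planner and the actuators. All names
# and limits here are illustrative assumptions, not a real system.
from dataclasses import dataclass

@dataclass
class Command:
    speed_ms: float      # commanded speed in m/s
    heading_deg: float   # commanded heading in degrees

class Governor:
    def __init__(self, max_speed_ms: float):
        self.max_speed_ms = max_speed_ms
        self.override_count = 0  # interventions logged for later audit

    def filter(self, cmd: Command) -> Command:
        """Pass safe commands through unchanged; clamp unsafe ones."""
        if abs(cmd.speed_ms) <= self.max_speed_ms:
            return cmd  # inside the envelope: trust the planner
        self.override_count += 1
        clamped = max(-self.max_speed_ms,
                      min(cmd.speed_ms, self.max_speed_ms))
        return Command(speed_ms=clamped, heading_deg=cmd.heading_deg)

# Usage: the governor sits between the planner and the actuators
gov = Governor(max_speed_ms=5.0)
safe_cmd = gov.filter(Command(speed_ms=9.2, heading_deg=270.0))
print(safe_cmd.speed_ms, gov.override_count)  # -> 5.0 1
```

The appeal of such a layer is that it does not need to understand the planner’s reasoning: even when the planner is a black box, a governor this simple can be exhaustively tested, which is exactly the guard-rail role LaPointe describes.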
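The dataset-quality guard rails LaPointe mentions earlier can be sketched in the same hedged spirit. The function below is an illustrative assumption rather than an IAA tool: it uses robust statistics to flag samples that sit implausibly far from the rest of a batch, the kind of anomaly that could point to a poisoned dataset or a degraded sensor (distinguishing the two, as the article notes, takes further investigation).

```python
# Illustrative dataset "guard rail" (an assumption, not an IAA tool):
# flag outlying samples with a MAD-based robust z-score, since a plain
# mean/stdev test is itself skewed by the outliers it is hunting for.
from statistics import median

def flag_anomalies(samples: list[float], threshold: float = 3.5) -> list[int]:
    """Return indices whose robust z-score exceeds the threshold.
    The 0.6745 factor scales the median absolute deviation (MAD) to
    match a standard deviation for normally distributed data."""
    med = median(samples)
    abs_dev = [abs(x - med) for x in samples]
    mad = median(abs_dev)
    if mad == 0:  # degenerate batch: any deviation at all is suspect
        return [i for i, d in enumerate(abs_dev) if d > 0]
    return [i for i, d in enumerate(abs_dev) if 0.6745 * d / mad > threshold]

# A stuck or spoofed range sensor stands out against normal readings
readings = [9.8, 10.1, 9.9, 10.0, 250.0, 10.2]
print(flag_anomalies(readings))  # -> [4]
```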
What she calls ecosystem management tools, with much broader application, are also important, she stresses, because it is not sufficient to consider the safety of individual autonomous systems in isolation.

“We are looking to a future when there will be many different autonomous systems, including cyber-physical systems and vehicles, plus many AI-enabled decision algorithms running transportation grids and other critical infrastructure,” she says. “So we spend a lot of time thinking about how the entire ecosystem needs to evolve in concert with the individual systems so that we get to the point where autonomous systems are trustworthy contributors to society.”

Ethical AI

When any AI or autonomous system has to make decisions that affect lives, ethical issues arise. LaPointe spent much of the last 10 years of her US Navy career in various technical and policy roles in Washington DC that were concerned with introducing more autonomy and AI into the US fleet. During that time she pushed for ethics to be included in the service’s approach to the technology as early as possible, and she now says she is heartened by what she sees as widespread understanding of the importance of ethics to technology.

“For a while, there was a tendency to assume that if an algorithm is doing something it must be fair, but the fact is that there are always values built…

[Image caption: A future with large numbers of different autonomous systems, including vehicles, interacting in complex urban environments will require them to be highly reliable and trustworthy]
