Uncrewed Systems Technology | Issue 51 | Aug/Sept 2023 | Primoco One 150 | Power management | Ocius Bluebottle USV | Steel E-Motive robotaxi | UAVs insight | Xponential 2023 Part 2 | Aant Farm TPR72 | Servos | Tampa Deep Sea Barracuda AUV

into technology,” she says. “You can be intentional about it or you can be unintentional. The latter is the worst-case scenario, because you don’t even understand what you are building in.”

Different approaches to learning what it means to incorporate ethics into AI have been taken over the past five years or so, she says, resulting in the emergence of development frameworks from the US Department of Defense and many other organisations.

“What is really interesting when you dig into those frameworks is that most are not about the specific values to be built into the systems so much as they are about the engineering principles that you need to follow in order to achieve ethical outcomes,” she says.

Looking for bias

Over the past year, LaPointe and her colleagues at Johns Hopkins APL and the university itself have been looking into creating ethical AI tools, building on earlier work by the IAA. The main areas of focus are how to eliminate or mitigate bias in the systems and the underlying datasets.

“Any learning-based AI is a reflection of the data that went into it,” she says. “We have had teams looking at the training datasets going into AI algorithms so that we can understand where the biases exist. There are all kinds of bias, but in an ethics framework we are talking about undesirable types of bias that lead to a lack of fairness.

“So we have teams looking at the types of bias that we don’t want the systems to reflect, at how to integrate architectures within the data to show whether the dataset is fair, or ‘balanced’, and at how to create balanced datasets proactively and use them to train algorithms.
Our team has applied this to images to identify facial recognition biases, and to health data to see if there is age or gender bias in healthcare decisions, among many other things. But it is also a generalisable tool that you can apply to other types of application, even vehicles.”

It is unlikely that an AI running an underwater vehicle, for example, would have much opportunity to exhibit sexism or racism, intentionally or unintentionally, but a facial recognition algorithm might.

“Concerns about what is going to be ethical in use can vary widely,” LaPointe says. “When discussing vehicle systems, we often hear people going straight to the ‘trolley problem’ of how the system is going to decide what to run into. But way before you get to that, you need to make sure you have a system that is robust, safe and reliable, and can perceive the environment well enough to get to the point of making ethical decisions. There is a whole layer of technical systems engineering that we need to do first.”

One important part of that is assessing the uncertainty of how well an AI-based autonomous system is working. This is a systems engineering problem that has an impact on the ethics of operating and deploying such systems, she says.

“When you’re talking about systems that could be life-critical, such as moving vehicles, you need to know what your safe operating envelopes are. It is really important to be able to assess, in real time and in situ, how confident that AI system is in its assessment of its situation, so we have done a lot of work on how better to calibrate self-assessments of uncertainty in AI.
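The dataset-balance checking that LaPointe describes can be illustrated with a minimal sketch. The `balance_report` helper below is hypothetical, not APL's actual tooling; it simply compares the rate of a positive outcome label across demographic groups, the kind of gap that can propagate into a trained model as unfair bias.

```python
from collections import Counter

def balance_report(labels, groups):
    """Report per-group positive rates for a binary-labelled dataset.

    A large gap between groups suggests the dataset is not 'balanced'
    in the sense discussed above, and a model trained on it may
    reproduce that imbalance.
    """
    totals = Counter(groups)                                   # samples per group
    positives = Counter(g for g, y in zip(groups, labels) if y == 1)
    rates = {g: positives[g] / totals[g] for g in totals}      # positive rate per group
    disparity = max(rates.values()) - min(rates.values())      # worst-case gap
    return rates, disparity

# Toy data: outcome label per sample, with a demographic group tag
labels = [1, 0, 1, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
rates, gap = balance_report(labels, groups)
# rates -> {"a": 0.75, "b": 0.25}, gap -> 0.5
```

In practice, teams doing this kind of audit would look at many protected attributes and fairness metrics at once; libraries such as Fairlearn formalise this idea as demographic-parity difference.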
There are a lot of different pieces that go into that, including data uncertainty, model

[Caption: Discussion of ethics and autonomous cars often focuses on the ‘trolley problem’ – how the system is going to decide what to run into – but there is a lot of systems engineering to do to ensure they reliably sense the environment before they get to such decisions]

[Caption: The Range Adversarial Planning Tool is one of a large and growing range of development tools that are essential to developing assured autonomy]
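Calibrating an AI system's self-assessment of uncertainty, as described above, is commonly measured with expected calibration error (ECE): predictions are binned by stated confidence, and each bin's average confidence is compared with its actual accuracy. The sketch below is a generic illustration of that standard metric, not the specific method LaPointe's team uses.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected calibration error for a set of predictions.

    confidences: the model's stated confidence in each prediction (0..1)
    correct:     1 if the prediction was right, 0 if wrong
    A well-calibrated system scores near 0: when it says '90% confident',
    it is right about 90% of the time.
    """
    bins = [[] for _ in range(n_bins)]
    for c, ok in zip(confidences, correct):
        idx = min(int(c * n_bins), n_bins - 1)   # clamp c == 1.0 into last bin
        bins[idx].append((c, ok))

    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)  # what the model claimed
        accuracy = sum(ok for _, ok in b) / len(b)  # what actually happened
        ece += (len(b) / n) * abs(avg_conf - accuracy)
    return ece

# Toy case: the model claims 90% confidence but is right only 3 times in 4
ece = expected_calibration_error([0.9, 0.9, 0.9, 0.9], [1, 1, 1, 0])
```

For a safety-critical vehicle, a metric like this can be tracked in situ to flag when the system's confidence has drifted away from its real performance, i.e. when it has left its safe operating envelope.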