Uncrewed Systems Technology | Issue 51 | August/September 2023

Platform one

Researchers in the US are developing ways to analyse reflections for more effective vision systems (writes Nick Flaherty).

Researchers from MIT and Rice University have used machine learning (ML) algorithms to create a computer vision technique that uses reflections to image the world for driverless cars and UAVs.

The ORCa (Objects as Radiance-Field Cameras) technique uses images of an object taken from different angles, converting its surface into a virtual sensor that captures reflections. It maps these reflections in a way that enables it to estimate depth in a scene and capture novel views that would only be visible from the object's perspective.

The technique can be used to see around corners or beyond objects that block the observer's view, which is particularly useful for autonomous vehicles. For instance, it could enable a self-driving car to use reflections from objects it passes, such as lamp posts or buildings, to see around a parked truck.

ORCa works in three steps. First, pictures of an object are taken from many vantage points, capturing multiple reflections from it. Then, for each image from the real camera, ML is used to convert the object's surface into a virtual sensor that captures the light and reflections striking each virtual pixel on that surface. Finally, the system uses these virtual pixels to model the 3D environment from the object's point of view.

Any distortions in the image depend on the shape of the object and the environment it reflects, and information about both may be incomplete. In addition, a reflective object may have its own colour and texture, which mixes with the reflections. The reflections are also two-dimensional projections of a 3D world, which makes it hard to judge depth in reflected scenes.
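The virtual-sensor idea in the second step rests on standard reflection geometry: each point on a shiny surface redirects an incoming camera ray out into the environment, so that point behaves like a pixel looking along the reflected ray. The sketch below illustrates only this underlying geometry, not ORCa itself; the function names and the unit-sphere object are assumptions made for the example.

```python
import numpy as np

def reflect(d, n):
    """Reflect an incoming unit ray direction d about a unit surface normal n."""
    return d - 2.0 * np.dot(d, n) * n

def virtual_pixel_rays(camera_pos, surface_points, normals):
    """For each surface point (a 'virtual pixel'), return the direction of the
    environment ray it sees: the camera ray reflected off the surface."""
    rays = []
    for p, n in zip(surface_points, normals):
        d = p - camera_pos
        d = d / np.linalg.norm(d)   # unit direction from camera to surface point
        rays.append(reflect(d, n))
    return np.array(rays)

# Toy object: a few points on a unit sphere centred at the origin,
# where the outward normal at each point equals the point itself.
theta = np.linspace(0.2, 1.0, 5)
points = np.stack([np.sin(theta), np.zeros_like(theta), np.cos(theta)], axis=1)
normals = points
camera = np.array([0.0, 0.0, 3.0])

rays = virtual_pixel_rays(camera, points, normals)
```

Each row of `rays` is the viewing direction of one virtual pixel; sweeping the real camera around the object, as in ORCa's first step, gives each virtual pixel many such samples of the surrounding scene.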
“We have shown that any surface can be converted into a sensor with this formulation that converts objects into virtual pixels and virtual sensors. This can be applied in many different areas,” said Kushagra Tiwary, a graduate student in the Camera Culture Group at MIT's Media Lab.

“In real life, exploiting these reflections is not as easy as just pushing an enhance button,” said Akshat Dave, a graduate student at Rice University, who worked on the project. “Getting useful information out of these reflections is pretty hard, because the reflections give us a distorted view of the world.”

Tiwary added, “You have to make sure the mapping works and is physically accurate, so it is based on how light travels and how it interacts with the environment.”

Using the proof of concept, the researchers want to apply the technique to UAV imaging. ORCa could use faint reflections from objects a UAV flies over to reconstruct a scene from the ground. They also want to enhance ORCa so it can use other cues, such as shadows, to reconstruct hidden information, or combine reflections from two objects to image new parts of a scene.

(Image caption: The vision technique uses machine learning to map an object's reflections to capture depth in a scene)