Unmanned Systems Technology 027 l Hummingbird XRP l Gimbals l UAVs insight l AUVSI report part 2 l O’Neill Power Systems NorEaster l Kratos Defense ATMA l Performance Monitoring l Kongsberg Maritime Sounder

PS | Making robots from found objects

Inspired by artists who create works from objects they find lying around, a team of engineers from Japan has built robots using irregularly shaped tree branches connected by hinge-like servo motors, and employed deep reinforcement learning to teach them to walk (writes Peter Donaldson).

For now, the idea seems to be more about exploring the art of the possible than about practical applications, but it raises the prospect of creating useful robots or vehicles at short notice from a few servos, a computer, a power source and either carefully selected found objects or very basic kits of structural parts.

One example consists of three branches of different lengths and diameters, the longest and thickest in the middle, joined by a pair of servos to one Y-shaped fork and one short, curved stick.

In their paper, "Improvised Robotic Design with Found Objects", the team says, "When the robot is trained towards the goal of efficient locomotion, these parts adopt new meaning – hopping legs, dragging arms, spinning hips or as yet unnamed creative mechanisms of propulsion. These learned strategies, and thus the meanings we might assign to such found object parts, are a product of optimisation and not known prior to learning."

Initially, the robots were created virtually and learned to move in a simulated environment. Once the team had selected the tree branches, they laser-scanned and weighed them to create 3D computer models, then simplified the geometries to make simulation easier. The servos also had to be modelled, as did the friction between the branches and the ground.

The simulation allowed the model to explore a wide range of movements, while the reinforcement learning applied 'rewards' to those deemed helpful to effective locomotion. In the simulator, multiple software agents were able to learn, each taking a few hours to finish the process.
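A reward of this general shape – credit for distance covered, penalties for unproductive motion – can be sketched as follows. This is a minimal illustration only, not the team's actual reward function; the function name, penalty terms and weights are assumptions for the sake of the example.

```python
import numpy as np

def shaped_reward(prev_pos, pos, angular_vel, slip,
                  penalty_weights=(0.5, 0.2)):
    """Hypothetical locomotion reward: pay out for progress across
    the floor, deduct for spinning on the spot and for sliding."""
    w_spin, w_slip = penalty_weights
    progress = float(np.linalg.norm(pos - prev_pos))  # distance moved this step
    spin_penalty = w_spin * abs(angular_vel)          # discourage spinning in place
    slip_penalty = w_slip * abs(slip)                 # discourage drifting/sliding
    return progress - spin_penalty - slip_penalty
```

An agent that simply wiggles or spins accumulates penalties without progress, so under a reward like this the optimiser is pushed towards gaits that actually carry the robot away from its starting point.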
Greater rewards were given for movements that carried the robot far from its starting point, while low rewards were given for any that produced undesirable behaviours such as spinning on the spot, wiggling, flipping, drifting, sliding or anything that might cause stress or wear in a real robot.

An agent that developed a propulsive motion with the Y-shaped branch was the most successful, so its joint values, such as angles and rates of motion, were copied from the simulation to the physical robot's servo motors. Several iterations were made between the simulation and the real world to minimise the differences between them in the robot's behaviour, with the friction coefficient of the floor and the scale of the rewards having to be revised several times.

This and other issues could have been solved by running the learning process in the real world; the team did try that, but the learning took too long.

If they were to do it again, they conclude, they would make the robot bigger, so that they could fit cameras to it to observe its motion more closely, and so that it could carry its own controller and power source, freeing it from its tangle-prone cable. However, they proved that a few core hardware components, machine learning software and a handful of sticks can become a functioning robot.

Now, here's a thing: "The study raises the prospect of creating useful vehicles at short notice from a few servos and so on, and basic kits of parts."

August/September 2019 | Unmanned Systems Technology