Unmanned Systems Technology 010 | nuTonomy driverless taxi | Embedded computing | HFE International marine powertrain | Space vehicles | Performance monitoring | Commercial UAV Show Asia report

October/November 2016 | Unmanned Systems Technology

PS | Social robots

Imagine a humanoid robot that doesn't have a particular purpose or application but could be taught to do almost any task (writes Stewart Mitchell). The concept may not be far from reality with the latest iteration of artificial intelligence (AI) technology, which has recently surfaced as a spin-off from research carried out at the University of Luxembourg and uses reasoning from discrete and synchronous observation.

The concept is that a humanoid robot equipped with this technology can interact with a human user and learn how to perform tasks using knowledge gained from its environment. That knowledge would be stored in the robot's memory as 'events' representing observations made by its perception components, which include cameras, an array of sensors and instructions from a human. These instructions and observations would be represented as separate events and maintained independently in the robot's memory, to ensure continuity with each task and to make querying and reasoning part of its infrastructure.

The management of AI knowledge and the integration of reasoning have been the focus of many logic-based AI software packages in the past, but this latest generation of AI may have the capability to offer truly natural human interaction, as it would have what is considered a natural ability to reason. Scientists technically define a 'mind' as encompassing the things we consciously involve ourselves in, including reason, memory, and learning or discovery independent of emotion, so technically a robot equipped with this software would have a mind of its own.

Building AI to store knowledge from discrete experiences is not straightforward, however, as every event contains different types of information that have to be represented and handled differently, just as they are in the human brain.
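The event-based memory described above can be sketched as a minimal data structure. This is a hypothetical illustration only, not the Luxembourg team's actual implementation: every name and field here is an assumption. The idea it shows is that each observation or instruction is stored as an independent, timestamped event tagged with its information type, so that querying and reasoning can run over the shared memory without merging the different types.

```python
from dataclasses import dataclass
from typing import Any
import itertools
import time

@dataclass(frozen=True)
class Event:
    """One discrete observation or instruction in the robot's memory."""
    kind: str         # information type, e.g. "object", "face", "instruction"
    payload: Any      # the recognised content: a label, coordinates, text...
    timestamp: float  # when the perception component emitted the event
    uid: int          # unique id, so each event stays independent in memory

class EventMemory:
    """Append-only store; events are kept separately by type so tasks can
    be queried and reasoned over without collapsing information types."""
    def __init__(self):
        self._events = []
        self._ids = itertools.count()

    def record(self, kind, payload, timestamp=None):
        ev = Event(kind, payload,
                   timestamp if timestamp is not None else time.time(),
                   next(self._ids))
        self._events.append(ev)
        return ev

    def query(self, kind):
        """Reasoning hook: all events of one information type, in order."""
        return [ev for ev in self._events if ev.kind == kind]

# Usage: perception components push events; reasoning queries them by type.
mem = EventMemory()
mem.record("face", "Alice")
mem.record("instruction", "fetch the red cup")
mem.record("object", {"label": "red cup", "location": "table"})
print([ev.payload for ev in mem.query("instruction")])  # ['fetch the red cup']
```

Keeping events immutable and individually identified is one plausible way to get the "maintained independently" property the article describes: nothing is overwritten, so the history of a task remains intact for later cross-referencing.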
Proper representation of the environment and maintenance of the robot's knowledge are crucial to enabling machines with cognitive capabilities to carry out knowledge-intensive tasks and interact with humans. The knowledge collected by the robot's cameras and sensors would therefore have to be output asynchronously to its memory and represented as various information types, such as recognised objects, faces and locations. The data collected could then be cross-referenced with results in its knowledge base, which is built up over time.

In addition, an on-board simulation interface would enable various gestures and body movements to be created for the robot by moving it through and recording a sequence of actions. These would be exported to the robot's main interface as a gallery of gestures, which the robot could then repeat accurately when called upon to do so.

At the moment, the concept stores the knowledge base on the robot itself. However, it could be stored in the cloud, allowing it to be cross-referenced with other robot systems and offloading some of the computing-intensive functions from the unit itself. That would also mean the task history of all the robots could be stored, and reaction speed could be much increased if an instruction in a particular environment were received more than once.

The technology is still in its infancy, but it may not be long before it is implemented in humanoid robots that will be able to learn from and interact with humans and integrate into our society. Only time will tell.
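The gesture gallery described earlier, where a recorded sequence of actions is saved and later replayed exactly, can be sketched along these lines. All class, method and joint names here are assumptions standing in for whatever the robot's simulation interface actually exposes; the point is only the record-save-replay cycle.

```python
class GestureGallery:
    """Record named sequences of joint poses and replay them on demand."""
    def __init__(self):
        self._gallery = {}    # gesture name -> ordered list of pose dicts
        self._recording = None

    def start_recording(self):
        self._recording = []

    def capture_pose(self, pose):
        """Snapshot one body position, e.g. {"elbow": 45.0, "wrist": 10.0}."""
        self._recording.append(dict(pose))  # copy, so later edits don't leak in

    def save(self, name):
        """Export the recorded sequence to the gallery under a name."""
        self._gallery[name] = self._recording
        self._recording = None

    def replay(self, name, actuator):
        """Send each recorded pose, in order, to the robot's actuator."""
        for pose in self._gallery[name]:
            actuator(pose)

# Usage: record a two-step wave, then replay it through a stub actuator.
gallery = GestureGallery()
gallery.start_recording()
gallery.capture_pose({"elbow": 90.0, "wrist": -20.0})
gallery.capture_pose({"elbow": 90.0, "wrist": 20.0})
gallery.save("wave")

executed = []
gallery.replay("wave", executed.append)
print(len(executed))  # 2
```

Because the gallery stores poses rather than raw motor commands, the same recording could in principle be replayed on request, or shared via a cloud knowledge base with other robots, as the article suggests.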
