
A new image sensor processor handles 1.5 Gpixel/s, taking data from all the cameras and sensors around the car in full high dynamic range, while a new video processor encodes and decodes the stream from each camera, recording the video data for ‘black box’ applications.

“To have functional safety, you need safety of the intended functionality, called SOTIF,” said Huang. “Whatever you decide is the intended outcome, you have to somehow validate that you’re designing something according to that.

“It also has to be resilient to hard failures, hard faults. If a wire were to break or a solder joint wore out, for example, it has to be resilient to that.

“It has to be resilient to soft failures as well, which could be a noise glitch, the temperature’s too high or the memory forgot a bit.

“The architecture has to be able to deal with these three fundamental failure types: functional failure, hard failure and soft failure. In the final analysis, what you’re looking for is the ability to have redundancy and diversity. Everything you do has to have back-ups.

“As a result, we’re going to design future cars the way people design aircraft,” he said.

The first step is for the Nvidia Drive stack to be the world’s first top-to-bottom functionally safe drive stack, compliant with the ISO 26262 safety-critical design process. That has involved using the ASIL-D-certified QNX real-time operating system from BlackBerry, and time-triggered real-time networking technology from TTTech.

The company is also using simulation on a supercomputer to test its hardware and software. “We created a virtual reality where we would have virtual cars driving in virtual cities,” Huang explained. “Our AI stack, Drive, will be in all these VR cars. We’re going to have thousands of parallel universes, so we can test a lot more miles in VR than we could physically.”

Huang added that the company has 320 developers working with the platform, from tier one OEMs to start-ups, mapping companies and research organisations. These include Uber, whose fleet of self-driving Volvo taxis uses the technology, and start-up Aurora, whose systems are being used by Volkswagen and Hyundai.

The Toyota Research Institute (TRI) in the US demonstrated an automated driving research vehicle at the show. Called Platform 3.0, it is built on a Lexus LS 600hL and is focused on sensing and imaging technologies, and on packaging them in a way that is easy to reproduce for large fleets of cars.

A Luminar Lidar provides a 200 m range with 360° coverage around the car. That comes from four high-resolution Lidar scanning heads that precisely detect objects in the environment, including difficult-to-see dark objects.

Shorter-range Lidar sensors are positioned low down on all four sides of the vehicle – one in each front quarter-panel and one each on the front and rear bumpers. These can detect low-level and smaller objects near the car, such as children and roadway debris.

TRI worked with Calty Design Research in Michigan and engineers at Toyota Motor North America Research and Development to conceal the sensors in the body of the car. They created a new weather- and temperature-proof rooftop panel, using available space in the sunroof compartment to minimise the overall height. The electronics infrastructure and wiring sit in a small box in the boot.
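As a rough sense-check of the 1.5 Gpixel/s figure quoted above, the short Python sketch below estimates how many cameras such an image sensor processor could serve. The per-camera resolution and frame rate are illustrative assumptions for the calculation, not Nvidia specifications.

```python
# Back-of-the-envelope check on the 1.5 Gpixel/s figure quoted in the article.
# Camera resolution and frame rate here are illustrative assumptions,
# not Nvidia specifications.

CAMERA_MEGAPIXELS = 2.0      # assumed per-camera resolution (Mpixel)
FRAME_RATE_HZ = 30           # assumed capture rate per camera
ISP_BUDGET_GPIXELS = 1.5     # processor throughput quoted in the article

pixels_per_camera = CAMERA_MEGAPIXELS * 1e6 * FRAME_RATE_HZ
max_cameras = (ISP_BUDGET_GPIXELS * 1e9) / pixels_per_camera

print(f"Each camera produces {pixels_per_camera / 1e6:.0f} Mpixel/s")
print(f"A 1.5 Gpixel/s ISP could serve about {max_cameras:.0f} such cameras")
# -> Each camera produces 60 Mpixel/s
# -> A 1.5 Gpixel/s ISP could serve about 25 such cameras
```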
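Huang’s point about redundancy and diversity echoes the voting patterns long used in avionics. The sketch below shows a generic triple-modular-redundancy (TMR) voter; it is a minimal illustration of the principle, not Nvidia’s implementation, and the three diverse braking-distance estimators are entirely hypothetical.

```python
# Minimal sketch of "redundancy and diversity": the same computation runs
# on independent units and a majority vote masks a single soft fault.
# Generic TMR pattern, not Nvidia's actual implementation.

from collections import Counter

def tmr_vote(results):
    """Return the majority value from redundant computations.

    Raises if no two units agree, i.e. the fault cannot be masked
    and the system must fall back to a safe state.
    """
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("No majority: escalate to safe state")
    return value

# Diversity: three independently written estimates of the same quantity
# (hypothetical braking-distance formulas, illustrative only).
def unit_a(speed): return speed * speed / 20.0
def unit_b(speed): return (speed / 10.0) ** 2 * 5.0
def unit_c(speed): return 0.05 * speed ** 2

speed_kph = 60.0
outputs = [round(u(speed_kph), 3) for u in (unit_a, unit_b, unit_c)]
print(tmr_vote(outputs))   # 180.0, even if one unit had flipped a bit
```

The diversity matters as much as the redundancy: three copies of the same code would all fail on the same functional defect, whereas independently implemented units are unlikely to share one.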
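To put the Luminar unit’s quoted 200 m range and 360° coverage in perspective, the following sketch works out the spacing between adjacent returns at full range. The 0.1° angular resolution is an assumed example value, not a published Luminar figure.

```python
# Rough geometry for the long-range Lidar on TRI's Platform 3.0: what does
# 360-degree coverage at 200 m mean for point spacing? The angular
# resolution used here is an assumed example value, not a Luminar spec.

import math

RANGE_M = 200.0              # maximum range quoted in the article
ANGULAR_RES_DEG = 0.1        # assumed horizontal resolution

# Arc length between adjacent returns at full range.
spacing_m = RANGE_M * math.radians(ANGULAR_RES_DEG)
points_per_sweep = 360.0 / ANGULAR_RES_DEG

print(f"At {RANGE_M:.0f} m, adjacent points are {spacing_m:.2f} m apart")
print(f"A full 360-degree sweep yields {points_per_sweep:.0f} points per line")
# -> At 200 m, adjacent points are 0.35 m apart
# -> A full 360-degree sweep yields 3600 points per line
```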
The Pegasus platform uses two of the world’s most complex chips, and is aimed at ASIL-D, ISO 26262 designs

Toyota Research Institute’s Platform 3.0 vehicle has been developed to test out Lidar and other sensors

RkJQdWJsaXNoZXIy MjI2Mzk4