Research Article | Computer Vision

Learning sensorimotor control with neuromorphic sensors: Toward hyperdimensional active perception


Science Robotics, 15 May 2019: Vol. 4, Issue 30, eaaw6736
DOI: 10.1126/scirobotics.aaw6736


Abstract

The hallmark of modern robotics is the ability to directly fuse the platform’s perception with its motor ability, a concept often referred to as “active perception.” Nevertheless, action and perception are often kept in separate spaces, a consequence of traditional vision being frame based and existing only in the moment, whereas motion is a continuous entity. This gap is bridged by the dynamic vision sensor (DVS), a neuromorphic camera that senses motion directly. We propose a method of encoding actions and perceptions together into a single space that is meaningful, semantically informed, and consistent, using hyperdimensional binary vectors (HBVs). We used the DVS for visual perception and showed that the visual component can be bound with the system velocity to enable dynamic world perception, which creates an opportunity for real-time navigation and obstacle avoidance. Actions performed by an agent are bound directly to the perceptions experienced, forming the agent’s own “memory.” Furthermore, because HBVs can encode entire histories of actions and perceptions (from atomic units to arbitrary sequences) as constant-sized vectors, autoassociative memory was combined with deep learning paradigms for control. We demonstrate these properties on a quadcopter drone ego-motion inference task and the MVSEC (multivehicle stereo event camera) dataset.
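As a rough illustration of the HBV operations the abstract refers to, the sketch below implements binding by element-wise XOR and bundling by bit-wise majority vote over random binary hypervectors. The dimensionality, the variable names, and the use of NumPy are assumptions made for illustration; this is not the authors’ implementation, only a minimal demonstration of how a perception vector bound with a velocity vector can later be queried and how several records collapse into one constant-sized memory vector.

```python
import numpy as np

D = 8192  # hypervector dimensionality (illustrative choice, not from the paper)
rng = np.random.default_rng(0)

def random_hbv():
    """Random dense binary hypervector."""
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bind(a, b):
    """Binding via element-wise XOR: associates two vectors; the result is dissimilar to both."""
    return np.bitwise_xor(a, b)

def bundle(vectors):
    """Bundling via bit-wise majority vote: superimposes items; the result stays similar to each."""
    counts = np.sum(vectors, axis=0)
    return (2 * counts > len(vectors)).astype(np.uint8)

def similarity(a, b):
    """Normalized Hamming similarity in [0, 1]; about 0.5 for unrelated hypervectors."""
    return 1.0 - float(np.mean(a != b))

# Hypothetical atomic symbols: a perception vector (e.g., from DVS events) and an ego-velocity vector.
perception = random_hbv()
velocity = random_hbv()

# Bind perception with velocity into a single action-perception record.
record = bind(perception, velocity)

# XOR is its own inverse, so binding the record with the known velocity recovers the perception.
recovered = bind(record, velocity)
assert similarity(recovered, perception) == 1.0

# Bundle several records (odd count avoids majority ties) into one constant-sized "memory" vector.
memory = bundle([bind(random_hbv(), random_hbv()) for _ in range(4)] + [record])
print(similarity(memory, record))        # noticeably above 0.5: the record is still retrievable
print(similarity(memory, random_hbv()))  # around 0.5: unrelated vectors stay near chance
```

The key property exercised here is that binding is invertible while bundling is lossy but similarity-preserving, which is what allows a single constant-sized vector to stand in for an arbitrarily long history of action-perception pairs.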
