Contents
Vol 4, Issue 30
Research Articles
- Magnetically actuated microrobots as a platform for stem cell transplantation
Magnetic microrobots developed for 3D culture and precise delivery of cells were successfully controlled in various environments.
- Shared control–based bimanual robot manipulation
A bimanual shared-control method enables novice users to control a robot in real time to complete complex manipulation tasks.
Special Section on Computer Vision
Focus
- Does computer vision matter for action?
Controlled experiments indicate that explicit intermediate representations improve an agent's ability to act.
- Computer vision and machine learning in science fiction
Science fiction has a cautionary view of computer vision and machine learning.
- Toward principled regularization of deep networks—From weight decay to feature contraction
Deep networks for classification can be improved by training with regularization that exploits intrinsic class similarities.
Research Articles
- Efficient nonparametric belief propagation for pose estimation and manipulation of articulated objects
A “pull” message-passing algorithm enables efficient nonparametric belief propagation for pose estimation of articulated objects.
- Emergence of exploratory look-around behaviors through active observation completion
A robotic agent learns how to look around novel environments intelligently by directing the camera to best complete its observations.
- Learning sensorimotor control with neuromorphic sensors: Toward hyperdimensional active perception
The theory of hyperdimensional computing facilitates the integration of action and perception through a neuromorphic sensor.
About The Cover

ONLINE COVER Developing a Good Eye. Computer vision and robotics share the goal of creating systems that can understand their environments and interact with nearby objects. These systems often learn from human-selected data, such as photographs. Ideally, robotic agents would visually scan a scene and then autonomously identify important areas (such as a door frame or table edges). Ramakrishnan et al. used reinforcement learning to train an agent to automatically identify the parts of an image that allowed it to complete the rest. The authors then added a "sidekick" policy that supplied additional data from partial views taken at different locations. The agent learned exploration behaviors that could be applied to new visual tasks. [CREDIT: SANTHOSH RAMAKRISHNAN/UNIVERSITY OF TEXAS (ROBOT: KIRILL MAKAROV/DREAMSTIME.COM)]