Research Article | Artificial Intelligence

Deep learning can accelerate grasp-optimized motion planning


Science Robotics  18 Nov 2020:
Vol. 5, Issue 48, eabd7710
DOI: 10.1126/scirobotics.abd7710
  • Fig. 1 Grasp-optimized motion planning in action.

    The proposed motion planner computes a time- and jerk-optimized motion for pick-and-place operations using a combination of deep learning and optimization. Time optimization makes the motions fast (sub-second). Jerk (change in acceleration) optimization avoids overshooting and reduces wear over long-term repeated operation. For a given pair of start and end robot configurations, deep learning rapidly computes an approximation of the optimal motion that may violate motion constraints (e.g., colliding with a bin or exceeding joint limits). The motion planner then feeds the approximation to an optimization process that minimizes jerk and repairs the constraint violations. Using the deep-learning-based approximation speeds up computation by 300×.
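    For readers who want the objective in symbols, one plausible discrete-time formulation consistent with this description (a sketch, not the paper's exact formulation) optimizes joint positions q, velocities v, accelerations a, and jerks j at waypoints t = 0, ..., H with a fixed timestep t_step:

```latex
\min_{q,\,v,\,a,\,j}\; \sum_{t=0}^{H-1} \lVert j_t \rVert_2^2
\quad \text{s.t.}\quad
q_{t+1} = q_t + v_t\,t_{\mathrm{step}} + \tfrac{1}{2} a_t\,t_{\mathrm{step}}^2 + \tfrac{1}{6} j_t\,t_{\mathrm{step}}^3,\qquad
v_{t+1} = v_t + a_t\,t_{\mathrm{step}} + \tfrac{1}{2} j_t\,t_{\mathrm{step}}^2,
```

    subject additionally to joint, velocity, acceleration, and jerk limits, obstacle-avoidance constraints, and the fixed start and end configurations; time optimality comes from separately minimizing the horizon H.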

  • Fig. 2 Grasp-optimized motion planning degrees of freedom.

    The optimized motion planning allows degrees of freedom to be added to the pick and/or place frames. In (A), grasp analysis produces a top-down grasp. Because the analysis is based on two contact points, the motion planner allows rotation about the axis through the grasp contact points, shown as ±60° rotations in (B) and (C). Similarly, reversing the contact points, visible in (D) as a different arm pose, remains valid according to grasp analysis. DJ-GOMP computes optimal rotations for the pick and place frames that minimize the time and jerk of the motion; a sketch of how such candidate frames can be enumerated follows below.
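    As an illustrative sketch (not the paper's code) of these degrees of freedom, the following enumerates candidate grasp frames by rotating about the contact-point axis and by reversing the contact order; the approach-axis convention (local z), angle range, and sample count are assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def candidate_grasp_frames(T_grasp, contact_axis, max_angle_deg=60.0, steps=7):
    """Yield 4x4 pick/place frames allowed by the grasp's degrees of freedom.

    T_grasp      -- nominal 4x4 grasp pose from grasp analysis
    contact_axis -- unit vector through the two contact points (world frame)
    """
    p = T_grasp[:3, 3]                        # grasp center between the contacts
    for reverse in (False, True):
        base = T_grasp.copy()
        if reverse:
            # Swapping which finger touches which contact corresponds to a 180°
            # flip about the gripper approach axis (assumed: local z axis),
            # which typically results in a different arm pose, as in Fig. 2D.
            approach = base[:3, 2]
            base[:3, :3] = R.from_rotvec(np.pi * approach).as_matrix() @ base[:3, :3]
        for angle in np.linspace(-max_angle_deg, max_angle_deg, steps):
            # Rotating about the contact axis leaves both contact points fixed,
            # so the grasp remains valid (Fig. 2, B and C).
            rot = R.from_rotvec(np.deg2rad(angle) * contact_axis).as_matrix()
            T = np.eye(4)
            T[:3, :3] = rot @ base[:3, :3]
            T[:3, 3] = p
            yield T
```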

  • Fig. 3 A deep neural network architecture for grasp optimized motion planning.

    The input is the start and goal grasp frames (A). Each “FDE” block (B) sequences a fully connected (FC) layer (C), a dropout layer (D), and an exponential linear unit (ELU) layer (E). The output (F) is a trajectory τH from the start frame to the goal frame for each trajectory horizon H (number of time steps) supported by the network. A separate network uses one-hot encoding to predict which of the output trajectories is the shortest valid trajectory.
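    A rough PyTorch sketch of this architecture (not the authors' released implementation) might look like the following; the layer widths, pose encoding (position plus quaternion per frame), and set of supported horizons are assumptions for illustration.

```python
import torch.nn as nn

def fde_block(in_dim, out_dim, p_drop=0.1):
    # One "FDE" block from Fig. 3: fully connected (FC) layer, dropout, ELU.
    return nn.Sequential(nn.Linear(in_dim, out_dim), nn.Dropout(p_drop), nn.ELU())

class TrajectoryNet(nn.Module):
    # Maps the start and goal grasp frames to one candidate trajectory per
    # supported horizon H.
    def __init__(self, frame_dim=7, n_joints=6, horizons=(16, 24, 32), hidden=256):
        super().__init__()
        self.n_joints, self.horizons = n_joints, horizons
        self.backbone = nn.Sequential(
            fde_block(2 * frame_dim, hidden),   # start frame + goal frame
            fde_block(hidden, hidden),
            fde_block(hidden, hidden),
        )
        self.heads = nn.ModuleList([nn.Linear(hidden, H * n_joints) for H in horizons])

    def forward(self, start_goal):
        z = self.backbone(start_goal)
        return [h(z).view(-1, H, self.n_joints) for h, H in zip(self.heads, self.horizons)]

class HorizonNet(nn.Module):
    # Separate network that predicts (one-hot, trained with cross-entropy) which
    # of the output trajectories is the shortest valid one.
    def __init__(self, frame_dim=7, n_horizons=3, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            fde_block(2 * frame_dim, hidden),
            fde_block(hidden, hidden),
            nn.Linear(hidden, n_horizons),
        )

    def forward(self, start_goal):
        return self.net(start_goal)
```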

  • Fig. 4 Physical experiment executing jerk-limited motion computed by DJ-GOMP on a UR5.

    The motion starts by picking an object from the right bin (A), moves over the divider (B to D), and ends after placing the object in the left bin (E). Without the jerk limits, the motion takes 448 ms but results in a high jerk at the beginning and end of the motion, which, in this case, causes the UR5 robot to overshoot its end frame by a few millimeters. With jerk limits, the motion takes 544 ms, reduces wear, and does not overshoot the end frame.

  • Fig. 5 Compute time distribution for 1000 random motion plans.

    In these plots, the x axis shows total compute time in seconds for a single optimized trajectory. Plot (A) extends to 90 s, whereas plots (B) and (C) extend to 1 s. The y axis shows the distribution of compute times required. The full optimization process without the deep-learning prediction, shown in the histogram in (A), requires orders of magnitude more time to compute. Using a deep network to predict the optimal time horizon for a trajectory, but not warm-starting the trajectory (B), improves compute time, although with increased failures. Using the deep network to compute a trajectory that warm starts the optimization (C) further improves the compute time. In (C), the plots include results for both the estimated trajectory horizon H and the exact H from the full optimization to show the effect of mispredicting trajectory length; inexact predictions can result in a faster compute time because the resulting trajectory is suboptimal and thus less tightly constrained. The upper limit on the x axis is shown in red to highlight the difference in scale; plots (B) and (C) are magnified by two orders of magnitude.

  • Fig. 6 Maximum jerk and timing comparisons for 1000 pick-place pairs computed with PRM*, TrajOpt, and DJ-GOMP.

    These graphs compare motion plan (A) jerk, (B) compute time, (C) motion time, and (D) combined compute + motion time. The filled boxes span the first through third quartiles, with a horizontal line at the median. The whiskers extend from the minimum to the maximum values. Paths computed by PRM* (9, 10) and TrajOpt (3) are subsequently optimally time-parameterized (11). The time parameterization does not limit jerk as DJ-GOMP does, which allows for faster but higher-jerk motions. Even so, because DJ-GOMP directly optimizes the path, unlike PRM* and TrajOpt, it generates the fastest motions, while its deep-learning-based warm start keeps both compute and motion times low.

  • Fig. 7 Jerk limit’s effect on computed and executed motion.

    We plot the jerk (y axis) of each joint in rad per cubic second over time in milliseconds (x axis) as computed (A) without jerk limits and (B) with jerk limits. Without jerk limits, the optimization computes trajectories with large jerks throughout the trajectory (shown in shaded regions). With jerk limits, each joint stays within the defined limits (the dotted lines) of the robot.
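    As a small illustrative sketch (assuming the trajectory is given as joint positions sampled at a fixed timestep, which is not stated in the caption), per-joint jerk like that plotted in Fig. 7 can be computed by finite differencing and checked against the robot's limits.

```python
import numpy as np

def max_abs_jerk(q, dt):
    """q: (T, n_joints) joint positions sampled every dt seconds.

    Returns the per-joint maximum absolute jerk in rad/s^3.
    """
    v = np.diff(q, axis=0) / dt   # velocity (rad/s)
    a = np.diff(v, axis=0) / dt   # acceleration (rad/s^2)
    j = np.diff(a, axis=0) / dt   # jerk (rad/s^3)
    return np.abs(j).max(axis=0)

def within_jerk_limits(q, dt, jerk_limit):
    # True if every joint stays within the (scalar or per-joint) jerk limit.
    return bool(np.all(max_abs_jerk(q, dt) <= jerk_limit))
```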

  • Fig. 8 Obstacle constraint linearization.

    The constraint linearization process keeps the trajectory away from obstacles by adding constraints based on the Jacobian of the configuration at each waypoint of the accepted trajectory x(k). In this figure, the obstacle is shown from the side, the robot’s path along part of x(k) is shown in blue, and the constraints’ normal projections into Euclidean space are shown in red. For waypoints that are outside the obstacle (A), constraints keep the waypoints from entering the obstacle. For waypoints that are inside the obstacle (B), constraints push the waypoints up out of the obstacle. If the algorithm adds constraints only at the waypoints, as in (C), the optimization can compute trajectories that collide with obstacles and produce discontinuities between trajectories with small changes to the pick or place frame. These effects are mitigated when obstacles are inflated to account for them, but the discontinuities can lead to poor results when training the neural network. The proposed algorithm adds linearized constraints to account for collisions on the segments between waypoints, leading to the more consistent results shown in (D).
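    The per-waypoint linearization can be sketched as follows (this is an illustrative formulation, not the paper's solver code): the signed distance to an obstacle is linearized with the manipulator's positional Jacobian, yielding a linear constraint on the next SQP iterate. The `signed_distance` and `position_jacobian` callables are assumed to come from a collision checker and a kinematics library.

```python
import numpy as np

def linearized_obstacle_constraint(q0, signed_distance, position_jacobian, d_safe=0.0):
    """Return (A, b) such that A @ q >= b keeps a waypoint at least d_safe away.

    signed_distance(q)   -> (sd, n): distance to the obstacle (negative when the
                            waypoint is inside, as in Fig. 8B) and the outward
                            unit normal in Euclidean space
    position_jacobian(q) -> 3 x n_joints Jacobian of the closest robot point
    """
    sd, n = signed_distance(q0)
    J = position_jacobian(q0)
    A = n @ J                      # gradient of the signed distance w.r.t. q
    b = d_safe - sd + A @ q0       # from sd(q0) + A (q - q0) >= d_safe
    return A, b
```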

  • Fig. 9 The fast motion planning pipeline.

    The pipeline has three phases between input and robot execution. The first phase estimates the trajectory horizon H* by computing a forward pass of the neural network. The second phase estimates the trajectory for horizon H* to create an initial trajectory for the SQP optimization process. The SQP then optimizes the trajectory, ensuring that it meets all joint kinematic and dynamic limits so that it can be successfully executed on the robot.
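    A high-level sketch of these three phases (function names and the fallback behavior for an infeasible predicted horizon are placeholders, not the released code) could be structured as follows.

```python
def plan_motion(start_goal_encoding, horizon_net, traj_net, sqp_optimize, horizons):
    # Phase 1: one forward pass of the horizon network estimates the trajectory
    # horizon H* (as an index into the supported horizons).
    h_idx = int(horizon_net(start_goal_encoding).argmax())
    # Phase 2: the trajectory network's output for H* becomes the warm start.
    warm_start = traj_net(start_goal_encoding)[h_idx]
    # Phase 3: the SQP refines the warm start until joint kinematic/dynamic
    # limits and obstacle constraints hold, so the motion can run on the robot.
    trajectory, feasible = sqp_optimize(warm_start, horizon=horizons[h_idx])
    while not feasible and h_idx + 1 < len(horizons):
        # If the predicted horizon was too short to be feasible, retry with the
        # next longer supported horizon (this fallback is an assumption).
        h_idx += 1
        trajectory, feasible = sqp_optimize(traj_net(start_goal_encoding)[h_idx],
                                            horizon=horizons[h_idx])
    return trajectory
```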

Supplementary Materials


      Other Supplementary Material for this manuscript includes the following:

      • Movie S1 (.mp4 format). Example of motions computed by grasp-optimized motion planning with a deep-learning warm start.

