Research Article: Navigation

Dynamic obstacle avoidance for quadrotors with event cameras


Science Robotics  18 Mar 2020:
Vol. 5, Issue 40, eaaz9712
DOI: 10.1126/scirobotics.aaz9712
  • Fig. 1 Sequence of an avoidance maneuver.

  • Fig. 2 Comparison of the output of a conventional camera versus an event camera for a rotating disk with a black dot.

    A conventional camera captures frames at a fixed rate; an event camera continuously outputs only the sign of brightness changes, which trace out a spiral of events in space-time (red, positive changes; blue, negative changes).
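The contrast between the two sensors can be made concrete with a standard per-pixel event-generation model: an event fires whenever the log-brightness at a pixel drifts from its last reference level by more than a contrast threshold, with polarity given by the sign of the change. The sketch below is illustrative only (the function name and threshold value are assumptions, not the camera's actual firmware):

```python
import numpy as np

def brightness_to_events(log_I, timestamps, contrast=0.25):
    """Toy per-pixel event-generation model: fire an event each time the
    log-intensity moves by at least `contrast` from the reference level set
    at the previous event; polarity is the sign of the change."""
    events = []          # list of (timestamp, polarity) tuples
    ref = log_I[0]       # reference level at the last event
    for t, L in zip(timestamps, log_I):
        while L - ref >= contrast:   # brightness increased past threshold
            ref += contrast
            events.append((t, +1))
        while ref - L >= contrast:   # brightness decreased past threshold
            ref -= contrast
            events.append((t, -1))
    return events
```

A pixel watching a steadily brightening signal thus emits a sparse stream of positive events rather than a dense sequence of frames, which is what gives the sensor its low latency.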

  • Fig. 3 Effects of the ego-motion compensation.

    Our algorithm collects all the events that fired during the last 10 ms, here represented in the 3D volume (left side), and uses the IMU to compensate for the motion of the camera. The ego-motion-compensated events are then projected into a common image frame (right side), where each pixel (pxl) may contain multiple events. By analyzing the temporal statistics of the events projected into each pixel, our approach distinguishes pixels belonging to the static part of the scene from those belonging to moving objects.
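The warping step above can be sketched as follows. This is a minimal illustration under simplifying assumptions (not the paper's implementation): compensation is purely rotational, the angular velocity from the IMU gyroscope is treated as constant over the 10-ms window, and a small-angle rotation model is used with known pinhole intrinsics K:

```python
import numpy as np

def compensate_rotation(events, omega, K, t_ref=0.0):
    """Warp each event (x, y, t) to the reference time t_ref by undoing the
    camera rotation accumulated since t_ref. `omega` is the angular velocity
    (rad/s) from the gyroscope; `K` is the 3x3 camera intrinsics matrix."""
    K_inv = np.linalg.inv(K)
    warped = []
    for x, y, t in events:
        dt = t - t_ref
        # small-angle (first-order) rotation matrix for omega * dt
        wx, wy, wz = omega * dt
        R = np.array([[1.0, -wz,  wy],
                      [ wz, 1.0, -wx],
                      [-wy,  wx, 1.0]])
        ray = K_inv @ np.array([x, y, 1.0])  # back-project to a bearing ray
        ray = R.T @ ray                      # undo the accumulated rotation
        u, v, w = K @ ray                    # re-project into the image
        warped.append((u / w, v / w, t))
    return warped
```

After this warp, events generated by the static scene pile up at consistent pixel locations, while events from an independently moving object do not, which is what the per-pixel temporal statistics exploit.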

  • Fig. 4 Stages of the ego-motion compensation algorithm to isolate the events belonging to moving obstacles.

    (A) A frame captured by the Insightness SEES1 camera showing the motion blur due to the relative motion between the sensor and the moving obstacle. (B) All the events accumulated in the last window, with red and blue indicating the polarity (positive and negative, respectively). (C) The result of the ego-motion compensation, showing in white all the pixels where there has been at least one event in the time window. (D) The motion-compensated events, with color code representing the normalized mean time stamp: The events belonging to the dynamic part of the scene are represented in yellow. (E) A mean time stamp image after thresholding: Green and purple indicate the static and the moving part of the scene, respectively. (F) Events belonging to moving obstacles. This frame is used to segment out the different dynamic objects in the scene.
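Stages (C) through (E) can be condensed into a short sketch: accumulate the compensated events per pixel, compute each pixel's mean timestamp (normalized to the accumulation window), and threshold it. The function name and the threshold value below are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np

def moving_pixel_mask(events, shape, rel_threshold=0.6):
    """Mean-timestamp test on ego-motion-compensated events. Pixels covering
    the static scene collect events spread across the whole window, while
    pixels on a moving object collect mostly recent events, so their
    normalized mean timestamp is higher. `events` is a list of (x, y, t)
    with t already normalized to [0, 1] over the accumulation window."""
    sum_t = np.zeros(shape)
    count = np.zeros(shape)
    for x, y, t in events:
        sum_t[y, x] += t
        count[y, x] += 1
    # per-pixel mean timestamp; pixels with no events stay at 0
    mean_t = np.divide(sum_t, count, out=np.zeros(shape), where=count > 0)
    # flag pixels whose events arrived late in the window as moving
    return (mean_t > rel_threshold) & (count > 0)
```

The resulting binary mask is what the morphological operations and clustering stages then clean up and segment into individual obstacles.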

  • Fig. 5 A sequence from one of the indoor experiments.

    A ball is thrown toward the vehicle, equipped with a monocular event camera, which is used to detect and evade the obstacle. (A) t = 0 s. (B) t = 0.075 s. (C) t = 0.15 s. (D) t = 0.225 s. The ball is thrown at time t = 0 s and reaches the position where the quadrotor is hovering at about time t = 0.225 s. The robot successfully detects the incoming obstacle and moves to the side to avoid it.

  • Fig. 6 A sequence from our outdoor experiments.

    (A) t = 0 s. (B) t = 0.15 s. (C) t = 0.30 s. (D) t = 0.45 s. The quadrotor is flying toward a reference goal position when an obstacle is thrown toward it. The obstacle is successfully detected using a stereo pair of event cameras and is avoided by moving upward.

  • Table 1 Accuracy of our event-based algorithm to detect moving obstacles.

    We analyzed both the monocular and the stereo setups and compared the detections with ground truth data provided by a motion capture system. For each configuration, we report (expressed in meters) the mean, the median, the SD, and the maximum absolute deviation (M.A.D.) of the norm of the position error for different ranges of distances.

                  Monocular                      Stereo
    Distance (m)  Mean  Median  SD    M.A.D.    Mean  Median  SD    M.A.D.
    0.2–0.5       0.08  0.05    0.18  0.09      0.07  0.05    0.07  0.06
    0.5–1.0       0.10  0.05    0.22  0.10      0.10  0.05    0.18  0.10
    1.0–1.5       0.10  0.05    0.20  0.10      0.13  0.07    0.21  0.12
  • Table 2 Success rate of the event-based detector.

    Each column reports the success rate for objects moving within a given distance range from the camera. Each row shows the success rate for objects smaller than a given size. The results are obtained on a dataset comprising 100 throws for each object size.

                 Distance (m)
    Size       ≤0.5   ≤1.0   ≤1.5
    ≤0.1 m     92%    90%    88%
    ≤0.2 m     87%    92%    97%
    ≤0.3 m     81%    88%    93%
  • Table 3 The mean, μ, and SD, σ, of the computation time of the obstacle detection algorithm.

    Step                       μ (ms)  σ (ms)  Percentage (%)
    Ego-motion compensation    1.31    0.35    36.80
    Mean time stamp threshold  0.98    0.05    27.52
    Morphological operations   0.58    0.04    16.29
    Clustering                 0.69    0.20    19.39
    Total                      3.56    0.45    100

Supplementary Materials

  • robotics.sciencemag.org/cgi/content/full/5/40/eaaz9712/DC1

    Fig. S1. Monodimensional example to explain the working principle of event-based detection of moving obstacles.

    Fig. S2. Time statistics of the events belonging to static and dynamic regions.

    Fig. S3. The quadrotor platform we used in our outdoor experiments.

    Fig. S4. Ego-motion compensation computation time as function of the number of events.

    Fig. S5. Clustering computation time as function of the pixels count.

    Fig. S6. Detection of objects having different sizes and shapes.

    Fig. S7. Detection of multiple objects simultaneously.

    Fig. S8. Sequence of detection.

    Fig. S9. Obstacle ellipsoid.

    Fig. S10. Repulsive potential.

    Fig. S11. Attractive potential.

    Movie S1. Outdoor dynamic experiments.

    Movie S2. Explanation of the working principle of the event-based detection algorithm.

    References (56, 57)

