Research Article | ARTIFICIAL INTELLIGENCE

AADS: Augmented autonomous driving simulation using data-driven algorithms

Science Robotics  27 Mar 2019:
Vol. 4, Issue 28, eaaw0863
DOI: 10.1126/scirobotics.aaw0863
  • Fig. 1 The inputs, processing pipeline, and outputs of our AADS system.

    Top: The input dataset. Middle: The pipeline of AADS is shown between the dashed lines and contains data preprocessing, novel background synthesis, trajectory synthesis, moving objects’ augmentation, and LiDAR simulation. Bottom: The outputs from the AADS system, which include synthesized RGB images, a LiDAR point cloud, and trajectories with ground truth annotations.

  • Fig. 2 The ApolloScape dataset and its extension.

    Top: Table comparing ApolloScape with other popular datasets. Bottom: RGB images, annotations, and a point cloud from top to bottom (left) and some labeled traffic trajectories from the dataset (right).

  • Fig. 3 View synthesis results and effectiveness of depth refinement.

    (A and B) Raw RGB and depth images in our dataset, respectively. (C to E) Results of depth refinement after filtering and completion. (F and G) Results of view synthesis using initial and refined depths with close views in (H). (I to K) Final results of view synthesis using the method by Liu et al. (22), the method by Chaurasia et al. (23), and our method, respectively.

  • Fig. 4 Comparison of traffic synthesis.

    Velocity and minimum-distance distributions of traffic simulated with our method and with the method by Chao et al. (26), compared against the ground truth (a minimal sketch of how these two statistics can be computed from trajectories follows the figure list).

  • Fig. 5 RGB image augmentation evaluations.

    The four images on the left were selected from CARLA (A), the VKITTI dataset (B), our AADS-RGB dataset (C), and the testing dataset CityScapes (D). The bar graph on the right shows the evaluation results under the mAP, AP50, and AP70 metrics (a minimal sketch of AP at a fixed IoU threshold follows the figure list).

  • Fig. 6 LiDAR simulation evaluations.

    (A) Evaluation of dataset’s size and type (real or simulation) for real-time instance segmentation. (B) Evaluation results of different object placement methods. (C) Real data boosting evaluations (mean mask AP) using instance segmentation.

  • Fig. 7 TrafficPredict evaluations.

    Comparison of trajectory prediction with 20,000 real trajectory frames and an additional 20,000 simulated trajectory frames (a minimal displacement-error sketch follows the figure list).

  • Fig. 8 Novel view synthesis pipeline.

    (A) The four nearest reference images were used to synthesize the target view in (D). (B) The four reference images were warped into the target view via a depth proxy. (C) A stitching method was used to yield a complete image. (D) Final results in the novel view were synthesized after post-processing, e.g., hole filling and color blending. (A minimal depth-based warping sketch follows the figure list.)
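
The view synthesis illustrated in Figs. 3 and 8 warps nearby reference images into the target view through a depth proxy. As a rough illustration only, and not the paper's renderer, the sketch below forward-warps a single reference RGB-D image into a target camera, assuming shared pinhole intrinsics K and a relative pose (R, t) from reference to target; all function names, shapes, and parameters are assumptions.

```python
import numpy as np

def warp_reference_to_target(ref_rgb, ref_depth, K, R, t):
    """Forward-warp one reference RGB image into a target view.

    ref_rgb:   (H, W, 3) reference image
    ref_depth: (H, W) per-pixel depth in the reference camera
    K:         (3, 3) pinhole intrinsics shared by both cameras
    R, t:      rotation (3, 3) and translation (3,) from reference to target
    Occlusions are resolved with a simple z-buffer. This is a sketch, not
    the paper's implementation.
    """
    H, W = ref_depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # (3, H*W)

    # Back-project reference pixels to 3D, then move them into the target frame.
    rays = np.linalg.inv(K) @ pix
    pts_ref = rays * ref_depth.reshape(1, -1)        # scale unit rays by depth
    pts_tgt = R @ pts_ref + t.reshape(3, 1)

    # Project into the target image plane.
    proj = K @ pts_tgt
    z = proj[2]
    x = np.round(proj[0] / np.maximum(z, 1e-6)).astype(int)
    y = np.round(proj[1] / np.maximum(z, 1e-6)).astype(int)

    warped = np.zeros_like(ref_rgb)
    zbuf = np.full((H, W), np.inf)
    valid = (z > 0) & (x >= 0) & (x < W) & (y >= 0) & (y < H)
    src = ref_rgb.reshape(-1, 3)
    for i in np.flatnonzero(valid):
        if z[i] < zbuf[y[i], x[i]]:                  # keep the nearest surface
            zbuf[y[i], x[i]] = z[i]
            warped[y[i], x[i]] = src[i]
    return warped, np.isfinite(zbuf)
```

In the pipeline of Fig. 8, several such warped references are then stitched and post-processed (hole filling, color blending) to produce the final novel view; the sketch covers only the per-reference warp.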
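
Fig. 4 compares simulated and real traffic through velocity and minimum-distance distributions. The sketch below shows one plausible way to compute these two statistics from trajectory arrays; the array layout and the frame interval are assumptions, not values from the paper.

```python
import numpy as np

def velocity_and_min_distance(trajectories, dt=0.1):
    """Summary statistics for comparing simulated and recorded traffic.

    trajectories: (num_agents, num_frames, 2) x/y positions in meters.
    dt:           frame interval in seconds (assumed value).
    Returns per-agent per-frame speeds and, for each frame, the minimum
    distance between any two agents.
    """
    # Speeds from finite differences of consecutive positions.
    deltas = np.diff(trajectories, axis=1)           # (A, F-1, 2)
    speeds = np.linalg.norm(deltas, axis=-1) / dt    # (A, F-1)

    # Minimum pairwise distance per frame.
    min_dists = []
    for f in range(trajectories.shape[1]):
        pos = trajectories[:, f, :]                  # (A, 2)
        d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)                  # ignore self-distances
        min_dists.append(d.min())
    return speeds.ravel(), np.array(min_dists)
```

Histograms of the two returned quantities, computed once for a simulation run and once for the recorded ground truth, give the kind of distribution comparison shown in the figure.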
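
Figs. 5 and 6 report detection and instance-segmentation quality with AP-style metrics (mAP, AP50, AP70, mean mask AP). The sketch below computes AP at a single IoU threshold using greedy matching and a simplified precision-recall integration; it uses box IoU as a stand-in for mask IoU and is not the evaluation code used in the paper.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def average_precision(detections, gt_boxes, iou_thresh=0.5):
    """AP at one IoU threshold (0.5 gives AP50, 0.7 gives AP70).

    detections: list of (score, box) pairs for a single image.
    gt_boxes:   list of ground-truth boxes for the same image.
    """
    detections = sorted(detections, key=lambda d: -d[0])
    matched = set()
    tp = np.zeros(len(detections))
    for i, (_, box) in enumerate(detections):
        best_j, best_iou = -1, iou_thresh
        for j, gt in enumerate(gt_boxes):
            if j not in matched and iou(box, gt) >= best_iou:
                best_j, best_iou = j, iou(box, gt)
        if best_j >= 0:                      # greedy match to the best free GT
            matched.add(best_j)
            tp[i] = 1.0
    cum_tp = np.cumsum(tp)
    recall = cum_tp / max(len(gt_boxes), 1)
    precision = cum_tp / (np.arange(len(detections)) + 1)

    # Area under the precision-recall curve (all-points, no interpolation).
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_r)
        prev_r = r
    return ap
```

mAP then averages this quantity over classes (and, in COCO-style reporting, over several IoU thresholds).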
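
Fig. 7 compares trajectory prediction with and without the additional simulated frames. Trajectory forecasts are commonly scored with average and final displacement errors; the sketch below computes both under assumed array shapes and is only an illustration of the kind of metric behind such a comparison, not necessarily the one plotted in the figure.

```python
import numpy as np

def displacement_errors(pred, gt):
    """Average and final displacement error for predicted trajectories.

    pred, gt: (num_agents, horizon, 2) predicted and ground-truth positions.
    ADE averages the Euclidean error over all future time steps;
    FDE keeps only the error at the last predicted step.
    """
    err = np.linalg.norm(pred - gt, axis=-1)   # (num_agents, horizon)
    ade = float(err.mean())
    fde = float(err[:, -1].mean())
    return ade, fde
```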

Supplementary Materials

    The PDF file includes:

    • Fig. S1. Visual evaluations of point cloud simulation.
    • Legends for movies S1 to S5

    Other Supplementary Material for this manuscript includes the following:

    • Movie S1 (.mp4 format). Full movie.
    • Movie S2 (.mp4 format). Scan-and-simulation pipeline.
    • Movie S3 (.mp4 format). Synthesizing lane changes.
    • Movie S4 (.mp4 format). Data augmentation.
    • Movie S5 (.mp4 format). Novel view synthesis evaluations.
