Machine Learning

Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image






    We consider the problem of dense depth prediction from a sparse set of depth measurements and a single RGB image. Since depth estimation from monocular images alone is inherently ambiguous and unreliable, we introduce additional sparse depth samples, either collected from a low-resolution depth sensor or computed from SLAM, to attain a higher level of robustness and accuracy. We propose the use of a single regression network to learn directly from the raw RGB-D data, and explore the impact of the number of depth samples on prediction accuracy. Our experiments show that, compared to using only RGB images, the addition of 100 spatially random depth samples reduces the prediction root-mean-square error by half on the NYU-Depth-v2 indoor dataset. It also boosts the percentage of reliable predictions from 59% to 92% on the more challenging KITTI driving dataset. We demonstrate two applications of the proposed algorithm: serving as a plug-in module in SLAM to convert sparse maps to dense maps, and creating much denser point clouds from low-resolution LiDARs. Code and a video demonstration are publicly available.
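    To make the input format concrete, here is a minimal sketch of how the network's RGBd input could be assembled: sample a fixed number of spatially random depth measurements from a ground-truth depth map, then stack them as a fourth channel alongside the RGB image. The function names and signatures are illustrative assumptions, not the authors' released code.

    ```python
    import numpy as np

    def make_sparse_depth(depth, num_samples, seed=None):
        """Keep `num_samples` uniformly random valid depth pixels; zero the rest.

        Mirrors the spatially random sampling described in the abstract;
        the exact sampling strategy here is an assumption.
        """
        rng = np.random.default_rng(seed)
        sparse = np.zeros_like(depth)
        valid = np.flatnonzero(depth > 0)  # only sample where depth exists
        chosen = rng.choice(valid, size=min(num_samples, valid.size), replace=False)
        sparse.flat[chosen] = depth.flat[chosen]
        return sparse

    def make_rgbd_input(rgb, sparse_depth):
        """Stack RGB (H, W, 3) with a sparse depth map (H, W) into a
        4-channel array for a depth-regression network to consume."""
        return np.concatenate([rgb, sparse_depth[..., None]], axis=-1)

    # toy example: a 480x640 scene with 100 random depth samples
    rgb = np.random.rand(480, 640, 3).astype(np.float32)
    depth = np.random.uniform(0.5, 10.0, (480, 640)).astype(np.float32)
    sparse = make_sparse_depth(depth, num_samples=100, seed=0)
    rgbd = make_rgbd_input(rgb, sparse)
    assert rgbd.shape == (480, 640, 4)
    assert np.count_nonzero(sparse) == 100
    ```

    A network trained on such inputs can then densify the sparse channel at test time, whether the samples come from a low-resolution LiDAR or from SLAM feature points.
    
    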

    Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image
    by Fangchang Ma, Sertac Karaman
    https://arxiv.org/pdf/1709.07492v1.pdf
