Machine Learning

Joint Parsing of Cross-view Scenes with Spatio-temporal Semantic Parse Graphs




    Cross-view video understanding is an important yet under-explored area in computer vision. In this paper, we introduce a joint parsing method that takes view-centric proposals from pre-trained computer vision models and produces spatio-temporal parse graphs that represent a coherent scene-centric understanding of cross-view scenes. Our key observations are that overlapping fields of view embed rich appearance and geometry correlations, and that knowledge segments corresponding to individual vision tasks are governed by consistency constraints available in commonsense knowledge. The proposed joint parsing framework models such correlations and constraints explicitly and generates semantic parse graphs about the scene. Quantitative experiments show that scene-centric predictions in the parse graph outperform view-centric predictions.
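    To make the idea of fusing view-centric proposals into a scene-centric parse graph concrete, here is a minimal, hypothetical Python sketch. The class names, attributes, and confidence-weighted voting rule are illustrative assumptions and do not reflect the paper's actual inference algorithm, which operates over spatio-temporal parse graphs with appearance, geometry, and commonsense constraints.

    ```python
    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    @dataclass
    class Proposal:
        """A view-centric prediction from one camera (illustrative)."""
        view_id: str        # which camera produced this proposal
        entity_id: str      # cross-view identity, e.g. a tracked person
        attribute: str      # e.g. "action"
        value: str          # e.g. "walking"
        confidence: float   # detector confidence in [0, 1]

    @dataclass
    class SceneParseGraph:
        """Scene-centric node store: entity -> attribute -> fused value."""
        nodes: Dict[str, Dict[str, str]] = field(default_factory=dict)

        def fuse(self, proposals: List[Proposal]) -> None:
            # Confidence-weighted voting per (entity, attribute) pair:
            # a crude stand-in for the paper's joint-parsing inference.
            votes: Dict[Tuple[str, str], Dict[str, float]] = {}
            for p in proposals:
                key = (p.entity_id, p.attribute)
                tally = votes.setdefault(key, {})
                tally[p.value] = tally.get(p.value, 0.0) + p.confidence
            for (entity, attribute), tally in votes.items():
                best = max(tally, key=tally.get)
                self.nodes.setdefault(entity, {})[attribute] = best

    # Two overlapping views disagree on person_1's action; fusion
    # resolves the conflict in favor of the higher total confidence.
    proposals = [
        Proposal("cam1", "person_1", "action", "walking", 0.9),
        Proposal("cam2", "person_1", "action", "standing", 0.4),
        Proposal("cam2", "person_1", "action", "walking", 0.7),
    ]
    graph = SceneParseGraph()
    graph.fuse(proposals)
    print(graph.nodes["person_1"]["action"])  # -> walking (1.6 vs 0.4)
    ```

    The point of the sketch is only the direction of information flow: per-view proposals go in, and a single scene-centric prediction per entity attribute comes out, which is the quantity the paper reports as outperforming the raw view-centric predictions.
    
    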

    Joint Parsing of Cross-view Scenes with Spatio-temporal Semantic Parse Graphs
    by Hang Qi, Yuanlu Xu, Tao Yuan, Tianfu Wu, Song-Chun Zhu
    https://arxiv.org/pdf/1709.05436v1.pdf
