Attention meets Geometry: Geometry Guided Spatial-Temporal Attention for Consistent Self-Supervised Monocular Depth Estimation
3DV 2021
- Technical University of Munich
- * Equal contribution. Order of authors determined randomly.
Abstract
Inferring geometrically consistent dense 3D scenes across a tuple of temporally consecutive images remains challenging for self-supervised monocular depth prediction pipelines. This paper explores how the increasingly popular transformer architecture, together with novel regularized loss formulations, can improve depth consistency while preserving accuracy. We propose a spatial attention module that correlates coarse depth predictions to aggregate local geometric information. A novel temporal attention mechanism further processes the local geometric information in a global context across consecutive images. Additionally, we introduce geometric constraints between frames regularized by photometric cycle consistency. By combining our proposed regularization and the novel spatial-temporal attention module, we fully leverage both the geometric and appearance-based consistency across monocular frames. This yields geometrically meaningful attention and improves temporal depth stability and accuracy compared to previous methods.
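As a rough illustration of the spatial-temporal attention idea described above, the PyTorch sketch below applies single-head spatial attention to coarse depth features within each frame and then single-head temporal attention across consecutive frames at each spatial location. All tensor shapes, channel counts, class names, and the single-head formulation are assumptions made for this sketch; it is not the exact architecture proposed in the paper.

```python
# Minimal, illustrative spatial-temporal attention sketch (assumed shapes/names,
# not the paper's exact architecture).
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """Correlates coarse depth features within a single frame (single-head)."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.scale = channels ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) coarse depth features
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)      # (B, HW, C)
        k = self.key(x).flatten(2)                        # (B, C, HW)
        v = self.value(x).flatten(2).transpose(1, 2)      # (B, HW, C)
        attn = torch.softmax(q @ k * self.scale, dim=-1)  # (B, HW, HW)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + out  # residual connection


class TemporalAttention(nn.Module):
    """Aggregates features across consecutive frames at each spatial location."""

    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads=1, batch_first=True)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, T, C, H, W) features of T consecutive frames
        b, t, c, h, w = feats.shape
        # Treat each spatial location as an independent sequence of length T.
        seq = feats.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, c)
        out, _ = self.attn(seq, seq, seq)
        out = out.reshape(b, h, w, t, c).permute(0, 3, 4, 1, 2)
        return feats + out  # residual connection


if __name__ == "__main__":
    # Toy example: 2 consecutive frames of 64-channel coarse depth features.
    frames = torch.randn(1, 2, 64, 24, 80)
    spatial = SpatialAttention(64)
    temporal = TemporalAttention(64)
    per_frame = torch.stack(
        [spatial(frames[:, i]) for i in range(frames.shape[1])], dim=1
    )
    fused = temporal(per_frame)
    print(fused.shape)  # torch.Size([1, 2, 64, 24, 80])
```

In this sketch, the spatial stage attends over all pixel pairs within one frame, while the temporal stage attends only along the time axis at each pixel, which keeps the cross-frame attention cost linear in image size.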
Reconstruction Demos
Spatial-Temporal Attention Demos
Related links
Main Paper
Attention meets Geometry: Geometry Guided Spatial-Temporal Attention for Consistent Self-Supervised Monocular Depth Estimation. International Conference on 3D Vision (3DV), 2021.
Workshop Paper
Spatial-Temporal Attention through Self-Supervised Geometric Guidance. ICCV Workshop: Self-supervised Learning for Next-Generation Industry-level Autonomous Driving, 2021.
Citation
Acknowledgements
The website template was borrowed from Michaël Gharbi.