Fusing Depth and Video Using Rao-Blackwellized Particle Filter
Title | Fusing Depth and Video Using Rao-Blackwellized Particle Filter
Publication Type | Book Chapters |
Year of Publication | 2005 |
Authors | Agrawal A, Chellappa R |
Editor | Pal S, Bandyopadhyay S, Biswas S |
Book Title | Pattern Recognition and Machine Intelligence
Series Title | Lecture Notes in Computer Science |
Volume | 3776 |
Pagination | 521-526
Publisher | Springer Berlin / Heidelberg |
ISBN Number | 978-3-540-30506-4 |
Abstract | We address the problem of fusing sparse and noisy depth data obtained from a range finder with features obtained from intensity images to estimate ego-motion and refine the 3D structure of a scene using a Rao-Blackwellized particle filter. For scenes with low depth variability, the algorithm offers an alternative way of performing Structure from Motion (SfM) starting from a flat depth map. Instead of using 3D depths, we formulate the problem using 2D image-domain parallax and show that, conditioned on the non-linear motion parameters, the parallax magnitudes with respect to the projection of the vanishing point form a linear subsystem independent of camera motion whose distribution can be integrated analytically. Thus, the structure is obtained by estimating parallax with respect to the given depths using a Kalman filter, and only the ego-motion is estimated using a particle filter. Hence, the required number of particles becomes independent of the number of feature points, which is an improvement over previous algorithms. Experimental results on both synthetic and real data show the effectiveness of our approach.
URL | http://dx.doi.org/10.1007/11590316_82 |
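The abstract's central computational idea, marginalizing the conditionally linear parallax states with per-particle Kalman filters so that particles are sampled only over the non-linear ego-motion, can be illustrated with a small sketch. This is not the paper's formulation: the scalar gain/offset measurement model, the noise levels, and every variable name below are hypothetical placeholders chosen only to show the Rao-Blackwellized filtering structure in which the particle count does not grow with the number of feature points.

```python
import numpy as np

rng = np.random.default_rng(0)
N_PARTICLES, N_FEATURES = 200, 50

def rbpf_step(particles, weights, measurements,
              q_motion=0.01, q_par=1e-4, r_meas=1e-2):
    """One RBPF update: sample motion, Kalman-update per-feature parallax,
    reweight by the marginal likelihood, and resample if needed."""
    n = len(particles)
    log_w = np.log(weights)
    for i, p in enumerate(particles):
        # Propagate the non-linear motion hypothesis with a random walk.
        p["motion"] += rng.normal(0.0, np.sqrt(q_motion), size=2)
        A, b = p["motion"]  # hypothetical gain/offset induced by the motion

        log_lik = 0.0
        for k in range(N_FEATURES):
            # Kalman prediction for the parallax magnitude of feature k.
            m, P = p["par_mean"][k], p["par_var"][k] + q_par
            # Kalman update with the measured displacement of feature k.
            S = A * P * A + r_meas           # innovation variance
            K = P * A / S                    # Kalman gain
            innov = measurements[k] - (A * m + b)
            p["par_mean"][k] = m + K * innov
            p["par_var"][k] = (1.0 - K * A) * P
            log_lik += -0.5 * (np.log(2.0 * np.pi * S) + innov ** 2 / S)

        # Weight the particle by the marginal measurement likelihood,
        # with the parallax states integrated out by the Kalman filter.
        log_w[i] += log_lik

    log_w -= log_w.max()                     # numerical safety before exp
    weights = np.exp(log_w)
    weights /= weights.sum()

    # Resample when the effective sample size drops below half.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles = [dict(motion=particles[j]["motion"].copy(),
                          par_mean=particles[j]["par_mean"].copy(),
                          par_var=particles[j]["par_var"].copy()) for j in idx]
        weights = np.full(n, 1.0 / n)
    return particles, weights

# Initialization: a flat depth map corresponds to zero initial parallax.
particles = [dict(motion=np.zeros(2),
                  par_mean=np.zeros(N_FEATURES),
                  par_var=np.ones(N_FEATURES)) for _ in range(N_PARTICLES)]
weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)
measurements = rng.normal(0.0, 0.1, size=N_FEATURES)  # stand-in image data
particles, weights = rbpf_step(particles, weights, measurements)
```

In this sketch each particle carries its own per-feature Kalman means and variances, so adding feature points enlarges the linear (analytically handled) part of the state rather than the sampled part, which is the property the abstract highlights over earlier particle-filter SfM algorithms.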