Silhouette-based gesture and action recognition via modeling trajectories on Riemannian shape manifolds
Title | Silhouette-based gesture and action recognition via modeling trajectories on Riemannian shape manifolds |
Publication Type | Journal Article |
Year of Publication | 2011 |
Authors | Abdelkader MF, Abd-Almageed W, Srivastava A, Chellappa R |
Journal | Computer Vision and Image Understanding |
Volume | 115 |
Issue | 3 |
Pagination | 439–455 |
Date Published | 2011/03 |
ISSN Number | 1077-3142 |
Keywords | Action recognition, Gesture recognition, Riemannian manifolds, Shape space, Silhouette-based approaches |
Abstract | This paper addresses the problem of recognizing human gestures from videos using models that are built from the Riemannian geometry of shape spaces. We represent a human gesture as a temporal sequence of human poses, each characterized by a contour of the associated human silhouette. The shape of a contour is viewed as a point on the shape space of closed curves and, hence, each gesture is characterized and modeled as a trajectory on this shape space. We propose two approaches for modeling these trajectories. In the first, template-based approach, we use dynamic time warping (DTW) to align the different trajectories using elastic geodesic distances on the shape space. The gesture templates are then calculated by averaging the aligned trajectories. In the second approach, we use a graphical model similar to an exemplar-based hidden Markov model, in which we cluster the gesture shapes on the shape space and build non-parametric statistical models to capture the variations within each cluster. We then model each gesture as a Markov model of transitions between these clusters. To evaluate the proposed approaches, an extensive set of experiments was performed on two different data sets representing gesture and action recognition applications. The proposed approaches not only represent the shape and dynamics of the different classes well enough for recognition, but are also robust to errors arising from segmentation and background subtraction. |
URL | http://www.sciencedirect.com/science/article/pii/S1077314210002377 |
DOI | 10.1016/j.cviu.2010.10.006 |
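As a rough illustration of the first, template-based approach described in the abstract (aligning silhouette-contour sequences with dynamic time warping under a shape distance), the sketch below aligns two toy contour sequences. It is a minimal sketch, not the authors' implementation: a plain Procrustes distance stands in for the elastic geodesic distance on the shape space of closed curves used in the paper, and the function names and toy data are illustrative only.

```python
import numpy as np

def procrustes_distance(c1, c2):
    """Placeholder distance between two contours (N x 2 point arrays).
    The paper uses elastic geodesic distances on the shape space of closed
    curves; full Procrustes alignment (centering, scaling, optimal rotation)
    is used here only as a simple stand-in."""
    a = c1 - c1.mean(axis=0)
    b = c2 - c2.mean(axis=0)
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)
    # Optimal rotation from the SVD of the cross-covariance matrix.
    u, _, vt = np.linalg.svd(a.T @ b)
    r = u @ vt
    return np.linalg.norm(a @ r - b)

def dtw_cost(seq1, seq2, dist):
    """Dynamic time warping between two contour sequences.
    Returns the accumulated alignment cost; the warping path itself could be
    recovered by backtracking through the cost table."""
    n, m = len(seq1), len(seq2)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(seq1[i - 1], seq2[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0, 2 * np.pi, 64, endpoint=False)

    def contour(radius):
        # Noisy circular contour as a stand-in for a silhouette boundary.
        pts = np.stack([radius * np.cos(t), radius * np.sin(t)], axis=1)
        return pts + 0.01 * rng.standard_normal((64, 2))

    # Two toy "gestures": the same expanding motion executed at different speeds.
    gesture_a = [contour(1.0 + 0.1 * k) for k in range(10)]
    gesture_b = [contour(1.0 + 0.07 * k) for k in range(15)]
    print("DTW alignment cost:", dtw_cost(gesture_a, gesture_b, procrustes_distance))
```

Under this kind of alignment, gesture templates could then be formed by averaging the warped trajectories, as the abstract describes; a proper implementation would replace the placeholder distance with geodesic distances on the shape space of closed curves.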