Adaptive Appearance Face Tracking with Alignment Feedback
Abstract
Adaptive appearance approaches are popular for tracking non-rigid objects such as faces. However, these approaches usually lack a direct mechanism for correcting spatial misalignments (e.g., translation, scaling, and rotation errors) in the tracking output. These errors accumulate in the target's appearance model, which inevitably degrades tracking performance. In addition, many of these approaches rely on video-specific parameter settings. In this paper, we first adopt a self-adaptive dynamical model to predict target candidates, so our tracker works with identical parameters across varied situations. We then introduce a multi-view joint face alignment stage to reduce the impact of misalignment. The aligned faces are fed back to update the appearance model. We evaluate the proposed algorithm on outdoor surveillance videos and real-world YouTube videos. Experimental results demonstrate the effectiveness of our method in tracking faces under uncontrolled conditions.
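The track-align-feedback loop summarized above can be sketched in a few lines. This is a minimal illustrative toy, not the paper's implementation: all names (`AppearanceTracker`, `align_face`), the running-average template, and the mean-shift "alignment" stand-in are assumptions introduced here to show the control flow only.

```python
# Hypothetical sketch of the track -> align -> feedback loop.
# All names and the alignment proxy below are illustrative assumptions,
# not the paper's actual method.

def align_face(candidate, reference):
    """Stand-in alignment: shift the candidate so its mean matches the
    reference template's mean (a crude proxy for correcting translation)."""
    offset = (sum(reference) / len(reference)) - (sum(candidate) / len(candidate))
    return [v + offset for v in candidate]

class AppearanceTracker:
    """Keeps a running-average appearance template, updated only with
    aligned observations (the 'alignment feedback' idea)."""
    def __init__(self, template, rate=0.2):
        self.template = list(template)
        self.rate = rate

    def score(self, candidate):
        # Negative sum of squared differences: higher means a better match.
        return -sum((c - t) ** 2 for c, t in zip(candidate, self.template))

    def step(self, candidates):
        # 1. Pick the best candidate under the current appearance model.
        best = max(candidates, key=self.score)
        # 2. Correct spatial misalignment before touching the model.
        aligned = align_face(best, self.template)
        # 3. Feedback: update the template with the aligned face only,
        #    so alignment errors do not accumulate in the model.
        self.template = [
            (1 - self.rate) * t + self.rate * a
            for t, a in zip(self.template, aligned)
        ]
        return aligned

tracker = AppearanceTracker(template=[1.0, 2.0, 3.0])
# Two toy candidate patches; the second matches the template up to a
# translation-like constant offset.
out = tracker.step([[5.0, 1.0, 9.0], [1.5, 2.5, 3.5]])
```

The key design point this sketch mirrors is that the appearance model is updated from the *aligned* observation, not the raw tracker output, so translation-style errors are corrected before they can pollute the template.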