The visually demanding driving environment, in which the elements surrounding a driver change constantly and rapidly, requires the driver to make spatially large head turns. Many state-of-the-art vision-based head pose estimation algorithms, however, still have difficulty continuously monitoring a driver's head dynamics. This is because, from the perspective of a single camera, spatially large head turns induce self-occlusions of facial features, which are key cues for determining head pose. In this paper, we introduce a shape-feature-based multi-perspective framework for continuously monitoring the driver's head dynamics. The proposed approach uses a distributed camera setup to observe the driver over a wide range of head movements. Using head dynamics and a confidence measure based on the symmetry of facial features, a particular perspective is selected to provide the final head pose estimate. Our analysis on real-world driving data shows promising results.
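To make the perspective-selection idea concrete, the following is a minimal sketch, assuming 2D facial landmarks are available for each camera view; the landmark names, the `symmetry_confidence` scoring, and the `select_perspective` helper are illustrative assumptions, not the paper's actual confidence measure or head-dynamics model.

```python
# Minimal sketch (not the authors' implementation): pick the camera whose view
# of the face is most frontal, using a hypothetical symmetry-based confidence
# computed from left/right facial-landmark distances to the nose tip.
import numpy as np

def symmetry_confidence(landmarks):
    """Confidence in [0, 1] from left/right landmark symmetry.

    `landmarks` is assumed to be a dict of 2D pixel coordinates for one
    camera; the keys below are illustrative, not from the paper.
    """
    nose = landmarks["nose_tip"]
    pairs = [("left_eye_outer", "right_eye_outer"),
             ("left_mouth", "right_mouth")]
    ratios = []
    for left_key, right_key in pairs:
        d_left = np.linalg.norm(landmarks[left_key] - nose)
        d_right = np.linalg.norm(landmarks[right_key] - nose)
        # A frontal view gives near-equal distances; a large head turn or
        # self-occlusion shortens one side, lowering the ratio.
        ratios.append(min(d_left, d_right) / max(d_left, d_right))
    return float(np.mean(ratios))

def select_perspective(per_camera_landmarks):
    """Return the camera id with the highest symmetry confidence."""
    scores = {cam: symmetry_confidence(lm)
              for cam, lm in per_camera_landmarks.items()}
    return max(scores, key=scores.get)

# Usage with toy landmark coordinates for two cameras:
if __name__ == "__main__":
    cams = {
        "front": {"nose_tip": np.array([320.0, 260.0]),
                  "left_eye_outer": np.array([280.0, 220.0]),
                  "right_eye_outer": np.array([360.0, 220.0]),
                  "left_mouth": np.array([295.0, 300.0]),
                  "right_mouth": np.array([345.0, 300.0])},
        "side": {"nose_tip": np.array([300.0, 260.0]),
                 "left_eye_outer": np.array([250.0, 220.0]),
                 "right_eye_outer": np.array([315.0, 222.0]),
                 "left_mouth": np.array([260.0, 300.0]),
                 "right_mouth": np.array([312.0, 298.0])},
    }
    print(select_perspective(cams))  # -> "front" for this toy input
```

In the paper's full pipeline, this symmetry cue would be combined with the estimated head dynamics before a perspective is chosen; the sketch shows only the per-frame scoring step.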
Published on 01/01/2014
DOI: 10.1109/itsc.2013.6728568
Licence: CC BY-NC-SA