Future intelligent environments and systems may need to interact with humans while simultaneously analyzing events and critical situations. Assisted living, advanced driver assistance systems, and intelligent command-and-control centers are just a few of the cases where human interactions play a critical role in situation analysis. In particular, the behavior or body language of the human subject may be a strong indicator of the context of the situation. In this paper we demonstrate how a human observer's head-pose and eye-gaze behavior can provide significant insight into the context of an event. Such semantic data derived from human behavior can be used to help interpret and recognize an ongoing event. We present examples from driving and intelligent meeting rooms to support these conclusions, and demonstrate how to use these techniques to improve contextual learning.
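The paper itself reports experiments rather than code, but the core idea of fusing head-pose and eye-gaze cues into a semantic attention label can be made concrete with a minimal sketch. Everything below is an illustrative assumption, not the authors' method: the angle conventions, the thresholds, and the rule that a gaze shift without a head turn signals a brief glance while a large head turn signals a sustained attention shift.

```python
from dataclasses import dataclass


@dataclass
class ObserverState:
    """Per-frame estimates of a human observer's head and eye orientation.

    Angles are in degrees; 0 means facing straight ahead, positive values
    look to the observer's left. Field names and units are assumptions
    for this sketch, not values taken from the paper.
    """
    head_yaw: float  # head-pose yaw from a head tracker
    gaze_yaw: float  # eye-gaze yaw relative to the scene (not the head)


def attention_label(state: ObserverState,
                    on_target_deg: float = 10.0,
                    head_turn_deg: float = 25.0) -> str:
    """Map combined head-pose/gaze cues to a coarse attention label.

    Simple rule-based fusion: head and gaze both near frontal means the
    observer attends to the primary task (e.g., the road ahead); gaze
    deviating while the head stays frontal suggests a quick glance; a
    large head turn suggests a sustained shift of attention.
    """
    if abs(state.head_yaw) <= on_target_deg and abs(state.gaze_yaw) <= on_target_deg:
        return "attending-forward"
    if abs(state.head_yaw) <= on_target_deg < abs(state.gaze_yaw):
        return "glance"           # eyes moved, head did not: brief check
    if abs(state.head_yaw) > head_turn_deg:
        return "attention-shift"  # head turn: sustained change of focus
    return "uncertain"


if __name__ == "__main__":
    for s in (ObserverState(head_yaw=2.0, gaze_yaw=3.0),
              ObserverState(head_yaw=4.0, gaze_yaw=30.0),
              ObserverState(head_yaw=40.0, gaze_yaw=35.0)):
        print(s, "->", attention_label(s))
```

In a real system these labels would come from models learned over tracked time series of head and gaze measurements; the rule-based version above only makes the distinction between head-pose and eye-gaze cues explicit.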
Different versions of the original document can be found at:
DOIs: 10.1109/cvpr.2009.5204215, 10.1109/cvprw.2009.5204215
Published: 01/01/2009
Volume: 2009 (2009)
License: CC BY-NC-SA