== Abstract ==
Head gesture detection and analysis is a vital part of looking inside a vehicle when designing intelligent driver assistance systems. In this paper, we present a simpler and constrained version of the Optical flow based Head Movement and Gesture Analyzer (OHMeGA) and evaluate it on a dataset relevant to the automotive environment. OHMeGA is user-independent, robust to occlusions from eyewear, large spatial head turns, and varying lighting conditions, simple to implement and set up, real-time, and accurate. The intuition behind OHMeGA is that it segments head gestures into head-motion states and no-head-motion states. This segmentation allows higher-level semantic information, such as fixation time and rate of head motion, to be readily obtained. Performance evaluation of this approach is conducted under two settings: a controlled in-laboratory experiment and an uncontrolled on-road experiment. Results show an average of 97.4% accuracy in motion states for the in-laboratory experiment and an average of 86% overall accuracy in the on-road experiment.
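The abstract's core idea, segmenting a head-gesture sequence into motion and no-motion states by thresholding optical-flow magnitudes, and then deriving fixation time from the no-motion frames, can be illustrated with a minimal sketch. The threshold value, frame rate, and per-frame flow magnitudes below are illustrative assumptions, not the paper's actual OHMeGA parameters.

```python
# Hypothetical sketch of motion-state segmentation, assuming per-frame
# optical-flow magnitudes are already available (e.g. from a dense
# optical-flow estimator). Threshold and fps values are illustrative.

def segment_states(flow_magnitudes, threshold=0.5):
    """Label each frame MOTION or NO_MOTION by thresholding its
    optical-flow magnitude (pixels/frame)."""
    return ["MOTION" if m > threshold else "NO_MOTION"
            for m in flow_magnitudes]

def fixation_time(states, fps=30.0):
    """Total time in seconds spent in NO_MOTION (fixation) states."""
    return states.count("NO_MOTION") / fps

# Example with made-up per-frame flow magnitudes:
mags = [0.1, 0.2, 1.4, 1.6, 1.2, 0.3, 0.1, 0.2]
states = segment_states(mags)
print(states)
print(fixation_time(states))
```

Once frames are labeled this way, higher-level quantities such as rate of head motion follow directly, e.g. by counting transitions from NO_MOTION to MOTION per unit time.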
The different versions of the original document can be found in:
Published on 01/01/2012
Volume 2012, 2012
DOI: 10.1109/itsc.2012.6338909
Licence: CC BY-NC-SA