
Abstract

Head gesture detection and analysis is a vital part of looking inside a vehicle when designing intelligent driver assistance systems. In this paper, we present a simpler, constrained version of the Optical flow based Head Movement and Gesture Analyzer (OHMeGA) and evaluate it on a dataset relevant to the automotive environment. OHMeGA is user-independent; robust to occlusions from eyewear, large spatial head turns, and varying lighting conditions; simple to implement and set up; real-time; and accurate. The intuition behind OHMeGA is that it segments head gestures into head-motion states and no-head-motion states. This segmentation allows higher-level semantic information, such as fixation time and rate of head motion, to be readily obtained. Performance evaluation of this approach is conducted under two settings: a controlled in-laboratory experiment and an uncontrolled on-road experiment. Results show an average of 97.4% accuracy in motion states for the in-laboratory experiment and an average of 86% accuracy overall in the on-road experiment.
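The segmentation idea described above can be illustrated with a minimal sketch. This is not the authors' OHMeGA implementation: the per-frame flow magnitudes, the threshold value, and the frame rate below are illustrative assumptions. The sketch labels each frame as a motion or no-motion state by thresholding an optical-flow magnitude signal, then derives fixation times (durations of no-motion runs), mirroring the higher-level semantics the abstract mentions.

```python
# Hedged sketch of motion-state segmentation, NOT the paper's exact method.
# Input: one optical-flow magnitude per video frame (assumed precomputed).

def segment_states(flow_mags, threshold=1.0):
    """Label each frame 1 (head motion) or 0 (no motion).
    The threshold of 1.0 is an illustrative assumption."""
    return [1 if m > threshold else 0 for m in flow_mags]

def fixation_times(states, fps=30.0):
    """Durations in seconds of consecutive no-motion runs."""
    runs, run = [], 0
    for s in states:
        if s == 0:
            run += 1
        elif run:
            runs.append(run / fps)
            run = 0
    if run:
        runs.append(run / fps)
    return runs

# Synthetic example: still, then a head turn, then still again.
mags = [0.2, 0.1, 0.3, 2.5, 3.0, 2.8, 0.2, 0.1, 0.2, 0.1]
states = segment_states(mags)
print(states)                          # → [0, 0, 0, 1, 1, 1, 0, 0, 0, 0]
print(fixation_times(states, fps=30))  # two fixations: 3 and 4 frames
```

In practice the per-frame magnitudes would come from a dense optical-flow estimate over the face region; the thresholding step is what makes higher-level quantities such as fixation time and rate of head motion straightforward to read off the state sequence.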


Original document

The different versions of the original document can be found in:

http://yadda.icm.edu.pl/yadda/element/bwmeta1.element.ieee-000006338909,
http://ieeexplore.ieee.org/document/6338909,
https://dblp.uni-trier.de/db/conf/itsc/itsc2012.html#Martin0TKT12,
https://academic.microsoft.com/#/detail/1979642245,
http://dx.doi.org/10.1109/itsc.2012.6338909

Document information

Published on 01/01/2012

Volume 2012, 2012
DOI: 10.1109/itsc.2012.6338909
Licence: CC BY-NC-SA
