Cameras can enable surround vehicle analysis and provide intuitive visualization for driver support in critical situations. Since salient objects can appear anywhere around the vehicle, a wide view of the surround has many benefits for studying driver behavior in a holistic manner, as well as for developing effective driver assistance systems. In this work, we are concerned with the hardware setup and calibration needed to provide such stitched views. In particular, we handle views with large translation differences, many moving objects, and large changes in brightness. We show the qualitative effects of the calibration scene and camera orientation on common stitching algorithms. Finally, we analyze the stitched view for salient objects to detect critical events.
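To make the stitching step concrete, the sketch below shows how a multi-camera surround view could be composed with OpenCV's high-level Stitcher API, which bundles feature matching, homography estimation, exposure compensation, and seam blending. The library choice, file names, and mode flag are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical sketch: stitching frames from several vehicle-mounted cameras
# into one surround view. Paths and parameters are placeholders.
import cv2


def stitch_surround_views(image_paths):
    images = [cv2.imread(p) for p in image_paths]
    if any(img is None for img in images):
        raise FileNotFoundError("One or more camera frames could not be read")

    # PANORAMA mode assumes mostly rotational differences between views;
    # large translations between cameras and strong brightness changes are
    # exactly the conditions the abstract identifies as challenging.
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"Stitching failed with status code {status}")
    return panorama


if __name__ == "__main__":
    pano = stitch_surround_views(["left.png", "front.png", "right.png"])
    cv2.imwrite("surround_view.png", pano)
```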
Published on 01/01/2014
DOI: 10.1109/itsc.2013.6728296
Licence: CC BY-NC-SA