Modern vehicles are equipped with multiple cameras that are already used in various practical applications. Advanced driver assistance systems (ADAS) are of particular interest because of the safety and comfort features they offer to the driver. Camera-based scene understanding is an important scientific problem that must be addressed in order to provide the information such systems need. While front-facing cameras are widely used, there are applications where cameras observing the lateral space deliver better results. Fisheye cameras mounted in the side mirrors are particularly interesting because they cover a large area beside the vehicle and support several applications for which traditional front-facing cameras are not suitable. We present a general method for scene understanding based on 3D reconstruction of the environment around the vehicle and pixel-wise image labeling with a conditional random field (CRF). Our method creates a simple 3D model of the scene and provides semantic labels for the objects and areas in the image, such as cars, sidewalks, and buildings. We demonstrate our method on two applications of high importance for driver assistance systems: car detection and free space estimation. We show that our system runs in real time at vehicle speeds of up to 63 km/h.
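For context, pixel-wise labeling CRFs of the kind referred to above are commonly formulated as an energy minimization over a joint label assignment x for all pixels. The sketch below uses generic unary and pairwise potentials \(\psi_u\) and \(\psi_p\) and a pixel neighborhood \(\mathcal{N}\); it is a standard formulation, not necessarily the exact model used in the paper:

\[ E(\mathbf{x}) = \sum_{i} \psi_u(x_i) + \sum_{(i,j) \in \mathcal{N}} \psi_p(x_i, x_j) \]

Here the unary potentials score how well each semantic class (e.g. car, sidewalk, building) fits the image appearance and reconstructed 3D geometry at pixel i, while the pairwise potentials encourage neighboring pixels with similar appearance to share a label. The final labeling is the assignment that minimizes E, typically found with graph cuts or message-passing inference.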
Published: 2012
DOI: 10.1109/ivs.2012.6232237
Licence: CC BY-NC-SA