Vehicle detection is an important task in advanced driver assistance systems (ADAS), and both LiDAR and cameras are commonly used for it. LiDAR provides excellent range information but is limited in its ability to identify objects; the camera, on the other hand, allows for better object recognition but lacks high-resolution range information. This paper presents a sensor-fusion-based vehicle detection approach that combines information from LiDAR and cameras. The proposed approach consists of two components: a hypothesis generation phase that proposes positions potentially corresponding to vehicles, and a hypothesis verification phase that classifies the corresponding objects. Hypothesis generation is performed using the stereo camera, while verification is performed using the LiDAR. The main contribution is that the complementary advantages of the two sensors are exploited for vehicle detection. The proposed approach yields enhanced detection performance while maintaining a tolerable false alarm rate compared to vision-based classifiers. Experimental results suggest performance broadly comparable to the current state of the art, with a reduced false alarm rate.
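The abstract describes a two-stage pipeline: the stereo camera proposes candidate vehicle positions, and the LiDAR confirms or rejects them. The paper does not publish code, so the sketch below is only an illustrative outline of that structure; every class, function name, and threshold is a hypothetical stand-in rather than the authors' implementation.

```python
# Hedged sketch of a stereo-proposes / LiDAR-verifies pipeline.
# All names and numeric thresholds are assumptions for illustration only.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Hypothesis:
    """A candidate vehicle region proposed from the stereo camera (assumed ego frame)."""
    x: float       # lateral position in metres
    z: float       # longitudinal range in metres
    width: float   # estimated object width in metres
    height: float  # estimated object height in metres


def generate_hypotheses(disparity_map) -> List[Hypothesis]:
    """Hypothesis generation (stereo): cluster disparities into candidate regions.

    A real implementation would segment the disparity map and back-project
    clusters to 3-D boxes; here a fixed candidate stands in so the pipeline runs.
    """
    return [Hypothesis(x=1.2, z=18.0, width=1.8, height=1.5)]


def verify_hypothesis(hyp: Hypothesis, lidar_points: List[Tuple[float, float]]) -> bool:
    """Hypothesis verification (LiDAR): keep a candidate only if enough LiDAR
    returns fall near its proposed box (a simple stand-in for the classifier)."""
    inside = [p for p in lidar_points
              if abs(p[0] - hyp.x) < hyp.width / 2 and abs(p[1] - hyp.z) < 2.0]
    return len(inside) >= 5  # hypothetical minimum-support threshold


def detect_vehicles(disparity_map, lidar_points) -> List[Hypothesis]:
    """Fuse the two stages: stereo proposes candidates, LiDAR confirms them."""
    return [h for h in generate_hypotheses(disparity_map)
            if verify_hypothesis(h, lidar_points)]


if __name__ == "__main__":
    fake_lidar = [(1.2 + 0.05 * i, 18.0 + 0.2 * i) for i in range(10)]
    print(detect_vehicles(disparity_map=None, lidar_points=fake_lidar))
```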
The different versions of the original document can be found in:
Published on 01/01/2014
Volume 2014, 2014
DOI: 10.1109/itsc.2014.6957925
Licence: CC BY-NC-SA