Abstract

Modern driver-assistance systems rely on a wide range of sensors (RADAR, LIDAR, ultrasound and cameras) for scene understanding and prediction. These sensors are typically used for detecting the traffic participants and scene elements required for navigation. In this paper we argue that relying on camera-based systems, specifically an Around View Monitoring (AVM) system, has great potential to achieve these goals in both parking and driving modes at decreased cost. The contributions of this paper are as follows: we present a new end-to-end solution for delimiting the safe drivable area in each frame by identifying the closest obstacle in each direction from the driving vehicle; we use this approach to calculate the distance to the nearest obstacles; and we incorporate it into a unified end-to-end architecture capable of joint object detection, curb detection and safe drivable area detection. Furthermore, we describe a family of networks covering both a high-accuracy solution and a low-complexity solution. We also introduce a further augmentation of the base architecture with 3D object detection.
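The paper itself contains no algorithm listing on this page, but the core idea — delimiting the safe drivable area by the closest obstacle in each direction from the vehicle — can be illustrated with a minimal ray-casting sketch over a top-down occupancy grid, such as one derived from AVM imagery. All names, parameters and the grid representation below are our own illustrative assumptions, not the authors' method.

```python
import math

def nearest_obstacle_distances(grid, ego, n_dirs=8, cell_size=0.1, max_range=5.0):
    """Cast one ray per direction from the ego cell over a top-down
    occupancy grid (1 = obstacle, 0 = free) and return the distance in
    metres to the first occupied cell along each ray. A ray that leaves
    the grid or exhausts max_range reports max_range ("no obstacle")."""
    rows, cols = len(grid), len(grid[0])
    ego_r, ego_c = ego
    distances = []
    for k in range(n_dirs):
        theta = 2.0 * math.pi * k / n_dirs   # direction angle, evenly spaced
        dist = cell_size
        hit = max_range
        while dist <= max_range:
            # Convert the metric distance along the ray to grid indices.
            r = int(round(ego_r + (dist / cell_size) * math.sin(theta)))
            c = int(round(ego_c + (dist / cell_size) * math.cos(theta)))
            if not (0 <= r < rows and 0 <= c < cols):
                break                        # ray left the grid: no obstacle seen
            if grid[r][c] == 1:
                hit = dist                   # first occupied cell along this ray
                break
            dist += cell_size
        distances.append(hit)
    return distances
```

For example, with a 0.1 m cell size and an obstacle three cells to the right of the ego position, the ray at angle 0 reports roughly 0.3 m while obstacle-free directions saturate at `max_range`. The per-direction distances trace out a polygon around the vehicle, which is one simple way to picture a "safe drivable area" delimited by the nearest obstacles.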

Comment: Accepted by CVPR 2018 Workshop on Autonomous Driving


Original document

The different versions of the original document can be found in:

http://dx.doi.org/10.1109/cvprw.2018.00142
https://openaccess.thecvf.com/content_cvpr_2018_workshops/papers/w14/Baek_Scene_Understanding_Networks_CVPR_2018_paper.pdf
https://arxiv.org/abs/1805.07029
https://arxiv.org/pdf/1805.07029.pdf
https://ui.adsabs.harvard.edu/abs/2018arXiv180507029B/abstract
http://openaccess.thecvf.com/content_cvpr_2018_workshops/w14/html/Baek_Scene_Understanding_Networks_CVPR_2018_paper.html,
https://academic.microsoft.com/#/detail/2964232175

Document information

Published on 01/01/2018

Volume 2018, 2018
DOI: 10.1109/cvprw.2018.00142
Licence: CC BY-NC-SA
