
Abstract

We propose DOPS, a fast single-stage 3D object detection method for LIDAR data. Previous methods often make domain-specific design decisions, for example projecting points into a bird's-eye-view image in autonomous driving scenarios. In contrast, we propose a general-purpose method that works on both indoor and outdoor scenes. The core novelty of our method is a fast, single-pass architecture that both detects objects in 3D and estimates their shapes. 3D bounding box parameters are estimated in one pass for every point, aggregated through graph convolutions, and fed into a branch of the network that predicts latent codes representing the shape of each detected object. The latent shape space and shape decoder are learned on a synthetic dataset and then used as supervision for the end-to-end training of the 3D object detection pipeline. Thus our model is able to extract shapes without access to ground-truth shape information in the target dataset. In experiments, our method outperforms the previous state of the art by ~5% on object detection in ScanNet scenes and achieves top results on the Waymo Open Dataset by a 3.4% margin, while reproducing the shapes of detected cars.

Comment: To appear in CVPR 2020
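
For intuition, the sketch below mirrors the single-pass structure described in the abstract: per-point 3D bounding-box regression, feature aggregation with a graph convolution over a k-nearest-neighbour graph, and a branch that outputs a latent shape code per point. This is not the authors' implementation; the layer sizes, the k-NN graph construction, the 7-parameter box encoding, and the choice of PyTorch are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of a single-pass detector that
# regresses a 3D box per point, aggregates features with a graph convolution
# over a k-NN graph, and predicts a per-point latent shape code.
import torch
import torch.nn as nn


def knn_graph(xyz: torch.Tensor, k: int) -> torch.Tensor:
    """Return the indices of the k nearest neighbours of each point, shape (N, k)."""
    dist = torch.cdist(xyz, xyz)                             # (N, N) pairwise distances
    return dist.topk(k + 1, largest=False).indices[:, 1:]    # drop the point itself


class GraphConv(nn.Module):
    """Mean-aggregation graph convolution over a fixed k-NN neighbourhood (assumed form)."""
    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(2 * dim, dim)

    def forward(self, feats: torch.Tensor, nbr_idx: torch.Tensor) -> torch.Tensor:
        nbr_feats = feats[nbr_idx].mean(dim=1)               # (N, dim) neighbour average
        return torch.relu(self.linear(torch.cat([feats, nbr_feats], dim=-1)))


class DOPSSketch(nn.Module):
    """Single pass: point features -> per-point boxes -> graph conv -> shape codes."""
    def __init__(self, feat_dim: int = 64, latent_dim: int = 32, k: int = 16):
        super().__init__()
        self.k = k
        # Stand-in point encoder; the paper's backbone is more elaborate.
        self.encoder = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU(),
                                     nn.Linear(feat_dim, feat_dim), nn.ReLU())
        # Per-point box head: 3 (center offset) + 3 (size) + 1 (heading) = 7 values (assumed encoding).
        self.box_head = nn.Linear(feat_dim, 7)
        self.graph_conv = GraphConv(feat_dim)
        # Branch predicting a latent shape code per point; in the paper these codes
        # are decoded by a shape decoder learned on a synthetic dataset (omitted here).
        self.shape_head = nn.Linear(feat_dim, latent_dim)

    def forward(self, xyz: torch.Tensor):
        feats = self.encoder(xyz)                 # (N, feat_dim) per-point features
        boxes = self.box_head(feats)              # (N, 7) per-point box parameters
        nbr_idx = knn_graph(xyz, self.k)          # (N, k) neighbour indices
        agg = self.graph_conv(feats, nbr_idx)     # aggregated per-point features
        shape_codes = self.shape_head(agg)        # (N, latent_dim) latent shape codes
        return boxes, shape_codes


if __name__ == "__main__":
    points = torch.randn(1024, 3)                 # toy stand-in for a LIDAR point cloud
    boxes, codes = DOPSSketch()(points)
    print(boxes.shape, codes.shape)               # torch.Size([1024, 7]) torch.Size([1024, 32])
```

The shape decoder and the aggregation of per-point boxes into final detections are left out; the sketch only illustrates how box regression and shape-code prediction can share one forward pass.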


Original document

The different versions of the original document can be found in:

http://dx.doi.org/10.1109/cvpr42600.2020.01193
https://openaccess.thecvf.com/content_CVPR_2020/papers/Najibi_DOPS_Learning_to_Detect_3D_Objects_and_Predict_Their_3D_CVPR_2020_paper.pdf
https://openaccess.thecvf.com/content_CVPR_2020/html/Najibi_DOPS_Learning_to_Detect_3D_Objects_and_Predict_Their_3D_CVPR_2020_paper.html
https://arxiv.org/abs/2004.01170
https://arxiv.org/pdf/2004.01170
http://www.arxiv-vanity.com/papers/2004.01170
https://doi.org/10.1109/CVPR42600.2020.01193
https://academic.microsoft.com/#/detail/3035709245

Document information

Published on 01/01/2020

Volume 2020, 2020
DOI: 10.1109/cvpr42600.2020.01193
Licence: CC BY-NC-SA
