
Abstract

© 2020 Informa UK Limited, trading as Taylor & Francis Group. Markerless motion capture would permit the study of human biomechanics in environments where marker-based systems are impractical, e.g. outdoors or underwater. The visual hull tool may enable such data to be recorded, but it requires the accurate detection of the silhouette of the object in multiple camera views. This paper reviews the top-performing algorithms available to date for silhouette extraction, with the visual hull in mind as the downstream application; the rationale is that higher-quality silhouettes would lead to higher-quality visual hulls, and consequently better measurement of movement. This paper is the first attempt in the literature to compare silhouette extraction algorithms that belong to different fields of Computer Vision, namely background subtraction, semantic segmentation, and multi-view segmentation. It was found that several algorithms exist that would be substantial improvements over the silhouette extraction algorithms traditionally used in visual hull pipelines. In particular, FgSegNet v2 (a background subtraction algorithm), DeepLabv3+ JFT (a semantic segmentation algorithm), and Djelouah 2013 (a multi-view segmentation algorithm) are the most accurate and promising methods for the extraction of silhouettes from 2D images to date, and could seamlessly be integrated within a visual hull pipeline for studies of human movement or biomechanics.
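For context, the visual hull referred to above is commonly obtained by intersecting the back-projections of the silhouettes seen from several calibrated cameras. The sketch below illustrates this idea with a simple voxel-carving procedure; it is not the pipeline used in the paper, and the function name, array shapes, and pinhole projection model are illustrative assumptions only.

```python
# Minimal sketch of visual hull reconstruction by voxel carving,
# assuming binary silhouette masks and 3x4 camera projection matrices
# are already available (names and shapes are illustrative, not from the paper).
import numpy as np

def voxel_carve(silhouettes, projections, grid_points):
    """Keep the voxel centres whose projection falls inside every silhouette.

    silhouettes : list of (H, W) boolean arrays, one per camera view
    projections : list of (3, 4) camera projection matrices
    grid_points : (N, 3) array of candidate voxel centres in world space
    """
    inside = np.ones(len(grid_points), dtype=bool)
    homogeneous = np.hstack([grid_points, np.ones((len(grid_points), 1))])
    for mask, P in zip(silhouettes, projections):
        # Project voxel centres into this view (pinhole camera model).
        uvw = homogeneous @ P.T
        u = uvw[:, 0] / uvw[:, 2]
        v = uvw[:, 1] / uvw[:, 2]
        h, w = mask.shape
        # Points projecting outside the image, or outside the silhouette,
        # cannot belong to the visual hull.
        in_image = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        in_silhouette = np.zeros_like(inside)
        in_silhouette[in_image] = mask[v[in_image].astype(int),
                                       u[in_image].astype(int)]
        inside &= in_silhouette
    return grid_points[inside]
```

Whatever the carving details, the intersection can only be as good as the input masks, which is the paper's motivation for comparing silhouette extraction algorithms across background subtraction, semantic segmentation, and multi-view segmentation.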


Original document

The different versions of the original document can be found in:

http://shura.shu.ac.uk/26853,
https://academic.microsoft.com/#/detail/3042928882 under the license http://www.rioxx.net/licenses/under-embargo-all-rights-reserved
http://dx.doi.org/10.1080/21681163.2020.1790040

Document information

Published on 01/01/2020

Volume 2020, 2020
DOI: 10.1080/21681163.2020.1790040
Licence: Other
