== Abstract ==
In order to make machines perceive their external environment coherently, multiple sources of sensory information derived from several different modalities can be used (e.g., cameras, LIDAR, stereo vision, RGB-D sensors, and radar). All these different sources of information can be efficiently merged to form a robust perception of the environment. Some of the mechanisms that underlie this merging of sensor information are highlighted in this chapter, showing that, depending on the type of information, different combination and integration strategies can be used, and that prior knowledge is often required to interpret the sensory signals efficiently. The notion that perception involves Bayesian inference is an increasingly popular position taken by a considerable number of researchers. Bayesian models have provided insights into many perceptual phenomena, showing that they are a valid approach for dealing with real-world uncertainties and for robust classification, including classification in time-dependent problems. This chapter addresses the use of Bayesian networks applied to sensory perception in the following areas: mobile robotics, autonomous driving systems, advanced driver assistance systems, sensor fusion for object detection, and EEG-based mental state classification.
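To make the fusion idea concrete, the sketch below shows a Bayes update that combines two conditionally independent detections (say, camera and LIDAR) into a single posterior for an "object present" hypothesis. It illustrates the general technique the abstract describes, not the authors' implementation, and all probability values are hypothetical placeholders.

```python
# A minimal sketch of Bayesian fusion for a binary "object present"
# hypothesis, assuming two conditionally independent sensors.
# The likelihood numbers below are illustrative, not from the chapter.

def fuse_detections(prior, lik_cam, lik_lidar):
    """Posterior P(object | z_cam, z_lidar) under a naive Bayes model.

    prior     -- P(object) before observing either sensor
    lik_cam   -- (P(z_cam | object), P(z_cam | no object))
    lik_lidar -- (P(z_lidar | object), P(z_lidar | no object))
    """
    # Unnormalised posterior mass for each hypothesis.
    p_obj = prior * lik_cam[0] * lik_lidar[0]
    p_no = (1.0 - prior) * lik_cam[1] * lik_lidar[1]
    # Normalise so the two hypotheses sum to one.
    return p_obj / (p_obj + p_no)

# Example: a weak prior (10%) combined with two sensors that both
# favour the object hypothesis lifts the posterior to about 0.51.
print(fuse_detections(prior=0.1, lik_cam=(0.8, 0.2), lik_lidar=(0.7, 0.3)))
```

The prior term here plays the role of the "prior knowledge" the abstract mentions: with a weak prior, even two agreeing sensors only just tip the posterior past 0.5, whereas a stronger prior would let the same evidence yield a confident detection.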
Document type: Book chapter
The different versions of the original document can be found at:
Published on 01/01/2018
Volume 2018 (2018)
DOI: 10.5772/intechopen.81111
Licence: CC BY-NC-SA