== Abstract ==
Given the wide use of machine learning approaches based on opaque prediction models, understanding the reasons behind the decisions of black box decision systems is nowadays a crucial topic. We address the problem of providing meaningful explanations for the widely applied task of image classification. In particular, we explore the impact of changing the neighborhood generation function of a local interpretable model-agnostic explainer by proposing four different variants. All the proposed methods rely on a grid-based segmentation of the image, but each adopts a different strategy for generating the neighborhood of the image for which an explanation is required. Extensive experiments highlight both the strengths and the weaknesses of each proposed approach.
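To make the pipeline described above concrete, the sketch below shows one plausible way a LIME-style local surrogate with grid-based segmentation might be assembled. It is a minimal illustration, not the paper's implementation: the function names, the batch `predict_proba(images) -> (n_samples, n_classes)` interface, the ridge surrogate, and the similarity weighting are all assumptions, and the uniform-random mask sampling shown here is only a baseline. The paper's four variants differ precisely in how that neighborhood is generated.

```python
import numpy as np
from sklearn.linear_model import Ridge

def grid_segments(image, rows=4, cols=4):
    """Assign every pixel to one of rows*cols grid cells (segment ids)."""
    h, w = image.shape[:2]
    row_idx = np.minimum(np.arange(h) * rows // h, rows - 1)
    col_idx = np.minimum(np.arange(w) * cols // w, cols - 1)
    return row_idx[:, None] * cols + col_idx[None, :]

def perturb(image, segments, mask, background=0.0):
    """Keep the grid cells where mask == 1; blank the rest to a neutral value."""
    out = image.copy()
    kept_cells = np.flatnonzero(mask)
    out[~np.isin(segments, kept_cells)] = background
    return out

def explain(image, predict_proba, target_class, rows=4, cols=4,
            n_samples=500, rng=None):
    """Fit a local linear surrogate over on/off perturbations of grid cells.

    Assumption: predict_proba takes a batch of images and returns an
    (n_samples, n_classes) array of class probabilities.
    """
    rng = np.random.default_rng(rng)
    segments = grid_segments(image, rows, cols)
    n_cells = rows * cols
    # Baseline neighborhood: each cell independently kept or blanked
    # with probability 1/2 (the step the paper's variants replace).
    masks = rng.integers(0, 2, size=(n_samples, n_cells))
    masks[0, :] = 1  # include the unperturbed image itself
    neighborhood = np.stack([perturb(image, segments, m) for m in masks])
    preds = predict_proba(neighborhood)[:, target_class]
    # Crude locality weighting: fraction of cells kept (a simplification
    # of the distance kernels used in LIME-style explainers).
    weights = masks.mean(axis=1)
    surrogate = Ridge(alpha=1.0).fit(masks, preds, sample_weight=weights)
    return surrogate.coef_.reshape(rows, cols)  # per-cell importance map
```

Under this framing, the design question the abstract raises lives in a single line: swapping the uniform `rng.integers` draw for a different sampling strategy changes which part of the black box's local behavior the surrogate model is fitted to, and hence the explanation it produces.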
== Document information ==

The different versions of the original document can be found via the DOI below.

Published on 31/12/18
Accepted on 31/12/18
Submitted on 31/12/18

Volume 2019, 2019
DOI: 10.1007/978-3-030-16148-4_5
Licence: Other