
Abstract

We present a sparse representation of model uncertainty for Deep Neural Networks (DNNs) in which the parameter posterior is approximated with an inverse formulation of the Multivariate Normal Distribution (MND), also known as the information form. The key insight of our work is that the information matrix, i.e., the inverse of the covariance matrix, tends to be sparse in its spectrum. Therefore, dimensionality reduction techniques such as low-rank approximations (LRA) can be effectively exploited. To achieve this, we develop a novel sparsification algorithm and derive a cost-effective analytical sampler. As a result, we show that the information form can be scalably applied to represent model uncertainty in DNNs. Our exhaustive theoretical analysis and empirical evaluations on various benchmarks show the competitiveness of our approach against current methods.
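In the information form, the posterior N(mu, Lambda^{-1}) is parameterised by its precision (information) matrix Lambda rather than by the covariance; when Lambda is approximated as a diagonal plus a low-rank term, samples can be drawn without ever forming the dense covariance. The following NumPy sketch illustrates this general idea under the assumption Lambda ≈ diag(d_diag) + U U^T; the function name, variable names, and the particular square-root construction are illustrative assumptions and are not taken from the paper.

import numpy as np

def sample_information_form(mu, U, d_diag, n_samples=1, rng=None):
    """Sample from N(mu, Lambda^{-1}) with Lambda = diag(d_diag) + U @ U.T.

    Writing Lambda = D^{1/2} (I + B B^T) D^{1/2} with B = D^{-1/2} U, a thin SVD
    B = P diag(s) Q^T gives the factor
        A = D^{-1/2} (I + P diag(1/sqrt(1 + s^2) - 1) P^T),
    which satisfies A A^T = Lambda^{-1}. Each sample then costs O(d * k)
    instead of the O(d^3) of a dense Cholesky factorisation of Lambda.
    """
    rng = np.random.default_rng() if rng is None else rng
    d, k = U.shape
    d_inv_sqrt = 1.0 / np.sqrt(d_diag)               # D^{-1/2} as a vector
    B = d_inv_sqrt[:, None] * U                       # B = D^{-1/2} U, shape (d, k)
    P, s, _ = np.linalg.svd(B, full_matrices=False)   # thin SVD, P has shape (d, k)
    scale = 1.0 / np.sqrt(1.0 + s**2) - 1.0           # rescaling on the span of P

    eps = rng.standard_normal((n_samples, d))         # standard normal noise
    corr = eps + ((eps @ P) * scale) @ P.T            # apply (I + P diag(scale) P^T)
    return mu + corr * d_inv_sqrt                     # apply D^{-1/2} and shift by mu

# Toy usage: a 5-parameter "posterior" whose information matrix has a rank-2 term.
rng = np.random.default_rng(0)
mu = np.zeros(5)
U = rng.standard_normal((5, 2))
d_diag = np.full(5, 2.0)
samples = sample_information_form(mu, U, d_diag, n_samples=200_000, rng=rng)
# The empirical covariance of `samples` approaches inv(diag(d_diag) + U @ U.T).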

Comment: Accepted to the Thirty-seventh International Conference on Machine Learning (ICML) 2020



Document information

Published on 01/01/2020

Volume 2020, 2020
Licence: CC BY-NC-SA
