Gradient conjugate priors and multi-layer neural networks

The paper deals with learning probability distributions of observed data with artificial neural networks. We suggest a so-called gradient conjugate prior (GCP) update appropriate for neural networks, which is a modification of the classical Bayesian update for conjugate priors. We establish a connection between the GCP update and the maximization of the log-likelihood of the predictive distribution. Unlike Bayesian neural networks, our approach uses deterministic network weights; instead, we assume that the ground-truth distribution is normal with unknown mean and variance and let the networks learn the parameters of a prior (a normal-gamma distribution) for this unknown mean and variance. The parameters are updated with a gradient that, at each step, points towards minimizing the Kullback–Leibler divergence from the prior to the posterior distribution (both being normal-gamma). We obtain a corresponding dynamical system for the prior's parameters and analyze its properties. In particular, we study the limiting behavior of all the prior's parameters and show how it differs from that of the classical full Bayesian update. The results are validated on synthetic and real-world data sets. © 2019 Elsevier B.V.
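
A minimal sketch of the training objective implied by the connection above, assuming PyTorch: a network outputs the four normal-gamma parameters (m, nu, alpha, beta) for each input, and the loss is the negative log-likelihood of the corresponding Student's t predictive distribution (2*alpha degrees of freedom, location m, scale^2 = beta*(nu+1)/(alpha*nu)). The class name GCPHead, the layer sizes, and the optimizer settings are illustrative assumptions, not the authors' implementation of the GCP update itself.

    import math
    import torch
    import torch.nn as nn

    class GCPHead(nn.Module):
        """Maps inputs to normal-gamma parameters (m, nu, alpha, beta)."""
        def __init__(self, in_dim, hidden=64):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            self.out = nn.Linear(hidden, 4)
            self.softplus = nn.Softplus()

        def forward(self, x):
            h = self.out(self.body(x))
            m = h[:, 0]                            # prior mean of the unknown mean
            nu = self.softplus(h[:, 1])            # > 0
            alpha = 1.0 + self.softplus(h[:, 2])   # > 1, so the predictive variance is finite
            beta = self.softplus(h[:, 3])          # > 0
            return m, nu, alpha, beta

    def student_t_nll(y, m, nu, alpha, beta):
        """Negative log-likelihood of the normal-gamma predictive distribution:
        Student's t with df = 2*alpha, location m, scale^2 = beta*(nu+1)/(alpha*nu)."""
        df = 2.0 * alpha
        scale2 = beta * (nu + 1.0) / (alpha * nu)
        z2 = (y - m) ** 2 / scale2
        log_pdf = (torch.lgamma((df + 1.0) / 2.0) - torch.lgamma(df / 2.0)
                   - 0.5 * torch.log(math.pi * df * scale2)
                   - (df + 1.0) / 2.0 * torch.log1p(z2 / df))
        return -log_pdf.mean()

    # Usage: one gradient step on synthetic regression data.
    model = GCPHead(in_dim=1)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(128, 1)
    y = 2.0 * x[:, 0] + 0.1 * torch.randn(128)
    loss = student_t_nll(y, *model(x))
    opt.zero_grad()
    loss.backward()
    opt.step()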

Authors
Gurevich P. 1, 2, Stuke H. 1
Publisher
Elsevier B.V.
Language
English
Status
Published
Number
103184
Volume
278
Year
2020
Organizations
  • 1 Free University of Berlin, Arnimallee 3, Berlin, 14195, Germany
  • 2 RUDN University, Miklukho-Maklaya 6, Moscow, 117198, Russian Federation
Keywords
Asymptotics; Conjugate priors; Deep neural networks; Kullback–Leibler divergence; Latent variables; Outliers; Regression; Student's t-distribution; Uncertainty quantification
Date created
10.02.2020
Date modified
10.02.2020
Permanent link
https://repository.rudn.ru/ru/records/article/record/56568/