By Håvard Rue
INAMAT2 is organising a research seminar, to be held on 30 May.
Venue: Conference Room, Jerónimo de Ayanz building, UPNA
Abstract: In this talk I will discuss recent methodological progress in developing INLA and its R package, R-INLA, for the future. A core idea is the use of the variational form of Bayes' theorem due to Zellner (1988). This result frames variational inference schemes methodologically within approximate Bayesian inference. I will discuss how we can use this result to do a low-rank mean correction within the R-INLA framework (with amazing results). Hopefully, I will also show some new results for covariance corrections using the same ideas. The R-INLA package has also improved its parallel performance, and I will discuss the parallelisation strategies using OpenMP that we have applied. This includes a new algorithm for improving numerical gradients and a parallel line-search algorithm in the BFGS optimisation. In the last part, I will introduce the Bayesian learning rule (BLR), which is constructed from the same variational form of Bayes' theorem. The BLR unifies many machine-learning algorithms from fields such as optimisation, deep learning, and graphical models, including ridge regression, Newton's method, and the Kalman filter, as well as modern deep-learning algorithms such as stochastic gradient descent, RMSprop, and Dropout.
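As background for the abstract (my summary of the standard result, not material from the talk itself): Zellner's (1988) variational form expresses Bayes' theorem as the solution of an optimisation problem, which is what places variational inference inside exact Bayesian inference.

```latex
% Zellner (1988): Bayes' theorem as an optimisation problem.
% The exact posterior p(\theta \mid y) is the unique minimiser of
\[
q^{*}(\theta)
  = \operatorname*{arg\,min}_{q}
    \Big\{ \, \mathbb{E}_{q}\big[-\log p(y \mid \theta)\big]
           + \mathrm{KL}\big(q(\theta)\,\big\|\,p(\theta)\big) \Big\},
\qquad
q^{*}(\theta) = p(\theta \mid y).
\]
% Restricting q to a tractable family (e.g. Gaussians) instead of
% letting it range over all densities recovers standard variational
% inference as an approximation to this exact result.
```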
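To illustrate the kind of unification the Bayesian learning rule provides (a toy sketch of my own, not code from the talk or from R-INLA): if the candidate posterior is a Gaussian with fixed variance, the BLR's update of the mean reduces to stochastic gradient descent on the expected loss.

```python
import random

def grad_expected_loss(m, s, ell_grad, n_samples=200):
    # Monte Carlo estimate of d/dm E_q[ell(theta)] for theta ~ N(m, s^2);
    # via the reparameterisation theta = m + s * eps this equals E[ell'(theta)].
    return sum(ell_grad(m + s * random.gauss(0, 1))
               for _ in range(n_samples)) / n_samples

def blr_fixed_variance(m0, s, ell_grad, lr=0.1, steps=200):
    # With q = N(m, s^2) and s held fixed, the BLR update of the
    # natural parameter for the mean collapses to an SGD-like step.
    m = m0
    for _ in range(steps):
        m -= lr * grad_expected_loss(m, s, ell_grad)
    return m

# Toy quadratic loss ell(theta) = (theta - 3)^2, gradient 2 * (theta - 3).
random.seed(0)
m_hat = blr_fixed_variance(m0=0.0, s=0.1, ell_grad=lambda t: 2 * (t - 3.0))
print(round(m_hat, 2))  # converges near the minimiser theta = 3
```

The point of the sketch is only that classical optimisers drop out of the single variational update as special cases; recovering Newton's method, the Kalman filter, RMSprop, or Dropout requires richer choices of the candidate family and of how its natural parameters are updated.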