Thursday, February 5, 2015

New Perspectives in Markov Chain Monte Carlo

In my last post I mentioned that I had been finding out about SQMC (Sequential Quasi Monte Carlo), a new method for sequential problems such as time-series modelling.  I would like to revise some of what I said.  I implied that SQMC would be useful for Hidden Markov Models.  This is not entirely accurate: for Hidden Markov Models with a finite set of hidden states there are alternatives that are much more computationally efficient, such as the Viterbi algorithm for finding the most likely sequence of hidden states.  The real benefit of SQMC (and its main competitor SMC) is that it applies to a much wider class of models, such as stochastic volatility models with continuous state spaces.
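For concreteness, here is a minimal sketch of the Viterbi algorithm for a discrete HMM.  The interface and variable names are my own, not from any particular library:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden state sequence for a discrete HMM.

    obs : observation indices, length T
    pi  : initial state probabilities, shape (K,)
    A   : transition matrix, A[i, j] = P(state j | state i)
    B   : emission matrix, B[i, k] = P(obs k | state i)
    """
    T, K = len(obs), len(pi)
    logd = np.empty((T, K))            # log-prob of the best path ending in each state
    back = np.empty((T, K), dtype=int) # back-pointers for reconstructing the path
    logd[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = logd[t - 1][:, None] + np.log(A)  # scores[i, j]: come from i, go to j
        back[t] = scores.argmax(axis=0)
        logd[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = np.empty(T, dtype=int)      # backtrack from the best final state
    path[-1] = logd[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path
```

The dynamic-programming recursion costs O(TK²), which is why exact methods like this beat Monte Carlo whenever the state space is small enough to enumerate.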

Since my last post I have been further expanding my toolbox of methods for statistical inference.  The aim is to widen the class of models that I know how to do inference for, with a view to doing statistical inference for realistic models of neural (brain) activity.

I have been concentrating my efforts on the following two areas.

Particle MCMC

This is an elegant approach that combines a particle filter (SMC) with Markov Chain Monte Carlo for parameter estimation.  The main applications in the original paper (Andrieu et al. 2010) are to state space models (such as the stochastic volatility model).  However the method has since been applied to Markov Random Fields (e.g. the Ising model and random graph models) and to stochastic models of chemical kinetics.
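The simplest variant in the paper is particle marginal Metropolis-Hastings: a particle filter produces an unbiased estimate of the likelihood, and that estimate is plugged into an ordinary Metropolis-Hastings acceptance ratio.  Below is a sketch of the idea; the three model-specific callbacks (init, step, logdens), the proposal scale and all names are placeholders of mine, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_loglik(y, theta, init, step, logdens, N=200):
    """Bootstrap particle filter estimate of log p(y | theta).

    init(N, theta)         : draw N particles from the initial state distribution
    step(x, theta)         : propagate particles one step through the state equation
    logdens(y_t, x, theta) : log observation density of y_t for each particle
    """
    x, ll = init(N, theta), 0.0
    for t, y_t in enumerate(y):
        if t > 0:
            x = step(x, theta)                     # propagate to time t
        logw = logdens(y_t, x, theta)              # weight by the observation density
        w = np.exp(logw - logw.max())
        ll += logw.max() + np.log(w.mean())        # running log-likelihood estimate
        x = x[rng.choice(N, size=N, p=w / w.sum())]  # multinomial resampling
    return ll

def pmmh(y, theta0, logprior, init, step, logdens, n_iter=5000, scale=0.1):
    """Particle marginal Metropolis-Hastings with a Gaussian random-walk proposal."""
    theta, ll = theta0, particle_loglik(y, theta0, init, step, logdens)
    chain = []
    for _ in range(n_iter):
        prop = theta + scale * rng.standard_normal(np.shape(theta))
        ll_prop = particle_loglik(y, prop, init, step, logdens)
        # The unbiased likelihood estimate stands in for the exact likelihood,
        # yet the chain still targets the exact posterior (Andrieu et al. 2010).
        if np.log(rng.uniform()) < (ll_prop + logprior(prop)) - (ll + logprior(theta)):
            theta, ll = prop, ll_prop
        chain.append(theta)
    return np.array(chain)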

The exciting thing about Particle MCMC is that parameter estimation for nonlinear stochastic models is much more computationally efficient than it is with standard MCMC methods.

More generally it is also exciting to be working in a field where major breakthroughs are still being made.

MCMC Methods for Functions

Particle MCMC is a solution for cases where the processes in the model are too complex for standard MCMC to be applied.

The work on MCMC methods for functions is a solution for cases where the dimension of the problem is very high.  The paper I have been looking at (Cotter et al. 2013) presents applications to nonparametric density estimation, inverse problems in fluid mechanics, and diffusion processes.  In all these cases we want to infer something about a function.  Some functions are low-dimensional (a straight line has only two parameters), but the functions considered in the paper require a large number of parameters to be approximated accurately, so the statistical inference problem is high-dimensional.

The methods in the paper appear to be much more computationally efficient than standard MCMC methods for high-dimensional problems, and it looks like they are quite easy to implement.
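The core algorithm, the pCN (preconditioned Crank-Nicolson) random walk, really is only a few lines.  Here is a minimal sketch under my own interface assumptions (the function names, the prior-sampling callback and the default step size are mine; the proposal and acceptance rule are from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def pcn(phi, sample_prior, u0, beta=0.2, n_iter=10_000):
    """Preconditioned Crank-Nicolson MCMC for a Gaussian prior N(0, C).

    phi          : negative log-likelihood Phi(u) of the data given u
    sample_prior : callback drawing one sample from the prior N(0, C)
    beta         : step size in (0, 1]
    """
    u, phi_u = u0, phi(u0)
    chain = []
    for _ in range(n_iter):
        # pCN proposal: an autoregression that leaves the prior invariant
        v = np.sqrt(1.0 - beta**2) * u + beta * sample_prior()
        phi_v = phi(v)
        # The acceptance ratio involves only the likelihood, so it does not
        # degrade as the discretisation of u is refined
        if np.log(rng.uniform()) < phi_u - phi_v:
            u, phi_u = v, phi_v
        chain.append(u.copy())
    return np.array(chain)
```

For a function u discretised on a grid, sample_prior would return a draw from the discretised Gaussian prior (for example via a truncated Karhunen-Loève expansion).  The dimension-independence of the acceptance rate is what makes the method attractive for these problems.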

However the methodology does not apply to as wide a range of models as Particle MCMC.

As far as I know there are not currently any methods that work well for models that are both high-dimensional and complex.  This makes it difficult to use Bayesian inference for weather prediction, where high-dimensional complex models are needed to make accurate forecasts.  This is an active area of research at Reading (which has a very strong meteorology department), and I will be interested to see what progress is made in the coming years.

I haven't got very far into the neuroscience modelling part of my project yet.  However I have a feeling I may face a problem similar to the meteorologists': needing to use a high-dimensional complex model.

I will be continuing to develop my interest and knowledge in these areas by going to a summer school in Valladolid this summer (New Perspectives in Markov Chain Monte Carlo, June 8-12).
