Here are a few loosely connected things that I have been working on or thinking about over the past month or two...

__Gaussian Processes__

James Robert Lloyd, Zoubin Ghahramani and others in his group have developed a really neat tool called the Automatic Statistician. The basic idea is that you feed the Automatic Statistician some data, it estimates a model that fits the data well, and it produces an automatically generated report describing what it has done. The thing I find most interesting about it is how flexible the model is: the Automatic Statistician can identify linear trends, periodicity, and other patterns in the data. It is the machinery of Gaussian Processes that makes this flexibility possible. The 'Kernel Cookbook' page (from David Duvenaud, a former member of Zoubin Ghahramani's group) gives some information about how to construct a simple Gaussian Process model.
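As a rough illustration of the kind of kernel composition the Kernel Cookbook describes, here is a small sketch using scikit-learn (this is my own toy example with made-up data, not anything from the Automatic Statistician itself): summing a linear kernel with a periodic one lets a single Gaussian Process capture a trend and an oscillation at the same time.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import DotProduct, ExpSineSquared, WhiteKernel

# Synthetic data: a linear trend plus a periodic component plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 4, 120)[:, None]
y = 0.5 * x.ravel() + np.sin(2 * np.pi * x.ravel()) + 0.1 * rng.standard_normal(120)

# Composite kernel: DotProduct (linear) captures the trend,
# ExpSineSquared (periodic) captures the oscillation, and
# WhiteKernel absorbs the observation noise.
kernel = DotProduct() + ExpSineSquared(length_scale=1.0, periodicity=1.0) + WhiteKernel(0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(x, y)
mean = gp.predict(x)
```

Adding kernels corresponds to adding independent functions, which is why the fitted mean can decompose into trend plus periodic parts.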

I have had a go at fitting a Gaussian Process model to some bike-sharing data in order to forecast demand in bike-sharing schemes. This data is available here. I found it more difficult than I expected to find an appropriate Gaussian Process to model the daily pattern of usage. The key problem I faced was that most of the simplest kernels assume that the process you are trying to model is stationary. However, there is a lot of non-stationarity in the daily pattern of bike-sharing demand: there is much more variability between 7 and 9am (the rush-hour peak) than there is between, say, 9 and 11pm.
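The problem can be seen in a small sketch (again my own synthetic data, standing in for the real bike-sharing series): fit a stationary periodic kernel to a daily cycle whose noise is much larger around the morning peak, and the posterior uncertainty comes out nearly uniform across the day, because a stationary kernel cannot express time-of-day-dependent variability.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ExpSineSquared, WhiteKernel

# Synthetic 'daily demand' over three days (time in days), with much
# noisier observations in a short 'rush-hour' window each day.
rng = np.random.default_rng(1)
t = np.linspace(0, 3, 216)[:, None]
hour = t.ravel() % 1.0
noise = np.where((hour > 0.29) & (hour < 0.38), 0.4, 0.05)
y = np.sin(2 * np.pi * hour) + noise * rng.standard_normal(216)

# A stationary periodic kernel plus homogeneous white noise.
kernel = ExpSineSquared(length_scale=1.0, periodicity=1.0) + WhiteKernel(0.05)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)
_, std = gp.predict(t, return_std=True)

# std is (near-)flat over the day: the model has averaged the rush-hour
# burst into a single noise level instead of localising it.
```

Handling this properly needs something input-dependent, e.g. a heteroscedastic noise model or a non-stationary kernel.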

All this makes me keen to find out more about recent developments in Gaussian Processes, particularly for non-stationary processes. From what I have seen so far, modelling them looks a lot more challenging. It will be interesting to see whether the Automatic Statistician can model non-stationary processes.

__Mean Field Models for brain activity__

Brain activity can be modelled at a variety of temporal and spatial scales, from milliseconds to minutes and from single neurons to whole brain regions.

Mean Field Models describe the activity of populations of neurons, and can be used to model the evolution of a field of neural activity within a particular brain region. They are therefore sometimes called mesoscopic models (somewhere in the middle of the range of scales). This means they can be more biophysically realistic than models that only describe interactions between whole brain regions, while it remains possible to do inference by combining them with human brain imaging data such as EEG.
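To show the general shape of such models, here is a minimal sketch of the classic Wilson-Cowan equations, which are far simpler than the Liley-Bojak model I discuss below but share the basic structure: coupled ODEs for the mean activity of an excitatory (E) and an inhibitory (I) neural population. All parameter values here are illustrative, not taken from any fitted model.

```python
import numpy as np

def sigmoid(x):
    """Population firing-rate response to net input."""
    return 1.0 / (1.0 + np.exp(-x))

def step(E, I, P, dt=0.05, tau=1.0,
         w_ee=12.0, w_ei=4.0, w_ie=13.0, w_ii=11.0):
    """One Euler step of simplified Wilson-Cowan dynamics.
    P is the external (e.g. extracortical) input; the w_* weights
    set the strength of the E/I population couplings."""
    dE = (-E + sigmoid(w_ee * E - w_ei * I + P)) / tau
    dI = (-I + sigmoid(w_ie * E - w_ii * I)) / tau
    return E + dt * dE, I + dt * dI

# Simulate the mean activities from a low initial state.
E, I = 0.1, 0.1
trace = []
for _ in range(2000):
    E, I = step(E, I, P=1.25)
    trace.append(E)
```

Because the activities are bounded in (0, 1) by the sigmoid, the simulation stays well behaved; the interesting (and hard) part, as with the anaesthesia model, is inferring the inputs and weights from data rather than running the forward model.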

I am looking at a Mean Field Model developed by David Liley and Ingo Bojak that models the effect of anaesthesia on brain activity. This is proving quite challenging because there are so many unknowns: extracortical input, parameter values, initial conditions.

__Broader Neuroscience reading__

Some of the world's leading neuroscientists have written a collection of essays called The Future of the Brain. Many of the essays describe 'Big Science' multi-million-dollar research projects. There is a lot of focus on molecular biology experiments that probe the properties of single neurons, and on building large-scale microscopic models of brain activity that predict macroscopic effects.

As someone with a preference for simplicity in modelling, I am interested in the extent to which mesoscopic (i.e. simpler) models can approximate microscopic (i.e. more complex) models. As far as I know this is an open question, and one that could be answered by some of the large-scale neuroscience research projects currently underway.