Faculty of Education, University of Auckland and Department of Mathematics and Statistics, University of New Brunswick, Canada
University of Waikato, Room I.1.09 (I block, 1st floor)
Hidden Markov models form a popular and versatile means of handling serial dependence in data. Ever since these models were introduced by Baum and his co-workers around 1970, the method of choice for fitting them has been the EM (expectation/maximization) algorithm, because the likelihood of a hidden Markov model is awkward to maximize directly.
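To give a flavour of why the likelihood is awkward: it is a sum over all possible hidden state paths, which the forward algorithm evaluates recursively. A minimal NumPy sketch (the function name and all parameter values below are invented for illustration, not taken from the talk):

```python
import numpy as np

def hmm_loglik(obs, delta, Gamma, P):
    """Log-likelihood of a discrete-output HMM via the scaled forward algorithm.

    obs   : observed symbols (integers)
    delta : initial state distribution, shape (m,)
    Gamma : transition probability matrix, shape (m, m)
    P     : emission probabilities, P[i, x] = Pr(obs = x | state = i)
    """
    phi = delta * P[:, obs[0]]          # forward vector at time 1
    c = phi.sum()                       # scale factor (avoids underflow)
    loglik = np.log(c)
    phi /= c
    for x in obs[1:]:
        phi = (phi @ Gamma) * P[:, x]   # one forward recursion step
        c = phi.sum()
        loglik += np.log(c)
        phi /= c
    return loglik
```

The running cost is linear in the length of the series, whereas naive summation over hidden paths is exponential.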
Recently, however, a couple of authors have noticed that it is in fact possible to calculate the Hessian of the log-likelihood of a hidden Markov model, which suggests that one might simply maximize the likelihood by applying Newton's method.
I have implemented the calculations in R and tested the idea on two fairly complicated examples. Newton's method turns out to be insufficiently stable. However, the Levenberg-Marquardt algorithm (which essentially interpolates between Newton's method and the method of steepest ascent) works like a charm, achieving a seven-fold speed-up over the EM algorithm.
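The interpolation idea can be sketched generically: solve (-H + λI)d = g for the step d, shrinking the damping λ after a successful step (towards Newton) and inflating it after a failed one (towards a short steepest-ascent step). This is a minimal sketch of that damping scheme, not the talk's actual R implementation; the quadratic test objective is invented for illustration:

```python
import numpy as np

def lm_maximise(f, grad, hess, x0, lam=1.0, tol=1e-8, max_iter=200):
    """Levenberg-Marquardt-style damped Newton ascent of f.

    Small lam: near-Newton step.  Large lam: d ~ g/lam, steepest ascent.
    """
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:     # gradient small: converged
            break
        H = hess(x)
        d = np.linalg.solve(-H + lam * np.eye(len(x)), g)
        x_new = x + d
        f_new = f(x_new)
        if f_new > fx:                  # step improved f: accept,
            x, fx = x_new, f_new        # and move towards Newton
            lam = max(lam / 10.0, 1e-12)
        else:                           # step failed: keep x,
            lam *= 10.0                 # move towards steepest ascent
    return x
```

Because -H + λI can always be made positive definite by inflating λ, every iteration eventually produces an ascent step, which is what rescues the method where raw Newton is unstable.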
This talk will be aimed at non-specialists, so I will explain a bit about hidden Markov models, the EM algorithm, Levenberg-Marquardt, the two complicated models that I have fitted using this technique, and possibly Life, the Universe and Everything.