Hidden Markov Model

Posted by GwanSiu on December 7, 2018

1. Introduction to Hidden Markov Model

In the previous post, naive Bayes and the Gaussian mixture model were discussed. In naive Bayes, we assume all observed data come from a single latent distribution. In contrast, the Gaussian mixture model (GMM) assumes the observed data come from several latent distributions, with each data sample belonging to exactly one of them. In this article, the hidden Markov model (HMM) is discussed, in which the observed data also come from several latent distributions, but data samples with the same value may belong to different latent distributions at different times. We can consider the underlying states in GMM to be static, whereas the underlying states in HMM are dynamic. Figure 1 shows the difference between them.

In fact, HMM is considered a generative model as well as a sequential model. Thus, the formulation of HMM is a joint distribution over the observed data and the latent variables:

$$p(X,Y\vert \lambda)=p(y_{1}\vert \pi)\prod_{t=2}^{T}p(y_{t}\vert y_{t-1})\prod_{t=1}^{T}p(x_{t}\vert y_{t})=\pi_{y_{1}}\prod_{t=2}^{T}a_{y_{t-1}y_{t}}\prod_{t=1}^{T}b_{y_{t}}(x_{t})$$

where $X$ is the observed data and $Y$ denotes the underlying states. $A$ is the transition matrix, $B$ is the emission matrix, and $\pi$ is the initial state distribution. The parameters of the HMM are $\lambda=(A,B,\pi)$.
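As a concrete illustration of this generative view, the following minimal sketch samples a state sequence and an observation sequence from a small discrete HMM; the parameter values and array names are hypothetical and chosen only for illustration.

```python
import numpy as np

# Hypothetical parameters lambda = (A, B, pi) for a 2-state, 3-symbol HMM.
pi = np.array([0.6, 0.4])          # initial state distribution
A = np.array([[0.7, 0.3],          # A[i, j] = p(y_{t+1}=j | y_t=i)
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],     # B[i, k] = p(x_t=v_k | y_t=i)
              [0.1, 0.3, 0.6]])

def sample_hmm(pi, A, B, T, rng=np.random.default_rng(0)):
    """Draw (states, observations) of length T from p(X, Y | lambda)."""
    K = len(pi)
    states, obs = [], []
    y = rng.choice(K, p=pi)                          # y_1 ~ pi
    for _ in range(T):
        states.append(y)
        obs.append(rng.choice(B.shape[1], p=B[y]))   # x_t ~ b_y(.)
        y = rng.choice(K, p=A[y])                    # y_{t+1} ~ a_{y, .}
    return np.array(states), np.array(obs)

states, obs = sample_hmm(pi, A, B, T=10)
```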

2. Three Basic Problems of Hidden Markov Model

  1. Evaluation Problem
    • What is the probability that a particular observation sequence is produced by a particular model? (the forward-backward algorithm)
  2. Decoding Problem
    • Given an observation sequence and a model, what is the most likely sequence of latent states that produced it? (the Viterbi algorithm)
  3. Training Problem
    • Given a model structure and a set of observation sequences, find the model that best fits the data. (MLE, Viterbi training, the forward-backward algorithm)

3. Forward-Backward Algorithm (Baum-Welch Algorithm)

In the forward procedure, we define

$$\alpha_{i}(t)=p(x_{1},\dots,x_{t},y_{t}=i\vert \lambda)$$

which is the probability of seeing the partial sequence $x_{1},\dots,x_{t}$ and ending up in state $i$ at time $t$. The recursion is

  1. $\alpha_{i}(1)=\pi_{i}b_{i}(x_{1})$
  2. $\alpha_{j}(t+1)=[\sum_{i=1}^{K}\alpha_{i}(t)a_{ij}]b_{j}(x_{t+1})$
  3. $p(X\vert \lambda)=\sum_{i=1}^{K}\alpha_{i}(T)$
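Here is a minimal sketch of the forward recursion, assuming discrete emissions, numpy arrays `pi`, `A`, `B` (with `A[i, j]` $=a_{ij}$ and `B[i, k]` $=b_{i}(v_{k})$), and an integer-coded observation sequence `obs`; in practice the probabilities should be scaled or kept in log space to avoid underflow.

```python
import numpy as np

def forward(pi, A, B, obs):
    """alpha[t, i] = p(x_1..x_{t+1}, y_{t+1}=i | lambda), with 0-based t."""
    T, K = len(obs), len(pi)
    alpha = np.zeros((T, K))
    alpha[0] = pi * B[:, obs[0]]                       # alpha_i(1) = pi_i b_i(x_1)
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]   # [sum_i alpha_i(t) a_ij] b_j(x_{t+1})
    return alpha, alpha[-1].sum()                      # p(X | lambda) = sum_i alpha_i(T)
```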

The backward procedure is defined similarly:

$$\beta_{i}(t)=p(x_{t+1},\dots,x_{T}\vert y_{t}=i,\lambda)$$

which is the probability of the ending partial sequence $x_{t+1},\dots,x_{T}$ given that we are in state $i$ at time $t$. The recursion is

  1. $\beta_{i}(T)=1$
  2. $\beta_{i}(t)=\sum_{j=1}^{K}a_{ij}b_{j}(x_{t+1})\beta_{j}(t+1)$
  3. $p(X\vert \lambda)=\sum_{i=1}^{K}\pi_{i}b_{i}(x_{1})\beta_{i}(1)$
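The backward recursion admits an analogous sketch under the same (assumed) conventions, again without scaling:

```python
import numpy as np

def backward(pi, A, B, obs):
    """beta[t, i] = p(x_{t+2}..x_T | y_{t+1}=i, lambda), with 0-based t."""
    T, K = len(obs), len(pi)
    beta = np.zeros((T, K))
    beta[-1] = 1.0                                        # beta_i(T) = 1
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])    # sum_j a_ij b_j(x_{t+1}) beta_j(t+1)
    return beta, (pi * B[:, obs[0]] * beta[0]).sum()      # p(X | lambda)
```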

Now, we define

$$\gamma_{i}(t)=p(y_{t}=i\vert X,\lambda)$$

which is the probability of being in state $i$ at time $t$ given the observation sequence $X$. Note that

$$\alpha_{i}(t)\beta_{i}(t)=p(x_{1},\dots,x_{t},y_{t}=i\vert \lambda)\,p(x_{t+1},\dots,x_{T}\vert y_{t}=i,\lambda)=p(X,y_{t}=i\vert \lambda)$$

due to the Markov conditional independence. Thus, we can rewrite the formulation of $\gamma_{i}(t)$ as

$$\gamma_{i}(t)=\frac{\alpha_{i}(t)\beta_{i}(t)}{p(X\vert \lambda)}=\frac{\alpha_{i}(t)\beta_{i}(t)}{\sum_{j=1}^{K}\alpha_{j}(t)\beta_{j}(t)}$$

Now, we define

$$\xi_{ij}(t)=p(y_{t}=i,y_{t+1}=j\vert X,\lambda)$$

which is the probability of being in state $i$ at time $t$ and in state $j$ at time $t+1$. The formulation of $\xi_{ij}(t)$ can be rewritten as

$$\xi_{ij}(t)=\frac{\alpha_{i}(t)a_{ij}b_{j}(x_{t+1})\beta_{j}(t+1)}{p(X\vert \lambda)}$$

or

$$\xi_{ij}(t)=\frac{\alpha_{i}(t)a_{ij}b_{j}(x_{t+1})\beta_{j}(t+1)}{\sum_{i=1}^{K}\sum_{j=1}^{K}\alpha_{i}(t)a_{ij}b_{j}(x_{t+1})\beta_{j}(t+1)}$$

Note that

$$\gamma_{i}(t)=E[I_{t}(i)],\qquad \xi_{ij}(t)=E[I_{t}(i,j)]$$

where $I_{t}(i)$ is an indicator random variable that is 1 when we are in state $i$ at time $t$, and $I_{t}(i,j)$ is a random variable that is 1 when we move from state $i$ to state $j$ after time $t$.
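Given the `alpha` and `beta` arrays produced by the two sketches above, $\gamma$ and $\xi$ reduce to elementwise products and a normalization; a sketch under the same assumed conventions:

```python
import numpy as np

def posteriors(alpha, beta, A, B, obs):
    """gamma[t, i] = p(y_t=i | X), xi[t, i, j] = p(y_t=i, y_{t+1}=j | X)."""
    px = alpha[-1].sum()                       # p(X | lambda)
    gamma = alpha * beta / px
    T, K = alpha.shape
    xi = np.zeros((T - 1, K, K))
    for t in range(T - 1):
        # xi[t, i, j] = alpha_i(t) a_ij b_j(x_{t+1}) beta_j(t+1) / p(X | lambda)
        xi[t] = (alpha[t][:, None] * A
                 * B[:, obs[t + 1]][None, :] * beta[t + 1][None, :]) / px
    return gamma, xi
```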

4. Viterbi Algorithm

The Viterbi algorithm computes the most probable sequence of latent states given an observed sequence, i.e., it computes

$$Y^{*}=\arg\max_{Y}p(Y\vert X,\lambda)$$

Assume the sequence has $T$ time steps and the latent variable can take $K$ values at each step. Enumerating all $K^{T}$ state sequences is intractable, but if we define

$$\omega_{t}(k)=\max_{y_{1},\dots,y_{t-1}}p(x_{1},\dots,x_{t},y_{1},\dots,y_{t-1},y_{t}=k\vert \lambda)$$

the maximization decomposes into the following recursion:

  1. Base: $\omega_{0}(\text{START})=1$
  2. Recursion: $\omega_{t}(k)=b_{k}(x_{t})\max_{s}\omega_{t-1}(s)a_{sk}$

The most probable state sequence is then recovered by recording the maximizing previous state $s$ at each step and backtracking from $\arg\max_{k}\omega_{T}(k)$, as in the sketch below.
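A sketch of the Viterbi recursion under the same assumed conventions, working in log space to avoid underflow and keeping backpointers for the backtracking pass (zero-probability entries would need guarding in practice):

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Return the most probable state sequence argmax_Y p(Y | X, lambda)."""
    T, K = len(obs), len(pi)
    logw = np.full((T, K), -np.inf)            # logw[t, k] = log omega_t(k)
    back = np.zeros((T, K), dtype=int)         # backpointers to the maximizing state s
    logw[0] = np.log(pi) + np.log(B[:, obs[0]])          # base case
    for t in range(1, T):
        scores = logw[t - 1][:, None] + np.log(A)        # scores[s, k]
        back[t] = scores.argmax(axis=0)
        logw[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [logw[-1].argmax()]
    for t in range(T - 1, 0, -1):              # follow the pointers backwards
        path.append(back[t][path[-1]])
    return path[::-1]
```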

5. EM Algorithm

In this section, we adopt the EM algorithm to estimate new parameters for the HMM given the old parameters and the data. Expected relative frequencies are used to update the parameters.

We define

$$\hat{\pi}_{i}=\gamma_{i}(1)$$

which is the expected relative frequency of being in state $i$ at time 1, and

$$\hat{a}_{ij}=\frac{\sum_{t=1}^{T-1}\xi_{ij}(t)}{\sum_{t=1}^{T-1}\gamma_{i}(t)}$$

which is the expected number of transitions from state $i$ to state $j$ relative to the expected total number of transitions away from state $i$.

For a discrete emission distribution, we have

$$\hat{b}_{i}(v_{k})=\frac{\sum_{t=1}^{T}\gamma_{i}(t)\,\mathbb{1}[x_{t}=v_{k}]}{\sum_{t=1}^{T}\gamma_{i}(t)}$$

which is the expected number of times the output observation has been equal to $v_{k}$ while in state $i$ relative to the expected total number of times in state $i$.
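Putting the pieces together, one EM (Baum-Welch) iteration for discrete emissions re-estimates $\pi$, $A$, and $B$ as these expected relative frequencies; the sketch below assumes the hypothetical `forward`, `backward`, and `posteriors` helpers from the earlier snippets.

```python
import numpy as np

def baum_welch_step(pi, A, B, obs):
    """One EM update of (pi, A, B) from a single observation sequence."""
    obs = np.asarray(obs)
    alpha, _ = forward(pi, A, B, obs)
    beta, _ = backward(pi, A, B, obs)
    gamma, xi = posteriors(alpha, beta, A, B, obs)

    new_pi = gamma[0]                                         # hat{pi}_i = gamma_i(1)
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]  # expected transitions / visits
    new_B = np.zeros_like(B)
    for k in range(B.shape[1]):                               # hat{b}_i(v_k)
        new_B[:, k] = gamma[obs == k].sum(axis=0)
    new_B /= gamma.sum(axis=0)[:, None]
    return new_pi, new_A, new_B
```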

For Gaussian mixture emissions, we define the probability that the $l$-th component of the $i$-th state's mixture generated observation $x_{t}$ as

$$\gamma_{il}(t)=p(y_{t}=i,x_{it}=l\vert X,\lambda)=\gamma_{i}(t)\frac{c_{il}\,\mathcal{N}(x_{t};\mu_{il},\Sigma_{il})}{\sum_{m=1}^{M}c_{im}\,\mathcal{N}(x_{t};\mu_{im},\Sigma_{im})}$$

where $x_{it}$ is a random variable indicating the mixture component at time $t$ for state $i$, $c_{il}$ are the mixture weights, and $\mathcal{N}(\cdot;\mu_{il},\Sigma_{il})$ is the $l$-th Gaussian component of state $i$.

The update equations for this case are

$$\hat{c}_{il}=\frac{\sum_{t=1}^{T}\gamma_{il}(t)}{\sum_{t=1}^{T}\gamma_{i}(t)},\qquad \hat{\mu}_{il}=\frac{\sum_{t=1}^{T}\gamma_{il}(t)\,x_{t}}{\sum_{t=1}^{T}\gamma_{il}(t)},\qquad \hat{\Sigma}_{il}=\frac{\sum_{t=1}^{T}\gamma_{il}(t)(x_{t}-\hat{\mu}_{il})(x_{t}-\hat{\mu}_{il})^{\top}}{\sum_{t=1}^{T}\gamma_{il}(t)}$$

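As a rough sketch of these Gaussian-mixture updates for a single state $i$ (the helper name, the use of `scipy.stats.multivariate_normal`, and the array conventions are assumptions for illustration):

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_emission_update(gamma, c, mu, Sigma, X):
    """Re-estimate mixture weights, means, and covariances for one state i.

    gamma : (T,) state posteriors gamma_i(t)
    c     : (M,) mixture weights c_il;  mu : (M, d) means;  Sigma : list of M (d, d) covariances
    X     : (T, d) observation sequence
    """
    M = len(c)
    # Weighted component densities c_il * N(x_t; mu_il, Sigma_il), shape (T, M)
    dens = np.stack([c[l] * multivariate_normal.pdf(X, mu[l], Sigma[l])
                     for l in range(M)], axis=1)
    # gamma_il(t): posterior of (state i, component l) at time t
    gamma_il = gamma[:, None] * dens / dens.sum(axis=1, keepdims=True)

    new_c = gamma_il.sum(axis=0) / gamma.sum()
    new_mu = (gamma_il.T @ X) / gamma_il.sum(axis=0)[:, None]
    new_Sigma = [((gamma_il[:, l, None] * (X - new_mu[l])).T @ (X - new_mu[l]))
                 / gamma_il[:, l].sum()
                 for l in range(M)]
    return new_c, new_mu, new_Sigma
```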
When there are $E$ observation sequences, the $e$-th being of length $T_{e}$, the update equations are obtained by additionally summing the numerators and denominators above over all $E$ sequences; for example,

$$\hat{a}_{ij}=\frac{\sum_{e=1}^{E}\sum_{t=1}^{T_{e}-1}\xi_{ij}^{(e)}(t)}{\sum_{e=1}^{E}\sum_{t=1}^{T_{e}-1}\gamma_{i}^{(e)}(t)}$$

6. Conditional Random Field