Discrete Markov Chains

Introduction

Suppose we have a system that occupies one of a set of states at each point in time. We want to know the probability that the system moves to a given state at the next time step, given the state it is in now. In a discrete Markov chain, this probability depends only on the current state, not on the earlier history.

Transition matrices

As these problems can get quite complex, we collect the probabilities into a matrix $P$, where each entry $P_{i,j}$ is the probability of a transition from state $i$ to state $j$.

$$P_{i,j} = \Bbb{P}(\text{system in state } j \text{ at time } n+1 \,|\, \text{system in state } i \text{ at time } n)$$

Row $i$ of the matrix contains the probabilities of all the possible transitions out of state $i$. This means that each row sums to 1.
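To make this concrete, here is a minimal Python sketch (the two-state sunny/rainy weather chain and its numbers are hypothetical, chosen purely for illustration) of a valid transition matrix:

```python
import numpy as np

# Hypothetical two-state weather chain: state 0 = sunny, state 1 = rainy.
# P[i, j] is the probability of moving from state i to state j.
P = np.array([
    [0.9, 0.1],  # from sunny: stay sunny 90%, turn rainy 10%
    [0.5, 0.5],  # from rainy: turn sunny 50%, stay rainy 50%
])

# Each row lists every transition out of one state, so it must sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)
```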

Probability vectors

At any given time $n$, we can write a probability vector $\pi^{(n)}$:

$$\pi^{(n)} = (\pi_1^{(n)}, \dots, \pi_N^{(n)})$$

where each $\pi_i^{(n)}$ is the probability of the system being in state $i$ at time $n$.

To move forward one step in time and obtain the next vector, we multiply the current vector by the transition matrix:

$$\pi^{(n+1)} = \pi^{(n)} P$$
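As a sketch of this update, again assuming the hypothetical two-state matrix from before:

```python
import numpy as np

# Same hypothetical two-state matrix as above.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Start with certainty in state 0 (sunny): pi^(0) = (1, 0).
pi_0 = np.array([1.0, 0.0])

# One step forward in time: pi^(1) = pi^(0) P.
pi_1 = pi_0 @ P
print(pi_1)  # [0.9 0.1]
```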

Applying this step repeatedly, advancing $m$ steps from time $n$ amounts to multiplying by $P$ a total of $m$ times, i.e. by $P^m$. In particular, starting from the initial vector $\pi^{(0)}$, we can compute the distribution at any time:

$$\pi^{(n+m)} = \pi^{(n)} P^m$$

$$\pi^{(m)} = \pi^{(0)} P^m$$
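In code this is just a matrix power; a minimal sketch using NumPy's matrix_power, still assuming the same hypothetical chain:

```python
import numpy as np

# Same hypothetical two-state matrix and starting vector as above.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi_0 = np.array([1.0, 0.0])

# Distribution after m = 10 steps: pi^(m) = pi^(0) P^m.
m = 10
pi_m = pi_0 @ np.linalg.matrix_power(P, m)
print(pi_m)  # ~[0.833 0.167]
```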