Discrete Markov Chains
Introduction
Suppose we have a system that occupies one of a set of states at each time step. We want to know the probability of the system transitioning to a given next state, given only the state it was in at the previous time step; this probability depends on that most recent state alone, not on the full history (the Markov property).
Transition matrices
As these problems can get quite complex, we collect the probabilities into a transition matrix $P$, where each entry $P_{ij}$ is the probability of a transition from state $i$ to state $j$.
Row $i$ of the matrix holds the probabilities of all the possible transitions out of state $i$. This means that each row sums to 1.
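To make this concrete, here is a minimal sketch in Python with NumPy: a hypothetical 3-state transition matrix (the probabilities are arbitrary, chosen only for illustration), together with a check that every row sums to 1.

```python
import numpy as np

# Hypothetical 3-state chain; the numbers are arbitrary.
# P[i, j] is the probability of moving from state i to state j
# in a single time step.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.0, 0.5, 0.5]])

# Each row is a probability distribution over the possible next
# states, so every row must sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)
```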
Probability vectors
At any given time $t$, we can write a probability vector $\mathbf{x}_t$:

$$\mathbf{x}_t = \begin{pmatrix} x_1 & x_2 & \cdots & x_n \end{pmatrix}$$

where each $x_i$ is the probability of the system being in state $i$ at time $t$.
To transition forward in time and obtain the next vector, multiply the current vector by the transition matrix:

$$\mathbf{x}_{t+1} = \mathbf{x}_t P$$
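As a small sketch of this update (again with NumPy and the same made-up matrix), assuming the chain starts in the first state with certainty:

```python
import numpy as np

# Same hypothetical 3-state chain as above.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.0, 0.5, 0.5]])

# x_t[i] is the probability of being in state i at time t.
# Here we assume the chain starts in state 0 with certainty.
x_t = np.array([1.0, 0.0, 0.0])

# One step forward: multiply the (row) vector by the matrix.
x_next = x_t @ P
print(x_next)  # [0.7 0.2 0.1]
```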
Since each vector is obtained from the one before it, we can expand this definition recursively all the way back to the start, and generalise it to calculate the vector at any time $t$ directly from the initial vector:

$$\mathbf{x}_t = \mathbf{x}_0 P^t$$
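A sketch of this generalised form, using NumPy's matrix power to jump $t$ steps forward in one go (same assumed matrix and starting state as before):

```python
import numpy as np

P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.0, 0.5, 0.5]])
x_0 = np.array([1.0, 0.0, 0.0])

# x_t = x_0 P^t: raise the matrix to the t-th power, then
# multiply by the initial vector.
t = 10
x_t = x_0 @ np.linalg.matrix_power(P, t)
print(x_t)
```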