Math 311

Class 14

Long run proportions

Let \(i\) be a positive recurrent state.

Define \(\displaystyle I_n = \begin{cases} 1 & \text{ if } X_n = i\\ 0 & \text{ otherwise} \end{cases}\)

Define \(\displaystyle M_{in} = \operatorname{E}\left(\sum_{k=1}^n I_k\right)\), the expected number of visits to \(i\) during the first \(n\) steps.

Define \(\displaystyle\pi_i = \lim_{n\to\infty} \frac{M_{in}}{n}\), the long run proportion in state \(i\).
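One way to get a feel for this definition is to simulate a chain and watch \(M_{in}/n\) settle down. A minimal numpy sketch, using a made-up two-state chain (the matrix `P` below is purely illustrative, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical two-state chain, just for illustration.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

n_steps = 100_000
state = 0
visits_to_0 = 0
for n in range(1, n_steps + 1):
    state = rng.choice(2, p=P[state])
    visits_to_0 += (state == 0)

# Proportion of the first n steps spent in state 0; for large n this
# approaches the long run proportion pi_0.
print(visits_to_0 / n_steps)
```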

Suppose \(X\) is an irreducible Markov chain. Then

\[\begin{aligned} \pi_j &= \sum_i \pi_i P_{ij} \text{ for every state } j\\ \sum_i \pi_i &= 1 \end{aligned}\]

If this system has no solution, then \(X\) is either transient or null recurrent, and \(\pi_j = 0\) for every \(j\).

System to solve

\[ \begin{bmatrix} \mathstrut \\ \mathbf{P}^T - I \\ \mathstrut\\ 1\quad 1\quad \cdots \quad 1 \end{bmatrix} \begin{bmatrix} \pi_0 \\ \vdots \\ \pi_n \end{bmatrix} = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} \]
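In practice this overdetermined system can be handed to a least-squares solver (or the normalization row can replace one of the redundant balance equations). A sketch with numpy, reusing the illustrative two-state chain from the simulation above:

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
m = P.shape[0]

# Stack P^T - I on top of a row of ones, with right-hand side (0, ..., 0, 1).
A = np.vstack([P.T - np.eye(m), np.ones(m)])
b = np.append(np.zeros(m), 1.0)

pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)        # long run proportions
print(pi @ P)    # equals pi, since pi satisfies pi_j = sum_i pi_i P_ij
```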

Stationary Probabilities

Long run proportions are stationary probabilities:

Suppose the initial state \(X_0\) is chosen according to the long run proportions:

\(\operatorname{P}(X_0 = i) = \pi_i\) for each \(i\).

Then for each \(n\) and for each \(i\), \(\operatorname{P}(X_n = i) = \pi_i\).
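This is easy to check numerically: if the initial distribution is \(\pi\), pushing it through \(\mathbf{P}\) step after step leaves it unchanged. A small sketch for the same illustrative two-state chain, whose long run proportions work out to \(4/7\) and \(3/7\):

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
pi = np.array([4/7, 3/7])   # long run proportions for this chain

# If X_0 is drawn from pi, the distribution of X_n is pi for every n.
dist = pi.copy()
for n in range(1, 11):
    dist = dist @ P
    assert np.allclose(dist, pi)
print("distribution unchanged at every step")
```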

Limiting Probabilities

For a state \(i\), define \(\displaystyle\alpha_i = \lim_{n\to\infty} \operatorname{P}(X_n = i)\).

This limit may not exist, but when it does, it is called the limiting probability of \(i\).

How do we calculate these?

Transient States

  • \(g_i = {}\) the probability that, starting from state \(i\), the chain never returns to state \(i\).
  • \(f_i = 1 - g_i = {}\) the probability that, starting from state \(i\), the chain eventually returns to state \(i\).

A state \(i\) is transient when \(g_i > 0\), equivalently \(f_i < 1\).

Given \(i\) and \(j\) are transient, define \(s_{ij} = {}\) the expected number of times the process is in state \(j\), given it started at \(i\):

\[s_{ij} = \operatorname{E}\left(\sum_{n=0}^\infty I_n \mid X_0 = i\right), \qquad\text{where here } I_n = \begin{cases} 1 & \text{ if } X_n = j\\ 0 & \text{ otherwise.} \end{cases}\]

\(\displaystyle s_{ii} = \frac{1}{g_i}\), since the number of visits to \(i\) (counting time \(0\)) is geometric with success probability \(g_i\).

Calculating \(s_{ij}\)

Let \(T = \left\{i_1, i_2, \dots, i_t\right\}\) be the set of all transient states of a Markov chain \(X\). Define

\[\mathbf{P}_T = \begin{bmatrix} P_{i_1i_1} & P_{i_1i_2} & \cdots & P_{i_1i_t} \\ \vdots & \vdots & \ddots & \vdots \\ P_{i_ti_1} & P_{i_ti_2} & \cdots & P_{i_ti_t} \end{bmatrix}\]

(The part of the transition matrix corresponding to only transient states.)

If \(i\) and \(j\) are transient states, then

\(\displaystyle s_{ij} = \delta_{ij} + \sum_{k \in T} P_{ik} s_{kj}\) where \(\displaystyle \delta_{ij} = \begin{cases} 1 & \text{ if } i = j\\ 0 & \text{ otherwise}\end{cases}\)

Let \(\displaystyle\mathbf{S} = \begin{bmatrix} s_{i_1i_1} & s_{i_1i_2} & \cdots & s_{i_1i_t} \\ \vdots & \vdots & \ddots & \vdots \\ s_{i_ti_1} & s_{i_ti_2} & \cdots & s_{i_ti_t} \end{bmatrix}\).      Then \(\mathbf{S} = \mathbf{I} + \mathbf{P}_T\mathbf{S}\).

Rearranging, \((\mathbf{I} - \mathbf{P}_T) \mathbf{S} = \mathbf{I}\), so \(\mathbf{S} = (\mathbf{I} - \mathbf{P}_T)^{-1}\).
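Computationally, finding \(\mathbf{S}\) is a single matrix inversion (or, better, one linear solve). A minimal sketch, assuming `P_T` holds the transient-to-transient block as a numpy array:

```python
import numpy as np

def expected_visits(P_T):
    """S = (I - P_T)^{-1}; entry (i, j) is the expected number of visits
    to transient state j, starting from transient state i."""
    m = P_T.shape[0]
    return np.linalg.solve(np.eye(m) - P_T, np.eye(m))
```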

Example

Gambling model with \(N = 7\) and \(p = 0.4\).

\[\mathbf{P} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ .6 & 0 & .4 & 0 & 0 & 0 & 0 & 0 \\ 0 & .6 & 0 & .4 & 0 & 0 & 0 & 0 \\ 0 & 0 & .6 & 0 & .4 & 0 & 0 & 0 \\ 0 & 0 & 0 & .6 & 0 & .4 & 0 & 0 \\ 0 & 0 & 0 & 0 & .6 & 0 & .4 & 0 \\ 0 & 0 & 0 & 0 & 0 & .6 & 0 & .4 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}\]

  • Starting with \(3\) units, find the expected number of times the funds are \(5\) units.
  • Starting with \(3\) units, find the expected number of times the funds are \(2\) units.
  • Starting with \(3\) units, find the expected number of times the funds are \(3\) units.
  • Starting with \(3\) units, find the probability of never having \(3\) units again.
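A sketch of how these could be computed numerically: the transient states are \(1\) through \(6\), so \(\mathbf{P}_T\) is the middle \(6\times 6\) block of \(\mathbf{P}\), and starting funds of \(3\) units correspond to row/column index \(2\) of \(\mathbf{S}\). (Variable names below are mine, not from the notes.)

```python
import numpy as np

p, q = 0.4, 0.6
# Transient block for states 1..6: p on the superdiagonal (win a unit),
# q on the subdiagonal (lose a unit).
P_T = np.diag([p] * 5, k=1) + np.diag([q] * 5, k=-1)

# S = (I - P_T)^{-1}; index 2 corresponds to state 3.
S = np.linalg.solve(np.eye(6) - P_T, np.eye(6))

s_35 = S[2, 4]    # expected number of times the funds are 5 units
s_32 = S[2, 1]    # expected number of times the funds are 2 units
s_33 = S[2, 2]    # expected number of times the funds are 3 units
g_3  = 1 / s_33   # probability of never having 3 units again
print(s_35, s_32, s_33, g_3)
```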

Probability of transitioning from \(i\) to \(j\)

Let \(i\) and \(j\) be transient states.

Define \(f_{ij} = {}\) the probability of ever transitioning to \(j\), given \(X_0 = i\).

Calculate \(s_{ij}\) by conditioning on whether we ever transition to \(j\):

\(s_{ij} = \operatorname{E}(\text{number of visits to } j \mid X_0 = i, \text{ ever transitioning to } j)\, f_{ij} + \operatorname{E}(\text{number of visits to } j\mid X_0 = i, \text{ never transitioning to } j)(1 - f_{ij})\).

The first conditional expectation is \(\delta_{ij} + s_{jj}\) (the possible visit at time \(0\), plus the expected number of visits once \(j\) is reached), and the second is just \(\delta_{ij}\). Therefore

\(s_{ij} = \delta_{ij} + f_{ij}s_{jj}\)

Solving for \(f_{ij}\): \(\displaystyle f_{ij} = \frac{s_{ij} - \delta_{ij}}{s_{jj}}\)
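With \(\mathbf{S}\) computed, all of the \(f_{ij}\) values come from this formula applied entrywise. A short sketch continuing the gambling example (recomputing \(\mathbf{S}\) so it runs on its own):

```python
import numpy as np

# Transient block of the gambling chain (states 1..6), as above.
p, q = 0.4, 0.6
P_T = np.diag([p] * 5, k=1) + np.diag([q] * 5, k=-1)
S = np.linalg.solve(np.eye(6) - P_T, np.eye(6))

# f_ij = (s_ij - delta_ij) / s_jj, computed entrywise from S.
F = (S - np.eye(6)) / np.diag(S)
print(F[2, 4])   # probability of ever reaching 5 units, starting from 3
```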