Section 8 Hitting times

  • Definitions: Hitting probability, expected hitting time, return probability, expected return time
  • Finding these by conditioning on the first step
  • Return probability and expected return time for the simple random walk

8.1 Hitting probabilities and expected hitting times

In Section 3 and Section 4, we used conditioning on the first step to find the ruin probability and expected duration for the gambler’s ruin problem. Here, we develop those ideas for general Markov chains.

Definition 8.1 Let \((X_n)\) be a Markov chain on state space \(\mathcal S\). Let \(H_{A}\) be the hitting time of the set \(A \subset \mathcal S\): the random variable given by \[ H_{A} = \min \big\{n \in \{0,1,2,\dots\} : X_n \in A \big\} . \] The most common case we’ll be interested in will be when \(A = \{j\}\) is just a single state, so \[ H_{j} = \min \big\{n \in \{0,1,2,\dots\} : X_n = j \big\} . \] (We use the convention that \(H_A = \infty\) if \(X_n \not\in A\) for all \(n\).)

The hitting probability \(h_{iA}\) of the set \(A\) starting from state \(i\) is \[ h_{iA} = \mathbb P(X_n \in A \text{ for some $n \geq 0$} \mid X_0 = i) = \mathbb P(H_A < \infty \mid X_0 = i) . \] Again, the most common case of interest is \(h_{ij}\) the hitting probability of state \(j\) starting from state \(i\).

The expected hitting time \(\eta_{iA}\) of the set \(A\) starting from state \(i\) is \[ \eta_{iA} = \mathbb E(H_A \mid X_0 = i) . \] Clearly \(\eta_{iA}\) can only be finite if \(h_{iA} = 1\).

The short summary of this is:

  • The hitting probability \(h_{ij}\) is the probability we hit state \(j\) starting from state \(i\).
  • The expected hitting time \(\eta_{ij}\) is the expected time until we hit state \(j\) starting from state \(i\).

The good news is that we already know how to find hitting probabilities and expected hitting times, because we already did it for the gambler’s ruin problem! There, we first found equations for the hitting probabilities or expected hitting times by conditioning on the first step, and then we solved those equations. We do the same here for other Markov chains.

Let’s see an example of how to find a hitting probability.

Example 8.1 Consider a Markov chain with transition matrix \[ \begin{pmatrix} \frac15 & \frac15 & \frac15 & \frac25 \\ 0 & 1 & 0 & 0 \\ 0 & \frac12 & 0 & \frac12 \\ 0 & 0 & 0 & 1 \end{pmatrix} . \] Calculate the probability that the chain is absorbed at state \(2\) when started from state \(1\).

This is asking for the hitting probability \(h_{12}\). We use – as ever – a “condition on the first step” argument. Specifically, we have \[\begin{align} h_{12} &= p_{11} h_{12} + p_{12} h_{22} + p_{13} h_{32} + p_{14} h_{42} \\ &= \tfrac 15 h_{12} + \tfrac 15 h_{22} + \tfrac 15 h_{32} + \tfrac 25 h_{42} \tag{8.1} \end{align}\] This is because, starting from 1, there’s a \(p_{11} = \frac15\) probability of staying at 1; then by the Markov property it’s like we’re starting again from 1, so the hitting probability is still \(h_{12}\). Similarly, there’s a \(p_{12} = \frac15\) probability of moving to 2; then by the Markov property it’s like we’re starting again from 2, so the hitting probability from there is \(h_{22}\). And so on.

Since (8.1) includes not just \(h_{12}\) but also \(h_{22}\), \(h_{32}\) and \(h_{42}\), we need to find equations for those too. Clearly we have \(h_{22} = 1\), as we are “already there”. Also, \(h_{42} = 0\), since 4 is an absorbing state. For \(h_{32}\), another “condition on the first step” argument gives \[ h_{32} = p_{32}h_{22} + p_{34}h_{42} = \tfrac12 h_{22} + \tfrac12 h_{42} = \tfrac12 ,\] where we substituted in \(h_{22} = 1\) and \(h_{42} = 0\).

Substituting \(h_{22} = 1\), \(h_{32} = \frac12\) and \(h_{42} = 0\) all into (8.1), we get \[ h_{12} = \tfrac15 h_{12} + \tfrac15 + \tfrac15 \tfrac12 = \tfrac15 h_{12} + \tfrac{3}{10}. \] Hence \(\frac45 h_{12} = \frac{3}{10}\), so the answer we were after is \(h_{12} = \frac38\).
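If you want to sanity-check a calculation like this on a computer, it is easy to do. Here is a minimal sketch in Python (not part of the official solution) that reproduces the substitutions we just did, using exact fractions:

```python
from fractions import Fraction as F

# Known values: h_22 = 1 ("already there") and h_42 = 0 (state 4 is absorbing).
h22, h42 = F(1), F(0)

# h_32 = (1/2) h_22 + (1/2) h_42, as derived above.
h32 = F(1, 2) * h22 + F(1, 2) * h42

# Equation (8.1): h_12 = (1/5) h_12 + (1/5) h_22 + (1/5) h_32 + (2/5) h_42,
# rearranged to solve for h_12.
h12 = (F(1, 5) * h22 + F(1, 5) * h32 + F(2, 5) * h42) / (1 - F(1, 5))

print(h12, h32)   # 3/8 1/2
```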

So the technique to find the hitting probability \(h_{ij}\) from \(i\) to \(j\) is:

  1. Set up equations for all the hitting probabilities \(h_{kj}\) to \(j\) by conditioning on the first step.
  2. Solve the resulting simultaneous equations.

It is recommended to derive equations for hitting probabilities from first principles by conditioning on the first step, as we did in the example above. However, we can state what the general formula is: by the same conditioning method, we get \[ h_{iA} = \begin{cases} \displaystyle\sum_{j \in \mathcal S} p_{ij} h_{jA} & \text{if $i \not\in A$} \\ 1 & \text{if $i \in A$.} \end{cases} \] It can be shown that, if these equations have multiple solutions, then the hitting probabilities are in fact the smallest non-negative solutions.
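For a finite chain, these equations are just a linear system, so a computer can solve them. The sketch below is one possible implementation, assuming NumPy; the `zero` argument is our own convenience (it is not part of the general formula) for supplying states we know cannot reach \(A\), such as other absorbing states, which keeps the system non-singular in cases like Example 8.1. Note that a direct solve like this will not automatically pick out the minimal non-negative solution when several exist.

```python
import numpy as np

def hitting_probs(P, A, zero=frozenset()):
    """Hitting probabilities h_iA for a finite chain with transition matrix P.

    A and zero are sets of state indices: h = 1 on A, and h = 0 on zero,
    the states we know can never reach A (e.g. other absorbing states).
    Fixing those known values mirrors the hand calculation and keeps the
    linear system below non-singular.
    """
    n = P.shape[0]
    rest = [i for i in range(n) if i not in A and i not in zero]
    Q = P[np.ix_(rest, rest)]                    # steps among the unknown states
    b = P[np.ix_(rest, sorted(A))].sum(axis=1)   # one-step probability of entering A
    h = np.zeros(n)
    h[sorted(A)] = 1
    h[rest] = np.linalg.solve(np.eye(len(rest)) - Q, b)
    return h

# Example 8.1 again: hit state 2 (index 1); state 4 (index 3) is absorbing.
P = np.array([[1/5, 1/5, 1/5, 2/5],
              [0,   1,   0,   0  ],
              [0,   1/2, 0,   1/2],
              [0,   0,   0,   1  ]])
print(hitting_probs(P, {1}, zero={3}))   # [0.375 1. 0.5 0.], so h_12 = 3/8
```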

Now an example for expected hitting times.

Example 8.2 Consider the simple no-claims discount chain from Lecture 6, which had transition matrix \[ \mathsf P =\begin{pmatrix} \tfrac14 &\tfrac34 & 0\\ \tfrac14 &0 & \tfrac34\\ 0 &\tfrac14 & \tfrac34\\ \end{pmatrix} .\] Given we start in state 1 (no discount), find the expected amount of time until we reach state 3 (50% discount).

This question asks us to find \(\eta_{13}\). We follow the general method, and start by writing down equations for all the \(\eta_{i3}\)s.

Clearly we have \(\eta_{33} = 0\). For the others, we condition on the first step to get \[\begin{align*} \eta_{13} = 1 + \tfrac14 \eta_{13} + \tfrac34 \eta_{23} \qquad &\Rightarrow \qquad \phantom{-}\tfrac34 \eta_{13} - \tfrac34\eta_{23} = 1 ,\\ \eta_{23} = 1 + \tfrac14 \eta_{13} + \tfrac34 \eta_{33} \qquad &\Rightarrow \qquad - \tfrac14\eta_{13} + \phantom{\tfrac34}\eta_{23} = 1 . \end{align*}\] This is because the first step takes time \(1\), then we condition on what happens next – just like we did for the gambler’s ruin.

The first equation plus three-quarters times the second gives \[ \big(\tfrac34 - \tfrac34\tfrac14\big) \eta_{13} = \tfrac{9}{16}\eta_{13} = 1 + \tfrac34 = \tfrac 74 = \tfrac{28}{16} ,\] which has solution \(\eta_{13} = \tfrac{28}{9} \approx 3.11\).
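As a quick numerical check (a sketch assuming NumPy), we can solve the two simultaneous equations for \(\eta_{13}\) and \(\eta_{23}\) as a linear system:

```python
import numpy as np

# The rearranged equations from above:
#    (3/4) eta_13 - (3/4) eta_23 = 1
#   -(1/4) eta_13 +       eta_23 = 1
A = np.array([[ 3/4, -3/4],
              [-1/4,  1  ]])
eta = np.linalg.solve(A, np.ones(2))
print(eta)   # [3.111... 1.777...], i.e. eta_13 = 28/9 and eta_23 = 16/9
```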

Similarly, if we need to, we can give a general formula \[ \eta_{iA} = \begin{cases} 1 + \displaystyle\sum_{j \in \mathcal S} p_{ij} \eta_{jA} & \text{if $i \not\in A$} \\ 0 & \text{if $i \in A$.} \end{cases} \] Again, if we have multiple solutions, the expected hitting times are the smallest non-negative solutions.
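As with hitting probabilities, for a finite chain this is a linear system. A minimal sketch, again assuming NumPy, and assuming the chain is certain to reach \(A\) from every state (otherwise some \(\eta_{iA}\) are infinite and the direct solve does not apply):

```python
import numpy as np

def expected_hitting_times(P, A):
    """Expected hitting times eta_iA of the set A (a set of state indices),
    assuming h_iA = 1 for every state i."""
    n = P.shape[0]
    rest = [i for i in range(n) if i not in A]
    Q = P[np.ix_(rest, rest)]      # transitions among states outside A
    eta = np.zeros(n)              # eta_iA = 0 for i in A
    eta[rest] = np.linalg.solve(np.eye(len(rest)) - Q, np.ones(len(rest)))
    return eta

# The no-claims discount chain from Example 8.2, hitting state 3 (index 2):
P = np.array([[1/4, 3/4, 0], [1/4, 0, 3/4], [0, 1/4, 3/4]])
print(expected_hitting_times(P, {2}))   # [3.111... 1.777... 0.], as before
```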

8.2 Return times

Under the definitions above, the hitting probability and expected hitting time to a state from itself are always \(h_{ii} = 1\) and \(\eta_{ii} = 0\), as we’re “already there”.

In this case, it can be interesting to look instead at the random variable representing the return time, \[ M_i = \min \big\{n \in \{1,2,\dots\} : X_n = i \big\} . \] Note that this only considers times \(n = 1, 2, \dots\), not \(n = 0\), so \(M_i\) is the first time we come back to \(i\) after time 0.

We then have the return probability and expected return time \[\begin{gather*} m_{i} = \mathbb P(X_n = i \text{ for some $n \geq 1$} \mid X_0 = i) = \mathbb P(M_i < \infty \mid X_0 = i) , \\ \mu_{i} = \mathbb E(M_i \mid X_0 = i) . \end{gather*}\]

Just as before, we can find these by conditioning on the first step. The general equations are \[ m_i = \sum_{j \in \mathcal S} p_{ij}h_{ji} , \qquad \mu_i = 1 + \sum_{j \in \mathcal S} p_{ij}\eta_{ji}, \] where, if necessary, we take the minimal non-negative solution again.
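In code, these formulas simply combine the hitting quantities we already know how to compute. A sketch, assuming NumPy and an irreducible chain (so every \(h_{ji} = 1\) and every \(\eta_{ji}\) is finite, which makes the direct linear solves valid):

```python
import numpy as np

def return_quantities(P, i):
    """Return probability m_i and expected return time mu_i, via
    m_i = sum_j p_ij h_ji and mu_i = 1 + sum_j p_ij eta_ji."""
    n = P.shape[0]
    rest = [j for j in range(n) if j != i]
    Q = P[np.ix_(rest, rest)]
    h = np.ones(n)                                           # h_ii = 1
    h[rest] = np.linalg.solve(np.eye(n - 1) - Q, P[rest, i])
    eta = np.zeros(n)                                        # eta_ii = 0
    eta[rest] = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    return P[i] @ h, 1 + P[i] @ eta

# The no-claims discount chain again: return to state 1 (index 0).
P = np.array([[1/4, 3/4, 0], [1/4, 0, 3/4], [0, 1/4, 3/4]])
print(return_quantities(P, 0))   # (1.0, 13.0): certain return, 13 steps on average
```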

8.3 Hitting and return times for the simple random walk

We now turn to hitting and return times for the simple random walk, which goes up with probability \(p\) and down with probability \(q = 1-p\). This material is mandatory and examinable, but it is a bit technical; students who are struggling or have fallen behind might make a tactical decision to read just the two summary theorems below and come back to the details at a later date.

Let’s start with hitting probabilities. Without loss of generality, we look at \(h_{i0}\), the probability the random walk hits 0 starting from \(i\). We will assume that \(i > 0\). For \(i < 0\), we can get the desired result by looking at the random walk “in the mirror” – that is, by swapping the role of \(p\) and \(q\), and treating the positive value \(-i\).

For an initial condition, it’s clear we have \(h_{00} = 1\).

For general \(i > 0\), we condition on the first step, to get \[ h_{i0} = ph_{i+1\, 0} + qh_{i-1\, 0} . \] We recognise this equation from the gambler’s ruin problem, and recall that we have to treat the cases \(p \neq \frac12\) and \(p = \frac12\) separately.

When \(p \neq \frac12\), the general solution is \(h_{i0} = A + B\rho^i\), where \(\rho = q/p \neq 1\), as before. The initial condition \(h_{00} = 1\) gives \(A = 1-B\), so we have a family of solutions \[ h_{i0} = (1 - B) + B\rho^i = 1 + B(\rho^i - 1) . \]

In the gambler’s ruin problem we had another boundary condition to find \(B\). Here we have no other conditions, but we can use the minimality condition that the hitting probabilities are the smallest non-negative solution to the equation. This minimal non-negative solution will depend on whether \(\rho > 1\) or \(\rho < 1\).

When \(\rho > 1\), so \(p < \frac12\), the term \(\rho^i\) tends to infinity. Thus the minimal non-negative solution will have to take \(B = 0\), because taking \(B < 0\) would eventually give a negative solution, while \(B = 0\) gives the smallest of the non-negative solutions. Hence the solution is \(h_{i0} = 1 + 0(\rho^i-1) = 1\), meaning we hit 0 with certainty. This makes sense, because for \(p < \frac12\) the random walk “drifts” to the left, and eventually gets back to 0.

When \(\rho < 1\), so \(p > \frac12\), we have that \(\rho^i - 1\) is negative and tends to \(-1\). So to get a small solution we want \(B\) large and positive, but keeping the solution non-negative limits us to \(B \leq 1\), so the minimality condition is achieved at \(B = 1\). The solution is \(h_{i0} = 1 + 1(\rho^i - 1) = \rho^i\). This is strictly less than 1 as expected, because for \(p > \frac12\), the random walk drifts to the right, so might drift further and further away from 0 and not come back.

For \(p = \frac12\), so \(\rho = 1\), we recall the general solution \(h_{i0} = A + Bi\). The condition \(h_{00} = 1\) gives \(A = 1\), so \(h_{i0} = 1 + Bi\). Because \(i\) is positive, to get the answer to be small we want \(B\) to be small, but non-negativity limits us to \(B \geq 0\), so the minimal non-negative solution takes \(B = 0\). This gives \(h_{i0} = 1 + 0i = 1\), so we will always get back to 0.

In conclusion, we have the following. (Recall that we get the result for \(i < 0\) by swapping the role of \(p\) and \(q\), and treating the positive value \(-i\).)

Theorem 8.1 Consider a random walk with up probability \(p \neq 0, 1\). Then the hitting probabilities to 0 are given by, for \(i > 0\), \[ h_{i0} = \begin{cases} \left(\displaystyle\frac{q}{p}\right)^i < 1 & \text{if $p > \frac12$} \\ \ 1 & \text{if $p \leq \frac12$,} \end{cases} \] and for \(i < 0\) by \[ h_{i0} = \begin{cases} \left(\displaystyle\frac{p}{q}\right)^{-i} < 1 & \text{if $p < \frac12$} \\ \ 1 & \text{if $p \geq \frac12$.} \end{cases} \]
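A Monte Carlo check of the first case is straightforward. A sketch in plain Python: the horizon cutoff counts a walk that is still away from 0 after many steps as “never hits”, which slightly underestimates \(h_{10}\), but for a walk with rightward drift the error is tiny.

```python
import random

def estimate_h10(p, trials=20_000, horizon=1_000):
    """Estimate the probability a walk started at 1 ever hits 0,
    treating a walk that survives past the horizon as never hitting."""
    hits = 0
    for _ in range(trials):
        x = 1
        for _ in range(horizon):
            x += 1 if random.random() < p else -1
            if x == 0:
                hits += 1
                break
    return hits / trials

print(estimate_h10(0.6))   # theory: q/p = 0.4/0.6 = 0.666...
```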

Now we look at return times. What is the return probability to 0 (or, by symmetry, to any state \(i \in \mathbb Z\))? By conditioning on the first step, \(m_0 = ph_{1\,0} + qh_{-1\,0}\), and we’ve calculated those hitting probabilities above. We have \[ m_0 = \begin{cases} p\times 1 \hspace{0.4em}+ q\times 1 \hspace{0.4em}= 1 & \text{if $p = \frac12$} \\ p \times \displaystyle\frac qp + q\times 1 \hspace{0.4em}= 2q < 1 & \text{if $p > \frac12$} \\ p\times 1 \hspace{0.4em} + q \times \displaystyle\frac pq = 2p < 1 & \text{if $p < \frac12$.}\end{cases} \]

So for the simple symmetric random walk (\(p = \frac12\)) we have \(m_i = 1\) and are certain to return to the initial state again and again, while for \(p \neq \frac12\), we have \(m_i < 1\) and we might never return.
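The same kind of simulation estimates the return probability directly. Another sketch, again with a horizon cutoff that makes the estimate very slightly low:

```python
import random

def estimate_m0(p, trials=20_000, horizon=1_000):
    """Estimate the return probability to 0: take the first step,
    then see whether the walk gets back within the horizon."""
    returns = 0
    for _ in range(trials):
        x = 1 if random.random() < p else -1   # the first step
        for _ in range(horizon):
            if x == 0:
                returns += 1
                break
            x += 1 if random.random() < p else -1
    return returns / trials

print(estimate_m0(0.6))   # theory: 2q = 0.8
```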

What about the expected return times \(\mu_{0}\)? For \(p \neq \frac12\), we have \(\mu_{0} = \infty\), since \(m_0 < 1\) and we might never come back. For \(p = \frac12\), though, \(m_0 = 1\), so we have work to do.

To get the expected return time for \(p = \frac12\), we’ll need the expected hitting times for \(p = \frac12\) too. Conditioning on the first step gives the equation \[ \eta_{i0} = 1 + \tfrac12\eta_{i+1\,0} + \tfrac12\eta_{i-1\,0} , \] with initial condition \(\eta_{00} = 0\). We again recognise from the gambler’s ruin problem the general solution \(\eta_{i0} = A + Bi - i^2\). The initial condition gives \(A = 0\), so \(\eta_{i0} = Bi - i^2 = i(B-i)\). But no finite value of \(B\) keeps this non-negative: the \(-i^2\) term dominates, making the expression negative for all \(i > B\). So the minimal non-negative solution is \(\eta_{i0} = \infty\) for all \(i > 0\).

Then we see easily that the expected return time is \[ \mu_0 = 1 + \tfrac12 \eta_{1\,0} + \tfrac12 \eta_{-1\,0} = 1 + \tfrac12 \infty + \tfrac12 \infty = \infty. \] So for \(p = \frac12\), while the random walk always returns, it may take a very long time to do so.
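This “certain but slow” behaviour shows up clearly in simulation. A sketch for \(p = \frac12\): almost every run returns, but the sample of return times has a very heavy tail, so the empirical mean is dominated by a few enormous values and never settles down.

```python
import random

def return_time(horizon=10**6):
    """One simulated return time to 0 for the symmetric walk,
    or None if the walk hasn't returned within the horizon."""
    x, t = 0, 0
    while t < horizon:
        x += 1 if random.random() < 0.5 else -1
        t += 1
        if x == 0:
            return t
    return None

times = [return_time() for _ in range(1_000)]
finite = [t for t in times if t is not None]
# Typically nearly all 1,000 runs return, but the largest return times
# are huge, and the sample mean keeps growing as we take more samples.
print(len(finite), max(finite), sum(finite) / len(finite))
```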

In conclusion, this is what we found out:

Theorem 8.2 Consider the simple random walk with \(p \neq 0,1\). Then for all states \(i\) we have the following:

  • For \(p \neq \frac12\), the return probability \(m_i\) is strictly less than 1, so the expected return time \(\mu_i\) is infinite.
  • For the simple symmetric random walk with \(p = \frac12\), the return probability \(m_i\) is equal to 1, but the expected return time \(\mu_i\) is still infinite.

In the next section, we look at how the return probabilities and expected return times characterise which states continue to be visited by a Markov chain in the long run.