Section 12 End of Part I: Discrete time Markov chains

  • No new material in this section, but a half-week break to catch up and take stock of what we’ve learned

12.1 Things to do

We’ve now finished the material of Part I of the module, on discrete time Markov chains. So this is a good time to take stock, revise what we’ve learned, and make sure we’re completely up to date before starting Part II of the module, on continuous time processes.

Some things you may want to do in lieu of reading a section of notes:

  • Make sure you’ve completed Problem Sheets 1 to 6, and go back to any questions that stumped you before.
  • Start working on Computational Worksheet 2 (which doubles as Assessment 2). There are optional computer drop-in sessions in Week 7, and the work is due at 1400 on Thursday 18 March (Week 8).
  • Start working on Assessment 3, which is due at 1400 on Thursday 25 March (Week 9).
  • Re-read any sections of notes you struggled with earlier.
  • Take the opportunity to look at the optional non-examinable subsections in Sections 9, 10 and 11, if you skipped them the first time around.
  • Let me know if there’s anything from this part of the course you’d like me to go through in next week’s lecture.

12.2 Summary of Part I

The following list is not exhaustive, but if you can do most of the things on this list, you’re well placed for the exam.

  • Define the simple random walk and other random walks.
  • Perform elementary calculations for the simple random walk, including by referring to the exact binomial distribution.
  • Calculate the expectation and variance of general random walks.
  • Define the gambler’s ruin Markov chain.
  • Find the ruin probability and expected duration for the gambler’s ruin by (i) setting up equations by conditioning on the first step and (ii) solving the resulting linear difference equation (see the first code sketch after this list).
  • Draw a transition diagram of a Markov chain given the transition matrix.
  • Calculate \(n\)-step transition probabilities by (a) summing the probabilities over all relevant paths or (b) calculating the matrix power (see the matrix power sketch after this list).
  • Find the communicating classes in a Markov chain.
  • Calculate the period of a communicating class.
  • Calculate hitting probabilities and expected hitting times by (i) setting up equations by conditioning on the first step and (ii) solving the resulting simultaneous equations.
  • Define positive recurrence, null recurrence and transience, and explain their properties.
  • Determine whether a communicating class is positive recurrent, null recurrent or transient.
  • Find the stationary distribution of a Markov chain (see the stationary distribution sketch after this list).
  • Give conditions for a stationary distribution to exist and be unique.
  • Give conditions for convergence to an equilibrium distribution.
  • Calculate long-term proportions of time using the ergodic theorem (see the simulation sketch after this list).
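
To support revision alongside the computational worksheet, here are a few illustrative Python sketches. The transition matrices and parameters below are hypothetical choices for illustration, not examples from the notes. First, the gambler’s ruin: a minimal sketch that solves the first-step equations for the ruin probability as a linear system, then checks the answer against the closed-form solution of the difference equation (the form used here assumes \(p \neq 1/2\)).

```python
import numpy as np

# Hypothetical parameters: win probability p, absorbing barriers at 0 (ruin) and a.
p, a = 0.4, 10
q = 1 - p
rho = q / p

# First-step conditioning gives r_i = p r_{i+1} + q r_{i-1} for 1 <= i <= a - 1,
# with boundary conditions r_0 = 1 and r_a = 0.  Set this up as a linear system.
A = np.zeros((a + 1, a + 1))
b = np.zeros(a + 1)
A[0, 0], b[0] = 1.0, 1.0  # r_0 = 1: ruin is certain starting from 0
A[a, a], b[a] = 1.0, 0.0  # r_a = 0: no ruin once the target a is reached
for i in range(1, a):
    A[i, i - 1], A[i, i], A[i, i + 1] = -q, 1.0, -p
r = np.linalg.solve(A, b)

# Closed-form solution of the difference equation, valid for p != 1/2:
# r_i = (rho^i - rho^a) / (1 - rho^a).
i = np.arange(a + 1)
r_exact = (rho**i - rho**a) / (1 - rho**a)
print(np.max(np.abs(r - r_exact)))  # agreement to machine precision
```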
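
Next, \(n\)-step transition probabilities by method (b): they are exactly the entries of the matrix power \(P^n\), which is straightforward to compute numerically.

```python
import numpy as np

# A hypothetical three-state transition matrix (states indexed 0, 1, 2 in code).
P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])

# The n-step transition probabilities are the entries of P^n.
n = 5
Pn = np.linalg.matrix_power(P, n)
print(Pn[0, 2])  # probability of going from state 0 to state 2 in 5 steps
```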
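
For the stationary distribution, one numerical approach is to solve \(\pi P = \pi\) together with the normalisation \(\sum_i \pi_i = 1\) as an overdetermined linear system. A sketch, reusing the same illustrative matrix:

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])

# pi P = pi is equivalent to (P^T - I) pi = 0 (treating pi as a column vector);
# stacking a row of ones underneath enforces the normalisation sum(pi) = 1.
n_states = P.shape[0]
A = np.vstack([P.T - np.eye(n_states), np.ones(n_states)])
b = np.append(np.zeros(n_states), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # for this chain: [0.25, 0.5, 0.25]
```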
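
Finally, the ergodic theorem: for an irreducible positive recurrent chain, the long-run proportion of time spent in each state converges to the stationary distribution. A simulation sketch with the same chain:

```python
import numpy as np

rng = np.random.default_rng(1)

P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])

# Run the chain for many steps and track the proportion of time in each state.
n_steps = 100_000
x = 0  # arbitrary starting state
counts = np.zeros(3)
for _ in range(n_steps):
    counts[x] += 1
    x = rng.choice(3, p=P[x])

print(counts / n_steps)  # close to the stationary distribution (0.25, 0.5, 0.25)
```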

In the next section, we begin our study of continuous time Markov processes by looking at the most important example: the Poisson process.