Let $G = S_n$, the symmetric group on $\{1, \ldots, n\}$. Suppose that we have a probability measure $p$ on $G$. We can use this probability measure to define a random walk on $G$. The walk starts at the identity. At each subsequent step, if it is currently at $g$, then it moves to $hg$ with probability $p(h)$.
If after $t$ steps the random walk is at $g$, then let $X_t = g$. This defines a Markov chain $(X_0, X_1, X_2, \ldots)$ taking values in $G$. In any transition, the probability of moving from state $g$ to state $g'$ is $p(g'g^{-1})$.
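As a sanity check on these definitions, here is a short Python sketch (an illustration, not part of the original exercise) that builds the transition matrix of the walk on $S_3$ for a particular measure $p$, using the rule that the probability of moving from $g$ to $g'$ is $p(g'g^{-1})$, and verifies that every row and column sums to $1$.

```python
from itertools import permutations
from fractions import Fraction

n = 3
# Represent a permutation of {0, ..., n-1} as a tuple: sigma maps i to sigma[i].
group = list(permutations(range(n)))

def compose(s, t):
    # (s * t)(i) = s(t(i)): apply t first, then s.
    return tuple(s[t[i]] for i in range(n))

def inverse(s):
    inv = [0] * n
    for i, j in enumerate(s):
        inv[j] = i
    return tuple(inv)

# A probability measure on S_3: uniform on the identity and the two 3-cycles.
p = {g: Fraction(0) for g in group}
p[(0, 1, 2)] = Fraction(1, 3)   # identity
p[(1, 2, 0)] = Fraction(1, 3)   # a 3-cycle
p[(2, 0, 1)] = Fraction(1, 3)   # the other 3-cycle

# Transition matrix of the walk: from g the walk moves to hg with
# probability p(h), so the probability of moving from g to g' is p(g' g^{-1}).
M = {(g, g2): p[compose(g2, inverse(g))] for g in group for g2 in group}

# Doubly stochastic: every row and every column sums to 1.
assert all(sum(M[(g, g2)] for g2 in group) == 1 for g in group)
assert all(sum(M[(g, g2)] for g in group) == 1 for g2 in group)
```

Exact rational arithmetic (`Fraction`) is used so that the row and column sums can be tested for equality with $1$ rather than approximate equality.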
The transition matrix is therefore doubly stochastic: every row and every column sums to $1$, because both $g' \mapsto g'g^{-1}$ and $g \mapsto g'g^{-1}$ are bijections of $G$. The non-trivial part of Exercise 1 asks us to prove a converse: given a doubly stochastic $n \times n$ matrix $M$, there is a probability distribution $p$ on $S_n$ so that
$$M_{ij} = \sum_{\substack{\sigma \in S_n \\ \sigma(i) = j}} p(\sigma) \tag{$\star$}$$
holds.
This result shows that any time-invariant Markov chain taking values in $\{1, \ldots, n\}$ whose transition matrix is doubly stochastic can be lifted to a random walk on $S_n$, and so can be analysed using the techniques in Diaconis' book.
For the proof, we shall reinterpret $(\star)$ as follows. Given $\sigma \in S_n$, let $P(\sigma)$ denote the permutation matrix whose entry in position $(i,j)$ is $1$ if $\sigma(i) = j$, and otherwise the entry is $0$. Then $(\star)$ holds if and only if
$$M = \sum_{\sigma \in S_n} p(\sigma) P(\sigma).$$
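The equivalence of the two readings of $(\star)$ is easy to check by machine as well as by hand. The following Python sketch (illustrative only; the uniform distribution is an arbitrary choice) computes both sides for $n = 3$: the entrywise sums $\sum_{\sigma(i)=j} p(\sigma)$, and the matrix $\sum_\sigma p(\sigma)P(\sigma)$ built from permutation matrices.

```python
from itertools import permutations
from fractions import Fraction

n = 3
sym = list(permutations(range(n)))

def perm_matrix(s):
    # P(sigma): entry (i, j) is 1 if sigma(i) = j, else 0.
    return [[1 if s[i] == j else 0 for j in range(n)] for i in range(n)]

# An arbitrary distribution on S_3, chosen here to be uniform.
p = {s: Fraction(1, len(sym)) for s in sym}

# Left-hand reading of (star): M_ij is the sum of p(sigma) over sigma with sigma(i) = j.
lhs = [[sum(p[s] for s in sym if s[i] == j) for j in range(n)] for i in range(n)]

# Right-hand reading: the matrix sum over sigma of p(sigma) P(sigma).
rhs = [[sum(p[s] * perm_matrix(s)[i][j] for s in sym) for j in range(n)]
       for i in range(n)]

assert lhs == rhs   # the two readings of (star) agree entry by entry
```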
The converse therefore follows from Birkhoff's Theorem: any doubly stochastic matrix is a convex combination of permutation matrices. (Indeed, if $M = \sum_{\sigma} c_\sigma P(\sigma)$ with each $c_\sigma \ge 0$, then comparing row sums shows that $\sum_{\sigma} c_\sigma = 1$, so the coefficients $c_\sigma$ do define a probability distribution on $S_n$.)
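The standard proof of Birkhoff's Theorem is constructive: repeatedly find a permutation $\sigma$ supported on the nonzero entries of $M$ (one exists by Hall's marriage theorem), subtract off as large a multiple of $P(\sigma)$ as possible, and recurse. Here is a hedged Python sketch of that greedy step; it brute-forces the search over all of $S_n$, which is only sensible for small $n$ (a real implementation would find $\sigma$ via a bipartite matching algorithm instead).

```python
from itertools import permutations
from fractions import Fraction

def birkhoff(M):
    """Greedily write a doubly stochastic matrix M (with Fraction entries)
    as a convex combination of permutation matrices. Returns a dict
    mapping permutations (as tuples) to their coefficients."""
    n = len(M)
    M = [row[:] for row in M]          # work on a copy
    coeffs = {}
    while any(M[i][j] for i in range(n) for j in range(n)):
        # By Hall's theorem some sigma has M[i][sigma(i)] > 0 for every i;
        # for small n we simply search all permutations.
        sigma = next(s for s in permutations(range(n))
                     if all(M[i][s[i]] > 0 for i in range(n)))
        c = min(M[i][sigma[i]] for i in range(n))
        coeffs[sigma] = coeffs.get(sigma, Fraction(0)) + c
        for i in range(n):
            M[i][sigma[i]] -= c        # zeroes at least one entry, so we terminate
    return coeffs

# Example: a 3x3 doubly stochastic matrix.
h, q = Fraction(1, 2), Fraction(1, 4)
M = [[h, q, q],
     [q, h, q],
     [q, q, h]]
p = birkhoff(M)
assert sum(p.values()) == 1            # the coefficients form a probability distribution
assert all(sum(c for s, c in p.items() if s[i] == j) == M[i][j]
           for i in range(3) for j in range(3))
```

The final assertion checks exactly the condition $(\star)$: the recovered distribution $p$ reproduces every entry of $M$.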