Solutions to Stochastic Processes Ch.4

Solutions to Stochastic Processes, Sheldon M. Ross, Second Edition (pdf)
Since there is no official solution manual for this classic textbook, I worked out these solutions myself as a record of my own study. Some solutions draw on material found on the web; the sources are too numerous and too scattered to credit individually, so my thanks go to all of those authors. I hope the solutions are helpful, but no correctness or accuracy is guaranteed; comments and corrections are welcome. Excerpts and links may be used, provided that full and clear credit is given.

4.1 A store that stocks a certain commodity uses the following \((s, S)\) ordering policy: if its supply at the beginning of a time period is \(x\), then it orders
$$
\left\{
\begin{array}{ll}
0 \quad x \geq s \\
S-x \quad x < s \\
\end{array}
\right. $$
The order is immediately filled. The daily demands are independent and equal \(j\) with probability \(\alpha_j\). All demands that cannot be immediately met are lost. Let \(X_n\) denote the inventory level at the end of the nth time period. Argue that \(\{X_n, n \geq 1\}\) is a Markov chain and compute its transition probabilities.


Let \(Y_n\) denote the demand on the \(n\)th day. Since demands that cannot be met are lost, the inventory level never goes negative; writing \(x^+ = \max(x, 0)\),
$$
X_n = \left\{
\begin{array}{ll}
(X_{n-1} - Y_n)^+ \quad X_{n-1} \geq s \\
(S-Y_n)^+ \quad X_{n-1} < s \\
\end{array}
\right.
$$
Since the \(Y_n\) are i.i.d., \(X_n\) depends on the history only through \(X_{n-1}\), so \(\{X_n, n \geq 1\}\) is a Markov chain with transition probabilities
$$P_{ij} = \left\{
\begin{array}{ll}
\alpha_{i-j} \quad i \geq s,\ 0 < j \leq i \\
\sum_{k \geq i}\alpha_k \quad i \geq s,\ j = 0 \\
\alpha_{S-j} \quad i < s,\ 0 < j \leq S \\
\sum_{k \geq S}\alpha_k \quad i < s,\ j = 0 \\
0 \quad \text{otherwise}
\end{array}
\right.
$$
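A quick Monte Carlo sanity check of these transition probabilities (a sketch; the values of \(s\), \(S\), and the demand distribution \(\alpha_j\) here are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
s, S = 3, 6                        # assumed (s, S) policy parameters
alpha = [0.3, 0.4, 0.2, 0.1]       # assumed demand probabilities alpha_0..alpha_3

def step(x):
    """One period: order up to S if stock is below s, then subtract demand (lost sales)."""
    supply = x if x >= s else S
    demand = rng.choice(len(alpha), p=alpha)
    return max(supply - demand, 0)

i = 4                              # a starting state with i >= s
samples = np.array([step(i) for _ in range(200_000)])
for j in range(i + 1):
    # P_{ij} = alpha_{i-j} for 0 < j <= i, and the tail sum for j = 0
    theo = sum(alpha[i:]) if j == 0 else (alpha[i - j] if i - j < len(alpha) else 0.0)
    print(j, round((samples == j).mean(), 4), round(theo, 4))
```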


4.2 For a Markov chain prove that
$$P\{X_n = j | X_{n_1} = i_1, \dots, X_{n_k} = i_k\} = P\{X_n = j | X_{n_k} = i_k\}$$
whenever \(n_1 < n_2 < \dots < n_k < n\).


$$\begin{align}
&P\{X_n = j | X_{n_1} = i_1, \dots, X_{n_k} = i_k\} \\
=&\frac{P\{X_{n_1} = i_1\}\left(\prod_{l=1}^{k-1}P_{i_li_{l+1}}^{n_{l+1} - n_l}\right)P_{i_kj}^{n-n_k}}{P\{X_{n_1} = i_1\}\prod_{l=1}^{k-1}P_{i_li_{l+1}}^{n_{l+1} - n_l}} \\
=&P_{i_kj}^{n-n_k} \\
=&P\{X_n = j | X_{n_k} = i_k\}
\end{align}$$


4.3 Prove that if the number of states is \(n\), and if state \(j\) is accessible from state \(i\), then it is accessible in \(n\) or fewer steps.


\(j\) is accessible from \(i\) if, for some \(k \geq 0\), \(P_{ij}^k > 0\). Now
$$P_{ij}^k = \sum\prod_{m=1}^k P_{i_{m-1}i_m}$$
where the sum is taken over all sequences \((i_0, i_1, \dots, i_k)\in\{1, \dots, n\}^{k+1}\) of states with \(i_0 = i\) and \(i_k = j\). Now, \(P_{ij}^k > 0\) implies that at least one term is positive, say \(\prod_{m=1}^kP_{i_{m-1}i_m} > 0\). If a state \(s\) occurs twice, say \(i_a = i_b = s\) with \(a < b\) and \((a, b) \neq (0, k)\), then the shorter sequence \((i_0, \dots, i_a, i_{b+1}, \dots, i_k)\) also has positive probability. Thus the sequence \(i_0, \dots, i_k\) can be reduced to a sequence \(j_0, \dots, j_r\) in which no state is repeated, except possibly \(j_0 = j_r\) when \(i = j\). With no repetitions, \(r \leq n-1\), so when \(i \neq j\) state \(j\) is accessible in at most \(n-1\) steps; when \(i = j\) the single allowed repetition \(j_0 = j_r\) gives \(r \leq n\). In either case \(j\) is accessible from \(i\) in \(n\) or fewer steps.


4.4 Show that$$P_{ij}^n = \sum_{k=0}^n f_{ij}^kP_{jj}^{n-k}$$


$$\begin{align}
&P_{ij}^n\\
=&\sum_{k=0}^n P\{X_n = j \mid \text{first visit to } j \text{ at step } k,\ X_0 = i\}P\{\text{first visit to } j \text{ at step } k \mid X_0 = i\} \\
=&\sum_{k=0}^n f_{ij}^kP_{jj}^{n-k}
\end{align}$$


4.5 For states \(i, j, k, k \neq j\), let
$$P_{ij/k}^n = P\{X_n = j, X_l \neq k, l = 1, \dots, n-1|X_0 = i\}.$$
(a) Explain in words what \(P_{ij/k}^n\) represents.
(b) Prove that, for \(i \neq j, P_{ij}^n = \sum_{k=0}^n P_{ii}^kP_{ij/i}^{n-k}\).


(a) \(P_{ij/k}^n\) is the probability of being in \(j\) at time \(n\), starting in \(i\) at time 0, while avoiding \(k\).
(b) Let \(N\) denote the last time before \(n\) at which the chain is in state \(i\). Then since \(0 \leq N \leq n\),
$$\begin{align}
&P_{ij}^n = P\{X_n = j | X_0 = i\} \\
=& \sum_{k=0}^n P\{X_n = j, N=k|X_0 = i\} \\
=& \sum_{k=0}^n P\{X_n = j, X_k = i, X_l \neq i: k + 1 \leq l \leq n | X_0 = i\} \\
=& \sum_{k=0}^n P\{X_n = j, X_l \neq i : k + 1 \leq l \leq n|X_0 = i, X_k = i\}P\{X_k = i|X_0=i\}\\
=& \sum_{k=0}^n P_{ii}^kP_{ij/i}^{n-k}
\end{align}$$


4.6 Show that the symmetric random walk is recurrent in two dimensions and transient in three dimensions.


The \(d\)-dimensional symmetric random walk returns to the origin at step \(2n\) only if each coordinate returns; treating the coordinates as (asymptotically) independent one-dimensional walks, each contributes a factor \({2n \choose n}(\frac{1}{2})^{2n} \sim \frac{1}{\sqrt{\pi n}}\) by Stirling's approximation, so
$$P_{00}^{2n} \sim \left(\frac{1}{\sqrt{\pi n}}\right)^d$$
where \(d\) is the dimension. Then we get
$$
\sum_{n=1}^{\infty} P_{00}^{2n} \approx \left\{
\begin{array}{ll}
\sum_{n=1}^{\infty} \frac{1}{\pi n} = \infty \quad d=2 \\
\sum_{n=1}^{\infty} \frac{1}{(\pi n)^{3/2}} < \infty \quad d=3 \\
\end{array}
\right.
$$
Thus, by Proposition 4.2.3, the walk is recurrent in two dimensions and transient in three dimensions.
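A quick numeric check of the \((\pi n)^{-d/2}\) asymptotics and of the behavior of the partial sums (a sketch; it uses the independent-coordinate approximation above, and evaluates the one-dimensional return probability in log space to avoid underflow at large \(n\)):

```python
from math import lgamma, log, exp

def log_p00_1d(n):
    """log of C(2n, n)(1/2)^{2n}, the one-dimensional return probability at step 2n."""
    return lgamma(2 * n + 1) - 2 * lgamma(n + 1) - 2 * n * log(2)

for d in (2, 3):
    partial = sum(exp(d * log_p00_1d(n)) for n in range(1, 200_000))
    print(f"d={d}: partial sum up to n=2e5 is {partial:.3f}")
    # d=2 keeps growing like log(n)/pi (divergence); d=3 has essentially converged.
```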


4.7 For the symmetric random walk starting at 0:
(a) What is the expected time to return to 0?
(b) Let \(N_n\) denote the number of returns by time \(n\). Show that
$$E[N_{2n}] = (2n + 1){2n \choose n}(\frac{1}{2})^{2n} - 1$$
(c) Use (b) and Stirling’s approximation to show that for \(n\) large \(E[N_n]\) is proportional to \(\sqrt{n}\).


(a) From Equation 3.7.2, we know
$$P\{Z_1 \neq 0, \dots, Z_{2n-1} \neq 0, Z_{2n} = 0\} = \frac{{2n \choose n}(\frac{1}{2})^{2n}}{2n-1}$$
Then the expected time to return to 0 is
$$\begin{align}
E[T] &= \sum_{k=1}^{\infty}2k\cdot\frac{{2k \choose k}(\frac{1}{2})^{2k}}{2k-1}\\
&= \sum_{k=0}^{\infty} {2k \choose k}(\frac{1}{2})^{2k} \qquad \Big(\text{using } \tfrac{2k}{2k-1}{2k \choose k}4^{-k} = {2k-2 \choose k-1}4^{-(k-1)}\Big)\\
&= \frac{1}{\sqrt{1 - 4/4}} = \infty
\end{align}$$
(b) Let \(I_{2k}\) be the indicator of a return to 0 at step \(2k\); then
$$E[N_{2n}] = \sum_{k=1}^n E[I_{2k}]= \sum_{k=1}^n P_{00}^{2k}$$
The claimed identity holds for \(n = 0\), where both sides are 0. Assume it holds for \(n = k\); then for \(n = k+1\),
$$\begin{align}
E[N_{2k+2}] &= E[N_{2k}] + P_{00}^{2k+2} \\
&= (2k+1){2k \choose k}(\frac{1}{2})^{2k} - 1 + {2k+2 \choose k+1}(\frac{1}{2})^{2k+2}\\
&= (2k + 3){2k+2 \choose k+1}(\frac{1}{2})^{2k+2} -1
\end{align}$$
(c) By Stirling's approximation, \({2n \choose n}(\frac{1}{2})^{2n} \sim \frac{1}{\sqrt{\pi n}}\), so
$$E[N_{2n}] = (2n + 1){2n \choose n}(\tfrac{1}{2})^{2n} - 1 \sim \frac{2n}{\sqrt{\pi n}} = \sqrt{\frac{2}{\pi}}\sqrt{2n}$$
that is, \(E[N_n] \sim \sqrt{2n/\pi}\), so \(E[N_n]\) is proportional to \(\sqrt{n}\) for large \(n\).
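Both the identity in (b) and the \(\sqrt{n}\) growth in (c) are easy to check numerically (a sketch; the ratio \({2k \choose k}4^{-k} = \frac{2k-1}{2k}{2k-2 \choose k-1}4^{-(k-1)}\) keeps the terms numerically stable):

```python
from math import sqrt, pi

def p00_terms(N):
    """Yield P_{00}^{2k} = C(2k, k)(1/2)^{2k} for k = 1..N via the term ratio."""
    t = 1.0
    for k in range(1, N + 1):
        t *= (2 * k - 1) / (2 * k)
        yield t

for n in (10, 100, 1000):
    terms = list(p00_terms(n))
    direct = sum(terms)                   # E[N_{2n}] summed term by term
    closed = (2 * n + 1) * terms[-1] - 1  # (2n+1) C(2n,n) (1/2)^{2n} - 1
    print(n, round(direct, 4), round(closed, 4), round(sqrt(2 * 2 * n / pi), 4))
    # last column is sqrt(2/pi) * sqrt(2n); its ratio to E[N_{2n}] tends to 1
```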


4.8 Let \(X_1, X_2, \dots\) be independent random variables such that \(P\{X_i = j\} = \alpha_j, j \geq 0\). Say that a record occurs at time \(n\) if \(X_n > max(X_1, \dots, X_{n-1})\), where \(X_0 = -\infty\), and if a record does occur at time \(n\) call \(X_n\) the record value. Let \(R_i\) denote the ith record value.
(a) Argue that \(\{R_i, i \geq 1\}\) is a Markov chain and compute its transition probabilities.
(b) Let \(T_i\) denote the time between the ith and \((i + 1)\)st record. Is \(\{T_i, i \geq 1\}\) a Markov chain? What about \(\{(R_i, T_i), i \geq 1\}\)? Compute transition probabilities where appropriate.
(c) Let \(S_n = \sum_{i=1}^n T_i, n \geq 1\). Argue that \(\{S_n, n \geq 1\}\) is a Markov chain and find its transition probabilities.


(a) Given the current record value \(i\), the next record value is the value of the first subsequent observation exceeding \(i\), which depends on the history only through \(i\); hence \(\{R_i, i \geq 1\}\) is a Markov chain with
$$
P_{ij} = \left\{
\begin{array}{ll}
0 \quad i \geq j \\
\alpha_j/\sum_{k=i+1}^{\infty} \alpha_k \quad i < j \\
\end{array}
\right. $$


4.9 For a Markov chain \(\{X_n, n \geq 0\}\), show that
$$P\{X_k = i_k|X_j = i_j, \text{for all } j \neq k\} = P\{X_k = i_k|X_{k-1} = i_{k-1}, X_{k+1} = i_{k+1}\}$$


$$\begin{align}
& P\{X_k = i_k|X_j = i_j, \text{for all } j \neq k\} \\
=& \frac{P\{X_j = i_j, \text{for all } j\} }{ P\{X_j = i_j, \text{for all } j \neq k\}} \\
=& \frac{\prod_{j} P_{i_ji_{j+1}}}{P^2_{i_{k-1}i_{k+1}}\prod_{j \neq k-1,\, k} P_{i_ji_{j+1}}} \\
=& \frac{P_{i_{k-1}i_k}P_{i_ki_{k+1}}}{P^2_{i_{k-1}i_{k+1}}} \\
=& P\{X_k = i_k|X_{k-1} = i_{k-1}, X_{k+1} = i_{k+1}\}
\end{align}$$
Here the products run over the finitely many time points being conditioned on, and \(P^2_{i_{k-1}i_{k+1}}\) is the two-step transition probability, which appears in the denominator because the unspecified value of \(X_k\) is summed out. The last equality holds because
$$P\{X_k = i_k|X_{k-1} = i_{k-1}, X_{k+1} = i_{k+1}\} = \frac{P_{i_{k-1}i_k}P_{i_ki_{k+1}}}{P^2_{i_{k-1}i_{k+1}}}$$


4.11 If \(f_{ii} < 1\) and \(f_{jj} < 1\), show that:
(a) $$\sum_{n=1}^{\infty} P_{ij}^n < \infty$$
(b) $$f_{ij} = \frac{\sum_{n=1}^{\infty} P_{ij}^n}{1 + \sum_{n=1}^{\infty}P_{jj}^n}$$


(a) Since \(f_{jj} < 1\), state \(j\) is transient, so the total number of visits to \(j\) has finite expectation from any starting state; as \(\sum_{n=1}^{\infty} P_{ij}^n\) is exactly the expected number of visits to \(j\) starting from \(i\), it is finite.
(b) $$\begin{align}
&E[\text{# of visits j starting from i}] \\
=& \sum_{n=1}^{\infty} P_{ij}^n \\
=&\sum_{n=1}^{\infty}E[\text{# of visits j starting from i}|\text{first visit j at step n}]f_{ij}^n \\
=& \sum_{n=1}^{\infty}f_{ij}^n(1 + E[\text{# of visits j starting from j}]) \\
=& f_{ij}(1 + \sum_{n=1}^{\infty}P_{jj}^n)
\end{align}$$


4.12 A transition probability matrix \(P\) is said to be doubly stochastic if
$$\sum_i P_{ij} = 1 \quad \text{for all } j.$$
That is, the column sums all equal 1. If a doubly stochastic chain has \(n\) states and is ergodic, calculate its limiting probabilities.


Doubly stochastic means each column, as well as each row, sums to 1, which suggests guessing that the stationary distribution is uniform over the states. Indeed, \(\pi = (1/n, \dots, 1/n)\) satisfies \(\pi = \pi P\), since \(\sum_i \frac{1}{n}P_{ij} = \frac{1}{n}\sum_i P_{ij} = \frac{1}{n}\). By uniqueness of the stationary distribution of an ergodic chain, the limiting probabilities are \(\pi_j = 1/n\).
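A one-off numeric check (a sketch; building the doubly stochastic matrix as a mixture of permutation matrices is just one convenient construction):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
# A convex combination of permutation matrices is doubly stochastic (Birkhoff).
P = np.zeros((n, n))
for w in rng.dirichlet(np.ones(4)):
    P += w * np.eye(n)[rng.permutation(n)]
pi = np.full(n, 1 / n)
print(np.allclose(pi @ P, pi))   # True: the uniform vector is stationary
```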


4.15 In the \(M/G/1\) system (Example 4.3(A)) suppose that \(\rho < 1\) and thus the stationary probabilities exist. Compute \(\pi^{\prime}(s)\) and find, by taking the limit as \(s \to 1, \sum_0^{\infty}i\pi_i\).


From Example 4.3(A), \(\pi(s) = (1 - A^{\prime}(1))\frac{(s-1)A(s)}{s - A(s)}\), so
$$\pi^{\prime}(s) = (1 - A^{\prime}(1))\frac{A(s)[1 - A(s)] + s(s-1)A^{\prime}(s)}{[s-A(s)]^2}$$
Both the numerator and the denominator vanish at \(s = 1\); expanding \(A(s) = 1 + A^{\prime}(1)(s-1) + \frac{1}{2}A^{\prime\prime}(1)(s-1)^2 + o((s-1)^2)\) gives
$$\sum_0^{\infty}i\pi_i = \lim_{s \to 1} \pi^{\prime}(s) = \frac{A^{\prime}(1) + \frac{1}{2}A^{\prime\prime}(1) - [A^{\prime}(1)]^2}{1 - A^{\prime}(1)} = \rho + \frac{\lambda^2E[S^2]}{2(1-\rho)}$$
where \(\rho = A^{\prime}(1) = \lambda E[S]\) and \(A^{\prime\prime}(1) = \lambda^2E[S^2]\), since the number of arrivals during a service time is Poisson with mean \(\lambda S\). This is the Pollaczek-Khinchine formula for the mean number in the system at departure epochs.
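A numeric check for an assumed special case, exponential service with rate \(\mu\), for which \(A(s) = \mu/(\mu + \lambda - \lambda s)\) and the limit should equal \(\rho/(1-\rho)\):

```python
lam, mu = 1.0, 2.0
rho = lam / mu
A = lambda s: mu / (mu + lam - lam * s)          # PGF of arrivals during a service
Pi = lambda s: (1 - rho) * (s - 1) * A(s) / (s - A(s))

h, s = 1e-5, 1 - 1e-3
deriv = (Pi(s + h) - Pi(s - h)) / (2 * h)        # pi'(s) just below s = 1
ES2 = 2 / mu ** 2                                # E[S^2] for Exp(mu) service
pk = rho + lam ** 2 * ES2 / (2 * (1 - rho))      # Pollaczek-Khinchine value
print(deriv, pk)                                 # both close to rho/(1-rho) = 1.0
```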


4.18 Jobs arrive at a processing center in accordance with a Poisson process with rate \(\lambda\). However, the center has waiting space for only \(N\) jobs and so an arriving job finding \(N\) others waiting goes away. At most 1 job per day can be processed, and processing of this job must start at the beginning of the day. Thus, if there are any jobs waiting for processing at the beginning of a day, then one of them is processed that day, and if no jobs are waiting at the beginning of a day then no jobs are processed that day. Let \(X_n\) denote the number of jobs at the center at the beginning of day \(n\).
(a) Find the transition probabilities of the Markov chain \(\{X_n, n \geq 0\}\).
(b) Is this chain ergodic? Explain.
(c) Write the equations for the stationary probabilities.


Let \(p(j) = \lambda^je^{-\lambda}/j!\) denote the Poisson probabilities.
(a) $$\begin{align}
P_{0j} &= \left\{
\begin{array}{ll}
p(j) \quad 0 \leq j < N \\
\sum_{k=N}^{\infty}p(k) \quad j = N \\
\end{array}
\right. \\
P_{ij} &= \left\{
\begin{array}{ll}
p(j-i+1) \quad i – 1 \leq j < N \\
\sum_{k=N – i+1}^{\infty}p(k) \quad j = N \\
\end{array}
\right. \end{align}$$
(b) The Markov chain is irreducible, since \(P_{0j} > 0\) for every \(j\) and \(P_{j0}^{j} = (p(0))^j > 0\). It is aperiodic, since \(P_{00} > 0\). A finite-state Markov chain that is irreducible and aperiodic is ergodic (it is not possible for all states to be transient, nor for any state to be null recurrent).
(c) There is no particularly elegant way to write these equations. Since \(\pi_j = \sum_{k=0}^{j+1}\pi_kP_{kj}\) for \(j < N\) (a state above \(j+1\) cannot reach \(j\) in one step), they can be rewritten as a recursion:
$$\pi_{j+1} = \frac{\pi_j(1 - P_{jj}) - \sum_{k=0}^{j-1}\pi_kP_{kj}}{P_{j+1, j}}$$
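A sketch of a numeric check (the values of \(\lambda\) and \(N\) are assumptions): build the transition matrix, then compare the stationary distribution obtained from the eigenvector of \(P^T\) with the one obtained from the recursion above.

```python
import numpy as np
from math import exp, factorial

lam, N = 2.0, 5
p = lambda j: lam ** j * exp(-lam) / factorial(j)

P = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    base = max(i - 1, 0)              # jobs left after (possibly) processing one
    for j in range(base, N):
        P[i, j] = p(j - base)
    P[i, N] = 1 - P[i].sum()          # overflow probability lumped at state N

w, v = np.linalg.eig(P.T)             # stationary distribution via eigenvector
pi = np.real(v[:, np.argmax(np.real(w))]); pi /= pi.sum()

pi2 = np.zeros(N + 1); pi2[0] = 1.0   # ...and via the recursion
for j in range(N):
    pi2[j + 1] = (pi2[j] * (1 - P[j, j])
                  - sum(pi2[k] * P[k, j] for k in range(j))) / P[j + 1, j]
pi2 /= pi2.sum()
print(np.allclose(pi, pi2))           # True
```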


4.19 Let \(\pi_j, j \geq 0\), be the stationary probabilities for a specified Markov chain.

(a) Complete the following statement: \(\pi_iP_{ij}\) is the proportion of all transitions that . . . .
Let \(A\) denote a set of states and let \(A^c\) denote the remaining states.
(b) Finish the following statement: \(\sum_{j \in A^c}\sum_{i \in A} \pi_iP_{ij}\) is the proportion of all transitions that . . . .
(c) Let \(N_n(A, A^c)\) denote the number of the first \(n\) transitions that are from a state in \(A\) to one in \(A^c\); similarly, let \(N_n(A^c, A)\) denote the number that are from a state in \(A^c\) to one in \(A\). Argue that
$$|N_n(A, A^c) – N_n(A^c, A) | \leq 1$$
(d) Prove and interpret the following result:
$$ \sum_{j \in A^c}\sum_{i \in A} \pi_iP_{ij} = \sum_{j \in A^c}\sum_{i \in A} \pi_jP_{ji} $$


(a) . . . go from state \(i\) to state \(j\).
(b) . . . are from a state in \(A\) to one in \(A^c\).
(c) The chain cannot make two transitions from \(A\) to \(A^c\) without a transition from \(A^c\) back to \(A\) in between, and vice versa; hence the two counts can differ by at most 1.
(d) Divide the inequality in (c) by \(n\) and let \(n \to \infty\): \(N_n(A, A^c)/n \to \sum_{j \in A^c}\sum_{i \in A} \pi_iP_{ij}\) and \(N_n(A^c, A)/n \to \sum_{j \in A^c}\sum_{i \in A} \pi_jP_{ji}\), and by (c) the two limits are equal. In words: in the long run, the rate of transitions from \(A\) to \(A^c\) equals the rate of transitions from \(A^c\) to \(A\).


4.22 Compute the expected number of plays, starting in \(i\), in the gambler’s ruin problem, until the gambler reaches either 0 or \(N\).


From Example 4.4(A), for \(p \neq 1/2\),
$$E[B] = \frac{1}{2p-1}\left\{\frac{N[1 - (q/p)^i]}{1 - (q/p)^N} - i\right\}$$
and \(E[B] = i(N - i)\) when \(p = 1/2\).


4.23 In the gambler’s ruin problem show that
\(P\{\)she wins the next gamble|present fortune is \(i\), she eventually reaches \(N\}\)
$$=\left\{
\begin{array}{ll}
p[1- (q/p)^{i+1}]/[1-(q/p)^i] \quad p \neq 1/2 \\
(i+1)/2i \quad p = 1/2 \\
\end{array}
\right. $$


Let \(A\) denote the event that she wins the next gamble and \(B\) the event that she eventually reaches \(N\), both given that her present fortune is \(i\). Writing \(P_i = [1-(q/p)^i]/[1-(q/p)^N]\) for the probability of reaching \(N\) from fortune \(i\), we have \(P\{B\} = P_i\) and \(P\{AB\} = pP_{i+1}\). Then,
$$\begin{align}
P\{A|B\} &= \frac{P\{AB\}}{P\{B\}}\\
&=\left\{\begin{array}{ll}
\frac{p[1- (q/p)^{i+1}]/[1-(q/p)^N]}{[1- (q/p)^{i}]/[1-(q/p)^N]} \quad p \neq 1/2 \\
\frac{(i+1)/2N}{i/N} \quad p = 1/2 \\
\end{array}
\right.\\
&=\left\{
\begin{array}{ll}
p[1- (q/p)^{i+1}]/[1-(q/p)^i] \quad p \neq 1/2 \\
(i+1)/2i \quad p = 1/2 \\
\end{array}
\right.
\end{align}$$


4.24 Let \(T = \{1, \dots, t\}\) denote the transient states of a Markov chain, and let \(Q\) be, as in Section 4.4, the matrix of transition probabilities from states in \(T\) to states in \(T\). Let \(m_{ij}(n)\) denote the expected amount of time spent in state \(j\) during the first \(n\) transitions given that the chain begins in state \(i\), for \(i\) and \(j\) in \(T\). Let \(M_n\) be the matrix whose element in row \(i\), column \(j\) is \(m_{ij}(n)\).
(a) Show that \(M_n = I + Q + Q^2 + \dots + Q^n\).
(b) Show that \(M_n - I + Q^{n+1} = Q[I + Q + Q^2 + \dots + Q^n]\).
(c) Show that \(M_n = (I - Q)^{-1}(I - Q^{n+1})\).


(a) The element of \(Q^{k}\) in row \(i\), column \(j\) is \(P_{ij}^k\) for \(i, j \in T\) (a path between transient states cannot pass through a recurrent state), so
$$m_{ij}(n) = E\Big[\sum_{k=0}^n I\{X_k = j\} \Big| X_0 = i\Big] = \sum_{k=0}^n P_{ij}^k = \sum_{k=0}^n (Q^k)_{ij}$$
where the \(k = 0\) term contributes \(I\). Hence \(M_n = I + Q + Q^2 + \dots + Q^n\).
(b) Multiplying the identity in (a) by \(Q\) gives \(Q[I + Q + \dots + Q^n] = Q + Q^2 + \dots + Q^{n+1} = M_n - I + Q^{n+1}\).
(c) By (b), \((I-Q)M_n = I - Q^{n+1}\). Since all states in \(T\) are transient, \(Q^{n} \to 0\) and \(I - Q\) is invertible, so \(M_n = (I - Q)^{-1}(I - Q^{n+1})\), the usual geometric-series formula.


4.25 Consider the gambler’s ruin problem with \(N = 6\) and \(p = 0.7\). Starting in state 3, determine:
(a) the expected number of visits to state 5.
(b) the expected number of visits to state 1.
(c) the expected number of visits to state 5 in the first 7 transitions.
(d) the probability of ever visiting state 1.


Calculate \(M = (I - Q)^{-1}\), where \(Q\) is the transition probability matrix restricted to the transient states \(1, \dots, 5\); then
(a) \(m_{3,5} = 1.3243\) (b) \(m_{3,1} = 0.2432\) (d) \(f_{3,1} = m_{3,1}/m_{1,1} = 0.1717\)
(c) From Problem 4.24(c), compute \(M_{7} = M(I - Q^8)\); the (3, 5) element of \(M_7\) is \(0.9932\).
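These computations are easy to reproduce with any numerical tool; for instance, a minimal numpy sketch (assuming the transient states are ordered 1 through 5):

```python
import numpy as np

p = 0.7
Q = np.zeros((5, 5))                  # transient states 1..5; row r is state r + 1
for r in range(5):
    if r < 4:
        Q[r, r + 1] = p               # win a play
    if r > 0:
        Q[r, r - 1] = 1 - p           # lose a play
M = np.linalg.inv(np.eye(5) - Q)
print(M[2, 4])                        # (a) m_{3,5} = 1.3243
print(M[2, 0])                        # (b) m_{3,1} = 0.2432
print(M[2, 0] / M[0, 0])              # (d) f_{3,1} = 0.1717
M7 = M @ (np.eye(5) - np.linalg.matrix_power(Q, 8))
print(M7[2, 4])                       # (c) 0.9932
```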


4.26 Consider the Markov chain with states \(0, 1, \dots, n\) and transition probabilities
$$P_{0,1} = 1 = P_{n, n-1}, \quad P_{i, i+1} = p = 1 - P_{i, i-1}, \quad 0 < i < n.$$
Let \(\mu_{i,n}\) denote the mean time to go from state \(i\) to state \(n\).
(a) Derive a set of linear equations for the \(\mu_{i, n}\).
(b) Let \(m_i\) denote the mean time to go from state \(i\) to state \(i+1\). Derive a set of equations for \(m_i, i=0, \dots, n-1\), and show how they can be solved recursively, first for \(i=0\), then \(i = 1\), and so on.
(c) What is the relation between \(\mu_{i, n}\) and the \(m_j\)?
Starting at state 0, say that an excursion ends when the chain either returns to 0 or reaches state \(n\). Let \(X_j\) denote the number of transitions in the jth excursion (that is, the one that begins at the jth return to 0), \(j \geq 1\).
(d) Find \(E[X_j]\)
(Hint: Relate it to the mean time of a gambler’s ruin problem)
(e) Let \(N\) denote the first excursion that ends in state \(n\), and find \(E[N]\)
(f) Find \(\mu_{0,n}\).
(g) Find \(\mu_{i,n}\).


(a) \(\mu_{i, n} = 1 + p\mu_{i+1,n} + (1-p)\mu_{i-1, n}\) for \(0 < i < n\), with \(\mu_{0,n} = 1 + \mu_{1,n}\) and \(\mu_{n,n} = 0\).
(b) Conditioning on the first step from \(i\),
$$\begin{align}
m_i &= p + (1-p)(1 + m_{i-1} + m_{i}), \quad \text{i.e.,} \quad m_i = \frac{1 + (1-p)m_{i-1}}{p}\\
m_i + \frac{1}{1 - 2p} &= \frac{1-p}{p}\Big(m_{i-1} + \frac{1}{1-2p}\Big) \\
m_0 &= 1, \quad m_1 = \frac{2-p}{p}
\end{align}$$
so the \(m_i\) can be computed recursively, first \(m_0\), then \(m_1\), and so on.
(c) \(\mu_{i, n} = \sum_{j=i}^{n-1}m_j\)
(d) An excursion consists of one step from 0 to 1 followed by a gambler's ruin starting at 1 with absorbing barriers at 0 and \(n\); hence, with \(q = 1-p\),
$$E[X_j] = 1 + \frac{1}{2p-1}\Big\{\frac{n[1 - (q/p)]}{1 - (q/p)^n} - 1\Big\}$$
(e) \(N\) is a geometric random variable with parameter \(a = (1 - q/p)/[1 - (q/p)^n]\), the probability that the gambler's ruin in (d) ends at \(n\); thus \(E[N] = 1/a\).
(f) By Wald's equation, \(\mu_{0,n} = E[N]E[X_1]\); alternatively, by (c),
$$\mu_{0,n} = \sum_{j=0}^{n-1}m_j = \frac{2p(p-1)}{(2p - 1)^2}\Big[1 - \Big(\frac{1-p}{p}\Big)^n\Big] - \frac{n}{1-2p}$$
(g) $$\mu_{i,n} = \sum_{j=i}^{n-1}m_j = \frac{2p(p-1)}{(2p-1)^2}\Big[\Big(\frac{1-p}{p}\Big)^i - \Big(\frac{1-p}{p}\Big)^n\Big] - \frac{n-i}{1-2p}$$
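A numeric check of the recursion in (b) against the closed form in (f) (a sketch; the values of \(p\) and \(n\) are assumptions):

```python
p, n = 0.7, 10
q = 1 - p

m = [1.0]                            # m_0 = 1
for i in range(1, n):                # m_i = (1 + (1-p) m_{i-1}) / p
    m.append((1 + (1 - p) * m[-1]) / p)

mu_0n = sum(m)                       # part (c): mu_{0,n} = sum of m_0..m_{n-1}
closed = 2 * p * (p - 1) / (2 * p - 1) ** 2 * (1 - (q / p) ** n) - n / (1 - 2 * p)
print(mu_0n, closed)                 # agree
```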


4.27 Consider a particle that moves along a set of \(m+1\) nodes, labeled \(0, 1, \dots, m\). At each move it either goes one step in the clockwise direction with probability \(p\) or one step in the counterclockwise direction with probability \(1-p\). It continues moving until all the nodes \(1, 2, \dots, m\) have been visited at least once. Starting at node 0, find the probability that node \(i\) is the last node visited, \(i = 1, \dots, m\).


As in Example 1.9(B), node \(i\) is the last node visited exactly when the walk reaches one neighbor of \(i\) and then travels around the rest of the circle to the other neighbor without passing through \(i\). Let \(f_{i,n} = \frac{1-(q/p)^i}{1-(q/p)^n}\) (with \(q = 1-p\)) be the probability that a walk that steps up with probability \(p\) reaches \(n\) before 0 when started at \(i\). Unrolling the circle shows that \(P\{\text{visit } i-1 \text{ before } i+1\} = f_{m-i, m-1}\), that from \(i-1\) the walk reaches \(i+1\) the long way around (avoiding \(i\)) with probability \(1 - f_{m-1, m}\), and that from \(i+1\) it reaches \(i-1\) the long way with probability \(f_{1, m-1}\). Hence
$$\begin{align}
&P\{i \text{ is the last node visited}\}\\
=&(1-f_{m-1, m})P\{\text{visit } i-1 \text{ before } i+1\} + f_{1,m-1}P\{\text{visit } i+1 \text{ before } i-1\} \\
=&(1-f_{m-1, m})f_{m-i, m-1} + f_{1, m-1}(1 - f_{m-i, m-1})
\end{align}$$
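The formula is easy to check by simulation (a sketch; the values of \(m\) and \(p\) are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
m, p = 5, 0.6
q = 1 - p
f = lambda i, n: (1 - (q / p) ** i) / (1 - (q / p) ** n)

def last_node():
    """Walk the circle 0..m until one node remains unvisited; return it."""
    pos, unvisited = 0, set(range(1, m + 1))
    while len(unvisited) > 1:
        pos = (pos + 1) % (m + 1) if rng.random() < p else (pos - 1) % (m + 1)
        unvisited.discard(pos)
    return unvisited.pop()

sims = np.array([last_node() for _ in range(100_000)])
for i in range(1, m + 1):
    theo = (1 - f(m - 1, m)) * f(m - i, m - 1) + f(1, m - 1) * (1 - f(m - i, m - 1))
    print(i, round((sims == i).mean(), 4), round(theo, 4))
```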


4.28 In Problem 4.27, find the expected number of additional steps it takes to return to the initial position after all nodes have been visited.


Given that the last node visited is \(i\) (probability \(P_i\) from Problem 4.27), the walk stands at \(i\) the moment every node has been visited, and the expected number of additional steps to return to 0 is the mean duration \(B_i\) of a gambler's ruin on \(\{0, 1, \dots, m+1\}\) started at \(i\) (unrolling the circle, both endpoints represent node 0). Hence, with \(q = 1-p\),
$$\begin{align}
P_i &= (1-f_{m-1, m})f_{m-i, m-1} + f_{1, m-1}(1 - f_{m-i, m-1}) \\
B_i &= \frac{1}{2p-1}\Big\{ \frac{(m+1)[1 - (q/p)^i]}{1 - (q/p)^{m+1}} - i\Big\} \\
E &= \sum_{i=1}^{m} B_iP_i
\end{align}$$


4.29 Each day one of \(n\) possible elements is requested, the ith one with probability \(P_i, i \geq 1, \sum_{i=1}^n P_i = 1\). These elements are at all times arranged in an ordered list that is revised as follows: the element selected is moved to the front of the list with the relative positions of all the other elements remaining unchanged. Define the state at any time to be the list ordering at that time.
(a) Argue that the above is a Markov chain.
(b) For any state \(i_1, \dots, i_n\)(which is a permutation of \(1,2, \dots, n\)), let \(\pi(i_1, \dots, i_n)\) denote the limiting probability. Argue that
$$\pi(i_1, \dots, i_n) = P_{i_1}\frac{P_{i_2}}{1 - P_{i_1}}\dots\frac{P_{i_{n-1}}}{1 - P_{i_1} - \dots - P_{i_{n-2}}}$$


(a) Given the current ordering, the next ordering is completely determined by which element is requested, and the requests are independent of the past; hence the sequence of list orderings is a Markov chain.
(b) In the long run, the ordering is \((i_1, \dots, i_n)\) exactly when, looking backward from the present, \(i_1\) was requested more recently than \(i_2, \dots, i_n\); \(i_2\) more recently than \(i_3, \dots, i_n\); and so on. Since the requests are i.i.d., the probability that the most recent request is for \(i_1\) is \(P_{i_1}\); given that, the probability that the most recent request among \(\{i_2, \dots, i_n\}\) is for \(i_2\) is \(P_{i_2}/(1 - P_{i_1})\), and so on, which yields the stated product.



4.31 A spider hunting a fly moves between locations 1 and 2 according to a Markov chain with transition matrix \(\begin{bmatrix} 0.7 & 0.3 \\ 0.3 & 0.7\end{bmatrix}\) starting in location 1. The fly, unaware of the spider, starts in location 2 and moves according to a Markov chain with transition matrix \(\begin{bmatrix} 0.4 & 0.6 \\ 0.6 & 0.4\end{bmatrix}\). The spider catches the fly and the hunt ends whenever they meet in the same location.
Show that the progress of the hunt, except for knowing the location where it ends, can be described by a three-state Markov chain where one absorbing state represents hunt ended and the other two that the spider and fly are at different locations. Obtain the transition matrix for this chain.
(a) Find the probability that at time \(n\) the spider and fly are both at their initial locations.
(b) What is the average duration of the hunt?


Let state 1 denote spider at 1, fly at 2; state 2 spider at 2, fly at 1; and state 3 (absorbing) that they are at the same location. Since the spider and fly move independently, the transition matrix is$$
P = \begin{bmatrix} 0.28 & 0.18 & 0.54 \\ 0.18 & 0.28 & 0.54\\ 0 & 0 & 1\end{bmatrix} $$
(a) By diagonalizing the transition matrix (eigenvalues 1, 0.1, 0.46), we can compute
$$P^n =
\begin{bmatrix} 1 & -1 & 1 \\ 1 & 1 & 1\\ 1 & 0 & 0\end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0.1^n & 0\\ 0 & 0 & 0.46^n\end{bmatrix}
\begin{bmatrix} 0 & 0 & 1 \\ -0.5 & 0.5 & 0\\ 0.5 & 0.5 & -1\end{bmatrix} $$
thus the probability that both are at their initial locations at time \(n\) is \(P_{11}^n = (0.1^n + 0.46^n)/2\).
(b) From either transient state the hunt ends at the next step with probability 0.54, so the duration is a geometric variable with mean \(1/0.54 = 50/27\).
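A quick numeric check of (a), comparing powers of \(P\) with the closed form:

```python
import numpy as np

P = np.array([[0.28, 0.18, 0.54],
              [0.18, 0.28, 0.54],
              [0.00, 0.00, 1.00]])
for n in (1, 5, 10):
    print(n, np.linalg.matrix_power(P, n)[0, 0], (0.1 ** n + 0.46 ** n) / 2)  # agree
```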


4.32 Consider a simple random walk on the integer points in which at each step a particle moves one step in the positive direction with probability \(p\), one step in the negative direction with probability \(p\), and remains in the same place with probability \(q = 1 - 2p\) \((0 < p < 1/2)\). Suppose an absorbing barrier is placed at the origin (that is, \(P_{00}=1\)) and a reflecting barrier at \(N\) (that is, \(P_{N, N-1} = 1\)), and that the particle starts at \(n\) \((0 < n < N)\).
Show that the probability of absorption is 1, and find the mean number of steps.


Let \(A_i\) denote the probability of absorption for a particle starting at \(i\). Then
$$A_i = pA_{i-1} + pA_{i+1} + (1-2p)A_i, \quad A_0 = 1, \quad A_{N} = A_{N-1}$$
Solving these equations gives \(A_i = 1\) for all \(i\).
Let \(T_i\) denote the expected number of steps starting at \(i\), then
$$\begin{align}
E[T_i] &= 1 + pE[T_{i-1}] + pE[T_{i+1}] + (1-2p)E[T_i],\\
E[T_0] &= 0, \\
E[T_N] &= 1 + E[T_{N-1}]\end{align}$$
Solving the equations above gives \(E[T_n] = \frac{n(2N - n + 2p - 1)}{2p}\).
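A numeric check (a sketch; the values of \(p\), \(N\), and \(n\) are assumptions): solve the linear system for \(E[T_1], \dots, E[T_N]\) directly and compare with the closed form.

```python
import numpy as np

p, N = 0.2, 8
# Unknowns T_1..T_N (T_0 = 0). Interior equations: 2p T_i - p T_{i-1} - p T_{i+1} = 1.
A = np.zeros((N, N)); b = np.ones(N)
for i in range(1, N):                # state i occupies row i - 1
    A[i - 1, i - 1] = 2 * p
    if i >= 2:
        A[i - 1, i - 2] = -p         # T_{i-1} term (absent for i = 1 since T_0 = 0)
    A[i - 1, i] = -p                 # T_{i+1} term
A[N - 1, N - 1], A[N - 1, N - 2] = 1.0, -1.0   # reflecting: T_N - T_{N-1} = 1
T = np.linalg.solve(A, b)
n = 5
print(T[n - 1], n * (2 * N - n + 2 * p - 1) / (2 * p))   # agree
```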


4.33 Given that \(\{X_n, n \geq 0\}\) is a branching process:
(a) Argue that either \(X_n\) converges to 0 or infinity.
(b) Show that
$$Var(X_n|X_0 = 1) = \left\{\begin{array}{ll}
\sigma^2\mu^{n-1} \frac{\mu^n - 1}{\mu - 1} \quad \mu \neq 1 \\
n\sigma^2 \quad \mu = 1 \\
\end{array}
\right. $$
where \(\mu\) and \(\sigma^2\) are the mean and variance of the number of offspring an individual has.


(a) Suppose \(P_0 > 0\). Then every state \(k \geq 1\) is transient, since from \(k\) there is probability \(P_0^k > 0\) of immediate extinction, i.e., of never returning to \(k\). A transient state is visited only finitely often, so for each \(n\) the chain eventually leaves the finite set \(\{1, 2, \dots, n\}\) forever. Since 0 is absorbing, \(X_n\) either converges to 0 (the population dies out) or exceeds every \(n\), i.e., converges to infinity.
(b) $$\begin{align}
&Var(X_n|X_0 = 1)\\
=& E[Var(X_n|X_{n-1}, X_0=1)] + Var(E[X_n|X_{n-1}, X_0 = 1]) \\
=& \sigma^2 E[X_{n-1}|X_0=1] + Var(\mu X_{n-1}|X_0 = 1) \\
=& \sigma^2\mu^{n-1} + \mu^2Var(X_{n-1}|X_0=1) \\
=& \sigma^2(\mu^{n-1} + \mu^n + \dots + \mu^{2n-2}) \\
=& \left\{\begin{array}{ll}
\sigma^2\mu^{n-1} \frac{\mu^n - 1}{\mu - 1} \quad \mu \neq 1 \\
n\sigma^2 \quad \mu = 1 \\
\end{array}
\right. \end{align}$$


4.34 In a branching process the number of offspring per individual has a binomial distribution with parameters \(2, p\). Starting with a single individual, calculate:
(a) the extinction probability;
(b) the probability that the population becomes extinct for the first time in the third generation.
Suppose that, instead of starting with a single individual, the initial population size \(Z_0\) is a random variable that is Poisson distributed with mean \(\lambda\). Show that, in this case, the extinction probability is given, for \(p > 1/2\), by
$$exp\{\lambda(1-2p)/p^2\}$$


(a) The extinction probability \(\pi_0\) is the smallest positive solution of
$$\pi_0 = \sum_{k=0}^2 \pi_0^k{2 \choose k}p^k(1-p)^{2-k}$$
which gives
$$\pi_0 = \left\{\begin{array}{ll}
1 \quad p \leq 0.5 \\
(1 - 1/p)^2 \quad p > 0.5 \\
\end{array}
\right. $$
(b) Let \(\phi_n(s) = E[s^{X_n}]\). Then \(\phi_n(s) = \phi_1(\phi_{n-1}(s))\), and it’s easy to see that \(\phi_1(s) = (sp + 1 -p)^2, P\{X_n = 0\} = \phi_n(0)\). Thus,
$$\begin{align}
&\phi_3(0) – \phi_2(0) \\
=& \phi_1(\phi_1(\phi_1(0))) – \phi_1(\phi_1(0))\\
=& 4p^2(1-p)^4 + 6p^3(1-p)^5 + 6p^4(1-p)^6 + 4p^5(1-p)^7 + p^6(1-p)^8
\end{align}$$
(c) $$\begin{align}
&\sum_{k=0}^{\infty} (1 - 1/p)^{2k} \frac{\lambda^ke^{-\lambda}}{k!} \\
=& e^{\lambda(1-2p)/p^2} \sum_{k=0}^{\infty} \frac{[\lambda(1 - 1/p)^2]^k}{k!}e^{-\lambda(1 - 1/p)^2 }\\
=&\exp\{\lambda(1-2p)/p^2\}
\end{align}$$
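A numeric check of (a) and (b) (a sketch; \(p = 0.7\) is an assumed value): iterating the offspring PGF \(\phi_1\) from 0 converges to the extinction probability, and \(\phi_3(0) - \phi_2(0)\) matches the expanded polynomial.

```python
p = 0.7
q = 1 - p
pgf = lambda s: (s * p + q) ** 2     # phi_1 for Binomial(2, p) offspring

s = 0.0
for _ in range(200):                 # converges to the smallest fixed point
    s = pgf(s)
print(s, (1 - 1 / p) ** 2)           # extinction probability, both ~ 0.1837

phi2 = pgf(pgf(0.0)); phi3 = pgf(phi2)
poly = (4 * p**2 * q**4 + 6 * p**3 * q**5 + 6 * p**4 * q**6
        + 4 * p**5 * q**7 + p**6 * q**8)
print(phi3 - phi2, poly)             # agree
```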


4.35 Consider a branching process in which the number of offspring per individual is Poisson distributed with mean \(\lambda, \lambda > 1\). Let \(\pi_0\) denote the probability that, starting with a single individual, the population eventually becomes extinct. Also, let \(a, a < 1\), be such that$$ae^{-a} = \lambda e^{-\lambda}$$
(a) Show that \(a = \lambda\pi_0\).
(b) Show that, conditional on eventual extinction, the branching process follows the same probability law as the branching process in which the number of offspring per individual is Poisson with mean \(a\).


(a) $$\begin{align}
\pi_0 &= \sum_{k=0}^{\infty} \frac{\pi_0^k\lambda^k}{k!} e^{-\lambda}\\
&= e^{\lambda\pi_0-\lambda}\sum_{k=0}^{\infty}\frac{(\lambda\pi_0)^k}{k!}e^{-\lambda\pi_0}\\
&= e^{\lambda\pi_0-\lambda} \\ \end{align}$$
Thus,
$$\lambda\pi_0e^{-\lambda\pi_0} = \lambda e^{-\lambda} = ae^{-a}$$
The equation \(xe^{-x} = \lambda e^{-\lambda}\) has exactly two positive solutions, \(a < 1\) and \(\lambda > 1\); since \(\lambda\pi_0\) is a solution and \(\lambda\pi_0 \neq \lambda\) (because \(\pi_0 < 1\)), we must have \(a = \lambda\pi_0\).
(b) $$\begin{align}
&P\{X = k| \text{eventually die out}\} \\
=&\frac{P\{X=k\}\pi_0^k}{\pi_0} \\
=&\frac{(\pi_0\lambda)^{k-1}\lambda e^{-\lambda}}{k!}\\
=&\frac{a^ke^{-a}}{k!}
\end{align}$$


4.36 For the Markov chain model of Section 4.6.1, namely,
$$P_{ij} = \frac{1}{i – 1}, \quad j = 1, \dots, i-1, \quad i > 1,$$
suppose that the initial state is \(N \equiv {n \choose m}\), where \(n>m\). Show that when \(n, m\) and \(n-m\) are large the number of steps to reach 1 from state \(N\) has approximately a Poisson distribution with mean
$$m[c\log\frac{c}{c-1} + \log(c-1)],$$
where \(c = n/m\). (Hint: Use Stirling’s approximation.)


From Proposition 4.6.2(iii), \(T_N\) is approximately Poisson with mean \(\log N = \log {n \choose m}\). By Stirling's approximation \(\log k! \sim k\log k - k\),
$$\begin{align}
\log {n \choose m} &= \log n! - \log m! - \log (n-m)! \\
&\sim n\log n - n - m\log m + m -(n-m)\log(n-m) + (n-m) \\
&= n\log\frac{n}{n-m} + m\log\frac{n-m}{m}\\
&= m[c\log\frac{c}{c-1} + \log(c-1)]
\end{align}$$
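A quick check of the approximation (a sketch; \(n\) and \(m\) are assumed values):

```python
from math import lgamma, log

n, m = 10_000, 4_000
c = n / m
exact = lgamma(n + 1) - lgamma(m + 1) - lgamma(n - m + 1)   # log C(n, m)
approx = m * (c * log(c / (c - 1)) + log(c - 1))
print(exact, approx)                 # close when n, m, n - m are all large
```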


4.37 For any infinite sequence \(x_1, x_2, \dots\), we say that a new long run begins each time the sequence changes direction. That is, if the sequence starts 5, 2, 4, 5, 6, 9, 3, 4, then there are three long runs–namely, (5, 2), (4, 5, 6, 9), and (3, 4). Let \(X_1, X_2, \dots\) be independent uniform (0, 1) random variables and let \(I_n\) denote the initial value of the nth long run. Argue that \(\{I_n, n \geq 1\}\) is a Markov chain having a continuous state space with transition probability density given by
$$p(y|x) = e^{1-x} + e^x – e^{|y-x|} – 1.$$


Arguing as in Section 4.6.2: if the \(n\)th long run is ascending, it consists of values \(x < u_2 < \dots < u_m\) and ends when a value \(y < u_m\) appears, which begins the next run. Integrating over the ordered values \(u_2 < \dots < u_m\) in \((x, 1)\),
$$\begin{align}
&P\{I_{n+1} \in (y, y+dy), L_n = m | I_n = x\} \\
=&\left\{\begin{array}{ll}
\frac{(1-x)^{m-1}}{(m-1)!}dy \quad y < x \\
\frac{(1-x)^{m-1}}{(m-1)!}dy[1 - (\frac{y-x}{1-x})^{m-1}] \quad y > x \\
\end{array}\right.
\end{align} $$
If the \(n\)th run is descending, it consists of values \(x > u_2 > \dots > u_m\) and ends when a value \(y > u_m\) appears; then
$$\begin{align}
&P\{I_{n+1} \in (y, y+dy), L_n = m | I_n = x\} \\
=&\frac{x^{m-1}}{(m-1)!}dy\,P\{\min(X_1, \dots, X_{m-1}) < y | X_i < x, i = 1, \dots, m-1\} \\
=&\left\{\begin{array}{ll}
\frac{x^{m-1}}{(m-1)!}dy[1 - (\frac{x-y}{x})^{m-1}] \quad y < x \\
\frac{x^{m-1}}{(m-1)!}dy \quad y > x \\
\end{array}\right.
\end{align}$$Summing both contributions over \(m \geq 2\),
$$\begin{align}
p(y|x) &= \left\{\begin{array}{ll}
\sum_{m=2}^{\infty} \{\frac{(1-x)^{m-1}}{(m-1)!} + \frac{x^{m-1}}{(m-1)!}[1 - (\frac{x-y}{x})^{m-1}]\} \quad y < x \\
\sum_{m=2}^{\infty} \{\frac{(1-x)^{m-1}}{(m-1)!}[1 - (\frac{y-x}{1-x})^{m-1}] + \frac{x^{m-1}}{(m-1)!}\} \quad y > x \\
\end{array}\right. \\
&= e^{1-x} + e^x - e^{|y-x|} - 1
\end{align}$$


4.38 Suppose in Example 4.7(B) that if the Markov chain is in state \(i\) and the random variable distributed according to \(q_{ij}\) takes on the value \(j\), then the next state is set equal to \(j\) with probability \(a_j/(a_j + a_i)\) and equal to \(i\) otherwise. Show that the limiting probabilities for this chain are \(\pi_j = a_j/\sum_j a_j\).


From Example 4.7(B) (where \(q_{ij} = q_{ji}\)), the transition probabilities here are \(P_{ij} = q_{ij}a_j/(a_i + a_j)\) for \(j \neq i\). We verify that the chain is time reversible with stationary probabilities \(\pi_j = a_j/\sum_k a_k\):
$$\pi_iP_{ij} = \frac{a_i}{\sum_k a_k}\frac{q_{ij}a_j}{a_i+a_j} = \frac{a_j}{\sum_k a_k}\frac{q_{ji}a_i}{a_j+a_i} = \pi_jP_{ji}$$
which holds because \(q_{ij} = q_{ji}\). Summing the detailed balance equations over \(i\) gives \(\pi = \pi P\), so \(\pi_j = a_j/\sum_j a_j\) are the limiting probabilities.


4.39 Find the transition probabilities for the Markov chain of Example 4.3(D) and show that it is time reversible.


$$P_{ij} = \sum_{k=0}^{\min(i,j)} {i \choose k}(1-p)^kp^{i-k}\frac{e^{-\lambda}\lambda^{j-k}}{(j-k)!}$$
where \(k\) counts the individuals among the \(i\) present that remain in the system. With \(\pi_j = e^{-\lambda/p}(\lambda/p)^j/j!\), a direct computation gives \(\pi_iP_{ij} = \pi_jP_{ji}\), so the chain is time reversible.


4.41 A particle moves among \(n\) locations that are arranged in a circle (with the neighbors of location \(n\) being \(n-1\) and 1). At each step, it moves one position either in the clockwise direction with probability \(p\) or in the counterclockwise direction with probability \(1 - p\).
(a) Find the transition probabilities of the reverse chain.
(b) Is the chain time reversible?


(a) Since the chain is doubly stochastic, its stationary distribution is uniform, \(\pi_i = 1/n\). The reversed chain therefore has transition probabilities \(P^*_{ij} = \pi_jP_{ji}/\pi_i = P_{ji}\): it moves one position clockwise with probability \(1-p\) and counterclockwise with probability \(p\).
(b) The chain is time reversible precisely when \(P^*_{ij} = P_{ij}\), that is, when \(p = 1/2\).


4.42 Consider the Markov chain with states \(0, 1, \dots, n\) and with transition probabilities
$$P_{0,1} = P_{n, n-1} = 1, \quad P_{i, i+1} = p_i = 1 - P_{i, i-1}, \quad i = 1, \dots, n-1.$$
Show that this Markov chain is of the type considered in Proposition 4.7.1 and find its stationary probabilities.


The chain is a random walk on an edge-weighted graph (the line \(0, 1, \dots, n\)), with \(P_{ij} = w_{ij}/\sum_k w_{ik}\): take \(w_{0,1} = 1\) and, recursively,
$$w_{i,i+1} = \frac{p_i}{1-p_i}w_{i-1,i} = \prod_{k=1}^{i}\frac{p_k}{1-p_k}$$
so that \(P_{i,i+1} = w_{i,i+1}/(w_{i-1,i} + w_{i,i+1}) = p_i\), while \(P_{0,1} = P_{n,n-1} = 1\) hold automatically. By Proposition 4.7.1, with \(C = \sum_i\sum_j w_{ij}\),
$$\pi_0 = \frac{w_{0,1}}{C}, \quad \pi_i = \frac{w_{i-1,i} + w_{i,i+1}}{C} \ (0 < i < n), \quad \pi_n = \frac{w_{n-1,n}}{C}$$


4.44 Consider a time-reversible Markov chain with transition probabilities \(P_{ij}\) and limiting probabilities \(\pi_i\), and now consider the same chain truncated to the states \(0, 1, \dots, M\). That is, for the truncated chain its transition probabilities \(\bar{P_{ij}}\) are$$
\bar{P_{ij}} = \left\{
\begin{array}{ll}
P_{ij} + \sum_{k > M}P_{ik} \quad 0 \leq i \leq M, j = i \\
P_{ij} \quad 0 \leq i \neq j \leq M \\
0 \quad \text{otherwise}.
\end{array}
\right. $$
Show that the truncated chain is also time reversible and has limiting probabilities given by$$\bar{\pi_i} = \frac{\pi_i}{\sum_{i=0}^M\pi_i}$$


$$\begin{align}
\bar{\pi_i}\bar{P_{ij}} &= \frac{\pi_i}{\sum_{k=0}^{M}\pi_k}\Big(P_{ij} + I_{\{i=j\}}\sum_{k>M}P_{ik}\Big) \\
&= \frac{\pi_j}{\sum_{k=0}^{M}\pi_k}\Big(P_{ji} + I_{\{i=j\}}\sum_{k>M}P_{jk}\Big) \\
&= \bar{\pi_j}\bar{P_{ji}}
\end{align}$$
where the second equality uses \(\pi_iP_{ij} = \pi_jP_{ji}\) (time reversibility of the original chain); the indicator term appears only when \(i = j\), in which case it is trivially symmetric. Since the \(\bar{\pi_i}\) are nonnegative and sum to 1, the truncated chain is time reversible with the stated limiting probabilities.


4.45 Show that a finite state, ergodic Markov chain such that \(P_{ij} > 0\) for all \(i \neq j\) is time reversible if, and only if,
$$P_{ij}P_{jk}P_{ki} = P_{ik}P_{kj}P_{ji} \quad \text{for all } i, j, k$$


If the chain is time reversible, multiplying the detailed-balance equations \(\pi_iP_{ij} = \pi_jP_{ji}\), \(\pi_jP_{jk} = \pi_kP_{kj}\), and \(\pi_kP_{ki} = \pi_iP_{ik}\) together gives \(P_{ij}P_{jk}P_{ki} = P_{ik}P_{kj}P_{ji}\).
Conversely, fix a state \(i\) and set \(\pi_j = cP_{ij}/P_{ji}\), where \(c\) is chosen so that \(\sum_j \pi_j = 1\) (possible since \(P_{ij} > 0\) for all \(i \neq j\)). Then \(\pi_jP_{jk}= \pi_kP_{kj}\) if and only if
$$\frac{cP_{ij}P_{jk}}{P_{ji}} = \frac{cP_{ik}P_{kj}}{P_{ki}} $$
if and only if \(P_{ij}P_{jk}P_{ki} = P_{ik}P_{kj}P_{ji}\). Hence the cycle condition implies detailed balance for all pairs of states, i.e., time reversibility.


4.47 \(M\) balls are initially distributed among \(m\) urns. At each stage one of the balls is selected at random, taken from whichever urn it is in, and placed, at random, in one of the other \(m-1\) urns. Consider the Markov chain whose state at any time is the vector \((n_1, \dots, n_m)\), where \(n_i\) denotes the number of balls in urn \(i\). Guess at the limiting probabilities for this Markov chain and then verify your guess and show at the same time that the Markov chain is time reversible.


Guess the limiting distribution is the Multinomial distribution:
$$\pi(n_1, n_2, \dots, n_m) = \frac{M!}{n_1!\dots n_m!m^M}$$ where \(n_i \geq 0, n_1 + \dots + n_m = M\). Now for two vectors \( (n_1, n_2, \dots, n_m)\) and \( (n_1, n_2, \dots, n_i+1, \dots, n_j -1, \dots, n_m)\), if \(i \neq j, n_j > 0\), then the transition probability is
$$\begin{align}
&P((n_1, \dots, n_i + 1, \dots, n_j - 1, \dots, n_m ),(n_1, \dots, n_m))\\
=&\frac{n_i + 1}{M}\frac{1}{m-1}\end{align}$$
We verify detailed balance:
$$\begin{align}
&\pi(n_1, \dots, n_i + 1, \dots, n_j - 1, \dots, n_m)P((n_1, \dots, n_i + 1, \dots, n_j - 1, \dots, n_m ),(n_1, \dots, n_m)) \\
=&\frac{M!}{n_1!\dots (n_i+1)!\dots(n_j-1)!\dots n_m!m^M}\frac{n_i+1}{M}\frac{1}{m-1} \\
=&\frac{M!}{n_1!\dots n_m!m^M}\frac{n_j}{M}\frac{1}{m-1} \\
=&\pi(n_1, \dots, n_m)P((n_1, \dots, n_m),(n_1, \dots, n_i + 1, \dots, n_j - 1, \dots, n_m))
\end{align}$$
Since the guessed \(\pi\) satisfies the detailed balance equations and sums to 1, it is the stationary distribution, and the chain is time reversible.


4.48 For an ergodic semi-Markov process:
(a) Compute the rate at which the process makes a transition from \(i\) into \(j\).
(b) Show that \(\sum_{i} P_{ij}/\mu_{ii} = 1/\mu_{jj}.\)
(c) Show that the proportion of time that the process is in state \(i\) and headed for state \(j\) is \(P_{ij}\eta_{ij}/\mu_{ii}\), where \(\eta_{ij} = \int_0^{\infty}\bar{F_{ij}}(t)dt.\)
(d) Show that the proportion of time that the state is \(i\) and will next be \(j\) within a time \(x\) is$$\frac{P_{ij}\eta_{ij}}{\mu_{ii}}F_{i,j}^e(x),$$ where \(F_{i,j}^e\) is the equilibrium distribution of \(F_{ij}\).


(a) Define a (delayed) renewal reward process: a renewal occurs when state \(i\) is entered from other states and the reward of each n-th cycle \(R_n\) equals 1 if in the n-th cycle, the state after \(i\) is \(j\) and 0 otherwise. Let \(R_{ij}(t)\) be the total number of transitions from \(i\) to \(j\) by time \(t\). We have
$$\sum_{n=0}^{N(t)}R_n \leq R_{ij}(t) \leq \sum_{n=0}^{N(t) + 1}R_n \leq \sum_{n=0}^{N(t)} R_n + 1.$$ Thus the rate at which the process makes a transition from \(i\) to \(j\) equals
$$\lim_{t \to \infty} \frac{R_{ij}(t)}{t} = \frac{E[R]}{E[X]} = \frac{P_{ij}}{\mu_{ii}}$$
(b) Let \(R_j(t)\) be the number of visits to state \(j\) by time \(t\). Thus
$$\begin{align}
\sum_i R_{ij}(t) &= R_j(t) \\
\sum_i \lim_{t \to \infty} \frac{R_{ij}(t)}{t} &= \lim_{t \to \infty}\frac{R_j(t)}{t} \\
\sum_{i} P_{ij}/\mu_{ii} &= 1/\mu_{jj}
\end{align}$$
(c) Define cycles as in part (a) and let the reward in a cycle be 0 if the transition from \(i\) is not into \(j\), and \(T_{ij}\), the time spent in \(i\) until the transition, if the transition from \(i\) is into \(j\). Thus
$$\lim_{t \to \infty} \frac{R(t)}{t} = \frac{E[R]}{E[X]} = \frac{P_{ij}E[T_{ij}]}{\mu_{ii}} = \frac{P_{ij}\eta_{ij}}{\mu_{ii}}$$
(d) Define cycles as in (c) and let the reward in a cycle be 0 if the transition from \(i\) is not into \(j\) and \(\min(x, T_{ij})\) if the transition from \(i\) is into \(j\). Since \(E[\min(x, T_{ij})] = \int_0^x \bar{F_{ij}}(t)dt = \eta_{ij}F_{i,j}^e(x)\),
$$\lim_{t \to \infty} \frac{R(t)}{t} = \frac{E[R]}{E[X]} = \frac{P_{ij}E[\min(x, T_{ij})]}{\mu_{ii}} = \frac{P_{ij}\eta_{ij}}{\mu_{ii}}F_{i,j}^e(x) $$


4.49 For an ergodic semi-Markov process derive an expression, as \(t \to \infty\), for the limiting conditional probability that the next state visited after \(t\) is state \(j\), given \(X(t) = i\).


From Problem 4.48(c), the long-run proportion of time the process is in state \(i\) and headed for \(j\) is \(P_{ij}\eta_{ij}/\mu_{ii}\), while the proportion of time in state \(i\) is \(\sum_k P_{ik}\eta_{ik}/\mu_{ii}\) (the denominator \(\sum_k P_{ik}\eta_{ik}\) being the mean time spent in \(i\) per visit). Hence
$$\lim_{t \to \infty}P\{\text{next state is } j \mid X(t) = i\} = \frac{P_{ij}\eta_{ij}}{\sum_k P_{ik}\eta_{ik}}$$



4.50 A taxi alternates between three locations. When it reaches location 1, it is equally likely to go next to either 2 or 3. When it reaches 2, it will next go to 1 with probability 1/3 and to 3 with probability 2/3. From 3 it always goes to 1. The mean times between locations \(i\) and \(j\) are \(t_{12} = 20, t_{13} = 30, t_{23} = 30 (t_{ij} = t_{ji})\).
(a) What is the (limiting) probability that the taxi’s most recent stop was at location \(i, i = 1, 2, 3\) ?
(b) What is the (limiting) probability that the taxi is heading for location 2?
(c) What fraction of time is the taxi traveling from 2 to 3? Note: Upon arrival at a location the taxi immediately departs.


The transition matrix is \(\begin{bmatrix} 0 & 1/2 & 1/2 \\ 1/3 & 0 & 2/3\\ 1 & 0 & 0\end{bmatrix} \)
and the stationary probabilities are \(\pi_1 = 6/14, \pi_2 = 3/14, \pi_3 = 5/14\).
Since the mean time between stops starting from location \(i\) is \(\mu_i = \sum_j P_{ij}t_{ij}\), we have \(\mu_1 = 25, \mu_2 = 80/3, \mu_3 = 30\).
(a) From Proposition 4.8.3, \(P_i = \pi_i\mu_i/\sum_j \pi_j\mu_j\), giving \(P_1 = 15/38, P_2 = 8/38, P_3 = 15/38\).
(b) The taxi heads for location 2 only from location 1, so from Problem 4.48(c) the answer is \(P_1 \cdot P_{12}t_{12}/\mu_1 = \frac{15}{38}\cdot\frac{10}{25} = 3/19\).
(c) Similarly, the fraction of time traveling from 2 to 3 is \(P_2 \cdot P_{23}t_{23}/\mu_2 = \frac{8}{38}\cdot\frac{20}{80/3} = 3/19\).
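All of the numbers above are easy to reproduce numerically (a sketch using numpy):

```python
import numpy as np

P = np.array([[0, 1/2, 1/2],
              [1/3, 0, 2/3],
              [1, 0, 0]])
t = np.array([[0, 20, 30],
              [20, 0, 30],
              [30, 30, 0]])
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))]); pi /= pi.sum()
print(pi * 14)                             # [6, 3, 5] / 14
mu = (P * t).sum(axis=1)                   # [25, 80/3, 30]
Pi = pi * mu / (pi @ mu)
print(Pi * 38)                             # (a) [15, 8, 15] / 38
print(Pi[0] * P[0, 1] * t[0, 1] / mu[0])   # (b) 3/19
print(Pi[1] * P[1, 2] * t[1, 2] / mu[1])   # (c) 3/19
```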

