Markov Chain Model

This document discusses the theory and applications of discrete Markov chains, a class of stochastic processes where future states depend only on the present state. It introduces key concepts such as state transitions, transition probabilities, and provides examples of Markov chains in various contexts, including stock markets and queuing systems. The document also explains the mathematical framework for defining Markov processes and chains, including transition probability matrices and higher transition probabilities.


CHAPTER 1

Introduction:
This chapter is based on a topic with great application in predicting the outcomes of day-to-day processes from observed probabilistic results of the past. The chapter deals exclusively with discrete Markov chains. A Markov chain represents a class of stochastic processes in which the future does not depend on the past but only on the present. The model was first proposed by the Russian mathematician Andrei Markov, who was taught mathematics by another great mathematician, Pafnuty Chebyshev, at the University of St Petersburg. Chebyshev was noted for his expertise in probability theory, of which the theory of Markov chains is a part. Markov's first publication on Markov chains appeared in 1906. Since then the theory and applications of Markov chains have grown dramatically. Like many other classical mathematical theories, including Maxwell's equations, wavelets and a wide range of predictive mathematical algorithms, the Markov chain has found its place in various practical applications. It has been applied to stock markets, weather prediction, the spread of influenza,

susceptibility to breast cancer among women, and various kinds of data analysis, as we shall observe in this chapter. Markov chains model processes which evolve in steps, which could be in terms of time, trials or sequence. At each step, the process may exist in one of countably many states. As the process evolves, the system can remain in the same state or change (make a transition) to a different state during the time epoch. These movements between states are normally described in terms of transition probabilities, which allow us to predict the possibility of the system being in a given state many time epochs into the future. We will see a few examples of this in this chapter. The rest of this chapter introduces the concept of Markov chains and defines states, state transitions and the state transition diagram.

CHAPTER – 2
MARKOV CHAINS AND MARKOV PROCESS

Definition:

Let {Xn : n ∈ N} be a stochastic process with state space E (discrete or continuous) and time space T (discrete or continuous). Thus a family of random variables {Xt : t ∈ T}, where T = {…, -1, 0, 1, 2, …} or (-∞, ∞) or a subset thereof, takes its values from the state space E, which is a subset of the real or complex space. The collection of such processes consists of all kinds of stochastic processes, which can be classified into four categories:

1) Discrete time, Discrete state space.

2) Discrete time, Continuous state space.

3) Continuous time, Discrete state space.

4) Continuous time, Continuous state space.

                        Discrete state (DS)     Continuous state (CS)
Discrete time (DT)      (1) {Xn : n ∈ N}        (2) {Xn : n ∈ N}
Continuous time (CT)    (3) {Xt : t ∈ T}        (4) {Xt : t ∈ T}

Example 1.

Let Xn denote the number of sixes in the first n throws of an unbiased six-faced die, thrown continually. Then clearly {Xn : n ≥ 0} is a stochastic process with time space T = {0, 1, 2, …} and state space E = {0, 1, 2, …}.

Example 2.

Consider the experiment of recording the temperature at a place at the end of every day. Let Xn denote the temperature measured on the nth day; then {Xn : n ≥ 0} is a stochastic process with discrete time space and continuous state space E = (-∞, ∞) (sometimes the temperature falls below 0°).

Example 3.

Let Xt denote the number of phone calls received at a telephone exchange board up to time t, that is, the number of calls received during the interval [0, t), starting with initial time point t = 0. Then clearly {Xt : t ∈ T} is a stochastic process with continuous time space T = [0, ∞) and discrete state space E = {0, 1, 2, …}.

Example 4.

Consider the experiment of observing the price of gold in the wholesale market with initial time point t = 0. Let Xt denote the price of gold at time t (clock time). Then clearly {Xt : t ∈ T} is a stochastic process with time space T = (0, ∞) and state space E = (0, ∞).

➢ Stochastic Processes (Independent Increments)

Consider a stochastic process {Xt : t ∈ T} with continuous time space T = (-∞, ∞). If for all t1, t2, …, tn ∈ T with t1 < t2 < … < tn, the random variables

X(t2) - X(t1), X(t3) - X(t2), …, X(tn) - X(tn-1)

are independent, then {Xt : t ∈ T} is said to be a stochastic process with independent increments.

In the discrete-parameter case, let {Xn : n ∈ N}, N = {0, 1, 2, …}, be a stochastic process and set Z0 = X0, Zi = Xi - Xi-1, i = 1, 2, …. If the Zi are independent random variables, then {Xn : n ∈ N} is a stochastic process with independent increments.

Let {Xt : t ∈ T} be a stochastic process with time space T = (-∞, ∞) and state space E = (-∞, ∞) (continuous time, continuous state space). If, given the value X(s), the values of X(t) for t > s do not depend on the values X(u) for u < s, then the process {Xt : t ∈ T} is called a Markov process.

In mathematical form (in terms of the probability distribution), the Markov process can be defined as follows. If for t1 < t2 < … < tn < t,

Pr{α ≤ Xt ≤ β | Xt1 = x1, Xt2 = x2, …, Xtn = xn} = Pr{α ≤ Xt ≤ β | Xtn = xn},

then the process {Xt : t ∈ T} is called a Markov process.

➢ Markov Chain:

The discrete-parameter Markov process {Xn : n ∈ N} is known as a Markov chain; its state space may be either discrete or continuous.

Consider a simple coin-tossing experiment repeated a number of times (successively). The two possible outcomes of each trial are 'Head' and 'Tail'. Assume that Head occurs with probability p and Tail with probability q, so that p + q = 1.

Let Xn denote the outcome of the nth toss of the coin. Then

Xn = 1, if head occurs,
Xn = 0, if tail occurs, for n = 1, 2, 3, ….

That is, Pr{Xn = 1} = p and Pr{Xn = 0} = q. Hence the sequence of random variables X1, X2, …, written {Xn : n ≥ 1}, is a Markov chain.

Definition:

The stochastic process {Xn : n = 0, 1, 2, …}, or {Xn : n ∈ N0} where N0 = {0, 1, 2, …}, is a Markov chain if for all j, i, i0, i1, …, in-1 ∈ N0 (or a subset of Z),

Pr{Xn+1 = j | Xn = i, Xn-1 = in-1, …, X1 = i1, X0 = i0} = Pr{Xn+1 = j | Xn = i},

whenever the conditional probabilities are defined. Here Xn = j means that the outcome of the nth trial is j.

➢ Transition Probabilities:

Consider a Markov chain {Xn : n ≥ 0}. The m-step transition probability, denoted Pij(m), is defined as

Pij(m) = Pr{Xn+m = j | Xn = i}.

➢ Transition Probability Matrix:

When m = 1, the one-step transition probabilities Pij satisfy Pij ≥ 0 and ∑j Pij = 1 for all i = 0, 1, 2, 3, …

The transition probabilities for the different state transitions may be written in matrix form as follows:

P = | P00  P01  P02  … |
    | P10  P11  P12  … |
    | …    …    …    … |   (n×n)

This matrix P is called the transition probability matrix (tpm) of the Markov chain {Xn : n ≥ 0}.

Example:

Consider a simple queuing system before a counter designed for customer service. Customers arrive for service at the counter (one server), which serves one customer at each of the time epochs 0, 1, 2, ….

Let Yn denote the random variable representing the number of customers arriving at the counter during the time interval (n, n+1), for n = 0, 1, 2, …. Clearly Y0, Y1, … are independent and identically distributed random variables, with probability distribution Pr{Yn = k} = qk, k = 0, 1, 2, …. Assume that the waiting room can accommodate only M customers, including the one at the counter.

Let Xn be the number of customers present at epoch n, including the one being served, if any. Then {Xn : n ≥ 0} is a Markov chain with state space E = {0, 1, 2, …, M}.

Now we have

Xn+1 = Yn,            if Xn = 0 and 0 ≤ Yn ≤ M,
Xn+1 = Xn + Yn - 1,   if 1 ≤ Xn ≤ M and 0 ≤ Yn ≤ M + 1 - Xn,
Xn+1 = M,             otherwise.

The corresponding tpm is

P = | q0  q1  q2  …  qM-1  QM   |
    | q0  q1  q2  …  qM-1  QM   |
    | 0   q0  q1  …  qM-2  QM-1 |
    | …   …   …   …  …     …    |
    | 0   0   0   …  q0    Q1   |   (M+1)×(M+1)

where Qk = qk + qk+1 + ⋯, so that each row sums to 1 (in particular Q0 = 1).
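As a sketch (not from the text), the recursion above can be turned into code that builds this tpm. The arrival distribution q below is a hypothetical choice, made only so the rows can be checked exactly with rational arithmetic:

```python
from fractions import Fraction

def queue_tpm(q, M):
    """tpm of X_{n+1} = min(M, Y_n) if X_n = 0, else min(M, X_n + Y_n - 1),
    where q[k] = Pr{Y_n = k}; overflow beyond capacity M is lumped into state M."""
    P = [[Fraction(0)] * (M + 1) for _ in range(M + 1)]
    for i in range(M + 1):
        for k, qk in enumerate(q):
            j = min(M, k if i == 0 else i + k - 1)
            P[i][j] += Fraction(qk)
    return P

# Hypothetical arrival distribution on {0, ..., 4} (sums to 1)
M = 3
q = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 8), Fraction(1, 16), Fraction(1, 16)]
P = queue_tpm(q, M)
assert all(sum(row) == 1 for row in P)   # each row of a tpm sums to 1
assert P[0] == P[1]                      # from state 0 and state 1 the next state is min(M, Y_n)
assert P[M][M - 1] == q[0]               # last row: q0, then Q1 = 1 - q0
```

Exact fractions make the row-sum check airtight; with floats one would use an approximate comparison instead.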

Example 7:

A particle performs a random walk with absorbing barriers at 0 and 4. Whenever it is at any position r (0 < r < 4), it moves to r + 1 with probability p or to r - 1 with probability q, p + q = 1. But as soon as it reaches 0 or 4 it remains there. Let Xn be the position of the particle after n moves; the different states of Xn are the different positions of the particle. {Xn} is a Markov chain whose unit-step transition probabilities are given by

Pr{Xn+1 = r + 1 | Xn = r} = p,  0 < r < 4,
Pr{Xn+1 = r - 1 | Xn = r} = q,  0 < r < 4,

and

Pr{Xn+1 = 0 | Xn = 0} = 1,
Pr{Xn+1 = 4 | Xn = 4} = 1.

The transition matrix is given by

(states of Xn along rows, states of Xn+1 along columns)

P = | 1  0  0  0  0 |
    | q  0  p  0  0 |
    | 0  q  0  p  0 |
    | 0  0  q  0  p |
    | 0  0  0  0  1 |   (5×5)
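A minimal sketch (assuming p = q = 1/2 purely for illustration) that constructs this 5×5 matrix and checks the absorbing-barrier structure:

```python
from fractions import Fraction

def absorbing_walk_tpm(p, N=4):
    """tpm for a random walk on {0, ..., N} with absorbing barriers at 0 and N."""
    q = 1 - p
    P = [[Fraction(0)] * (N + 1) for _ in range(N + 1)]
    P[0][0] = Fraction(1)          # barrier 0 is absorbing
    P[N][N] = Fraction(1)          # barrier N is absorbing
    for r in range(1, N):
        P[r][r - 1] = q            # step left with probability q
        P[r][r + 1] = p            # step right with probability p
    return P

P = absorbing_walk_tpm(Fraction(1, 2))
assert P[0][0] == 1 and P[4][4] == 1           # both barriers absorb
assert all(sum(row) == 1 for row in P)         # stochastic matrix
```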

General random walk between two barriers:

Consider a particle that may be at any one of the positions r = 0, 1, …, k (k ≥ 1) on the x-axis. From state r it moves to state r + 1, 1 ≤ r ≤ k - 1, with probability p and to state r - 1 with probability q. As soon as it reaches state 0 it remains there with probability a and is reflected to state 1 with probability 1 - a (0 < a < 1); if it reaches state k it remains there with probability b and is reflected to k - 1 with probability 1 - b (0 < b < 1). Then {Xn}, where Xn is the position of the particle after n steps or moves, is a Markov chain with state space S = {0, 1, …, k}. The transition matrix is

P = | a  1-a  0  …  0  0    0 |
    | q  0    p  …  0  0    0 |
    | …  …    …  …  …  …    … |
    | 0  0    0  …  q  0    p |
    | 0  0    0  …  0  1-b  b |   (k+1)×(k+1)

If a = 1, then 0 is an absorbing barrier, and if a = 0, then 0 is a reflecting barrier; if 0 < a < 1, 0 is an elastic barrier. Similar is the case with state k. The case when both 0 and k are absorbing barriers corresponds to the familiar Gambler's ruin problem (with the total capital of the two gamblers amounting to k).

Example:

Suppose that a coin with probability p of showing a head (success) is tossed indefinitely, with q = 1 - p. Let Xn denote the length of the uninterrupted run of heads up to the nth trial, so Xn = k (k = 0, 1, …, n) means that there is a run of k successes. Clearly {Xn, n ≥ 0} constitutes a Markov chain, with unit-step transition probabilities

Pjk = Pr{Xn+1 = k | Xn = j} = p,  if k = j + 1,
Pjk = Pr{Xn+1 = k | Xn = j} = q,  if k = 0,
Pjk = 0,                          otherwise.

The transition matrix (states of Xn along rows, states of Xn+1 along columns 0, 1, 2, …) is

P = | q  p  0  0  … |
    | q  0  p  0  … |
    | q  0  0  p  … |
    | …  …  …  …  … |   (∞×∞)

HIGHER TRANSITION PROBABILITIES

Chapman–Kolmogorov equation:

We have so far considered unit-step or one-step transition probabilities: the probability of Xn given Xn-1, i.e. the probability of the outcome at the nth step or trial given the outcome at the previous step; Pjk gives the probability of a unit-step transition from state j at a trial to state k at the next trial. The m-step transition probability is denoted by

Pr{Xn+m = k | Xn = j} = Pjk(m);

Pjk(m) gives the probability that from state j at the nth trial, state k is reached at the (m+n)th trial, i.e. the probability of transition from state j to state k in exactly m steps. The number n does not occur on the right-hand side of the relation, so the chain is homogeneous. The one-step transition probabilities Pjk(1) are denoted by Pjk for simplicity. Consider

Pjk(2) = Pr{Xn+2 = k | Xn = j}.

State k can be reached from state j in two steps through some intermediate state r. For a fixed value of r we have

Pr{Xn+2 = k, Xn+1 = r | Xn = j}
= Pr{Xn+2 = k | Xn+1 = r, Xn = j} Pr{Xn+1 = r | Xn = j}
= Pr{Xn+2 = k | Xn+1 = r} Pr{Xn+1 = r | Xn = j}   (by the Markov property)
= Prk(1) Pjr(1) = Pjr Prk.

Since the intermediate state r can assume the values r = 0, 1, 2, …, we have

Pjk(2) = Pr{Xn+2 = k | Xn = j} = ∑r Pr{Xn+2 = k, Xn+1 = r | Xn = j} = ∑r Pjr Prk

(summing over all intermediate states). By induction, we have

Pjk(m+1) = Pr{Xn+m+1 = k | Xn = j}
= ∑r Pr{Xn+m+1 = k | Xn+m = r} Pr{Xn+m = r | Xn = j}
= ∑r Prk Pjr(m).

Similarly, we get

Pjk(m+1) = ∑r Pjr Prk(m).

In general, we have

Pjk(m+n) = ∑r Pjr(m) Prk(n) = ∑r Pjr(n) Prk(m).

This equation is a special case of the Chapman–Kolmogorov equation, which is satisfied by the transition probabilities of a Markov chain.

From the above argument we also get

Pjk(m+n) ≥ Pjr(m) Prk(n), for any r.
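In matrix form the identity reads P(m+n) = P(m) P(n), which is easy to check numerically. The sketch below uses an arbitrary randomly generated stochastic matrix (numpy assumed available; the size 5 and the seed are assumptions of the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.random((5, 5))
P /= P.sum(axis=1, keepdims=True)        # normalise rows: an arbitrary stochastic matrix

m, n = 3, 2
Pm = np.linalg.matrix_power(P, m)
Pn = np.linalg.matrix_power(P, n)
Pmn = np.linalg.matrix_power(P, m + n)
assert np.allclose(Pmn, Pm @ Pn)         # Chapman-Kolmogorov: P^(m+n) = P^(m) P^(n)
# the inequality P_jk^(m+n) >= P_jr^(m) P_rk^(n) for every intermediate state r:
assert np.all(Pmn + 1e-12 >= np.einsum('jr,rk->jrk', Pm, Pn).max(axis=1))
```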

Example:

Consider the Markov chain with transition probability matrix

P = | 3/4  1/4  0   |
    | 1/4  1/2  1/4 |
    | 0    3/4  1/4 |   (3×3)

The two-step transition matrix is given by

P(2) = P · P = | 5/8   5/16  1/16 |
               | 5/16  1/2   3/16 |
               | 3/16  9/16  1/4  |

Hence P01(2) = Pr{Xn+2 = 1 | Xn = 0} = 5/16 for n ≥ 0.

Thus Pr{X2 = 1 | X0 = 0} = 5/16, and, taking Pr{X0 = 0} = 1/3,

Pr{X2 = 1, X0 = 0} = Pr{X2 = 1 | X0 = 0} Pr{X0 = 0} = (5/16)(1/3) = 5/48.
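A sketch that redoes the two-step computation with exact rational arithmetic, assuming the matrix P with rows (3/4, 1/4, 0), (1/4, 1/2, 1/4) and (0, 3/4, 1/4) from this example:

```python
from fractions import Fraction as F

P = [[F(3, 4), F(1, 4), F(0)],
     [F(1, 4), F(1, 2), F(1, 4)],
     [F(0),    F(3, 4), F(1, 4)]]

def matmul(A, B):
    """Plain matrix multiplication over lists of Fractions."""
    return [[sum(A[i][r] * B[r][j] for r in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

P2 = matmul(P, P)
assert P2[0] == [F(5, 8), F(5, 16), F(1, 16)]   # first row of the two-step matrix
assert P2[0][1] == F(5, 16)                     # P_01^(2)
# with Pr{X_0 = 0} = 1/3: Pr{X_2 = 1, X_0 = 0} = (5/16)(1/3) = 5/48
assert P2[0][1] * F(1, 3) == F(5, 48)
```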

Probability distribution:

The probability distributions of the random variables involved in a Markov chain can be studied as follows; in particular, the joint distribution of consecutive random variables can be found using the following technique.

The joint probability distribution of the random variables Xr, Xr+1, …, Xr+n can be computed in terms of the transition probabilities Pjk once the initial distribution of Xr is known. For simplicity take r = 0; then

Pr{X0 = a, X1 = b, …, Xn-1 = j, Xn = k}
= Pr{Xn = k | Xn-1 = j, …, X0 = a} Pr{Xn-1 = j, …, X0 = a}
= Pr{Xn = k | Xn-1 = j} Pr{Xn-1 = j | Xn-2 = i} Pr{Xn-2 = i, …, X0 = a}
= Pr{Xn = k | Xn-1 = j} Pr{Xn-1 = j | Xn-2 = i} ⋯ Pr{X1 = b | X0 = a} Pr{X0 = a}.

Thus

Pr{Xr = a, Xr+1 = b, …, Xr+n-2 = i, Xr+n-1 = j, Xr+n = k} = Pr{Xr = a} Pab ⋯ Pij Pjk.

Example: Let {Xn, n ≥ 0} be a Markov chain with the three states 0, 1, 2, transition matrix

P = | 3/4  1/4  0   |
    | 1/4  1/2  1/4 |
    | 0    3/4  1/4 |   (3×3)

and initial distribution Pr{X0 = i} = 1/3, i = 0, 1, 2.

We have Pr{X1 = 1 | X0 = 2} = P21 = 3/4 and Pr{X2 = 2 | X1 = 1} = P12 = 1/4.

Pr{X2 = 2, X1 = 1 | X0 = 2} = Pr{X2 = 2 | X1 = 1} Pr{X1 = 1 | X0 = 2} = (1/4)(3/4) = 3/16,

Pr{X2 = 2, X1 = 1, X0 = 2} = Pr{X2 = 2, X1 = 1 | X0 = 2} Pr{X0 = 2} = (3/16)(1/3) = 1/16,

Pr{X3 = 1, X2 = 2, X1 = 1, X0 = 2} = Pr{X3 = 1 | X2 = 2, X1 = 1, X0 = 2} Pr{X2 = 2, X1 = 1, X0 = 2}
= Pr{X3 = 1 | X2 = 2} · (1/16) = (3/4)(1/16) = 3/64.
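The chain rule above translates directly into code: multiply the initial probability by the one-step transition probabilities along the path. The sketch below assumes the 3×3 matrix with rows (3/4, 1/4, 0), (1/4, 1/2, 1/4), (0, 3/4, 1/4) and the uniform initial distribution of this example:

```python
from fractions import Fraction as F

P = [[F(3, 4), F(1, 4), F(0)],
     [F(1, 4), F(1, 2), F(1, 4)],
     [F(0),    F(3, 4), F(1, 4)]]
initial = [F(1, 3)] * 3                     # Pr{X_0 = i} = 1/3

def path_probability(path, P, initial):
    """Pr{X_0 = path[0], ..., X_n = path[n]} by the Markov chain rule."""
    prob = initial[path[0]]
    for a, b in zip(path, path[1:]):
        prob *= P[a][b]                     # one-step transition probability P_ab
    return prob

assert path_probability([2, 1], P, initial) == F(1, 4)        # (1/3)(3/4)
assert path_probability([2, 1, 2], P, initial) == F(1, 16)
assert path_probability([2, 1, 2, 1], P, initial) == F(3, 64)
```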

CHAPTER – 3

CLASSIFICATION OF STATES AND CHAINS

➢ Classification of states:

The states j = 0, 1, 2, … of a Markov chain {Xn, n ≥ 0} can often be classified in a distinctive manner according to some fundamental properties of the system. By means of such a classification it is possible to identify certain types of chains.

Communication Relations

If Pij(n) > 0 for some n ≥ 1, then we say that state j can be reached, or is accessible, from state i; the relation is denoted by i → j. Conversely, if Pij(n) = 0 for all n, then j is not accessible from i; in notation, i ↛ j.

If two states i and j are such that each is accessible from the other, then we say that the two states communicate; this is denoted by i ↔ j. In that case there exist integers m and n such that

Pij(n) > 0 and Pji(m) > 0.

The relation → is transitive, i.e. if i → j and j → k then i → k. This follows from the Chapman–Kolmogorov equation:

Pik(m+n) = ∑r Pir(m) Prk(n) ≥ Pij(m) Pjk(n) > 0.

The relation ↔ is also transitive, i.e. i ↔ j and j ↔ k imply i ↔ k. The relation is clearly symmetric, i.e. if i ↔ j then j ↔ i.

The digraph of a chain helps in studying the communication relations. From the digraph one sees, for example, that 0 ↔ 1 and 1 ↔ 2 imply 0 ↔ 2. The states of such a chain are such that every state can be reached from every other state.

Class Property:

A class of states is a subset of the state space such that every state of the class communicates with every other state of the class, and no state outside the class communicates with all the states in the class. A property defined for all states of a chain is a class property if its possession by one state in a class implies its possession by all states of the same class. One such property is the periodicity of a state.

Periodicity:

State i is a return state if Pii(n) > 0 for some n ≥ 1. The period of a return state i is defined as the greatest common divisor of all m such that Pii(m) > 0. Thus

di = G.C.D. {m : Pii(m) > 0}.

State i is said to be aperiodic if di = 1 and periodic if di > 1. Clearly state i is aperiodic if Pii ≠ 0.

It can be shown that two distinct states belonging to the same class have the same period.
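The period di = G.C.D.{m : Pii(m) > 0} can be estimated mechanically by scanning powers of the tpm up to a finite cutoff (the cutoff, and the two small example chains, are assumptions of this sketch, not part of the definition):

```python
import math
import numpy as np

def period(P, i, n_max=50):
    """gcd of all n <= n_max with P_ii^(n) > 0 (finite-cutoff sketch of the period)."""
    d = 0
    Q = np.eye(len(P))
    for n in range(1, n_max + 1):
        Q = Q @ P                      # Q is now P^n
        if Q[i, i] > 1e-12:
            d = math.gcd(d, n)         # gcd(0, n) == n, so the first hit initialises d
    return d

# A chain that alternates deterministically between two states has period 2
P = np.array([[0.0, 1.0], [1.0, 0.0]])
assert period(P, 0) == 2
# A self-loop (P_ii > 0) makes a state aperiodic, as noted above
P2 = np.array([[0.5, 0.5], [1.0, 0.0]])
assert period(P2, 0) == 1
```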

Classification of Chains:

If C is a set of states such that no state outside C can be reached from any state in C, then C is said to be closed. If C is closed, j ∈ C and k ∉ C, then Pjk(n) = 0 for all n; i.e. C is closed iff ∑j∈C Pij = 1 for every i ∈ C. The sub-matrix P1 = (Pij), i, j ∈ C, is then also stochastic, and P can be expressed in the canonical form

P = | P1  0 |
    | R1  Q |

A closed set may contain one or more states. If a closed set contains only one state j, then state j is said to be absorbing; j is absorbing iff Pjj = 1 and Pjk = 0 for k ≠ j.

Every finite Markov chain contains at least one closed set, namely the set of all states, i.e. the state space. If the chain does not contain any proper closed subset other than the state space, then the chain is called irreducible; the t.p.m. of an irreducible chain is an irreducible matrix. In an irreducible Markov chain every state can be reached from every other state. Chains which are not irreducible are said to be reducible; their t.p.m. is reducible. Irreducible matrices may be subdivided into two classes: primitive (aperiodic) and imprimitive (cyclic or periodic). A Markov chain is primitive (aperiodic) iff the corresponding t.p.m. is primitive. In an irreducible chain all states belong to the same class.

Transient and Recurrent States:

We now proceed to obtain a more sensitive classification of the states of a Markov chain.

Suppose that the system starts in state j. Let fjk(n) be the probability that it reaches state k for the first time at the nth step (i.e. after n transitions), and let Pjk(n) be the probability that it reaches state k (not necessarily for the first time) after n transitions, given that the chain starts in state j. A relation can be established between fjk(n) and Pjk(n), which allows fjk(n) to be expressed in terms of Pjk(n).

First passage time distribution:

Let Fjk denote the probability that, starting in state j, the system will ever reach state k. Clearly

Fjk = ∑n≥1 fjk(n).

We have sup_n Pjk(n) ≤ Fjk ≤ ∑m≥1 Pjk(m) for all n ≥ 1.

We have to consider two cases, Fjk = 1 and Fjk < 1. When Fjk = 1, it is certain that the system starting in state j will reach state k; in this case {fjk(n), n = 1, 2, …} is a proper probability distribution, the first passage time distribution for k given that the system starts in j.

The mean (first passage) time from state j to state k is given by

µjk = ∑n≥1 n fjk(n).

In particular, when k = j, {fjj(n), n = 1, 2, …} represents the distribution of the recurrence times of j, and Fjj = 1 implies that return to state j is certain. In this case

µjj = ∑n≥1 n fjj(n)

is known as the mean recurrence time for state j.

Thus two questions arise concerning state j: first, whether return to state j is certain, and secondly, when this happens, whether the mean recurrence time µjj is finite.

It can be shown that di = G.C.D. {m : Pii(m) > 0} = G.C.D. {m : fii(m) > 0}.

Persistent:

A state j is said to be persistent (or recurrent) if Fjj = 1 (i.e. return to state j is certain), and transient if Fjj < 1 (i.e. return to state j is uncertain). A persistent state j is said to be null persistent if µjj = ∞, i.e. if the mean recurrence time is infinite, and non-null (or positive) persistent if µjj < ∞.

Thus the states of a Markov chain can be classified as transient and persistent, and persistent states can be subdivided into non-null and null persistent.

A persistent, non-null and aperiodic state of a Markov chain is said to be ergodic.
Consider the following example.

Example 5:

Let {Xn, n ≥ 0} be a Markov chain having state space S = {1, 2, 3, 4} and transition matrix

P = | 1/3  2/3  0    0   |
    | 1    0    0    0   |
    | 1/2  0    1/2  0   |
    | 0    1/2  0    1/2 |   (4×4)

Here f33(1) = 1/2 and f33(2) = f33(3) = … = 0, so that

F33 = ∑n≥1 f33(n) = 1/2 + 0 = 1/2 < 1.

Hence state 3 is transient.

Again f44(1) = 1/2 and f44(n) = 0 for n ≥ 2, so that F44 = ∑n≥1 f44(n) = 1/2 + 0 + 0 + … = 1/2 < 1. Hence state 4 is also transient.

For state 1:

f11(1) = 1/3, f11(2) = 2/3, and F11 = ∑n≥1 f11(n) = 1/3 + 2/3 = 1, so state 1 is persistent.

Further, since µ11 = ∑n≥1 n f11(n) = 1·(1/3) + 2·(2/3) = 5/3 < ∞, state 1 is non-null persistent.

Again P11 = 1/3 > 0, so state 1 is aperiodic. Since state 1 is non-null persistent and aperiodic, state 1 is ergodic.

Example 1:

Consider a Markov chain with transition matrix

P = | 0    0    1    0   |
    | 0    0    0    1   |
    | 0    1    0    0   |
    | 1/2  1/4  1/8  1/8 |   (4×4)

Show that all states of the above Markov chain are ergodic.

Solution:

It can easily be seen that the chain is irreducible. Consider state 4. We have P44 = 1/8 > 0, so state 4 is aperiodic. Following the paths back to state 4,

f44(1) = 1/8 (4→4), f44(2) = 1/4 (4→2→4), f44(3) = 1/8 (4→3→2→4), f44(4) = 1/2 (4→1→3→2→4),

and f44(n) = 0 for n > 4. Hence

F44 = ∑n≥1 f44(n) = 1/8 + 1/4 + 1/8 + 1/2 = 1,

so state 4 is persistent. Also

µ44 = ∑n≥1 n f44(n) = 1·(1/8) + 2·(1/4) + 3·(1/8) + 4·(1/2) = 3 < ∞.

Hence state 4 is non-null persistent. Since state 4 is aperiodic and non-null persistent, state 4 is ergodic; and since the chain is irreducible and these are class properties, all states are ergodic.
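The first-passage decomposition Pjj(n) = ∑_{m=1}^{n} fjj(m) Pjj(n-m) can be inverted to compute first-return probabilities mechanically: fjj(n) = Pjj(n) - ∑_{m=1}^{n-1} fjj(m) Pjj(n-m). A sketch for state 4 of the matrix in Example 1, using exact arithmetic:

```python
from fractions import Fraction as F

P = [[F(0), F(0), F(1), F(0)],
     [F(0), F(0), F(0), F(1)],
     [F(0), F(1), F(0), F(0)],
     [F(1, 2), F(1, 4), F(1, 8), F(1, 8)]]

def matmul(A, B):
    return [[sum(A[i][r] * B[r][j] for r in range(4)) for j in range(4)] for i in range(4)]

# powers[n] = P_44^(n+1): the n-step return probabilities (not necessarily first returns)
n_max = 40
powers, Q = [], P
for _ in range(n_max):
    powers.append(Q[3][3])
    Q = matmul(Q, P)

# invert the decomposition to get first-return probabilities f_44^(n)
f = []
for n in range(n_max):
    f.append(powers[n] - sum(f[m] * powers[n - m - 1] for m in range(n)))

assert f[:4] == [F(1, 8), F(1, 4), F(1, 8), F(1, 2)]   # first-return distribution
assert sum(f) == 1                                      # F_44 = 1: state 4 is persistent
assert sum((n + 1) * f[n] for n in range(n_max)) == 3   # mean recurrence time mu_44
```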

Example 2:

Consider the Markov chain with transition matrix

P = | 0    0    1    0   |
    | 0    0    0    1   |
    | 0    1    0    0   |
    | 1/3  1/5  1/6  1/4 |   (4×4)

Solution:

Every state can be reached from every other state in a finite number of steps, so the chain is irreducible.

Consider state 4. We have P44 = 1/4 > 0, and

f44(1) = 1/4, f44(2) = 1/5, f44(3) = 1/6, f44(4) = 1/3,

F44 = ∑n≥1 f44(n) = 1/4 + 1/5 + 1/6 + 1/3,

so state 4 is persistent. Also

µ44 = ∑n≥1 n f44(n) = 1·(1/4) + 2·(1/5) + 3·(1/6) + 4·(1/3) < ∞.

Therefore state 4 is non-null persistent, and since it is also aperiodic, state 4 is ergodic.

➢ Markov chains with a denumerable number of states:

So far we have discussed Markov chains with a finite number of states. The results can be generalized to chains with a denumerable number of states (i.e. with a countable state space). Let P = (pij) be the t.p.m. of the chain {Xn, n ≥ 1} with countable state space S = {0, 1, 2, …}. Then Pk = (pij(k)) is well defined. The states of the chain may not constitute even a single closed communicating class. For example, when

pij = 1 for j = i + 1, and pij = 0 otherwise,

the states do not belong to any closed communicating class.

For dealing with a chain with a countable state space we need the more sensitive classification of states: transient, persistent null and persistent non-null. Besides irreducibility and aperiodicity, non-null persistence is required for ergodicity of such a chain (a chain with countable state space), whereas aperiodicity and irreducibility were enough for ergodicity of a finite chain.

Reducible chains:

In this section we discuss some properties of reducible chains.

Finite reducible chains with one closed set

A finite reducible Markov chain with one closed set is a Markov chain satisfying:

1. The state space is finite.
2. The chain is reducible (not all states communicate).
3. There exists exactly one closed communicating class of states.

The state space S can be written as

S = T ∪ C,

where T is the set of transient states and C is the only closed communicating class.

As n → ∞, the probability of being in a transient state tends to 0, and the probability mass concentrates entirely on the closed set.

Transition matrix form:

P = | Q  R |
    | 0  S |   (n×n)

where

Q: transitions among the transient states,
R: transitions from transient states to closed states,
0: no transitions from closed to transient states,
S: transitions within the closed set.
Example

P = | 0  1  0 |
    | 0  0  1 |
    | 0  0  1 |   (3×3)

Here {3} is a closed set, states 1 and 2 are transient, and the chain is finite and reducible with one closed set.
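For a finite chain in this canonical form, the long-run behaviour of the transient part can be sketched with the fundamental matrix N = (I - Q)^(-1), a standard construction for absorbing chains, applied here to the 3-state example above (numpy assumed available):

```python
import numpy as np

# Canonical blocks for the 3-state example: transient states T = {1, 2}, closed set C = {3}
Q = np.array([[0.0, 1.0],    # transitions among the transient states 1 and 2
              [0.0, 0.0]])
R = np.array([[0.0],         # transitions from transient states into the closed set
              [1.0]])
N = np.linalg.inv(np.eye(2) - Q)     # fundamental matrix: expected visits to transient states
B = N @ R                            # probabilities of absorption into the closed set

assert np.allclose(B, [[1.0], [1.0]])          # absorption is certain from both transient states
assert np.allclose(N.sum(axis=1), [2.0, 1.0])  # expected steps spent in T: 2 from state 1, 1 from state 2
```

This makes the statement above concrete: the probability of remaining in a transient state tends to 0, and all mass ends up in the closed set.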

Chains with a single class of persistent non-null aperiodic states

Now suppose that the states of the closed class C are non-null persistent and aperiodic, the remaining states of S being transient; the transient states constitute a set T. Then, for each pair i, j, we have

lim(n→∞) pij(n) = vj, independent of i, when i and j are both persistent, and also when j is persistent and i is transient; again

lim(n→∞) pij(n) = 0 when j is transient.

In this case we write the transition matrix as

P = | P1  0 |
    | R1  M |   (n×n)

where M is the matrix of transitions among the transient states.

Example:

Consider a reducible chain with S = {1, 2, 3, 4} and t.p.m.

P = | P1  0 |
    | R1  M |

where

P1 = | 1/3  2/3 |     R1 = | 1/4  0   |     M = | 3/4  0   |
     | 1/2  1/2 |          | 0    1/4 |         | 1/2  1/4 |

Solution:

Stationary distribution of the closed class: let π = (π1, π2) with

πP1 = π,  π1 + π2 = 1,

that is,

π1 = (1/3)π1 + (1/2)π2   ………… (1)
π2 = (2/3)π1 + (1/2)π2   ………… (2)

From the first equation,

π1 - (1/3)π1 = (1/2)π2, so π2 = (4/3)π1.

Using π1 + π2 = 1 gives π1 = 3/7 and π2 = 4/7.

For a finite reducible chain with one closed set, if the closed states are 1 and 2 and the transient states are 3 and 4, then each row of lim(n→∞) P(n) is

(3/7, 4/7, 0, 0).

CHAPTER – 4
BIRTH AND DEATH PROCESS AND CONTINUOUS TIME MARKOV CHAIN

Introduction:

A stochastic process whose state moves back and forth by unit steps in the state space is called a birth-death process. A simple example of a birth-death process is a queuing system, in which the arrival of a customer at the counter is a birth and a service completion at the server is the death event. An inventory control system with a one-for-one ordering policy is also an example of a birth-death process. In this unit we study the pure birth and pure death processes together with the birth-death process.

➢ Birth-Death Process:

First we consider a pure birth process, where

Pr{number of births between t and t+h is k, given the number of individuals at epoch t is n}

is given by

p(k, h | n, t) = λn h + o(h),      k = 1,
             = o(h),              k ≥ 2,
             = 1 - λn h + o(h),   k = 0.

The above holds for all n ≥ 0; λ0 may or may not be equal to zero. Here k is a non-negative integer, which implies that there can only be an increase, i.e. only births are considered possible. Now suppose that there could also be a decrease, i.e. that deaths are also considered possible. In this case we shall further assume that

Pr{number of deaths between t and t+h is k, given the number of individuals at epoch t is n}

is given by

q(k, h | n, t) = µn h + o(h),      k = 1,
             = o(h),              k ≥ 2,
             = 1 - µn h + o(h),   k = 0.

The above holds for n ≥ 1; further, µ0 = 0. The resulting process is known as a birth and death process. Through a birth there is an increase by one, and through a death a decrease by one, in the number of "individuals". The probability of more than one birth or more than one death in an interval of length h is o(h).

Birth and death rates:

Some particular values of λn and µn are of special interest. When λn = λ, i.e. λn is independent of the population size n, the increase may be thought of as due to an external source such as immigration. When λn = nλ, we have the case of (linear) birth; λn h = nλh may be considered as the probability of one birth in an interval of length h given that n individuals are present (at the instant from which the interval commences), the probability of one individual giving birth being λh (i.e. the rate of birth per individual per unit interval is λ). Here λ0 = 0.

When µn = µ, the decrease may be effected by emigration. When µn = nµ, we have the case of (linear) death, the rate of death per individual per unit interval being µ.

Particular Cases

1) Immigration-Emigration Process

For λn = λ and µn = µ we have what is known as an immigration-emigration process. The process associated with the simple queuing model M/M/1 is such a process.

2) Linear Growth Process

(a) Generating function:

In the Yule-Furry process one is concerned with a population whose members can give birth but cannot die. Let us now consider the case where both births and deaths can occur. Suppose that the probability that a member gives birth to a new member in a small interval of length h is λh + o(h), and the probability that a member dies is µh + o(h). Then, if n members are present at the instant t, the probability of one birth between t and t+h is nλh + o(h) and that of one death is nµh + o(h), n ≥ 1.

We have thus a birth and death process with

λn = nλ, µn = nµ (n ≥ 1), λ0 = µ0 = 0.

(b) Mean population size:

M(t) = i e^((λ-µ)t), where i is the initial population size.

As t → ∞, the mean population size M(t) tends to 0 for λ < µ (birth rate smaller than death rate), to ∞ for λ > µ (birth rate greater than death rate), and to the constant value i when λ = µ.

(c) Extinction probability:

Since λ0 = µ0 = 0, 0 is an absorbing state, i.e. once the population size reaches 0, it remains at 0 thereafter. This is the interesting case of extinction of the population.

The probability of ultimate extinction is 1 when λ ≤ µ, and is µ/λ < 1 when λ > µ.

3) Linear Growth with Immigration:

In the linear growth process we have λ0 = 0 and, as a result, if the population size reaches zero at any time, it remains at zero thereafter; here 0 is an absorbing state. If we consider λn = nλ + α (α > 0), µn = nµ (n ≥ 0), we get what is known as a linear growth process with immigration, where 0 is not an absorbing state.

4) Immigration-Death Process

If λn = λ and µn = nµ, we get what is known as an immigration-death process. This corresponds to the Markovian queue with an infinite number of channels, i.e. the queue M/M/∞.

5) Pure Death Process

Here λn = 0 for all n, i.e. an individual cannot give birth to a new individual, and the probability of death of an individual in (t, t+h) is µh + o(h). Then, if n individuals are present at time t, the probability of one death in (t, t+h) is nµh + o(h).

The birth and death process is a special case of a continuous-time Markov process with discrete state space {0, 1, 2, …} such that the probability of transition from i to j in time ∆t is o(∆t) whenever |i - j| ≥ 2. In other words, changes take place only through transitions from a state to its immediate neighbouring states.
➢ Continuous-Time Markov Chains:

Definition:

A continuous-time parameter Markov process {X(t) : t ≥ 0} with discrete state space N = {0, 1, 2, …} is considered in this section. Assume that {X(t) : t ≥ 0} is a time-homogeneous Markov chain, so that the probability of a transition from state i to state j during the time interval (T, T+t) does not depend on the initial time T, but only on the elapsed time t and on the initial and terminal states i and j. We can thus write

Pr{X(T+t) = j | X(T) = i} = pij(t),  i, j = 0, 1, 2, …, t ≥ 0.

In particular, Pr{X(t) = j | X(0) = i} = pij(t).

The waiting time for a change of state:

Suppose that {X(t) : t ≥ 0} is a homogeneous Markov process and that at time t0 = 0 the state of the process, X(t0) = X(0) = i, is known. The time taken for a change from state i is a random variable, say τ. This random time period is called the waiting time to reach a state different from state i.

Chapman-Kolmogorov Equations:

The transition probability Pij(t+T) is the probability that, given the state was i at epoch 0,
the process is in state j at epoch t+T; but in passing from state i to state j in time t+T the
process moves through some state k at time t. Thus

Pij(t+T) = ∑k Pr{X(t+T) = j, X(t) = k | X(0) = i}.

The Chapman-Kolmogorov equation is

Pij(t+T) = ∑k pik(t) pkj(T), summed over all states k, for all states i, j and t ≥ 0, T ≥ 0.

Chapman-Kolmogorov equation in matrix form:

P(t+T) = P(t) P(T)
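The semigroup property P(t+T) = P(t)P(T) can be checked numerically. As an illustrative sketch (the two-state chain with rate a for 0 → 1 and rate b for 1 → 0, together with its standard closed-form transition function, is not from the text above):

```python
import math

def P(t, a=1.0, b=2.0):
    """Transition matrix p_ij(t) for the two-state chain with rate a
    for 0 -> 1 and rate b for 1 -> 0 (standard closed form)."""
    s = a + b
    e = math.exp(-s * t)
    return [[(b + a * e) / s, (a - a * e) / s],
            [(b - b * e) / s, (a + b * e) / s]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Chapman-Kolmogorov: P(t + T) should equal P(t) P(T) entrywise.
t, T = 0.3, 0.7
lhs = P(t + T)
rhs = matmul(P(t), P(T))
for i in range(2):
    for j in range(2):
        assert abs(lhs[i][j] - rhs[i][j]) < 1e-12
print("Chapman-Kolmogorov verified:", lhs)
```

Each row of P(t) also sums to 1 for every t, as a stochastic transition function must.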

Forward Kolmogorov equations:

P′ij (t) = ∑k pik(t) akj, where A = (aij) is the rate matrix,

or, in matrix notation, P′(t) = P(t)A.

Backward Kolmogorov equations:

P′ij (t) = ∑k aik pkj(t),

or, in matrix notation, P′(t) = AP(t).
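Since the common solution of both systems is P(t) = e^(tA), the forward and backward equations can be verified together by comparing a central-difference estimate of P′(t) with P(t)A and AP(t). A sketch for a hypothetical two-state generator; the 40-term power series for the matrix exponential is an assumption that is adequate for small tA:

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, t, terms=40):
    """P(t) = e^{tA} via the truncated power series  sum_k (tA)^k / k!."""
    n = len(A)
    P = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in P]
    for k in range(1, terms):
        term = matmul(term, [[t * x / k for x in row] for row in A])
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return P

A = [[-1.0, 1.0], [2.0, -2.0]]   # generator: rows sum to zero
t, h = 0.5, 1e-5
Pp, Pm = expm(A, t + h), expm(A, t - h)
deriv = [[(Pp[i][j] - Pm[i][j]) / (2 * h) for j in range(2)] for i in range(2)]
forward = matmul(expm(A, t), A)   # P'(t) = P(t) A
backward = matmul(A, expm(A, t))  # P'(t) = A P(t)
for i in range(2):
    for j in range(2):
        assert abs(deriv[i][j] - forward[i][j]) < 1e-6
        assert abs(deriv[i][j] - backward[i][j]) < 1e-6
```

Because e^(tA) commutes with A, the forward and backward right-hand sides agree exactly here.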

Poisson process:

If events occur in accordance with a Poisson process N(t) with mean λt, then

Pi,i+1(∆t) = Pr{the process goes to state i+1 from state i in time ∆t}
           = Pr{one event occurs in time ∆t} = Pr{N(∆t) = 1}
           = λ∆t + o(∆t),

Pi,i(∆t) = 1 − λ∆t + o(∆t),

and Pi,j(∆t) = o(∆t), j ≠ i, i+1.

By comparing with Pij(∆t) = aij ∆t + o(∆t), i ≠ j, and Pii(∆t) = 1 + aii ∆t + o(∆t), we have

ai,i+1 = λ, ai,i = −λ, ai,j = 0 for j ≠ i, i+1.

The rate matrix is

                  [ −λ    λ    0   …   0 ]
    A = (aij) =   [  0   −λ    λ   …   0 ]
                  [  …    …    …   …   … ]

The Kolmogorov forward equations are

P′i,i (t) = −λ Pi,i(t),

P′i,j (t) = −λ Pi,j(t) + λ Pi,j−1(t), j = i+1, i+2, ….

Let pj(t) = Pr{N(t) = j} with p0(0) = 1 and pn(0) = 0 for n ≠ 0; then pj(t) ≡ p0j(t),
j = 0, 1, 2, …, so that

pj(t) = e^(−λt) (λt)^j / j!

Similarly, with pij(0) = 1 for j = i and pij(0) = 0 for j ≠ i, we get

pij(t) = e^(−λt) (λt)^(j−i) / (j−i)!, j ≥ i.
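This transition function can be checked directly against the Chapman-Kolmogorov equation, since the sum over intermediate states is exactly the convolution of two Poisson distributions. A small sketch (the rate λ = 1.5 is an arbitrary choice for illustration):

```python
import math

def p(i, j, t, lam=1.5):
    """Poisson-process transition probability p_ij(t)."""
    if j < i:
        return 0.0          # the Poisson process never decreases
    k = j - i
    return math.exp(-lam * t) * (lam * t) ** k / math.factorial(k)

# Chapman-Kolmogorov: p_ij(t+T) = sum_k p_ik(t) p_kj(T)
i, j, t, T = 0, 4, 0.6, 0.9
lhs = p(i, j, t + T)
rhs = sum(p(i, k, t) * p(k, j, T) for k in range(i, j + 1))
print(lhs, rhs)
assert abs(lhs - rhs) < 1e-12
```

Only intermediate states k with i ≤ k ≤ j contribute to the sum, since all other terms vanish.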

CHAPTER – 5

CONCLUSION

Markov chains are a fundamental class of stochastic processes in which the
future evolution of a system depends only on its current state and not on its
past history. They provide a powerful mathematical framework for modeling
random systems that evolve over time.

Starting from basic definitions, we explored the structure of Markov chains,
transition probabilities, and state classification. We also discussed the
behavior of chains with finite or countably infinite states and introduced key
concepts such as reducibility and recurrence. Finally, we extended the
discussion to continuous-time Markov processes like the birth-death
process.

Markov chains are widely used in fields such as queuing theory, genetics,
economics, computer science (especially in algorithms like PageRank), and
operations research because they balance analytical tractability with rich
real-world modeling capability.

REFERENCES

1. Norris, J. R. Markov Chains. Cambridge University Press, 1997. A very clear and
rigorous introduction, excellent for both beginners and advanced study.

2. Ross, Sheldon M. Introduction to Probability Models, 11th Edition. Academic Press,
2014. Widely used in engineering and applied sciences, with many practical examples.

3. Chung, Kai Lai. Markov Chains with Stationary Transition Probabilities. Springer,
1967. A classic reference for deep theoretical aspects.

4. Medhi, J. Stochastic Processes, 3rd Edition. New Age International, 2009. Well
structured and easy to follow, common in academic use.

5. Lawler, G. Introduction to Stochastic Processes. 2018. A text with a strong emphasis
on theoretical foundations.

6. Gilks, W. R., Richardson, S., Spiegelhalter, D. Markov Chain Monte Carlo in Practice.
CRC Press, 1995.

7. Glynn, P. W. Harris Recurrence. Stochastic Systems lecture notes, Stanford
University, 2013.

8. Hoel, P. G., Port, S. C., Stone, C. J. Introduction to Stochastic Processes. Waveland
Press, 1986.

9. Pishro-Nik, H. Introduction to Probability, Statistics, and Random Processes. Kappa
Research, LLC, 2014.

10. Roberts, G. O., Rosenthal, J. S. Harris recurrence of Metropolis-within-Gibbs and
trans-dimensional Markov chains. The Annals of Applied Probability, Vol. 16, 2006.