Markov Chain Model
Introduction:
This chapter is based on a topic of great practical value: predicting the outcomes of day-to-day processes from probabilistic observations of the past. It deals exclusively with discrete Markov chains. A Markov chain is a class of stochastic processes in which the future depends not on the past but only on the present. The model was first proposed by the Russian mathematician Andrei Markov, who was taught mathematics by another great mathematician, Pafnuty Chebyshev, at the University of St Petersburg; Chebyshev was noted for his expertise in probability theory, of which Markov chains form a part. Markov's first publication on Markov chains appeared in 1906, and since then the theory and applications of Markov chains have grown dramatically. In the recent past, like many other classical mathematical theories, including Maxwell's equations, wavelets and a wide range of predictive mathematical algorithms, the Markov chain has found its place in various practical applications. It has been applied to stock markets, weather prediction, the spread of influenza, susceptibility to breast cancer among women and various kinds of data analysis, as we shall observe in this chapter. Markov chains model processes that evolve in steps, which may be measured in time, trials or sequence. At each step the process occupies one of a countable set of states; as the process evolves, the system either remains in the same state or changes (transitions) to a different state during the time epoch. These movements between states are described by transition probabilities, which allow us to predict the probability of the system being in a given state many time epochs later. We will see several examples of this in this chapter. The rest of this chapter introduces the concepts of Markov chains and defines states, state transitions and state transition diagrams.
CHAPTER – 2
MARKOV CHAINS AND MARKOV PROCESS
Definition:
Let {Xn, n ∈ N} be a stochastic process with state space E (discrete or continuous) and time space T (discrete or continuous). Thus a family of random variables {Xt : t ∈ T}, where T = {…, −1, 0, 1, 2, …} or (−∞, ∞) or a subset thereof, takes its values in the state space E, a subset of the real or complex numbers. The collection of such processes comprises all kinds of stochastic processes, which can be classified into four categories:
(1) Discrete time, discrete state space: {Xn : n ∈ N}, E discrete.
(2) Discrete time, continuous state space: {Xn : n ∈ N}, E continuous.
(3) Continuous time, discrete state space: {Xt : t ∈ T}, E discrete.
(4) Continuous time, continuous state space: {Xt : t ∈ T}, E continuous.
Example 1.
Let Xn denote the number of sixes in the first n throws of an unbiased six-faced die, thrown continually. Then clearly {Xn : n ≥ 0} is a stochastic process with time space T = {0, 1, 2, …} and state space E = {0, 1, 2, …}.
Example 2.
Consider the experiment of recording the temperature at a place at the end of every day. Let Xn denote the temperature measured on the nth day; then {Xn : n ≥ 0} is a stochastic process with discrete time space T = {0, 1, 2, …} and continuous state space E = (−∞, ∞) (sometimes the temperature falls below 0°).
Example 3.
Let Xt denote the number of phone calls received at a telephone exchange board up to time t, that is, the number of calls received during the interval [0, t), starting from the initial time point t = 0. Then clearly {Xt : t ∈ T} is a stochastic process with continuous time space T = [0, ∞) and discrete state space E = {0, 1, 2, …}.
Example 4.
Consider the experiment of observing the price of gold in the wholesale market, with initial time point t = 0. Let Xt denote the price of gold at (clock) time t. Then clearly {Xt : t ∈ T} is a stochastic process with continuous time space T = (0, ∞) and continuous state space E = (0, ∞).
Let {Xt : t ∈ T} be a stochastic process with time space T = (−∞, ∞) and state space E = (−∞, ∞) (continuous time, continuous state space). If, given the value X(s), the values of X(t), t > s, do not depend on the values X(u), u < s, then the process {Xt : t ∈ T} is called a Markov process. That is, for t1 < t2 < … < tn < t,
Pr{α ≤ Xt ≤ β | Xt1 = x1, Xt2 = x2, …, Xtn = xn} = Pr{α ≤ Xt ≤ β | Xtn = xn}.
➢ Markov Chain:
The discrete-parameter Markov process {Xn : n ∈ N} is known as a Markov chain; its state space may be either discrete or continuous.
Consider a simple coin-tossing experiment repeated a number of times (successively). The two possible outcomes of each trial are 'Head' and 'Tail'. Assume that Head occurs with probability p and Tail with probability q, so that p + q = 1.
Let us denote the outcome of the nth toss of the coin by Xn.
Definition:
The one-step transition probability from state i to state j is
pij = Pr{Xn+1 = j | Xn = i}.
➢ Transition Probabilities:
Consider a Markov chain {Xn : n ≥ 0}. The m-step transition probability, denoted Pij(m), is defined by
Pij(m) = Pr{Xn+m = j | Xn = i}.
When m = 1, the one-step transition probabilities Pij satisfy Pij ≥ 0 and ∑j=0..∞ Pij = 1 for all i = 0, 1, 2, 3, …
The transition probabilities for the different state transitions may be written in matrix form as follows:

P = ( Pij ) = [ P00  P01  P02  … ]
              [ P10  P11  P12  … ]
              [ P20  P21  P22  … ]
              [ …               ]

This matrix P is called the transition probability matrix (tpm) of the Markov chain {Xn : n ≥ 0}.
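The row condition above (non-negative entries, each row summing to 1) and the one-step evolution of a distribution can be sketched numerically. This is a small illustration assuming numpy is available; the 3-state matrix below is invented for the example, not taken from the text.

```python
import numpy as np

# A hypothetical 3-state chain; the entries are illustrative only.
P = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.5, 0.3],
])

# Every row of a tpm must be a probability distribution:
# P_ij >= 0 and each row sums to 1.
assert np.all(P >= 0) and np.allclose(P.sum(axis=1), 1.0)

# If the distribution over states at step n is the row vector pi_n,
# the distribution at step n+1 is pi_n @ P.
pi0 = np.array([1.0, 0.0, 0.0])   # start in state 0 with certainty
pi1 = pi0 @ P
print(pi1)                        # row 0 of P: [0.7 0.2 0.1]
```

Starting from a deterministic state, one step of the chain simply reads off the corresponding row of P.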
Example:
Consider a simple queuing system before a counter designed for customer service. Customers arrive for service at the counter (one server), which serves one customer at each of the time epochs 0, 1, 2, …
Let Yn denote the random variable representing the number of customers arriving at the counter during the time interval (n, n+1), for n = 0, 1, 2, …
Clearly Y0, Y1, Y2, … are independent and identically distributed random variables, with probability distribution Pr{Yn = k} = qk, k = 0, 1, 2, … Assume that the waiting room can accommodate only M customers, including the one at the counter.
Let Xn be the number of customers present at epoch n, including the one being served, if any. Then {Xn : n ≥ 0} is a Markov chain with state space E = {0, 1, 2, …, M}.
Now we have

Xn+1 = Yn             if Xn = 0 and 0 ≤ Yn ≤ M,
Xn+1 = Xn + Yn − 1    if 1 ≤ Xn ≤ M and 0 ≤ Yn ≤ M + 1 − Xn,
Xn+1 = M              otherwise.
The corresponding tpm, with qk = Pr{Yn = k} and tail sums Qk = qk + qk+1 + …, is

    [ q0  q1  q2  …  qM−1  QM   ]
    [ q0  q1  q2  …  qM−1  QM   ]
P = [ 0   q0  q1  …  qM−2  QM−1 ]
    [ …                         ]
    [ 0   0   0   …  q0    Q1   ]    (M+1)×(M+1)
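The queue tpm can be built mechanically from the arrival distribution: from state i ≥ 1 the next state is i − 1 + Yn, from state 0 it is Yn, and all overflow beyond M is lumped into state M via the tail sums. The sketch below assumes numpy and an illustrative arrival distribution; `queue_tpm` is a hypothetical helper name, not from the text.

```python
import numpy as np

def queue_tpm(q, M):
    """Build the (M+1)x(M+1) tpm of the single-server queue.

    q[k] = Pr{Y_n = k} is the arrival distribution; states are 0..M.
    Overflow beyond capacity M is lumped into the last column.
    """
    q = np.asarray(q, dtype=float)
    P = np.zeros((M + 1, M + 1))
    for i in range(M + 1):
        start = max(i - 1, 0)            # next state = max(i-1, 0) + Y_n, capped at M
        for j in range(start, M):
            P[i, j] = q[j - start]
        P[i, M] = q[M - start:].sum()    # tail mass Q_{M-start} goes to state M
    return P

# Illustrative arrival distribution over 0..5 (sums to 1).
q = np.array([0.5, 0.25, 0.125, 0.0625, 0.03125, 0.03125])
P = queue_tpm(q, M=3)
assert np.allclose(P.sum(axis=1), 1.0)   # each row is a distribution
```

Note that rows 0 and 1 come out identical, exactly as in the matrix above, because from state 1 the server clears its single customer before arrivals are counted.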
Example 7:
A particle performs a random walk on the positions 0, 1, 2, 3, 4 of the x-axis. Whenever it is at any position r (0 < r < 4), it moves to r + 1 with probability p or to r − 1 with probability q, p + q = 1. But as soon as it reaches 0 or 4 it remains there. Let Xn be the position of the particle after n moves; the different states of Xn are the different positions of the particle. {Xn} is a Markov chain whose unit-step transition probabilities are given by
Pr{Xn+1 = r + 1 | Xn = r} = p,  0 < r < 4,
Pr{Xn+1 = r − 1 | Xn = r} = q,  0 < r < 4,
and Pr{Xn+1 = 0 | Xn = 0} = 1,
Pr{Xn+1 = 4 | Xn = 4} = 1.
The tpm, with rows indexed by the states of Xn and columns by the states of Xn+1, is

    [ 1  0  0  0  0 ]
    [ q  0  p  0  0 ]
P = [ 0  q  0  p  0 ]
    [ 0  0  q  0  p ]
    [ 0  0  0  0  1 ]    5×5
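Iterating this tpm shows the absorbing behaviour numerically: all probability mass eventually ends up in states 0 and 4. A quick check, assuming numpy and the symmetric case p = q = 1/2 (an illustrative choice), recovers the classical gambler's-ruin absorption probabilities:

```python
import numpy as np

p, q = 0.5, 0.5          # symmetric walk, chosen for illustration
P = np.array([
    [1, 0, 0, 0, 0],
    [q, 0, p, 0, 0],
    [0, q, 0, p, 0],
    [0, 0, q, 0, p],
    [0, 0, 0, 0, 1],
], dtype=float)

# After many steps the walk is absorbed at 0 or 4; for p = q = 1/2 the
# absorption probability at 0 starting from state r is (4 - r)/4.
Pn = np.linalg.matrix_power(P, 200)
print(Pn[1, 0], Pn[1, 4])   # approximately 0.75 and 0.25
```

The interior (transient) part of the matrix has spectral radius below 1, so the transient mass decays geometrically and 200 steps is far more than enough for convergence.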
Consider a particle that may be at any one of the positions r = 0, 1, …, k (k ≥ 1) of the x-axis. From state r, 1 ≤ r ≤ k − 1, it moves to state r + 1 with probability p and to state r − 1 with probability q. As soon as it reaches state 0 it remains there with probability a and is reflected to state 1 with probability 1 − a (0 < a < 1); if it reaches state k it remains there with probability b and is reflected to k − 1 with probability 1 − b (0 < b < 1). Then {Xn}, where Xn is the position of the particle after n steps or moves, is a Markov chain with state space S = {0, 1, …, k}. The transition matrix is

    [ a  1−a  0  …  0  0    0 ]
    [ q  0    p  …  0  0    0 ]
P = [ 0  q    0  …  0  0    0 ]
    [ …                       ]
    [ 0  0    0  …  q  0    p ]
    [ 0  0    0  …  0  1−b  b ]    (k+1)×(k+1)
Example:
Suppose that a coin with probability p of showing a head (success) is tossed indefinitely, with q = 1 − p. Let Xn denote the state after the nth trial: Xn = k, k = 0, 1, …, n, denotes that there is a run of k successes, i.e. the length of the uninterrupted block of heads ending at trial n is k. Clearly {Xn, n ≥ 0} constitutes a Markov chain, with unit-step transition probabilities
Pr{Xn+1 = k | Xn = j} = p,  k = j + 1,
Pr{Xn+1 = k | Xn = j} = q,  k = 0,
Pr{Xn+1 = k | Xn = j} = 0,  otherwise.
The tpm, with rows indexed by the states of Xn and columns 0, 1, 2, …, k, k+1, … of Xn+1, is

    [ q  p  0  0  … ]
    [ q  0  p  0  … ]
P = [ q  0  0  p  … ]
    [ …             ]    ∞×∞

with q in column 0 of every row and p in the column one beyond the current state.
We have so far considered unit-step or one-step transition probabilities, the probability of Xn given Xn−1, i.e. the probability of the outcome at the nth step or trial given the outcome at the previous step; Pjk gives the probability of a unit-step transition from state j at a trial to state k at the next trial. The m-step transition probability is denoted by
Pjk(m) = Pr{Xn+m = k | Xn = j};
it gives the probability that from state j at the nth trial, state k is reached at the (m + n)th trial, i.e. the probability of transition from state j to state k in exactly m steps. The number n does not occur on the r.h.s. of the relation, so the chain is homogeneous. The one-step transition probabilities Pjk(1) are denoted by Pjk for simplicity. Consider
Pjk(2) = Pr{Xn+2 = k | Xn = j}.
The state k can be reached from state j in two steps through some intermediate state r. For a fixed value of r,
Pr{Xn+2 = k, Xn+1 = r | Xn = j} = Pr{Xn+2 = k | Xn+1 = r} Pr{Xn+1 = r | Xn = j} = Pjr Prk,
and summing over all intermediate states r,
Pjk(2) = ∑r Pjr Prk.
By induction we have
Pjk(m+1) = Pr{Xn+m+1 = k | Xn = j} = ∑r Pjr(m) Prk.
Similarly, we get
Pjk(m+1) = ∑r Pjr Prk(m).
In general, we have
Pjk(m+n) = ∑r Pjr(m) Prk(n).
This equation is a special case of the Chapman–Kolmogorov equation, which is satisfied by the transition probabilities of a Markov chain.
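In matrix form the Chapman–Kolmogorov equation says P^(m+n) = P^(m) P^(n), which is just associativity of matrix multiplication. A quick numerical verification, assuming numpy and an illustrative 3-state matrix:

```python
import numpy as np

# Any stochastic matrix will do; this 3-state example is illustrative.
P = np.array([[0.5,  0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0,  0.5, 0.5]])

P2 = P @ P          # two-step transition matrix
P3 = P @ P2         # three-step transition matrix

# Chapman-Kolmogorov with (m, n) = (1, 2) and (2, 1):
assert np.allclose(P @ P2, P2 @ P)
assert np.allclose(P3, np.linalg.matrix_power(P, 3))
```

Because matrix products associate, splitting an (m+n)-step transition at any intermediate epoch gives the same answer.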
Example:
Let {Xn, n ≥ 0} be a Markov chain with transition matrix

    [ 3/4  1/4  0   ]
P = [ 1/4  1/2  1/4 ]
    [ 0    3/4  1/4 ]    3×3

Then

       [ 3/4  1/4  0   ] [ 3/4  1/4  0   ]   [ 5/8   5/16  1/16 ]
P(2) = [ 1/4  1/2  1/4 ] [ 1/4  1/2  1/4 ] = [ 5/16  1/2   3/16 ]
       [ 0    3/4  1/4 ] [ 0    3/4  1/4 ]   [ 3/16  9/16  1/4  ]

Hence P01(2) = Pr{Xn+2 = 1 | Xn = 0} = 5/16 for n ≥ 0.
Thus Pr{X2 = 1 | X0 = 0} = 5/16, and with Pr{X0 = 0} = 1/3,
Pr{X2 = 1, X0 = 0} = (5/16)(1/3) = 5/48.
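The hand computation of P(2) can be reproduced exactly with rational arithmetic; the following sketch uses Python's `fractions` module so there is no rounding, with a small helper `matmul` introduced just for this illustration.

```python
from fractions import Fraction as F

P = [[F(3, 4), F(1, 4), F(0)],
     [F(1, 4), F(1, 2), F(1, 4)],
     [F(0),    F(3, 4), F(1, 4)]]

def matmul(A, B):
    """Exact matrix product over Fractions."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P2 = matmul(P, P)
print(P2[0][1])            # 5/16, agreeing with the hand computation
# With Pr{X0 = 0} = 1/3:
print(P2[0][1] * F(1, 3))  # 5/48
```

Exact fractions make this kind of worked example easy to check term by term.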
Probability distribution:
The probability distribution of the random variables Xr, Xr+1, …, Xr+n can be computed in terms of the transition probabilities Pjk once the initial distribution of Xr is known. For simplicity take r = 0; then
Pr{X0 = j0, X1 = j1, …, Xn = jn} = Pr{X0 = j0} Pj0j1 Pj1j2 … Pjn−1jn.
Example: Let {Xn, n ≥ 0} be a Markov chain with three states 0, 1, 2, with transition matrix

    [ 3/4  1/4  0   ]
P = [ 1/4  1/2  1/4 ]
    [ 0    3/4  1/4 ]    3×3

and initial distribution Pr{X0 = i} = 1/3, i = 0, 1, 2.
We have Pr{X1 = 1 | X0 = 2} = 3/4 and Pr{X2 = 2 | X1 = 1} = 1/4, so
Pr{X2 = 2, X1 = 1 | X0 = 2} = Pr{X2 = 2 | X1 = 1} Pr{X1 = 1 | X0 = 2} = (1/4)(3/4) = 3/16,
Pr{X2 = 2, X1 = 1, X0 = 2} = Pr{X2 = 2, X1 = 1 | X0 = 2} Pr{X0 = 2} = (3/16)(1/3) = 1/16,
Pr{X3 = 1, X2 = 2, X1 = 1, X0 = 2} = Pr{X3 = 1 | X2 = 2} · (1/16) = (3/4)(1/16) = 3/64.
CHAPTER – 3
➢ Classification of states:
Communication Relations
If Pij(n) > 0 for some n ≥ 1, then we say that state j can be reached, or is accessible, from state i; the relation is denoted i → j. Conversely, if Pij(n) = 0 for all n, then j is not accessible from i; in notation, i ↛ j.
If two states i and j are such that each is accessible from the other, then we say that the two states communicate; this is denoted i ↔ j. In that case there exist integers m and n such that Pij(m) > 0 and Pji(n) > 0.
When all states communicate with one another, every state can be reached from every other state.
Class Property:
A class of states is a subset of the state space such that every state of the class communicates with every other, and no state outside the class communicates with all the states in the class. A property defined for the states of a chain is a class property if its possession by one state in a class implies its possession by all states of the same class. One such property is the periodicity of a state.
Periodicity:
State i is a return state if Pii(n) > 0 for some n ≥ 1. The period di of a return state i is defined as the greatest common divisor of all m such that Pii(m) > 0:
di = gcd{m : Pii(m) > 0}.
It can be shown that two distinct states belonging to the same class have the same period.
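The gcd definition of the period can be computed directly from the pattern of non-zero entries of the tpm. The sketch below is a simple illustration (the helper name `period` and the cutoff `max_n` are choices made for this example, not part of the text); it uses boolean reachability so only which entries are positive matters.

```python
from math import gcd

def period(P, i, max_n=50):
    """Period of state i: gcd of all n <= max_n with P^(n)[i][i] > 0."""
    n_states = len(P)
    reach = [[p > 0 for p in row] for row in P]   # one-step reachability
    step = reach                                   # n-step reachability, n = 1
    d = 0
    for n in range(1, max_n + 1):
        if step[i][i]:
            d = gcd(d, n)                          # n is a possible return time
        # boolean matrix "product": reachability in n+1 steps
        step = [[any(step[r][k] and reach[k][c] for k in range(n_states))
                 for c in range(n_states)] for r in range(n_states)]
    return d

# A two-state chain that flips state every step has period 2.
flip = [[0, 1], [1, 0]]
print(period(flip, 0))    # 2
```

Truncating at `max_n` is safe for small finite chains, since for them the set of return times is eventually determined by the short cycles.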
Classification of Chains:
If C is a set of states such that no state outside C can be reached from any state in C, then C is said to be closed. If C is closed, j ∈ C and k ∉ C, then Pjk(n) = 0 for all n; i.e. C is closed iff ∑j∈C Pij = 1 for every i ∈ C. The sub-matrix P1 = (Pij), i, j ∈ C, is then also stochastic, and P can be expressed in the canonical form

P = [ P1  0 ]
    [ R1  Q ]

A closed set may contain one or more states. If a closed set contains only one state j, then state j is said to be absorbing: j is absorbing iff Pjj = 1 and Pjk = 0 for k ≠ j.
Every finite Markov chain contains at least one closed set, namely the set of all states, i.e. the state space. If the chain does not contain any proper closed subset other than the state space, the chain is called irreducible; the t.p.m. of an irreducible chain is an irreducible matrix. In an irreducible Markov chain every state can be reached from every other state. Chains which are not irreducible are said to be reducible; their t.p.m. is reducible. Irreducible matrices may be subdivided into two classes: primitive (aperiodic) and imprimitive (cyclic or periodic). A Markov chain is primitive (aperiodic) iff the corresponding t.p.m. is primitive. In an irreducible chain all states belong to the same class.
First Passage Times:
Suppose that a system starts in state j. Let fjk(n) be the probability that it reaches state k for the first time at the nth step (or after n transitions), and let Pjk(n) be the probability that it is in state k (not necessarily for the first time) after n transitions, given that the chain starts in state j. A relation between fjk(n) and Pjk(n) can be established as follows:
Pjk(n) = ∑r=1..n fjk(r) Pkk(n−r), n ≥ 1;
this relation allows fjk(n) to be computed from the Pjk(n).
Let Fjk denote the probability that, starting from state j, the system will ever reach state k. Clearly
Fjk = ∑n=1..∞ fjk(n).
When Fjk = 1, it is certain that the system starting in state j will reach state k; in this case {fjk(n), n = 1, 2, …} is a proper probability distribution, the first passage time distribution for k given that the system starts in j.
The mean first passage time from state j to state k is given by
µjk = ∑n=1..∞ n fjk(n),
and
µjj = ∑n=1..∞ n fjj(n)
is known as the mean recurrence time for state j.
Thus two questions arise concerning state j: first, whether the return to state j is certain, and secondly, when it is, whether the mean recurrence time µjj is finite.
Persistent:
A state j is said to be persistent (or recurrent) if Fjj = 1 (i.e. return to state j is certain), and transient if Fjj < 1 (i.e. return to state j is uncertain). A persistent state j is said to be null persistent if µjj = ∞, i.e. if the mean recurrence time is infinite, and non-null (or positive) persistent if µjj < ∞.
Thus the states of a Markov chain can be classified as transient and persistent, and persistent states can be subdivided into non-null and null persistent.
A persistent, non-null and aperiodic state of a Markov chain is said to be ergodic.
Consider the following example.
Example 5.
Let {Xn, n ≥ 0} be a Markov chain having state space S = {1, 2, 3, 4} and transition matrix

    [ 1/3  2/3  0    0   ]
P = [ 1    0    0    0   ]
    [ 1/2  0    1/2  0   ]
    [ 0    0    1/2  1/2 ]    4×4

Here f33(1) = 1/2 and f33(2) = f33(3) = … = 0, so that
F33 = ∑n≥1 f33(n) = 1/2 + 0 = 1/2 < 1.
Again f44(1) = 1/2 and f44(n) = 0 for n ≥ 2, so that
F44 = ∑n≥1 f44(n) = 1/2 + 0 + 0 + … = 1/2 < 1.
Hence states 3 and 4 are transient.
For state 1: f11(1) = 1/3, f11(2) = 2/3, and
F11 = ∑n≥1 f11(n) = 1/3 + 2/3 = 1,
so that state 1 is persistent.
Further, since
µ11 = ∑n≥1 n f11(n) = 1·(1/3) + 2·(2/3) = 5/3 < ∞,
state 1 is non-null persistent.
Again P11 = 1/3 > 0, so that state 1 is aperiodic. Since state 1 is non-null persistent and aperiodic, it is ergodic.
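The first-passage probabilities used in Example 5 can be generated mechanically from the recursion Pjj(n) = ∑r fjj(r) Pjj(n−r), rearranged to solve for fjj(n). The sketch below applies it to the four-state chain of Example 5 (states 1..4 mapped to indices 0..3); the helper `first_passage` is an illustrative name introduced here.

```python
from fractions import Fraction as F

# tpm of the Example 5 chain, states 1..4 as indices 0..3
P = [[F(1, 3), F(2, 3), 0,       0],
     [1,       0,       0,       0],
     [F(1, 2), 0,       F(1, 2), 0],
     [0,       0,       F(1, 2), F(1, 2)]]

def first_passage(P, j, N):
    """f_jj(n), n = 1..N, via f_jj(n) = P_jj(n) - sum_{r<n} f_jj(r) P_jj(n-r)."""
    m = len(P)
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][c] for k in range(m)) for c in range(m)]
                for i in range(m)]
    powers = [None, P]                       # powers[n] = P^n
    for _ in range(2, N + 1):
        powers.append(matmul(powers[-1], P))
    f = {}
    for n in range(1, N + 1):
        f[n] = powers[n][j][j] - sum(f[r] * powers[n - r][j][j]
                                     for r in range(1, n))
    return f

f11 = first_passage(P, 0, 6)
print(f11[1], f11[2], sum(f11.values()))   # 1/3 2/3 1 -> state 1 persistent
```

The recursion simply removes, from each n-step return probability, the contributions of returns that already occurred earlier.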
Example 1:

    [ 0    0    1    0   ]
P = [ 0    0    0    1   ]
    [ 0    1    0    0   ]
    [ 1/2  1/4  1/8  1/8 ]    4×4

Solution:
We have P44 = 1/8 > 0, so state 4 is aperiodic. The first-return probabilities are
f44(1) = 1/8 (the path 4→4),
f44(2) = 1/4 (the path 4→2→4),
f44(3) = 1/8 (the path 4→3→2→4),
f44(4) = 1/2 (the path 4→1→3→2→4),
and f44(n) = 0 for n > 4. Hence
F44 = ∑n≥1 f44(n) = 1/8 + 1/4 + 1/8 + 1/2 = 1,
so state 4 is persistent. Also
µ44 = ∑n≥1 n f44(n) = 1·(1/8) + 2·(1/4) + 3·(1/8) + 4·(1/2) = 3 < ∞,
so state 4 is non-null persistent; being also aperiodic, state 4 is ergodic.
Example 2:

    [ 0    0    1    0   ]
P = [ 0    0    0    1   ]
    [ 0    1    0    0   ]
    [ 1/3  1/6  1/4  1/4 ]    4×4

Solution:
Every state can be reached from every other state in a finite number of steps, so the chain is irreducible.
Consider state 4. We have P44 = 1/4 > 0, so state 4 is aperiodic, and
f44(1) = 1/4, f44(2) = 1/6, f44(3) = 1/4, f44(4) = 1/3,
so that
F44 = ∑n≥1 f44(n) = 1/4 + 1/6 + 1/4 + 1/3 = 1,
and state 4 is persistent. Also
µ44 = ∑n≥1 n f44(n) = 1·(1/4) + 2·(1/6) + 3·(1/4) + 4·(1/3) = 8/3 < ∞,
so state 4 is non-null persistent. State 4 is therefore ergodic.
So far we have discussed Markov chains with a finite number of states. The results can be generalized to chains with a denumerable number of states (i.e. with countable state space). Let P = (pij) be the t.p.m. of the chain {Xn, n ≥ 1} with countable state space S = {0, 1, 2, …}. Then P(k) = (Pij(k)) is well defined. The states of such a chain may not constitute even a single closed set; for example, when
Pij = 1 if j = i + 1, and 0 otherwise,
no proper set of states is closed. For dealing with a chain with a countable state space we need the more sensitive classification of states: transient, null persistent and non-null persistent. Besides irreducibility and aperiodicity, non-null persistence is required for ergodicity of such a chain (a chain with countable state space), while aperiodicity and irreducibility were enough for ergodicity of a finite chain.
Reducible chains:
A finite reducible Markov chain with one closed set C has state space
S = T ∪ C,
where T is the set of transient states. Its t.p.m. can be written in the canonical form

P = [ Q  R ]
    [ 0  S ]    n×n

where Q governs transitions among the transient states, S governs transitions within the closed set and R governs transitions from T into C. As n → ∞, Qn → 0, i.e. the probability of remaining forever among the transient states is zero.
Example

P = [ 0  1  0 ]
    [ 0  0  1 ]
    [ 0  0  1 ]    3×3

Here the third state is absorbing, and the first two states are transient.
Chain with one Single Class of Persistent Non-null Aperiodic States
Now suppose that the states of the closed class C are non-null persistent and aperiodic, the remaining states of S being transient; the transient states constitute a set T. Then
lim(n→∞) Pij(n) = vj
is independent of i when i and j are both persistent, and also when j is persistent and i is transient. Again the t.p.m. takes the canonical form

P = [ P1  0 ]
    [ R1  M ]    n×n
Example:

P = [ P1  0 ]
    [ R1  M ]

where

P1 = [ 1/3  2/3 ]    R1 = [ 1/4  0   ]    M = [ 3/4  0   ]
     [ 1/2  1/2 ]         [ 0    1/4 ]        [ 1/2  1/4 ]

Solution:
Let π = (π1, π2) with πP1 = π and π1 + π2 = 1; that is,
π1 = (1/3)π1 + (1/2)π2   …(1)
π2 = (2/3)π1 + (1/2)π2   …(2)
Using π1 + π2 = 1 gives π1 = 3/7 and π2 = 4/7.
The closed states are 1 and 2 and the transient states are 3 and 4, so every row of pn converges:
lim(n→∞) pn = (3/7, 4/7, 0, 0).
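The stationary equations πP1 = π with the normalization π1 + π2 = 1 can be solved numerically by appending the normalization row to the (singular) balance system; the sketch below assumes numpy and uses the closed-class matrix P1 from the example.

```python
import numpy as np

# Closed-class tpm P1 from the example.
P1 = np.array([[1/3, 2/3],
               [1/2, 1/2]])

# pi P1 = pi  <=>  (P1^T - I) pi^T = 0; append the normalization row
# sum(pi) = 1 and least-squares solve the stacked system.
A = np.vstack([P1.T - np.eye(2), np.ones((1, 2))])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)    # approximately [3/7, 4/7] = [0.4286, 0.5714]
```

The same limit appears by raising the full 4×4 matrix to a large power: the transient columns vanish and every row tends to (3/7, 4/7, 0, 0).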
CHAPTER – 4
BIRTH AND DEATH PROCESS AND CONTINUOUS TIME MARKOV CHAIN
Introduction:
A stochastic process whose state moves back and forth by unit steps in the state space is called a birth-death process. A simple example of a birth-death process is the queuing system in which the arrival of a customer at the counter is a birth and a service completion at the server is a death. An inventory control system with a one-for-one ordering policy is another example of a birth-death process. In this unit we study the pure birth and pure death processes together with the birth-death process.
First we consider a pure birth process, where

Pr{number of births between t and t+h is k | number of individuals at epoch t is n}
  = λn h + o(h),      k = 1,
  = 1 − λn h + o(h),  k = 0,
  = o(h),             k ≥ 2.

The above holds for all n ≥ 0; λ0 may or may not be equal to zero. Here k is a non-negative integer, which implies that there can only be an increase, i.e. only births are considered possible. Now suppose that there could also be a decrease, i.e. deaths are also considered possible. In this case we further assume that

Pr{number of deaths between t and t+h is k | number of individuals at epoch t is n}
  = µn h + o(h),      k = 1,
  = 1 − µn h + o(h),  k = 0,
  = o(h),             k ≥ 2.

The above holds for n ≥ 1; further, µ0 = 0. The resulting process is known as a birth and death process. Through a birth there is an increase by one, and through a death a decrease by one, in the number of "individuals". The probability of more than one birth or more than one death in an interval of length h is o(h).
Some particular values of λn and µn are of special interest. When λn = λ, i.e. λn is independent of the population size n, the increase may be thought of as due to an external source such as immigration. When λn = nλ, we have the case of (linear) birth; λn h = nλh may be considered as the probability of one birth in an interval of length h given that n individuals are present at the instant from which the interval commences, the probability of one individual giving birth being λh (i.e. the rate of birth per individual per unit interval is λ). Here λ0 = 0.
When µn = µ, the decrease may be effected by emigration; when µn = nµ, we have the case of (linear) death, the rate of death per individual per unit interval being µ.
Particular Cases
For λn = λ and µn = µ we have what is known as the immigration-emigration process. The process associated with the simple queuing model M/M/1 is such a process.
(a) Generating Function:
In the Yule-Furry process one is concerned with a population whose members can give birth but cannot die. Let us now consider the case where both births and deaths can occur. Suppose that the probability that a member gives birth to a new member in a small interval of length h is λh + o(h), and the probability that a member dies is µh + o(h). Then, if n members are present at the instant t, the probability of one birth between t and t + h is nλh + o(h) and that of one death is nµh + o(h), n ≥ 1.
The mean population size, starting from i individuals, is
M(t) = i e^((λ−µ)t).
As t → ∞, the mean population size M(t) tends to 0 for λ < µ (birth rate smaller than death rate), to ∞ for λ > µ (birth rate greater than death rate), and to the constant value i when λ = µ.
Since λ0 = 0, 0 is an absorbing state, i.e. once the population size reaches 0 it remains at 0 thereafter. This is the interesting case of extinction of the population. The probability of ultimate extinction is 1 when λ ≤ µ and is (µ/λ)^i < 1 when λ > µ.
Thus λ0 = 0 means that if the population size reaches zero at any time it remains at zero thereafter, 0 being an absorbing state. If instead we consider λn = nλ + α (α > 0) and µn = nµ (n ≥ 0), we get what is known as a linear growth process with immigration, in which 0 is not an absorbing state.
Pure Death Process:
Here λn = 0 for all n, i.e. an individual cannot give birth to a new individual, and the probability of death of an individual in (t, t+h) is µh + o(h). Then, if n individuals are present at time t, the probability of one death in (t, t+h) is nµh + o(h).
The birth and death process is a special case of a continuous time Markov process with discrete state space {0, 1, 2, …} such that the probability of transition from i to j in time ∆t is o(∆t) whenever |i − j| ≥ 2. In other words, changes take place only through transitions from a state to one of its immediate neighboring states.
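The tridiagonal structure just described can be made concrete by building the rate (generator) matrix of a birth-death chain: the only non-zero off-diagonal rates go to neighboring states, and each row sums to zero. A sketch assuming numpy, with illustrative linear rates (λn = nλ, µn = nµ) and a hypothetical helper name `bd_generator`:

```python
import numpy as np

def bd_generator(lam, mu, N):
    """Generator matrix A for a birth-death chain truncated to states 0..N.

    lam[n] and mu[n] are the birth and death rates in state n; only
    neighboring states get non-zero off-diagonal entries.
    """
    A = np.zeros((N + 1, N + 1))
    for n in range(N + 1):
        if n < N:
            A[n, n + 1] = lam[n]      # birth: n -> n+1
        if n > 0:
            A[n, n - 1] = mu[n]       # death: n -> n-1
        A[n, n] = -A[n].sum()         # diagonal makes the row sum to zero
    return A

N = 4
lam = [2.0 * n for n in range(N + 1)]  # linear birth, lambda = 2 (illustrative)
mu = [1.0 * n for n in range(N + 1)]   # linear death, mu = 1 (illustrative)
A = bd_generator(lam, mu, N)
assert np.allclose(A.sum(axis=1), 0.0)
```

State 0 has zero outgoing rate here (λ0 = 0), reproducing the absorbing-state behaviour discussed above.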
➢ Continuous Time Markov Chains:
Definition:
A continuous time parameter Markov process {X(t) : t ≥ 0} with discrete state space N = {0, 1, 2, …} is considered in this section. Assume that {X(t) : t ≥ 0} is a time-homogeneous Markov chain.
So the probability of a transition from state i to state j during the time interval (T, T + t) does not depend on the initial time T, but depends only on the elapsed time t and on the initial and terminal states i and j. We can thus write
Pij(t) = Pr{X(T + t) = j | X(T) = i}.
Suppose that {X(t) : t ≥ 0} is a homogeneous Markov process and that at time t0 = 0 the state of the process X(t0) = X(0) = i is known. The time taken for a change from state i is a random variable, say τ. This random time period is called the waiting time to reach a different state from state i.
The transition probability Pij(t + T) is the probability that, given the state was i at epoch 0, the process is in state j at epoch t + T; but in passing from state i to state j in time t + T the process moves through some state k at time t. Thus
Pij(t + T) = ∑k pik(t) pkj(T), for all states i, j and all t ≥ 0, T ≥ 0.
Differentiating, with A = (aij) the matrix of transition rates, we obtain the Kolmogorov forward equations
P′ij(t) = ∑k pik(t) akj,   i.e. P′(t) = P(t)A,
and the backward equations
P′ij(t) = ∑k aik pkj(t),   i.e. P′(t) = AP(t).
Poisson process:
If events occur in accordance with a Poisson process N(t) with mean λt, then
Pi,i+1(∆t) = Pr{the process goes to state i+1 from state i in time ∆t} = λ∆t + o(∆t).
Comparing with Pij(∆t) = aij ∆t + o(∆t), i ≠ j, and Pii(∆t) = 1 + aii ∆t + o(∆t), we obtain the rate matrix

A = [ −λ  λ   0   0  … ]
    [ 0   −λ  λ   0  … ]
    [ 0   0   −λ  λ  … ]
    [ …                ]

Let pj(t) = Pr{N(t) = j} with p0(0) = 1 and pn(0) = 0 for n ≠ 0; then pj(t) ≡ p0j(t), j = 0, 1, 2, …, so that
pj(t) = e^(−λt) (λt)^j / j!.
Similarly, with pii(0) = 1 and pij(0) = 0 for i ≠ j, we get
pij(t) = e^(−λt) (λt)^(j−i) / (j − i)!, j ≥ i.
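The transition function p0j(t) of the Poisson process is exactly the Poisson(λt) probability mass function, which can be checked numerically with the standard library; λ and t below are illustrative values.

```python
import math

def p0j(t, j, lam):
    """Transition function of the Poisson process from state 0:
    p_{0j}(t) = e^{-lam*t} (lam*t)^j / j!"""
    return math.exp(-lam * t) * (lam * t) ** j / math.factorial(j)

lam, t = 3.0, 2.0
probs = [p0j(t, j, lam) for j in range(100)]
print(sum(probs))                                # ~1: a proper distribution
print(sum(j * p for j, p in enumerate(probs)))   # ~lam*t = 6, the mean of N(t)
```

Truncating the sum at j = 100 is harmless here since the Poisson(6) tail beyond 100 is astronomically small.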
CHAPTER – 5
CONCLUSION
Markov chains are widely used in fields such as queuing theory, genetics,
economics, computer science (especially in algorithms like PageRank), and
operations research because they balance analytical tractability with rich
real-world modeling capability.