Markov renewal process

In probability and statistics, a Markov renewal process (MRP) is a random process that generalizes the notion of Markov jump processes. Other random processes, such as Markov chains, Poisson processes, and renewal processes, can be derived as special cases of an MRP.


Definition

Consider a state space $S$ and a set of random variables $(X_n, T_n)$, where $T_n$ are the jump times and $X_n$ are the associated states in the Markov chain. Let the inter-arrival time be $\tau_n = T_n - T_{n-1}$. Then the sequence $(X_n, T_n)$ is called a Markov renewal process if

$$\Pr(\tau_{n+1} \le t,\, X_{n+1} = j \mid (X_0, T_0), (X_1, T_1), \ldots, (X_n = i, T_n)) = \Pr(\tau_{n+1} \le t,\, X_{n+1} = j \mid X_n = i) \qquad \forall\, n \ge 1,\ t \ge 0,\ i, j \in S.$$
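The following Python sketch simulates such a process under illustrative assumptions: the embedded transition matrix P and the function holding_time are made-up examples, not taken from the source. They only illustrate the structural requirement that the next state and the holding time may depend on the current state (and, for the holding time, also on the next state), but not on the earlier history.

```python
import random

# Hypothetical embedded transition probabilities P[i][j] (illustrative values).
P = {
    "a": {"a": 0.0, "b": 0.7, "c": 0.3},
    "b": {"a": 0.5, "b": 0.0, "c": 0.5},
    "c": {"a": 0.9, "b": 0.1, "c": 0.0},
}

def holding_time(i, j):
    # Illustrative choice: a uniform holding time whose range depends on the
    # pair (i, j).  In a general MRP the holding-time distribution may depend
    # on both the current and the next state and need not be exponential.
    return random.uniform(0.0, 1.0 + (i == "a") + (j == "c"))

def simulate_mrp(x0="a", n_jumps=10):
    """Return the sequence (X_n, T_n) of states and jump times."""
    x, t = x0, 0.0
    path = [(x, t)]
    for _ in range(n_jumps):
        states, probs = zip(*P[x].items())
        x_next = random.choices(states, weights=probs)[0]   # depends only on X_n
        t += holding_time(x, x_next)                        # tau_{n+1} = T_{n+1} - T_n
        x = x_next
        path.append((x, t))
    return path

for x_n, t_n in simulate_mrp():
    print(f"T_n = {t_n:6.3f}   X_n = {x_n}")
```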

Relation to other stochastic processes

  1. If we define a new stochastic process $Y_t := X_n$ for $t \in [T_n, T_{n+1})$, then the process $Y_t$ is called a semi-Markov process. The main difference between an MRP and a semi-Markov process is that the former is defined as a two-tuple of states and times, whereas the latter is the actual random process that evolves over time, so any realisation of the process has a defined state at every time. The entire process is not Markovian, i.e. memoryless, as it is in a continuous-time Markov chain (CTMC); instead, the process is Markovian only at the specified jump instants. This is the rationale behind the name semi-Markov. (See also: hidden semi-Markov model. A construction of $Y_t$ from the pairs $(X_n, T_n)$ is sketched after this list.)
  2. A semi-Markov process (defined in the above bullet point) in which all the holding times are exponentially distributed is called a CTMC. In other words, if the inter-arrival times are exponentially distributed and the waiting time in a state is independent of the next state reached, we have a CTMC (see the second sketch after this list):
$$\Pr(\tau_{n+1} \le t,\, X_{n+1} = j \mid (X_0, T_0), (X_1, T_1), \ldots, (X_n = i, T_n)) = \Pr(\tau_{n+1} \le t,\, X_{n+1} = j \mid X_n = i) = \Pr(X_{n+1} = j \mid X_n = i)\,\bigl(1 - e^{-\lambda_i t}\bigr), \quad \text{for all } n \ge 1,\ t \ge 0,\ i, j \in S.$$
  3. The sequence $X_n$ in the MRP is a discrete-time Markov chain (DTMC). In other words, if the time variables are ignored in the MRP equation, we end up with a DTMC (see the first sketch after this list):
$$\Pr(X_{n+1} = j \mid X_0, X_1, \ldots, X_n = i) = \Pr(X_{n+1} = j \mid X_n = i) \qquad \forall\, n \ge 1,\ i, j \in S.$$
  4. If the sequence of $\tau$'s is independent and identically distributed, and if its distribution does not depend on the state $X_n$, then the process is a renewal process. So, if the states are ignored and we have a chain of i.i.d. times, then we have a renewal process:
$$\Pr(\tau_{n+1} \le t \mid T_0, T_1, \ldots, T_n) = \Pr(\tau_{n+1} \le t) \qquad \forall\, n \ge 1,\ t \ge 0.$$
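The first sketch below illustrates points 1 and 3: given the pairs $(X_n, T_n)$ of an MRP, the semi-Markov process $Y_t$ is simply the state in force at time $t$, and the embedded DTMC is the state sequence with the jump times dropped. The sample path used here is a hard-coded illustrative example, not data from the source.

```python
import bisect

# A hypothetical sample path [(X_0, T_0), (X_1, T_1), ...] for illustration.
path = [("a", 0.0), ("b", 1.3), ("c", 2.1), ("a", 4.0), ("b", 4.6)]

def semi_markov_state(path, t):
    """Y_t := X_n for t in [T_n, T_{n+1})."""
    times = [t_n for _, t_n in path]
    n = bisect.bisect_right(times, t) - 1   # index of the last jump with T_n <= t
    return path[max(n, 0)][0]

def embedded_chain(path):
    """The discrete-time Markov chain X_n obtained by ignoring the jump times."""
    return [x_n for x_n, _ in path]

print(semi_markov_state(path, 2.5))   # 'c', since 2.1 <= 2.5 < 4.0
print(embedded_chain(path))           # ['a', 'b', 'c', 'a', 'b']
```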
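The second sketch illustrates point 2: when the holding time in state $i$ is exponential with a rate $\lambda_i$ that depends only on the current state, and is drawn independently of the next state, the joint law factorizes as $\Pr(X_{n+1} = j \mid X_n = i)(1 - e^{-\lambda_i t})$ and the resulting process is a CTMC. The rates and transition matrix below are made-up illustrative values.

```python
import random

rates = {"a": 1.0, "b": 2.0}              # hypothetical rates lambda_i
P = {"a": {"b": 1.0}, "b": {"a": 1.0}}    # hypothetical embedded transition matrix

x, t = "a", 0.0
for _ in range(5):
    tau = random.expovariate(rates[x])               # Exp(lambda_i) holding time
    states, probs = zip(*P[x].items())
    x = random.choices(states, weights=probs)[0]     # drawn independently of tau
    t += tau
    print(f"jump to {x} at time t = {t:.3f}")
```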
