# SCS

## Seminar

### Efficiency of the cross-entropy method in Markov chain problems

Seminar Room 1, Newton Institute

#### Abstract

Consider the problem of estimating the probability $\gamma(0)$ that a discrete-time Markov chain $(X(t))_{t=0}^{\infty}$ on a finite state space $\mathcal{X}$, with transition probabilities $(p(x,y))_{x,y \in \mathcal{X}}$, reaches a failure set $F$ from a reference state $0$ before returning to the reference state. We assume that this is a rare event, where the rarity parameter $n$ might be the population size in queueing applications, or $n = 1/\varepsilon$ in reliability applications, with $\varepsilon$ the failure rate. For simulation we apply importance sampling by generating sample paths of the Markov chain using transition probabilities $p^{\mathrm{IS}}(x,y)$. Let $Y_n^{\mathrm{IS}}$ be the resulting estimator of $\gamma(0)$. In this paper we investigate the efficiency of the estimator in terms of bounded relative error and logarithmically bounded relative error, defined by
$$\limsup_{n \to \infty} \frac{\sqrt{\mathrm{Var}[Y_n]}}{\mathbb{E}[Y_n]} < \infty \quad \text{and} \quad \liminf_{n \to \infty} \frac{\log \sqrt{\mathrm{Var}[Y_n]}}{\log \mathbb{E}[Y_n]} \geq 1,$$
respectively. Optimal, but not implementable, would be to use the zero-variance transition probabilities $p^{\mathrm{ZV}}(x,y) = p(x,y)\gamma(y)/\gamma(x)$, where $\gamma(x)$ is the probability of reaching the failure set before the reference state when starting in state $x$; then the importance sampling estimator has zero variance. Suppose that we approximate $\gamma(x)$ by $\gamma^{\mathrm{APP}}(x)$, for any state $x$. L'Ecuyer and Tuffin (2009) gave conditions on the approximated transition probabilities under which the associated importance sampling estimator shows bounded relative error. In this paper we analyse the cross-entropy method for estimating the zero-variance transition probabilities. For that purpose we first show that the zero-variance transition probabilities satisfy
$$p^{\mathrm{ZV}}(x,y) = \frac{\mathbb{E}\big[\mathbf{1}\{X(T) \in F\}\, N(x,y) \mid X(0) = 0\big]}{\mathbb{E}\big[\mathbf{1}\{X(T) \in F\} \sum_{z \in \mathcal{X}} N(x,z) \mid X(0) = 0\big]},$$
where $T$ is the first entrance time into $F \cup \{0\}$, and $N(x,y)$ is the number of times that transition $(x,y)$ has occurred. The cross-entropy method provides a procedure for estimating the expectations in this expression, resulting in estimators $p^{\mathrm{EST}}(x,y)$ of the zero-variance transition probabilities.
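The transition-count ratio for the zero-variance probabilities can be illustrated by plain Monte Carlo on a small example. The sketch below uses a hypothetical birth-death chain on four states (not from the talk): it simulates excursions from the reference state $0$, accumulates transition counts $N(x,y)$ on excursions that end in the failure set, and forms the ratio of the two expectations. For this chain $\gamma(x)$ is computable by hand, so the estimates can be checked against $p(x,y)\gamma(y)/\gamma(x)$.

```python
# Monte Carlo sketch of the transition-count ratio
#   p_ZV(x, y) = E[1{X(T) in F} N(x, y)] / E[1{X(T) in F} sum_z N(x, z)],
# on an illustrative toy chain (the chain and sample size are assumptions,
# not taken from the talk).
import random

random.seed(1)

# Toy birth-death chain on {0, 1, 2, 3}; failure set F = {3}, reference state 0.
P = {
    0: [(1, 1.0)],                 # every excursion starts 0 -> 1
    1: [(0, 0.6), (2, 0.4)],
    2: [(1, 0.7), (3, 0.3)],
}
F = {3}

def sample_path():
    """Run one excursion from 0 until it enters F or returns to 0."""
    x, counts = 0, {}
    while True:
        r, cum = random.random(), 0.0
        for y, p in P[x]:
            cum += p
            if r < cum:
                break
        counts[(x, y)] = counts.get((x, y), 0) + 1
        x = y
        if x in F or x == 0:
            return x in F, counts

num = {}   # accumulates 1{failure} * N(x, y) over excursions
den = {}   # accumulates 1{failure} * sum_z N(x, z) over excursions
for _ in range(100_000):
    hit_F, counts = sample_path()
    if hit_F:
        for (x, y), n in counts.items():
            num[(x, y)] = num.get((x, y), 0) + n
            den[x] = den.get(x, 0) + n

p_est = {(x, y): num[(x, y)] / den[x] for (x, y) in num}
for (x, y), p in sorted(p_est.items()):
    print(f"p_ZV({x}, {y}) ~ {p:.3f}")
```

Here $\gamma(1) = 1/6$ and $\gamma(2) = 5/12$, so the exact zero-variance values are $p^{\mathrm{ZV}}(1,2) = 1$, $p^{\mathrm{ZV}}(2,3) = 0.72$ and $p^{\mathrm{ZV}}(2,1) = 0.28$, which the ratio estimates recover.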
Let $p^{\mathrm{CE}}(x,y)$ be realised values of these estimators, based on finite sample sizes. We then give a sufficient condition for the associated estimator $Y_n^{\mathrm{CE}}$ to show logarithmically bounded relative error. The condition is in terms of the Kullback-Leibler distance $D(P^{\mathrm{ZV}}, P^{\mathrm{CE}})$ between the underlying probability measures associated with these changes of measure. Moreover, we propose a probabilistic condition on the estimators $p^{\mathrm{EST}}(x,y)$ under which we obtain the bounded relative error property probabilistically: the condition gives us a probability $\alpha$ such that, with probability $\alpha$, the estimator $Y_n^{\mathrm{CE}}$ of $\gamma(0)$ has bounded relative error. The proof is based on checking the conditions of L'Ecuyer and Tuffin (2009).
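The Kullback-Leibler distance in the sufficient condition compares the zero-variance change of measure with its estimated counterpart. A minimal sketch, using illustrative numbers rather than anything from the talk, computes the divergence between the zero-variance and estimated transition distributions out of a single state:

```python
# Sketch of the Kullback-Leibler divergence D(p || q) between two transition
# distributions out of one state. The distributions below are illustrative
# assumptions, not values from the talk.
from math import log

def kl(p, q):
    """KL divergence between two distributions given as dicts state -> prob."""
    return sum(px * log(px / q[y]) for y, px in p.items() if px > 0)

p_zv = {"up": 0.72, "down": 0.28}   # hypothetical zero-variance row
p_ce = {"up": 0.70, "down": 0.30}   # hypothetical cross-entropy estimate

d = kl(p_zv, p_ce)
print(f"D(p_ZV || p_CE) = {d:.6f}")
```

When the estimated row matches the zero-variance row the divergence is zero, and small divergences correspond to near-optimal changes of measure, which is the intuition behind the condition for logarithmically bounded relative error.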
