Sensitivity of Markov chains for wireless protocols

By David Allwright and Paul Dellar

Abstract

Network communication protocols such as the IEEE 802.11 wireless protocol are currently best modelled as Markov chains. In these situations we have some protocol parameters $\alpha$ and a transition matrix $P(\alpha)$, from which we can compute the steady-state (equilibrium) distribution $z(\alpha)$ and hence the final desired quantities $q(\alpha)$, which might, for example, be the throughput of the protocol. Typically the chain will have thousands of states; a particular example of interest is the Bianchi chain defined later. Generally we want to optimise $q$, perhaps subject to some constraints that also depend on the Markov chain. To do this efficiently we need the gradient of $q$ with respect to $\alpha$, and therefore the gradients of $z$ and of other properties of the chain with respect to $\alpha$. The matrix formulas available for this involve the so-called fundamental matrix, but are there approximate gradients available which are faster and still sufficiently accurate? In some cases BT would like to do the whole calculation in computer algebra and obtain a series expansion of the equilibrium $z$ with respect to a parameter in $P$. In addition to the steady state $z$, the same questions arise for the mixing time and the mean hitting times.

Two qualitative features were brought to the Study Group's attention:

  * the transition matrix $P$ is large, but sparse;
  * the systems of linear equations to be solved are generally singular and need some additional normalisation condition, such as is provided by using the fundamental matrix.

We also note a third highly important property regarding applications of numerical linear algebra:

  * the transition matrix $P$ is asymmetric.

A realistic dimension for the matrix $P$ in the Bianchi model described below is 8064×8064, but on average there are only a few nonzero entries per column.
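As a concrete illustration of the steady-state and sensitivity calculations described above (our own sketch, on a toy 3-state chain rather than the Bianchi model; the transition probabilities are made up, but the formulas are dimension-independent): the singular system $z^T(I-P)=0$ is closed with the normalisation $\sum_i z_i = 1$, and the exact gradient of $z$ follows from the Kemeny–Snell fundamental matrix $Z = (I - P + \mathbf{1}z^T)^{-1}$ via $dz^T/d\alpha = z^T \, (dP/d\alpha) \, Z$.

```python
import numpy as np

# Toy 3-state, row-stochastic chain P(alpha).  The probabilities are
# illustrative assumptions; the real Bianchi chain has thousands of
# states, but the formulas below are identical.
def P(alpha):
    return np.array([[1.0 - alpha, alpha, 0.0],
                     [0.3, 0.2, 0.5],
                     [0.1, 0.4, 0.5]])

def dP(alpha):
    # Elementwise derivative dP/d(alpha) of the matrix above.
    return np.array([[-1.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0]])

def stationary(P):
    """Solve z^T P = z^T with sum(z) = 1.  The system (P^T - I) z = 0
    is singular, so one equation is replaced by the normalisation."""
    n = P.shape[0]
    A = P.T - np.eye(n)
    A[-1, :] = 1.0                       # normalisation row: sum(z) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

alpha = 0.25
z = stationary(P(alpha))

# Fundamental matrix Z = (I - P + 1 z^T)^{-1}; the exact sensitivity of
# the stationary distribution is dz^T/d(alpha) = z^T (dP/d(alpha)) Z.
n = len(z)
Z = np.linalg.inv(np.eye(n) - P(alpha) + np.outer(np.ones(n), z))
dz = z @ dP(alpha) @ Z

# Sanity check against a central finite difference.
h = 1e-6
dz_fd = (stationary(P(alpha + h)) - stationary(P(alpha - h))) / (2.0 * h)
print(np.allclose(dz, dz_fd, atol=1e-5))
```

Note that the components of $dz/d\alpha$ sum to zero, as they must since $z$ always sums to one; the finite-difference check is the kind of cheap approximate gradient the question above asks about.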
Merely storing an 8064×8064 matrix in dense form would require about 0.5 GB using 64-bit floating-point numbers, and computing its LU factorisation takes around 80 seconds on a modern microprocessor. It is thus highly desirable to employ specialised algorithms for sparse matrices. These algorithms are generally divided between those applicable only to symmetric matrices, the most prominent being the conjugate-gradient (CG) algorithm for solving linear equations, and those applicable to general matrices. A similar division is present in the literature on numerical eigenvalue problems.
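To give a feel for the sparse-matrix machinery, here is a minimal SciPy sketch (our own illustration; a synthetic random chain stands in for the Bianchi model, with a cycle added to guarantee irreducibility). The normalised system is asymmetric, which rules out plain CG, so it is solved both by a sparse direct factorisation and by GMRES with an incomplete-LU preconditioner.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Stand-in for the 8064x8064 Bianchi transition matrix: a random sparse
# chain with a few nonzeros per row (dimensions and entries are
# illustrative assumptions, not the real protocol model).
n = 2000
A = sp.random(n, n, density=3.0 / n, format="lil", random_state=0)
idx = np.arange(n)
# Add the cycle i -> i+1 (mod n) so every state communicates.
A = A + sp.csr_matrix((np.ones(n), (idx, (idx + 1) % n)), shape=(n, n))
row_sums = np.asarray(A.sum(axis=1)).ravel()
P = sp.csr_matrix(A.multiply(1.0 / row_sums[:, None]))  # row-stochastic

# (P^T - I) z = 0 is singular and asymmetric; replace one equation with
# the normalisation sum(z) = 1 to obtain a nonsingular sparse system.
G = (P.T - sp.identity(n)).tolil()
G[-1, :] = 1.0
G = G.tocsc()
b = np.zeros(n)
b[-1] = 1.0

# Sparse direct solve (SuperLU under the hood in SciPy).
z = spla.spsolve(G, b)

# Iterative alternative: ILU-preconditioned GMRES for the asymmetric
# system; info == 0 signals convergence.
prec = spla.LinearOperator((n, n), spla.spilu(G).solve)
z_it, info = spla.gmres(G, b, M=prec)
```

The sparse formats keep storage at a few nonzeros per row instead of $n^2$ entries, and the direct/iterative pair mirrors the UMFPACK and GMRES references cited below.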

Topics: Information and communication technology
Year: 2007
OAI identifier: oai:generic.eprints.org:110/core70

Citations

  1. A second look at general Markov chains.
  2. (2004). Algorithm 832: UMFPACK, an unsymmetric-pattern multifrontal method with a column pre-ordering strategy.
  3. (1998). ARPACK Users' Guide: Solution of Large-Scale Eigenvalue Problems with Implicitly Restarted Arnoldi Methods.
  4. (1991). Generalized inverses of linear transformations. Dover reprint.
  5. (1985). GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems.
  6. (1996). Iterative Methods for Sparse Linear Systems.
  7. (1999). LAPACK Users' Guide, 3rd edn.
  8. (1996). Matrix Computations, 3rd edn.
  9. (1997). Numerical Linear Algebra.
  10. (1996). On the Lambert W function.
  11. (2000). Performance analysis of the IEEE 802.11 distributed coordination function.
  12. (1991). Perturbation Methods.
  13. (1994). Sensitivity of the stationary distribution of a Markov chain.
  14. (1992). Sparse matrices in MATLAB: design and implementation.
  15. (1996). Unwinding the branches of the Lambert W function.
