    Sensitivity of Markov chains for wireless protocols

    Network communication protocols such as the IEEE 802.11 wireless protocol are currently best modelled as Markov chains. In these situations we have some protocol parameters α and a transition matrix P(α), from which we can compute the steady-state (equilibrium) distribution z(α) and hence the final desired quantities q(α), which might be, for example, the throughput of the protocol. Typically the chain will have thousands of states; a particular example of interest is the Bianchi chain defined later. Generally we want to optimise q, perhaps subject to some constraints that also depend on the Markov chain. To do this efficiently we need the gradient of q with respect to α, and therefore the gradient of z and other properties of the chain with respect to α. The matrix formulas available for this involve the so-called fundamental matrix, but are there approximate gradients available which are faster and still sufficiently accurate? In some cases BT would like to do the whole calculation in computer algebra and obtain a series expansion of the equilibrium z with respect to a parameter in P. In addition to the steady state z, the same questions arise for the mixing time and the mean hitting times. Two qualitative features that were brought to the Study Group's attention were:
    * the transition matrix P is large, but sparse;
    * the systems of linear equations to be solved are generally singular and need some additional normalisation condition, such as is provided by using the fundamental matrix.
    We also note a third highly important property regarding applications of numerical linear algebra:
    * the transition matrix P is asymmetric.
    A realistic dimension for the matrix P in the Bianchi model described below is 8064 × 8064, but on average there are only a few nonzero entries per column. Merely storing such a large matrix in dense form would require nearly 0.5 GBytes using 64-bit floating-point numbers, and computing its LU factorisation takes around 80 seconds on a modern microprocessor. It is thus highly desirable to employ specialised algorithms for sparse matrices. These algorithms are generally divided between those only applicable to symmetric matrices, the most prominent being the conjugate-gradient (CG) algorithm for solving linear equations, and those applicable to general matrices. A similar division is present in the literature on numerical eigenvalue problems.
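
    As a concrete illustration of the normalisation issue above, the following minimal sketch (Python with SciPy; the three-state toy chain and the function name steady_state are our own, not the Study Group's model) solves the singular steady-state system z P = z for a sparse, asymmetric transition matrix by overwriting one equation with the condition that the entries of z sum to one.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        def steady_state(P):
            # P: sparse row-stochastic transition matrix (n x n).
            # z P = z  <=>  (P^T - I) z = 0; the system is singular, so we
            # replace the last equation with the normalisation sum(z) = 1.
            n = P.shape[0]
            A = (P.T - sp.identity(n)).tolil()
            A[-1, :] = 1.0
            b = np.zeros(n)
            b[-1] = 1.0
            return spla.spsolve(A.tocsr(), b)

        # Toy 3-state chain (hypothetical numbers, not the 8064-state Bianchi chain).
        P = sp.csr_matrix([[0.9, 0.1, 0.0],
                           [0.2, 0.7, 0.1],
                           [0.0, 0.3, 0.7]])
        z = steady_state(P)
        print(z, z @ P.toarray())  # z should be left unchanged by one step of P

    Note that spsolve uses a sparse LU factorisation, which accepts asymmetric matrices; CG would not apply here precisely because P is asymmetric.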

    Graham Higman's PORC theorem

    Graham Higman published two important papers in 1960. In the first of these papers he proved that for any positive integer n the number of groups of order p^n is bounded by a polynomial in p, and he formulated his famous PORC conjecture about the form of the function f(p^n) giving the number of groups of order p^n. In the second of these two papers he proved that the function giving the number of p-class-two groups of order p^n is PORC. He established this result as a corollary to a very general result about vector spaces acted on by the general linear group. This theorem takes over a page to state, and is so general that it is hard to see what is going on. Higman's proof of this general theorem contains several new ideas and is quite hard to follow. However, in the last few years several authors have developed and implemented algorithms for computing Higman's PORC formulae in special cases of his general theorem. These algorithms give perspective on what the key points in Higman's proof are, and also simplify parts of the proof. In this note I give a proof of Higman's general theorem written in the light of these recent developments.
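
    To make "PORC" concrete: a function f(p) is PORC (Polynomial On Residue Classes) if there is a modulus N and polynomials f_0, ..., f_{N-1} such that f(p) = f_i(p) whenever p ≡ i (mod N). A standard illustration, quoted here for orientation rather than taken from Higman's papers, is the number of groups of order p^5 for primes p ≥ 5:

        f(p^5) = 2p + 61 + 2 gcd(p - 1, 3) + gcd(p - 1, 4)

    Since the gcd terms depend only on the residue of p modulo 12, f(p^5) agrees with a fixed polynomial on each residue class mod 12.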

    Perturbation, extraction and refinement of invariant pairs for matrix polynomials

    Generalizing the notion of an eigenvector, invariant subspaces are frequently used in the context of linear eigenvalue problems, leading to conceptually elegant and numerically stable formulations in applications that require the computation of several eigenvalues and/or eigenvectors. Similar benefits can be expected for polynomial eigenvalue problems, for which the concept of an invariant subspace needs to be replaced by the concept of an invariant pair. Little has been known so far about numerical aspects of such invariant pairs. The aim of this paper is to fill this gap. The behavior of invariant pairs under perturbations of the matrix polynomial is studied, and a first-order perturbation expansion is given. From a computational point of view, we investigate how best to extract invariant pairs from a linearization of the matrix polynomial. Moreover, we describe efficient refinement procedures directly based on the polynomial formulation. Numerical experiments with matrix polynomials from a number of applications demonstrate the effectiveness of our extraction and refinement procedures.
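
    For orientation, here is a minimal sketch (Python with NumPy/SciPy; the quadratic example and the extraction-by-sorting heuristic are our own, not the paper's algorithms) of what an invariant pair is and how one can be read off from a linearization: for P(λ) = A0 + λ A1 + λ² A2, a pair (X, S) is invariant when A0 X + A1 X S + A2 X S² = 0.

        import numpy as np
        from scipy.linalg import eig

        rng = np.random.default_rng(0)
        n, k = 4, 3  # polynomial dimension and invariant-pair size (hypothetical)
        A0, A1, A2 = (rng.standard_normal((n, n)) for _ in range(3))

        # First companion linearization of P(lam) = A0 + lam*A1 + lam^2*A2:
        # generalized eigenproblem A v = lam * B v with v = [x; lam*x].
        I, Z = np.eye(n), np.zeros((n, n))
        A = np.block([[Z, I], [-A0, -A1]])
        B = np.block([[I, Z], [Z, A2]])
        lam, V = eig(A, B)

        # Extract a candidate invariant pair from k eigenpairs:
        # X = top block of the eigenvectors, S = diagonal matrix of eigenvalues.
        idx = np.argsort(np.abs(lam))[:k]  # e.g. the k eigenvalues smallest in modulus
        X, S = V[:n, idx], np.diag(lam[idx])

        # (X, S) is an invariant pair iff A0 X + A1 X S + A2 X S^2 = 0.
        residual = A0 @ X + A1 @ X @ S + A2 @ X @ S @ S
        print(np.linalg.norm(residual))  # tiny for exact, well-separated eigenpairs

    Pairs extracted this way from a linearization are exactly the kind of object that the paper's refinement procedures then polish against the original polynomial formulation.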