
    Hypergeometric Functions of Matrix Arguments and Linear Statistics of Multi-Spiked Hermitian Matrix Models

    This paper derives central limit theorems (CLTs) for general linear spectral statistics (LSS) of three important multi-spiked Hermitian random matrix ensembles. The first is the most common spiked scenario, proposed by Johnstone, which is a central Wishart ensemble with a fixed-rank perturbation of the identity matrix; the second is a non-central Wishart ensemble with a fixed-rank noncentrality parameter; and the third is a similarly defined non-central $F$ ensemble. These CLT results generalize our recent work to account for multiple spikes, which is the most common scenario met in practice. The generalization is non-trivial, as it now requires dealing with hypergeometric functions of matrix arguments. To facilitate our analysis, for a broad class of such functions, we first generalize a recent result of Onatski to present new contour integral representations, which are particularly suitable for computing large-dimensional properties of spiked matrix ensembles. Armed with such representations, our CLT formulas are derived for each of the three spiked models of interest by employing the Coulomb fluid method from random matrix theory along with saddlepoint techniques. We find that for each matrix model, and for general LSS, the individual spikes contribute additively to yield an $O(1)$ correction term to the asymptotic mean of the linear statistic, which we specify explicitly, whilst having no effect on the leading-order terms of the mean or variance.
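
    As a purely illustrative companion to this abstract (and in no way the paper's Coulomb fluid or saddlepoint analysis), the sketch below samples one realisation of the Johnstone-type spiked model, a complex Wishart matrix whose population covariance is the identity plus a fixed-rank perturbation, and evaluates a simple linear spectral statistic on its eigenvalues; the dimensions, spike strengths, and test function are arbitrary choices.

    # Purely illustrative sketch (not the paper's derivation): sample one
    # realisation of a Johnstone-type multi-spiked complex Wishart matrix,
    # Sigma = I + sum_k h_k e_k e_k^T, and evaluate a linear spectral statistic.
    # The dimensions, spike strengths and test function are arbitrary choices.
    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 400, 200                       # samples and dimension, p/n = 0.5
    spikes = [4.0, 2.5]                   # fixed-rank perturbation strengths h_k

    Sigma = np.eye(p)                     # population covariance: I + low-rank bump
    for k, h in enumerate(spikes):
        Sigma[k, k] += h

    # Complex Gaussian data X with covariance Sigma; sample covariance S = X X^H / n.
    A = np.linalg.cholesky(Sigma)
    Z = (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))) / np.sqrt(2)
    X = A @ Z
    S = (X @ X.conj().T) / n

    # Linear spectral statistic sum_i f(lambda_i) with f(x) = log(x).
    lss = np.sum(np.log(np.linalg.eigvalsh(S)))
    print(f"log-determinant LSS for one realisation: {lss:.4f}")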

    Density of Spherically-Embedded Stiefel and Grassmann Codes

    The density of a code is the fraction of the coding space covered by packing balls centered around the codewords. This paper investigates the density of codes in the complex Stiefel and Grassmann manifolds equipped with the chordal distance. The choice of distance enables the treatment of the manifolds as subspaces of Euclidean hyperspheres. In this geometry, the densest packings are not necessarily equivalent to maximum-minimum-distance codes. Computing a code's density follows from computing: i) the normalized volume of a metric ball, and ii) the kissing radius, the radius of the largest balls one can pack around the codewords without overlapping. First, the normalized volume of a metric ball is evaluated by asymptotic approximations. The volume of a small ball can be well-approximated by the volume of a locally-equivalent tangential ball. In order to properly normalize this approximation, the precise volumes of the manifolds induced by their spherical embedding are computed. For larger balls, a hyperspherical cap approximation is used, which is justified by a volume comparison theorem showing that the normalized volume of a ball in the Stiefel or Grassmann manifold is asymptotically equal to the normalized volume of a ball in its embedding sphere as the dimension grows to infinity. Then, bounds on the kissing radius are derived alongside corresponding bounds on the density. Unlike for spherical codes or codes in flat spaces, the kissing radius of a Grassmann or Stiefel code cannot be exactly determined from its minimum distance. It is nonetheless possible to derive bounds on the density as functions of the minimum distance. Stiefel and Grassmann codes have larger density than their image spherical codes as the dimensions tend to infinity. Finally, the bounds on density lead to refinements of the standard Hamming bounds for Stiefel and Grassmann codes.
    Comment: Two-column version (24 pages, 6 figures, 4 tables). To appear in IEEE Transactions on Information Theory.
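
    The distance underlying this paper can be illustrated in a few lines of code (this sketches only the chordal distance, not the volume, kissing-radius, or density computations of the paper). In one common convention, the chordal distance between subspaces with orthonormal bases $Y_1$ and $Y_2$ is $\sqrt{k - \|Y_1^H Y_2\|_F^2}$, i.e. the root sum of squared sines of the principal angles; the small random codebook below and its sizes are illustrative assumptions.

    # Hedged sketch of the chordal distance on the complex Grassmann manifold and
    # the minimum distance of a small random code.  The random-QR construction and
    # the sizes (n, k, N) are illustrative assumptions, not taken from the paper.
    from itertools import combinations
    import numpy as np

    rng = np.random.default_rng(1)
    n, k, N = 6, 2, 8                     # ambient dim, subspace dim, codebook size

    def random_grassmann_point(n, k, rng):
        """Orthonormal basis of a random k-dimensional subspace of C^n."""
        G = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
        Q, _ = np.linalg.qr(G)
        return Q

    def chordal_distance(Y1, Y2):
        """d_c(Y1, Y2) = sqrt(k - ||Y1^H Y2||_F^2) = sqrt(sum_i sin^2(theta_i))."""
        k = Y1.shape[1]
        return np.sqrt(max(k - np.linalg.norm(Y1.conj().T @ Y2, "fro") ** 2, 0.0))

    code = [random_grassmann_point(n, k, rng) for _ in range(N)]
    dmin = min(chordal_distance(Y1, Y2) for Y1, Y2 in combinations(code, 2))
    print(f"minimum chordal distance of the random code: {dmin:.4f}")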

    Difference system for Selberg correlation integrals

    The Selberg correlation integrals are averages of the products $\prod_{s=1}^m \prod_{l=1}^n (x_s - z_l)^{\mu_s}$ with respect to the Selberg density. Our interest is in the case $m=1$, $\mu_1 = \mu$, when this corresponds to the $\mu$-th moment of the corresponding characteristic polynomial. We give the explicit form of an $(n+1) \times (n+1)$ matrix linear difference system in the variable $\mu$ which determines the average, and we give the Gauss decomposition of the corresponding $(n+1) \times (n+1)$ matrix. For $\mu$ a positive integer the difference system can be used to efficiently compute the power series defined by this average.
    Comment: 21 pages.
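
    As a rough, purely illustrative sanity check (not the paper's difference-system approach), the $\mu$-th moment can be approximated by direct quadrature in a tiny case. Taking $n=2$ and, in one common parameterisation, the Selberg density proportional to $\prod_{l=1}^{2} z_l^{\lambda_1}(1-z_l)^{\lambda_2}\,|z_1-z_2|^{2\lambda}$ on $[0,1]^2$, the sketch below estimates $\langle \prod_{l=1}^{2}(x-z_l)^{\mu}\rangle$ with a midpoint rule; all parameter values are arbitrary.

    # Illustrative check only (not the paper's difference-system method): estimate
    # the mu-th moment E[ prod_{l=1}^{2} (x - z_l)^mu ] for n = 2 under a Selberg
    # density proportional to z1^lam1 (1-z1)^lam2 z2^lam1 (1-z2)^lam2 |z1-z2|^(2*lam)
    # on [0,1]^2, using a midpoint rule.  All parameter values are arbitrary.
    import numpy as np

    lam1, lam2, lam = 1.0, 2.0, 1.0       # Selberg density exponents (illustrative)
    mu, x = 2, 1.5                        # positive-integer moment, evaluation point

    m = 400                               # midpoint grid size per coordinate
    t = (np.arange(m) + 0.5) / m
    Z1, Z2 = np.meshgrid(t, t, indexing="ij")

    weight = (Z1**lam1 * (1 - Z1)**lam2 * Z2**lam1 * (1 - Z2)**lam2
              * np.abs(Z1 - Z2) ** (2 * lam))
    integrand = (x - Z1)**mu * (x - Z2)**mu * weight

    # Normalised average: ratio of two midpoint sums (the cell area cancels).
    moment = integrand.sum() / weight.sum()
    print(f"E[(x - z1)^mu (x - z2)^mu] approx {moment:.6f}")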