
    Subadditivity Beyond Trees and the Chi-Squared Mutual Information

    In 2000, Evans et al. [Eva+00] proved the subadditivity of the mutual information in the broadcasting on tree model with binary vertex labels and symmetric channels. They raised the question of whether such subadditivity extends to loopy graphs in some appropriate way. We recently proposed such an extension that applies to general graphs and binary vertex labels [AB18], using synchronization models and relying on percolation bounds. This extension, however, requires the edge channels to be symmetric on the product of the adjacent spins. A more general version of such a percolation bound that applies to asymmetric channels is also obtained in [PW18], relying on the SDPI, but the subadditivity property does not follow with such generalizations. In this note, we provide a new result showing that the subadditivity property still holds for arbitrary (asymmetric) channels acting on the product of spins, when the graphs are restricted to be series-parallel. The proof relies on the use of the chi-squared mutual information rather than the classical mutual information, and various properties of the former are discussed. We also present a generalization of the broadcasting on tree model (the synchronization on tree) where the bound from [PW18] relying on the SDPI can be significantly looser than the bound resulting from the chi-squared subadditivity property presented here.
    Comment: 16 pages
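    For readers unfamiliar with the quantity named in this abstract, the chi-squared mutual information is commonly defined as the chi-squared divergence between the joint law and the product of the marginals (a standard textbook definition, not quoted from the paper itself):

```latex
I_{\chi^2}(X;Y)
  \;=\; \chi^2\!\left(P_{XY} \,\middle\|\, P_X \otimes P_Y\right)
  \;=\; \sum_{x,y} \frac{\bigl(P_{XY}(x,y) - P_X(x)\,P_Y(y)\bigr)^2}{P_X(x)\,P_Y(y)}.
```

    Like the classical mutual information, it vanishes exactly when $X$ and $Y$ are independent, but it satisfies different chain-rule and tensorization properties, which is what the note exploits.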

    On the computational tractability of statistical estimation on amenable graphs

    We consider the problem of estimating a vector of discrete variables $(\theta_1,\cdots,\theta_n)$, based on noisy observations $Y_{uv}$ of the pairs $(\theta_u,\theta_v)$ on the edges of a graph $G=([n],E)$. This setting comprises a broad family of statistical estimation problems, including group synchronization on graphs, community detection, and low-rank matrix estimation. A large body of theoretical work has established sharp thresholds for weak and exact recovery, and sharp characterizations of the optimal reconstruction accuracy in such models, focusing however on the special case of Erdős–Rényi-type random graphs. The single most important finding of this line of work is the ubiquity of an information-computation gap. Namely, for many models of interest, a large gap is found between the optimal accuracy achievable by any statistical method, and the optimal accuracy achieved by known polynomial-time algorithms. Moreover, this gap is generally believed to be robust to small amounts of additional side information revealed about the $\theta_i$'s. How does the structure of the graph $G$ affect this picture? Is the information-computation gap a general phenomenon or does it only apply to specific families of graphs? We prove that the picture is dramatically different for graph sequences converging to amenable graphs (including, for instance, $d$-dimensional grids). We consider a model in which an arbitrarily small fraction of the vertex labels is revealed, and show that a linear-time local algorithm can achieve reconstruction accuracy that is arbitrarily close to the information-theoretic optimum. We contrast this to the case of random graphs. Indeed, focusing on group synchronization on random regular graphs, we prove that the information-computation gap still persists even when a small amount of side information is revealed.
    Comment: Stronger results, improved presentation. The transitivity assumption on the limiting graph is removed. Instead, we introduce and use the notion of a `tame' random rooted graph. 40 pages
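    As an illustrative sketch (not taken from the paper), the observation model described in this abstract can be simulated for $\mathbb{Z}_2$ synchronization on a two-dimensional grid: each label is a spin in $\{+1,-1\}$, each edge reports the product of its endpoint spins flipped with some probability, and a small fraction of labels is revealed as side information. The function and parameter names (`delta`, `reveal_frac`) are my own choices, not the paper's notation.

```python
import random

def grid_edges(m):
    """Edges of an m x m grid graph; vertices are indices 0..m*m-1."""
    edges = []
    for r in range(m):
        for c in range(m):
            v = r * m + c
            if c + 1 < m:
                edges.append((v, v + 1))   # horizontal edge
            if r + 1 < m:
                edges.append((v, v + m))   # vertical edge
    return edges

def synchronization_instance(m, delta, reveal_frac, seed=0):
    """Z2 synchronization on an m x m grid: hidden spins theta in {+1,-1};
    each edge observation is Y_uv = theta_u * theta_v, flipped
    independently with probability delta. A fraction reveal_frac of the
    labels is additionally revealed as side information."""
    rng = random.Random(seed)
    n = m * m
    theta = [rng.choice([+1, -1]) for _ in range(n)]
    Y = {}
    for (u, v) in grid_edges(m):
        flip = -1 if rng.random() < delta else +1
        Y[(u, v)] = theta[u] * theta[v] * flip
    revealed = {v: theta[v] for v in range(n) if rng.random() < reveal_frac}
    return theta, Y, revealed
```

    With `delta = 0` the edge observations determine every spin up to a global sign flip, which is exactly the ambiguity that the revealed labels break; the paper's local algorithm operates on instances of this general shape.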

    Statistical Problems with Planted Structures: Information-Theoretical and Computational Limits

    Over the past few years, insights from computer science, statistical physics, and information theory have revealed phase transitions in a wide array of high-dimensional statistical problems at two distinct thresholds: One is the information-theoretical (IT) threshold, below which the observations are too noisy for inference of the ground-truth structure to be possible regardless of the computational cost; the other is the computational threshold, above which inference can be performed efficiently, i.e., in time that is polynomial in the input size. In the intermediate regime, inference is information-theoretically possible, but conjectured to be computationally hard. This article provides a survey of the common techniques for determining the sharp IT and computational limits, using community detection and submatrix detection as illustrating examples. For IT limits, we discuss tools including the first and second moment methods for analyzing the maximum likelihood estimator, information-theoretic methods for proving impossibility results using mutual information and rate-distortion theory, and methods originating from statistical physics such as the interpolation method. To investigate computational limits, we describe a common recipe to construct a randomized polynomial-time reduction scheme that approximately maps instances of the planted clique problem to the problem of interest in total variation distance.
    Comment: Chapter in "Information-Theoretic Methods in Data Science". Edited by Yonina Eldar and Miguel Rodrigues, Cambridge University Press, forthcoming
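    The planted clique problem used as the reduction source in this survey is easy to state concretely: sample an Erdős–Rényi graph $G(n, 1/2)$ and force a uniformly chosen set of $k$ vertices to be fully connected. A minimal instance generator (my own sketch, not the survey's reduction itself) looks like this:

```python
import random

def planted_clique(n, k, seed=0):
    """Sample G(n, 1/2) and plant a clique on k uniformly chosen vertices.
    Returns (set of edges (u, v) with u < v, set of planted vertices)."""
    rng = random.Random(seed)
    clique = set(rng.sample(range(n), k))
    edges = set()
    for u in range(n):
        for v in range(u + 1, n):
            # clique pairs are always connected; others with probability 1/2
            if (u in clique and v in clique) or rng.random() < 0.5:
                edges.add((u, v))
    return edges, clique
```

    The computational-hardness conjecture referenced by the survey is that no polynomial-time algorithm can recover the planted set when $k = o(\sqrt{n})$, even though it is information-theoretically identifiable for $k \gtrsim 2\log_2 n$; reductions map this presumed hardness onto other planted problems.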