
    Bounding the norm of a log-concave vector via thin-shell estimates

    Chaining techniques show that if X is an isotropic log-concave random vector in R^n and Gamma is a standard Gaussian vector, then E |X| < C n^{1/4} E |Gamma| for any norm |·|, where C is a universal constant. Using a completely different argument, we establish a similar inequality relying on the thin-shell constant sigma_n = sup { (Var |X|)^{1/2} : X isotropic and log-concave on R^n }. In particular, we show that if the thin-shell conjecture sigma_n = O(1) holds, then n^{1/4} can be replaced by log(n) in the inequality. As a consequence, we obtain bounds for the mean-width, the dual mean-width and the isotropic constant of an isotropic convex body. In particular, we give an alternative proof of the fact that a positive answer to the thin-shell conjecture implies a positive answer to the slicing problem, up to a logarithmic factor. Comment: preliminary version, 13 pages.
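
    Restated in LaTeX for readability, the two quantities the abstract combines are the thin-shell constant and the norm-comparison inequality; this is a transcription of the formulas above, not an addition to them:

    ```latex
    \[
      \sigma_n \;=\; \sup\Big\{ \big(\operatorname{Var}|X|_2\big)^{1/2} \;:\; X \text{ isotropic and log-concave on } \mathbb{R}^n \Big\},
    \]
    \[
      \mathbb{E}\,\|X\| \;\le\; C\, n^{1/4}\, \mathbb{E}\,\|\Gamma\| \qquad \text{for every norm } \|\cdot\| \text{ on } \mathbb{R}^n.
    \]
    ```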

    Local Algorithms for Block Models with Side Information

    There has been recent interest in understanding the power of local algorithms for optimization and inference problems on sparse graphs. Gamarnik and Sudan (2014) showed that local algorithms are weaker than global algorithms for finding large independent sets in sparse random regular graphs. Montanari (2015) showed that local algorithms are suboptimal for finding a community with high connectivity in sparse Erdős-Rényi random graphs. For the symmetric planted partition problem (also called community detection for the block model) on sparse graphs, a simple observation is that local algorithms cannot achieve non-trivial performance. In this work we consider the effect of side information on local algorithms for community detection under the binary symmetric stochastic block model. In the block model with side information, each of the n vertices is labeled + or - independently and uniformly at random; each pair of vertices is connected independently with probability a/n if both have the same label and b/n otherwise. The goal is to estimate the underlying vertex labeling given 1) the graph structure and 2) side information in the form of a vertex labeling positively correlated with the true one. Assuming that the ratio between in- and out-degrees a/b is Θ(1) and the average degree satisfies (a+b)/2 = n^{o(1)}, we characterize three different regimes under which a local algorithm, namely belief propagation run on the local neighborhoods, maximizes the expected fraction of correctly labeled vertices. Thus, in contrast to the case of symmetric block models without side information, we show that local algorithms can achieve optimal performance for the block model with side information. Comment: Due to the limitation "The abstract field cannot be longer than 1,920 characters", the abstract here is shorter than that in the PDF file.
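
    As a rough, self-contained illustration of the local algorithm the abstract refers to, here is a belief-propagation sketch on a sparse two-community SBM with noisy vertex labels. The sampler, the noise level alpha, the clamping, and the choice to ignore non-edges (standard for sparse graphs) are our own assumptions, not the paper's implementation:

    ```python
    import math
    import random
    from collections import defaultdict

    def sbm_with_side_info(n, a, b, alpha, rng):
        """Sample +/-1 labels, a sparse SBM graph, and noisy revealed labels."""
        labels = [rng.choice((1, -1)) for _ in range(n)]
        adj = defaultdict(list)
        for u in range(n):
            for v in range(u + 1, n):
                p = (a if labels[u] == labels[v] else b) / n
                if rng.random() < p:
                    adj[u].append(v)
                    adj[v].append(u)
        noisy = [s if rng.random() > alpha else -s for s in labels]
        return labels, adj, noisy

    def belief_propagation(adj, noisy, a, b, alpha, n_iter=10):
        """Log-ratio BP; n_iter plays the role of the local-neighborhood depth."""
        h = {u: s * math.log((1 - alpha) / alpha) for u, s in enumerate(noisy)}

        def edge_term(m):
            # Contribution of a neighbor with incoming message m (clamped).
            p = 1.0 / (1.0 + math.exp(-max(min(m, 30.0), -30.0)))
            return math.log((a * p + b * (1 - p)) / (b * p + a * (1 - p)))

        msg = {(u, v): 0.0 for u in adj for v in adj[u]}
        for _ in range(n_iter):
            msg = {(u, v): h[u] + sum(edge_term(msg[(w, u)])
                                      for w in adj[u] if w != v)
                   for (u, v) in msg}
        beliefs = [h[u] + sum(edge_term(msg[(w, u)]) for w in adj[u])
                   for u in range(len(noisy))]
        return [1 if bu >= 0 else -1 for bu in beliefs]

    rng = random.Random(0)
    n, a, b, alpha = 400, 5.0, 1.0, 0.3
    labels, adj, noisy = sbm_with_side_info(n, a, b, alpha, rng)
    est = belief_propagation(adj, noisy, a, b, alpha)
    print("fraction correct:", sum(e == t for e, t in zip(est, labels)) / n)
    ```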

    On almost randomizing channels with a short Kraus decomposition

    For large d, we study quantum channels on C^d obtained by selecting randomly N independent Kraus operators according to a probability measure mu on the unitary group U(d). When mu is the Haar measure, we show that for N > d/epsilon^2, such a channel is epsilon-randomizing with high probability, which means that it maps every state within distance epsilon/d (in operator norm) of the maximally mixed state. This slightly improves on a result by Hayden, Leung, Shor and Winter by optimizing their discretization argument. Moreover, for general mu, we obtain an epsilon-randomizing channel provided N > d (log d)^6/epsilon^2. For d = 2^k (k qubits), this includes Kraus operators obtained by tensoring k random Pauli matrices. The proof uses recent results on empirical processes in Banach spaces. Comment: We added some background on the geometry of Banach spaces.
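
    A small numerical companion to the statement above (our own illustration, using only what the abstract states): draw N Haar-random Kraus unitaries, apply the channel rho -> (1/N) sum_i U_i rho U_i^† to a random pure state, and compare the operator-norm distance from I/d with eps/d. The constant multiple of d/eps^2 is our assumption, since the abstract does not give the universal constant:

    ```python
    import numpy as np

    def haar_unitary(d, rng):
        """Haar-distributed unitary: QR of a complex Gaussian matrix, phases fixed."""
        z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
        q, r = np.linalg.qr(z)
        return q * (np.diag(r) / np.abs(np.diag(r)))

    rng = np.random.default_rng(0)
    d, eps = 32, 0.5
    # The abstract's Haar regime is N > d/eps^2 up to a universal constant
    # (not specified there); we take a modest multiple.
    N = 20 * int(d / eps**2)
    kraus = [haar_unitary(d, rng) for _ in range(N)]

    # Apply Phi(rho) = (1/N) sum_i U_i rho U_i^dagger to one random pure state
    # (the theorem itself is uniform over all input states).
    psi = rng.standard_normal(d) + 1j * rng.standard_normal(d)
    psi /= np.linalg.norm(psi)
    rho = np.outer(psi, psi.conj())
    out = sum(U @ rho @ U.conj().T for U in kraus) / N

    dist = np.linalg.norm(out - np.eye(d) / d, ord=2)   # operator norm
    print(f"||Phi(rho) - I/d||_op = {dist:.2e}   (eps/d = {eps / d:.2e})")
    ```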

    Localizing the Latent Structure Canonical Uncertainty: Entropy Profiles for Hidden Markov Models

    This report addresses state inference for hidden Markov models. These models rely on unobserved states, which often have a meaningful interpretation, so diagnostic tools are needed to quantify state uncertainty. The entropy of the state sequence that explains an observed sequence for a given hidden Markov chain model can be considered the canonical measure of state sequence uncertainty. This canonical measure is not reflected by the classic multivariate state profiles computed by the smoothing algorithm, which summarize the possible state sequences. Here, we introduce a new type of profile with the following properties: (i) the profiles of conditional entropies decompose the canonical measure of state sequence uncertainty along the sequence, which makes it possible to localize this uncertainty; (ii) the profiles are univariate and thus remain easily interpretable on tree structures. We show how to extend the smoothing algorithms for hidden Markov chain and tree models to compute these entropy profiles efficiently. Comment: Submitted to the Journal of Machine Learning Research; No RR-7896 (2012).
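
    To make the decomposition concrete for a hidden Markov chain: by the chain rule and the Markov property of the posterior state process, H(S_{1:T} | X) = H(S_1 | X) + sum_t H(S_t | S_{t-1}, X), and every term is available from forward-backward output. The sketch below is our reconstruction from the abstract (toy parameters, not the paper's code), with a brute-force check of the identity:

    ```python
    import numpy as np
    from itertools import product

    def forward_backward(pi, A, B, obs):
        """Scaled forward-backward; returns smoothed gamma[t,i] and pairwise xi."""
        T, K = len(obs), len(pi)
        alpha = np.zeros((T, K))
        beta = np.ones((T, K))
        alpha[0] = pi * B[:, obs[0]]
        alpha[0] /= alpha[0].sum()
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            alpha[t] /= alpha[t].sum()
        for t in range(T - 2, -1, -1):
            beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
            beta[t] /= beta[t].sum()
        gamma = alpha * beta
        gamma /= gamma.sum(axis=1, keepdims=True)
        xi = np.zeros((T - 1, K, K))
        for t in range(1, T):
            m = alpha[t - 1][:, None] * A * (B[:, obs[t]] * beta[t])[None, :]
            xi[t - 1] = m / m.sum()       # P(S_t = i, S_{t+1} = j | X)
        return gamma, xi

    def entropy_profile(gamma, xi):
        """e[0] = H(S_1 | X); e[t] = H(S_{t+1} | S_t, X); sums to H(S_{1:T} | X)."""
        p0 = gamma[0][gamma[0] > 0]
        prof = [-(p0 * np.log(p0)).sum()]
        for t in range(xi.shape[0]):
            cond = xi[t] / gamma[t][:, None]      # P(S_{t+1} | S_t, X)
            mask = xi[t] > 0
            prof.append(-(xi[t][mask] * np.log(cond[mask])).sum())
        return np.array(prof)

    # Toy chain (made-up parameters) plus a brute-force check of the identity.
    pi = np.array([0.6, 0.4])
    A = np.array([[0.7, 0.3], [0.2, 0.8]])
    B = np.array([[0.9, 0.1], [0.3, 0.7]])
    obs = [0, 1, 1, 0, 1]
    gamma, xi = forward_backward(pi, A, B, obs)
    prof = entropy_profile(gamma, xi)

    post = []
    for s in product(range(2), repeat=len(obs)):  # enumerate state sequences
        p = pi[s[0]] * B[s[0], obs[0]]
        for t in range(1, len(obs)):
            p *= A[s[t - 1], s[t]] * B[s[t], obs[t]]
        post.append(p)
    post = np.array(post)
    post /= post.sum()
    print("entropy profile:", np.round(prof, 4))
    print("sum:", prof.sum(), " exact H(S|X):", -(post * np.log(post)).sum())
    ```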

    Optimal Concentration of Information Content For Log-Concave Densities

    An elementary proof is provided of sharp bounds for the varentropy of random vectors with log-concave densities, as well as for deviations of the information content from its mean. These bounds significantly improve on those obtained by Bobkov and Madiman (Ann. Probab., 39(4):1528-1543, 2011). Comment: 15 pages. Changes in v2: Remark 2.5 (due to C. Saroglou) added, with more general sufficient conditions for equality in Theorem 2.3; also some minor corrections and added references.
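
    For reference, the objects in question can be written as follows; the dimensional form of the sharp bound is our paraphrase of this line of work and should be treated as an assumption rather than a quotation:

    ```latex
    % Information content of X with density f on R^n, and its varentropy:
    \[
      \tilde h(X) \;=\; -\log f(X),
      \qquad
      V(X) \;=\; \operatorname{Var}\big(\tilde h(X)\big).
    \]
    % Sharp dimensional bound for log-concave f (our paraphrase):
    \[
      V(X) \;\le\; n,
    \]
    % with corresponding exponential deviation bounds for
    % |\tilde h(X) - \mathbb{E}\,\tilde h(X)|.
    ```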

    Remarks on the KLS conjecture and Hardy-type inequalities

    We generalize the classical Hardy and Faber-Krahn inequalities to arbitrary functions on a convex body Ω ⊂ R^n, not necessarily vanishing on the boundary ∂Ω. This reduces the study of the Neumann Poincaré constant on Ω to that of the cone and Lebesgue measures on ∂Ω; these may be bounded via the curvature of ∂Ω. A second reduction is obtained to the class of harmonic functions on Ω. We also study the relation between the Poincaré constant of a log-concave measure μ and its associated K. Ball body K_μ. In particular, we obtain a simple proof of a conjecture of Kannan-Lovász-Simonovits for unit balls of ℓ^n_p, originally due to Sodin and Latała-Wojtaszczyk. Comment: 18 pages. Numbering of propositions, theorems, etc. as it appeared in the final form in the GAFA seminar notes.
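
    As background for the abstract's terminology, here are the standard statements of the Poincaré constant and of the Kannan-Lovász-Simonovits conjecture (textbook definitions, not taken from the paper itself):

    ```latex
    % Poincare (spectral-gap) constant of mu on R^n: the least C_P(mu) with
    \[
      \operatorname{Var}_\mu(f) \;\le\; C_P(\mu) \int_{\mathbb{R}^n} |\nabla f|^2 \, d\mu
      \qquad \text{for all smooth } f.
    \]
    % KLS conjecture: a single universal constant C works for the whole class
    \[
      \sup \big\{ C_P(\mu) \;:\; \mu \text{ isotropic log-concave on } \mathbb{R}^n \big\} \;\le\; C.
    \]
    ```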

    Estimation in high dimensions: a geometric perspective

    This tutorial provides an exposition of a flexible geometric framework for high-dimensional estimation problems with constraints. The tutorial develops geometric intuition about high-dimensional sets, justifies it with results from asymptotic convex geometry, and demonstrates connections between geometric results and estimation problems. The theory is illustrated with applications to sparse recovery, matrix completion, quantization, linear and logistic regression, and generalized linear models. Comment: 56 pages, 9 figures. Multiple minor changes.
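
    One concrete instance of the constrained-estimation setup the tutorial covers is sparse recovery from linear measurements. The sketch below uses iterative hard thresholding, i.e. projected gradient descent where the projection keeps the s largest entries; the algorithm choice and problem sizes are our own illustration, not the tutorial's code:

    ```python
    import numpy as np

    def hard_threshold(x, s):
        """Keep the s largest-magnitude entries of x, zero out the rest."""
        out = np.zeros_like(x)
        idx = np.argsort(np.abs(x))[-s:]
        out[idx] = x[idx]
        return out

    def iht(A, y, s, n_iter=200):
        """Iterative hard thresholding: x <- H_s(x + A^T (y - A x))."""
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = hard_threshold(x + A.T @ (y - A @ x), s)
        return x

    rng = np.random.default_rng(1)
    n, m, s = 400, 200, 5                          # ambient dim, measurements, sparsity
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # roughly unit-norm columns
    x_true = np.zeros(n)
    x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    y = A @ x_true                                 # noiseless linear measurements

    x_hat = iht(A, y, s)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
    ```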