1,063 research outputs found

    The Sampling Rate-Distortion Tradeoff for Sparsity Pattern Recovery in Compressed Sensing

    Recovery of the sparsity pattern (or support) of an unknown sparse vector from a limited number of noisy linear measurements is an important problem in compressed sensing. In the high-dimensional setting, it is known that recovery with a vanishing fraction of errors is impossible if the measurement rate and the per-sample signal-to-noise ratio (SNR) are finite constants, independent of the vector length. In this paper, it is shown that recovery with an arbitrarily small but constant fraction of errors is, however, possible, and that in some cases computationally simple estimators are near-optimal. Bounds on the measurement rate needed to attain a desired fraction of errors are given in terms of the SNR and various key parameters of the unknown vector for several different recovery algorithms. The tightness of the bounds, in a scaling sense, as a function of the SNR and the fraction of errors, is established by comparison with existing information-theoretic necessary bounds. Near optimality is shown for a wide variety of practically motivated signal models.
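
    As a rough illustration of the kind of computationally simple estimator the abstract refers to, the sketch below recovers a sparsity pattern from noisy linear measurements y = Ax + w with orthogonal matching pursuit and counts support errors. The dimensions, noise level, and function name `omp_support` are invented for the example; OMP stands in for, rather than reproduces, the estimators analyzed in the paper.

```python
import numpy as np

def omp_support(A, y, k):
    """Estimate the support of a k-sparse vector from noisy
    measurements y = A @ x + w via orthogonal matching pursuit."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Greedily pick the column most correlated with the residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Re-fit on the chosen columns and update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return set(support)

# Toy instance (sizes and SNR chosen arbitrarily for illustration).
rng = np.random.default_rng(0)
n, k, m = 200, 10, 80
x = np.zeros(n)
true_support = rng.choice(n, size=k, replace=False)
x[true_support] = rng.normal(size=k)
A = rng.normal(scale=1 / np.sqrt(m), size=(m, n))
y = A @ x + 0.05 * rng.normal(size=m)
errors = len(omp_support(A, y, k) ^ set(true_support.tolist()))
print(f"support errors: {errors} of {k}")
```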

    Polarization as a novel architecture to boost the classical mismatched capacity of B-DMCs

    We show that the mismatched capacity of binary discrete memoryless channels can be improved by channel combining and splitting via Arıkan's polar transformations. We also show that the improvement is possible even if the transformed channels are decoded with a mismatched polar decoder.
    Comment: Submitted to ISIT 201
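
    To make the channel combining and splitting concrete, here is a minimal sketch (my own, for a matched decoder; the paper's mismatched-decoding setting is not modeled) of one step of Arıkan's polar transform on a binary symmetric channel. It verifies the polarization effect I(W^-) ≤ I(W) ≤ I(W^+), with I(W^-) + I(W^+) = 2 I(W).

```python
import itertools
import numpy as np

def polar_transform_bsc(p):
    """One step of Arikan's combining/splitting on a BSC(p).
    Returns transition matrices of the synthesized channels
    W^-(y1, y2 | u1) and W^+(y1, y2, u1 | u2), outputs flattened."""
    W = np.array([[1 - p, p], [p, 1 - p]])  # W(y|x) for the BSC
    Wm = np.zeros((2, 4))   # W^-: output alphabet {0,1}^2
    Wp = np.zeros((2, 8))   # W^+: output alphabet {0,1}^2 x {0,1}
    for u1, u2, y1, y2 in itertools.product(range(2), repeat=4):
        joint = 0.5 * W[u1 ^ u2, y1] * W[u2, y2]
        Wm[u1, 2 * y1 + y2] += joint          # marginalize over u2
        Wp[u2, 4 * y1 + 2 * y2 + u1] = joint  # u1 revealed to decoder
    return Wm, Wp

def symmetric_capacity(W):
    """I(X;Y) under a uniform binary input, for a 2 x |Y| matrix W."""
    py = 0.5 * W.sum(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(W > 0, W * np.log2(W / py), 0.0)
    return 0.5 * terms.sum()

p = 0.11
W = np.array([[1 - p, p], [p, 1 - p]])
Wm, Wp = polar_transform_bsc(p)
# Polarization: I(W^-) <= I(W) <= I(W^+), and the sum is conserved.
print(symmetric_capacity(Wm), symmetric_capacity(W), symmetric_capacity(Wp))
```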

    A Recursive Algorithm for Computing Cramer-Rao-Type Bounds on Estimator Covariance

    We give a recursive algorithm to calculate submatrices of the Cramer-Rao (CR) matrix bound on the covariance of any unbiased estimator of a vector parameter $\theta$. Our algorithm computes a sequence of lower bounds that converges monotonically to the CR bound with exponential speed of convergence. The recursive algorithm uses an invertible “splitting matrix” to successively approximate the inverse Fisher information matrix. We present a statistical approach to selecting the splitting matrix based on a “complete-data-incomplete-data” formulation similar to that of the well-known EM parameter estimation algorithm. As a concrete illustration we consider image reconstruction from projections for emission computed tomography.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/85950/1/Fessler104.pd
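
    The following is a minimal sketch of a matrix-splitting iteration of the flavor the abstract describes, not necessarily the paper's exact recursion: with the complete-data Fisher information F_X as the splitting matrix, the fixed-point iteration below converges geometrically to the CR bound F^{-1} whenever 0 ≺ F ⪯ F_X. All names and the demo matrices are illustrative.

```python
import numpy as np

def cr_bound_recursion(F, F_X, n_iter=200):
    """Matrix-splitting iteration whose fixed point is the CR bound
    F^{-1}: B_{k+1} = F_X^{-1} ((F_X - F) B_k + I). Converges
    geometrically when the eigenvalues of I - F_X^{-1} F lie in [0, 1)."""
    M_inv = np.linalg.inv(F_X)   # inverse of the splitting matrix
    N = F_X - F                  # "missing information" part
    B = np.zeros_like(F)
    I = np.eye(F.shape[0])
    for _ in range(n_iter):
        B = M_inv @ (N @ B + I)
    return B

# Demo: random observed-data Fisher information F, complete-data F_X > F.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
F = A @ A.T + 0.5 * np.eye(4)
F_X = F + np.eye(4)
B = cr_bound_recursion(F, F_X)
print(np.max(np.abs(B - np.linalg.inv(F))))  # ~0: reached the CR bound
```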

    A Tight Excess Risk Bound via a Unified PAC-Bayesian-Rademacher-Shtarkov-MDL Complexity

    We present a novel notion of complexity that interpolates between and generalizes some classic existing complexity notions in learning theory: for estimators like empirical risk minimization (ERM) with arbitrary bounded losses, it is upper bounded in terms of data-independent Rademacher complexity; for generalized Bayesian estimators, it is upper bounded by the data-dependent information complexity (also known as stochastic or PAC-Bayesian $\mathrm{KL}(\text{posterior}\,\|\,\text{prior})$ complexity). For (penalized) ERM, the new complexity reduces to (generalized) normalized maximum likelihood (NML) complexity, i.e. a minimax log-loss individual-sequence regret. Our first main result bounds excess risk in terms of the new complexity. Our second main result links the new complexity via Rademacher complexity to $L_2(P)$ entropy, thereby generalizing earlier results of Opper, Haussler, Lugosi, and Cesa-Bianchi, who did the log-loss case with $L_\infty$. Together, these results recover optimal bounds for VC- and large (polynomial entropy) classes, replacing localized Rademacher complexity by a simpler analysis which almost completely separates the two aspects that determine the achievable rates: 'easiness' (Bernstein) conditions and model complexity.
    Comment: 38 pages
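
    Since the abstract identifies the new complexity for (penalized) ERM with NML (Shtarkov) complexity, i.e. minimax log-loss individual-sequence regret, a standard textbook-style example may help fix ideas. The sketch below (not code from the paper) computes the exact NML complexity of the Bernoulli model on length-n binary sequences and compares it against the classical asymptotic (1/2) log(nπ/2).

```python
import math

def bernoulli_nml_complexity(n):
    """Shtarkov/NML complexity of the Bernoulli model for length-n binary
    sequences: log of the sum over sequences x of max_theta p(x | theta),
    grouped by the number of ones k (all sequences with k ones share the
    maximum-likelihood parameter theta_hat = k / n)."""
    total = 0.0
    for k in range(n + 1):
        log_ml = 0.0
        if k > 0:
            log_ml += k * math.log(k / n)
        if n - k > 0:
            log_ml += (n - k) * math.log((n - k) / n)
        total += math.comb(n, k) * math.exp(log_ml)
    return math.log(total)

# The exact value approaches (1/2) log(n * pi / 2) as n grows.
n = 1000
print(bernoulli_nml_complexity(n), 0.5 * math.log(n * math.pi / 2))
```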