
    Entanglement-assisted quantum low-density parity-check codes

    This paper develops a general method, based on combinatorial design theory, for constructing entanglement-assisted quantum low-density parity-check (LDPC) codes. Explicit constructions are given for entanglement-assisted quantum error-correcting codes (EAQECCs) with many desirable properties: they require only one initial entanglement bit, and offer high error correction performance, high rates, and low decoding complexity. The proposed method produces infinitely many new codes with a wide variety of parameters and entanglement requirements. Our framework encompasses various codes, including the previously known entanglement-assisted quantum LDPC codes with the best error correction performance, as well as many new codes with better block error rates in simulations over the depolarizing channel. We also determine important parameters of several well-known classes of quantum and classical LDPC codes for previously unsettled cases.

    Comment: 20 pages, 5 figures. Final version appearing in Physical Review
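    The "one initial entanglement bit" claim has a concrete check in the entanglement-assisted stabilizer formalism: a CSS-type EAQECC built from a classical binary code with parity-check matrix H consumes c = rank_GF(2)(H H^T) ebits (the Wilde–Brun formula; this relation is standard background, not stated in the abstract). Below is a minimal sketch using the Fano plane, a 2-(7,3,1) combinatorial design chosen purely for illustration and not necessarily one of the paper's constructions, whose structure forces c = 1:

```python
import numpy as np

def gf2_rank(M: np.ndarray) -> int:
    """Rank of a binary matrix over GF(2), by Gaussian elimination."""
    M = (M % 2).astype(np.uint8)
    rank = 0
    n_rows, n_cols = M.shape
    for col in range(n_cols):
        pivot = next((r for r in range(rank, n_rows) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]  # move a pivot row into place
        for r in range(n_rows):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]              # clear the rest of the column
        rank += 1
    return rank

# Illustrative example: the 7x7 incidence matrix of the Fano plane,
# a 2-(7,3,1) design, written as a circulant parity-check matrix.
first_row = np.array([1, 1, 0, 1, 0, 0, 0])  # support {0, 1, 3}, a difference set mod 7
H = np.array([np.roll(first_row, i) for i in range(7)])

# Ebits consumed by the CSS-type entanglement-assisted construction:
# c = rank_GF(2)(H H^T)  (Wilde-Brun formula).
c = gf2_rank(H @ H.T)
print(f"ebits required: c = {c}")  # prints c = 1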

    Minimax risks for sparse regressions: Ultra-high-dimensional phenomenons

    Consider the standard Gaussian linear regression model $Y = X\theta_0 + \epsilon$, where $Y \in \mathbb{R}^n$ is a response vector and $X \in \mathbb{R}^{n\times p}$ is a design matrix. Numerous works have been devoted to building efficient estimators of $\theta_0$ when $p$ is much larger than $n$. In such a situation, a classical approach is to assume that $\theta_0$ is approximately sparse. This paper studies the minimax risks of estimation and testing over classes of $k$-sparse vectors $\theta_0$. These bounds shed light on the limitations due to high dimensionality. The results encompass the problem of prediction (estimation of $X\theta_0$), the inverse problem (estimation of $\theta_0$), and linear testing (testing $X\theta_0 = 0$). Interestingly, an elbow effect occurs when $k\log(p/k)$ becomes large compared to $n$: the minimax risks and hypothesis separation distances blow up in this ultra-high-dimensional setting. We also prove that even dimension reduction techniques cannot provide satisfying results in an ultra-high-dimensional setting. Moreover, we compute the minimax risks when the variance of the noise is unknown; knowledge of this variance is shown to play a significant role in the optimal rates of estimation and testing. All these minimax bounds characterize statistical problems that are so difficult that no procedure can provide satisfying results.
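    For orientation, the elbow effect admits a compact worked statement via the standard sparse-regression prediction rate. The display below is a sketch of that rate (known noise variance, rates only); the exact constants, regime boundaries, and the inverse-problem and testing analogues are in the paper itself, not quoted here.

```latex
% Minimax prediction risk over k-sparse vectors (known variance; rates only):
\[
  \inf_{\hat\theta}\;\sup_{\|\theta_0\|_0 \le k}
  \frac{1}{n}\,\mathbb{E}\bigl\|X(\hat\theta - \theta_0)\bigr\|_2^2
  \;\asymp\; \sigma^2\,\frac{k\log(p/k)}{n}.
\]
% While k*log(p/k) stays small relative to n, the risk sits below the noise
% level sigma^2; once k*log(p/k) >> n (the ultra-high-dimensional regime),
% the right-hand side exceeds sigma^2, which is the sense in which the
% minimax risk "blows up".
```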