
    Improving Detectors Using Entangling Quantum Copiers

    We present a detection scheme in which imperfect detectors, combined with imperfect quantum copying machines (which entangle the copies), allow one to extract more information from an incoming signal than with the imperfect detectors alone.
    Comment: 4 pages, 2 figures, REVTeX, to be published in Phys. Rev.
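    As a rough numerical illustration (not the paper's actual scheme), one can compare the mutual information extracted by a single imperfect detector of efficiency eta with that of a naive readout in which an ideal copier feeds two such detectors, giving an effective efficiency 1 - (1 - eta)^2; copying imperfections and the entanglement between copies are ignored here. A minimal Python sketch, with all names our own:

        # Toy model: on/off optical signal read out through a Z-channel
        # (a '1' is detected with probability eta, a '0' never clicks).
        import numpy as np

        def h2(p):
            """Binary entropy in bits, with h2(0) = h2(1) = 0."""
            p = np.clip(p, 1e-15, 1 - 1e-15)
            return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

        def z_channel_info(eta, p1=0.5):
            """Mutual information I(X;Y) = H(Y) - H(Y|X) of the Z-channel."""
            return h2(p1 * eta) - p1 * h2(eta)

        eta = 0.6
        print("single detector:", z_channel_info(eta))
        print("ideal copier + two detectors:", z_channel_info(1 - (1 - eta)**2))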

    Equivalent efficiency of a simulated photon-number detector

    Homodyne detection is considered as a way to improve the efficiency of communication near the single-photon level. The current lack of commercially available infrared photon-number detectors significantly reduces the mutual information accessible in such a communication channel. We consider simulating direct detection via homodyne detection. We find that our particular simulated direct detection strategy could provide limited improvement in the classical information transfer. However, we argue that homodyne detectors (and a polynomial number of linear optical elements) cannot simulate photocounters arbitrarily well, since otherwise the exponential gap between quantum and classical computers would vanish.
    Comment: 4 pages, 4 figures
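    To make the comparison concrete, here is a hedged sketch (not the paper's simulated-direct-detection protocol) of the mutual information for on-off keying with a weak coherent pulse, read out either by ideal direct photon counting or by threshold homodyne detection; the x = (a + a^dag)/sqrt(2) convention with vacuum quadrature variance 1/2 is assumed:

        # (i) direct detection: vacuum never clicks, |alpha> clicks with
        #     probability 1 - exp(-|alpha|^2)  -> a Z-type binary channel.
        # (ii) homodyne: Gaussian quadratures with means 0 and sqrt(2)*alpha,
        #      variance 1/2, thresholded midway -> a binary symmetric channel.
        import numpy as np
        from math import erfc, sqrt

        def h2(p):
            p = np.clip(p, 1e-15, 1 - 1e-15)
            return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

        def binary_channel_info(e0, e1, p1=0.5):
            """Mutual information of a binary channel with flip probabilities
            e0 (0 -> 1) and e1 (1 -> 0)."""
            py1 = (1 - p1) * e0 + p1 * (1 - e1)
            return h2(py1) - (1 - p1) * h2(e0) - p1 * h2(e1)

        alpha = 0.5
        print("direct  :", binary_channel_info(0.0, np.exp(-alpha**2)))
        sigma, sep = sqrt(0.5), sqrt(2) * alpha
        p_err = 0.5 * erfc((sep / 2) / (sigma * sqrt(2)))   # Gaussian tail Q(sep / (2*sigma))
        print("homodyne:", binary_channel_info(p_err, p_err))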

    Critical Noise Levels for LDPC decoding

    We determine the critical noise level for decoding low-density parity-check (LDPC) error-correcting codes based on the magnetization enumerator (\mathcal{M}), rather than on the weight enumerator (\mathcal{W}) employed in the information theory literature. The interpretation of our method is appealingly simple, and the relation between different decoding schemes, such as typical pairs decoding, MAP, and finite temperature (MPM) decoding, becomes clear. In addition, our analysis provides an explanation for the difference in performance between MN and Gallager codes. Our results are more optimistic than those derived via the methods of information theory and are in excellent agreement with recent results from another statistical physics approach.
    Comment: 9 pages, 5 figures
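    For reference, the information-theoretic benchmark such analyses are compared against is the Shannon limit of the binary symmetric channel: a rate-R code cannot correct flip probabilities beyond the p_c solving R = 1 - H2(p_c). A short sketch (our own, using bisection):

        # Shannon limit of the binary symmetric channel for a given code rate.
        import numpy as np

        def h2(p):
            p = np.clip(p, 1e-15, 1 - 1e-15)
            return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

        def shannon_limit_bsc(rate, tol=1e-10):
            """Flip probability p_c with 1 - h2(p_c) = rate, found by bisection."""
            lo, hi = 0.0, 0.5
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if 1 - h2(mid) > rate:
                    lo = mid      # capacity still exceeds the rate: noise can grow
                else:
                    hi = mid
            return 0.5 * (lo + hi)

        print(shannon_limit_bsc(0.5))   # ~0.110 for a rate-1/2 code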

    Statistical mechanics of lossy data compression using a non-monotonic perceptron

    The performance of a lossy data compression scheme for uniformly biased Boolean messages is investigated via methods of statistical mechanics. Inspired by a formal similarity to the storage capacity problem in neural network research, we utilize a perceptron whose transfer function is appropriately designed in order to compress and decode the messages. Employing the replica method, we analytically show that our scheme can achieve the optimal performance known in the framework of lossy compression in most cases in the limit of infinite code length. The validity of the obtained results is confirmed numerically.
    Comment: 9 pages, 5 figures, Physical Review
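    The "optimal performance known in the framework of lossy compression" is the rate-distortion function; for a Bernoulli(p) source under Hamming distortion it reads R(D) = H2(p) - H2(D) for 0 <= D <= min(p, 1-p), and 0 otherwise. A minimal evaluation:

        # Rate-distortion function of a biased Boolean (Bernoulli(p)) source.
        import numpy as np

        def h2(p):
            p = np.clip(p, 1e-15, 1 - 1e-15)
            return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

        def rate_distortion_bernoulli(p, d):
            """Minimum achievable rate (bits/symbol) at average Hamming distortion d."""
            return 0.0 if d >= min(p, 1 - p) else h2(p) - h2(d)

        for d in (0.05, 0.1, 0.2):
            print(f"p=0.3, D={d}: R(D) = {rate_distortion_bernoulli(0.3, d):.3f} bits/symbol")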

    Thouless-Anderson-Palmer Approach for Lossy Compression

    We study an ill-posed linear inverse problem in which a binary sequence is reproduced using a sparse matrix. According to a previous study, this model can theoretically provide an optimal compression scheme for an arbitrary distortion level, though the encoding procedure remains an NP-complete problem. In this paper, we focus on the consistency condition for a Markov-type dynamical model to derive an iterative algorithm, following the Thouless-Anderson-Palmer (TAP) approach. Numerical results show that the algorithm can empirically saturate the theoretical limit for the sparse construction of our codes, which is also very close to the rate-distortion function.
    Comment: 10 pages, 3 figures
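    The abstract does not spell out the update equations, but the generic shape of a TAP self-consistency iteration for an Ising system with couplings J_ij and fields h_i is m_i = tanh(beta*h_i + beta*sum_j J_ij m_j - beta^2 m_i sum_j J_ij^2 (1 - m_j^2)), the last term being the Onsager reaction correction. A hedged sketch of such an iteration on an SK-type toy system (not the paper's compression encoder):

        # Damped TAP iteration for a random symmetric coupling matrix.
        import numpy as np

        rng = np.random.default_rng(0)
        n, beta, damping = 50, 0.5, 0.5
        J = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))
        J = (J + J.T) / 2
        np.fill_diagonal(J, 0.0)
        h = rng.normal(0.0, 0.1, size=n)

        m = np.zeros(n)
        for _ in range(200):
            onsager = beta**2 * m * (J**2 @ (1 - m**2))     # Onsager reaction term
            m_new = np.tanh(beta * h + beta * (J @ m) - onsager)
            m = damping * m + (1 - damping) * m_new         # damping aids convergence

        print("mean magnetization:", m.mean())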

    Universal geometric approach to uncertainty, entropy and information

    It is shown that for any ensemble, whether classical or quantum, continuous or discrete, there is only one measure of the "volume" of the ensemble that is compatible with several basic geometric postulates. This volume measure is thus a preferred and universal choice for characterising the inherent spread, dispersion, localisation, etc., of the ensemble. Remarkably, this unique "ensemble volume" is a simple function of the ensemble entropy, and hence provides a new geometric characterisation of the latter quantity. Applications include unified, volume-based derivations of the Holevo and Shannon bounds in quantum and classical information theory; a precise geometric interpretation of thermodynamic entropy for equilibrium ensembles; a geometric derivation of semi-classical uncertainty relations; a new means for defining classical and quantum localisation for arbitrary evolution processes; a geometric interpretation of relative entropy; and a proposed new definition for the spot size of an optical beam. Advantages of the ensemble volume over other measures of localisation (root-mean-square deviation, Renyi entropies, and the inverse participation ratio) are discussed.
    Comment: LaTeX, 38 pages + 2 figures; p(\alpha) -> 1/|T| in Eq. (72) [Eq. (A10) of published version]
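    As a small illustration of how such a volume compares with other localisation measures, the sketch below takes the discrete-ensemble volume to be exp(H), the effective number of occupied states (this identification is our assumption, motivated by the abstract's statement that the volume is a simple function of the entropy), and sets it against the inverse participation ratio and the root-mean-square deviation:

        # Compare three localisation measures on a discrete probability vector.
        import numpy as np

        def ensemble_measures(p, x=None):
            p = np.asarray(p, dtype=float)
            p = p / p.sum()
            nz = p[p > 0]
            volume = np.exp(-np.sum(nz * np.log(nz)))   # exp(entropy), assumed "volume"
            ipr = 1.0 / np.sum(p**2)                    # inverse participation ratio
            x = np.arange(len(p)) if x is None else np.asarray(x, dtype=float)
            rms = np.sqrt(np.sum(p * (x - np.sum(p * x))**2))
            return volume, ipr, rms

        print("uniform:", ensemble_measures(np.ones(8) / 8))                    # volume = IPR = 8
        print("peaked :", ensemble_measures([0.9, 0.05, 0.05, 0, 0, 0, 0, 0]))  # both near 1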

    Illusory Decoherence

    If a quantum experiment includes random processes, then the results of repeated measurements can appear consistent with irreversible decoherence even if the system's evolution prior to measurement was reversible and unitary. Two thought experiments are constructed as examples.
    Comment: 10 pages, 3 figures
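    A textbook-style toy example of the effect (not one of the paper's two thought experiments): if a qubit is prepared in (|0> + e^{i phi}|1>)/sqrt(2) with a phase phi that is random from run to run but perfectly definite within each run, every run is pure and unitary, yet the run-averaged statistics reproduce a density matrix with vanishing off-diagonal elements, exactly as if the system had decohered:

        # Run-averaged density matrix of a qubit with a randomized relative phase.
        import numpy as np

        rng = np.random.default_rng(1)
        runs = 10_000
        rho_avg = np.zeros((2, 2), dtype=complex)
        for _ in range(runs):
            phi = rng.uniform(0, 2 * np.pi)              # random but definite phase
            psi = np.array([1, np.exp(1j * phi)]) / np.sqrt(2)
            rho_avg += np.outer(psi, psi.conj())
        rho_avg /= runs

        print(np.round(rho_avg, 3))   # approximately diag(0.5, 0.5)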

    Quantum Stabilizer Codes and Classical Linear Codes

    We show that within any quantum stabilizer code there lurks a classical binary linear code with similar error-correcting capabilities, thereby demonstrating new connections between quantum codes and classical codes. Using this result -- which applies to degenerate as well as nondegenerate codes -- previously established necessary conditions for classical linear codes can be easily translated into necessary conditions for quantum stabilizer codes. Examples of specific consequences are: for a quantum channel subject to a delta-fraction of errors, the best asymptotic capacity attainable by any stabilizer code cannot exceed H(1/2 + sqrt(2*delta*(1-2*delta))); and, for the depolarizing channel with fidelity parameter delta, the best asymptotic capacity attainable by any stabilizer code cannot exceed 1 - H(delta).
    Comment: 17 pages, REVTeX, with two figures
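    The two bounds quoted above are straightforward to evaluate; with H the binary entropy function:

        # Upper bounds on stabilizer-code capacity, as quoted in the abstract.
        import numpy as np

        def H(p):
            p = np.clip(p, 1e-15, 1 - 1e-15)
            return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

        def bound_error_fraction(delta):
            """Channel with a delta-fraction of errors."""
            return H(0.5 + np.sqrt(2 * delta * (1 - 2 * delta)))

        def bound_depolarizing(delta):
            """Depolarizing channel with fidelity parameter delta."""
            return 1 - H(delta)

        for delta in (0.01, 0.05, 0.1):
            print(f"delta={delta}: {bound_error_fraction(delta):.3f}  {bound_depolarizing(delta):.3f}")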

    On the existence of 0/1 polytopes with high semidefinite extension complexity

    Rothvoß showed that there exists a 0/1 polytope (a polytope whose vertices are in \{0,1\}^{n}) such that any higher-dimensional polytope projecting to it must have 2^{\Omega(n)} facets, i.e., its linear extension complexity is exponential. The question of whether there exists a 0/1 polytope with high PSD extension complexity was left open. We answer this question in the affirmative by showing that there is a 0/1 polytope such that any spectrahedron projecting to it must be the intersection of a semidefinite cone of dimension 2^{\Omega(n)} and an affine space. Our proof relies on a new technique for rescaling semidefinite factorizations.

    Information dynamics: Temporal behavior of uncertainty measures

    We carry out a systematic study of uncertainty measures that are generic to dynamical processes of varied origins, provided they induce suitable continuous probability distributions. The major technical tools are information-theoretic methods and the inequalities satisfied by the Fisher and Shannon information measures. We focus on the compatibility of these inequalities with the prescribed (deterministic, random or quantum) temporal behavior of the pertinent probability densities.
    Comment: Incorporates cond-mat/0604538; title, abstract changed, text modified; to appear in Cent. Eur. J. Phys.
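    A concrete instance of the kind of inequality meant here is Stam's relation between the Fisher information J of a probability density and its entropy power N = exp(2h)/(2*pi*e), where h is the Shannon differential entropy in nats: J*N >= 1, with equality exactly for a Gaussian. A short numerical check on a grid (our own sketch):

        # Verify J*N >= 1 for a Gaussian (equality) and a Laplace density.
        import numpy as np

        def fisher_and_entropy_power(pdf, x):
            dx = x[1] - x[0]
            p = pdf(x)
            p = p / (p.sum() * dx)                                  # normalize on the grid
            dp = np.gradient(p, dx)
            J = np.sum(dp**2 / np.maximum(p, 1e-300)) * dx          # Fisher information
            h = -np.sum(p * np.log(np.maximum(p, 1e-300))) * dx     # entropy (nats)
            N = np.exp(2 * h) / (2 * np.pi * np.e)                  # entropy power
            return J, N

        x = np.linspace(-30, 30, 60_001)
        densities = {
            "gaussian": lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi),
            "laplace":  lambda t: 0.5 * np.exp(-np.abs(t)),
        }
        for name, pdf in densities.items():
            J, N = fisher_and_entropy_power(pdf, x)
            print(f"{name:8s}: J*N = {J * N:.3f}")   # 1.000 and ~1.73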