
    The hardness of decoding linear codes with preprocessing

    The problem of maximum-likelihood decoding of linear block codes is known to be hard. It is shown that the problem remains hard even if the code is known in advance and can be preprocessed for as long as desired in order to devise a decoding algorithm. The hardness is based on the fact that the existence of a polynomial-time decoding algorithm would imply a collapse of the polynomial hierarchy. Thus, some linear block codes probably do not have an efficient decoder. The proof is based on results in complexity theory that relate uniform and nonuniform complexity classes.
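    To make the decoding problem concrete, here is a minimal brute-force maximum-likelihood decoder for a binary linear code (a sketch; the generator matrix, the toy code, and all names are illustrative, not from the paper). Its enumeration over all 2^k messages is the exponential cost that, by results like this one, apparently cannot be avoided in general even after unlimited preprocessing of the code:

    ```python
    from itertools import product

    def ml_decode(G, y):
        """Brute-force ML decoding of a binary linear code over a BSC.

        G : list of k generator rows, each a list of n bits.
        y : received word, a list of n bits.
        Returns a codeword closest to y in Hamming distance.
        Runs in O(2^k * n * k) time -- fine for toy codes only.
        """
        k, n = len(G), len(G[0])
        best, best_dist = None, n + 1
        for msg in product([0, 1], repeat=k):
            # Encode: codeword = msg * G over GF(2).
            cw = [sum(msg[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]
            dist = sum(a != b for a, b in zip(cw, y))
            if dist < best_dist:
                best, best_dist = cw, dist
        return best

    # Toy [3, 2] code with generator matrix G.
    G = [[1, 0, 1],
         [0, 1, 1]]
    print(ml_decode(G, [1, 1, 1]))  # -> [0, 1, 1], one of the codewords at distance 1
    ```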

    P-Selectivity, Immunity, and the Power of One Bit

    We prove that P-sel, the class of all P-selective sets, is EXP-immune, but is not EXP/1-immune. That is, we prove that some infinite P-selective set has no infinite EXP-time subset, but every infinite P-selective set has some infinite subset in EXP/1. Informally put, the immunity of P-sel is so fragile that it is pierced by a single bit of information. These claims follow from broader results that we obtain about the immunity of the P-selective sets. In particular, we prove that for every recursive function f, P-sel is DTIME(f)-immune, yet P-sel is not \Pi_2^p/1-immune.
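    For readers who want the central definition spelled out (a standard formulation, not quoted from this abstract): a set A is P-selective if a polynomial-time selector can always pick, from any two strings, one that is in A whenever either of them is:

    ```latex
    % Standard definition of P-selectivity (Selman): A is P-selective iff
    % there is a polynomial-time computable f such that, for all x, y:
    f(x, y) \in \{x, y\}
    \quad\text{and}\quad
    (x \in A \,\lor\, y \in A) \;\Rightarrow\; f(x, y) \in A
    ```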

    Channel Capacity under General Nonuniform Sampling

    This paper develops the fundamental capacity limits of a sampled analog channel under a sub-Nyquist sampling-rate constraint. In particular, we derive the capacity of sampled analog channels over a general class of time-preserving sampling methods, including irregular nonuniform sampling. Our results indicate that the optimal sampling structures extract the set of frequencies that exhibits the highest SNR among all spectral sets of support size equal to the sampling rate. The capacity under sub-Nyquist sampling can be attained through filter-bank sampling, or through a single branch of modulation and filtering followed by uniform sampling, and it is a monotone function of the sampling rate. These results indicate that the optimal sampling schemes suppress aliasing, and that for a large class of channels irregular nonuniform sampling provides no capacity gain over uniform sampling with appropriate preprocessing.
    Comment: 5 pages, to appear in IEEE International Symposium on Information Theory (ISIT), 201
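    A rough sketch of the capacity expression this describes, under assumed notation (channel response H(f), noise power spectral density S_eta(f), input power spectrum P(f)) and ignoring the water-filling power-allocation step; the paper's exact statement may differ:

    ```latex
    % Sketch: at sampling rate f_s, pick a measurable frequency set S of
    % total measure f_s that captures the highest SNR, then integrate the
    % usual Gaussian-channel rate over it.
    C(f_s) \;=\; \max_{S :\, |S| = f_s} \int_{S} \frac{1}{2}
      \log\!\Big( 1 + \frac{P(f)\,|H(f)|^{2}}{\mathcal{S}_{\eta}(f)} \Big)\, df
    ```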

    A Casual Tour Around a Circuit Complexity Bound

    I will discuss the recent proof that the complexity class NEXP (nondeterministic exponential time) lacks nonuniform ACC circuits of polynomial size. The proof will be described from the perspective of someone trying to discover it.
    Comment: 21 pages, 2 figures. An earlier version appeared in SIGACT News, September 201
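    For reference, the bound being toured is usually stated as follows (a standard rendering of Williams' theorem, not quoted from the article):

    ```latex
    % Williams' circuit lower bound:
    \mathsf{NEXP} \not\subseteq \mathsf{ACC}^{0}
    % i.e., some language in nondeterministic exponential time is not decided
    % by any family of polynomial-size, constant-depth circuits over
    % AND/OR/NOT and MOD_m gates (for any fixed modulus m).
    ```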

    Proof Complexity of Systems of (Non-Deterministic) Decision Trees and Branching Programs

    This paper studies propositional proof systems in which lines are sequents of decision trees or branching programs, deterministic or non-deterministic. Decision trees (DTs) are represented by a natural term syntax, inducing the system LDT, and non-determinism is modelled by including disjunction, ∨, as a primitive (system LNDT). Branching programs generalise DTs to dag-like structures and are duly handled by extension variables in our setting, as is common in proof complexity (systems eLDT and eLNDT). Deterministic and non-deterministic branching programs are natural nonuniform analogues of log-space (L) and nondeterministic log-space (NL), respectively; thus eLDT and eLNDT serve as natural systems of reasoning corresponding to L and NL. The main results of the paper are simulation and non-simulation results for tree-like and dag-like proofs in LDT, LNDT, eLDT and eLNDT. We also compare these systems with Frege systems, constant-depth Frege systems and extended Frege systems.
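    To make the term syntax concrete, a decision tree can be read as a nested if-then-else expression branching on propositional variables; the rendering below is illustrative and assumed, not the paper's exact grammar:

    ```latex
    % Illustrative decision-tree (DT) terms. A DT is an if-then-else term
    % branching on a variable; the conjunction p AND q as a DT:
    (p \,?\, (q \,?\, 1 : 0) : 0)
    % LNDT additionally admits disjunctions of DTs as lines, e.g.:
    (p \,?\, 1 : 0) \lor (q \,?\, 1 : 0)
    ```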

    Interpolation in Valiant's theory

    We investigate the following question: if a polynomial can be evaluated at rational points by a polynomial-time boolean algorithm, does it have a polynomial-size arithmetic circuit? We argue that this question is certainly difficult: answering it negatively would imply that the constant-free versions of the algebraic complexity classes VP and VNP defined by Valiant are different, while answering it positively would imply a transfer theorem from boolean to algebraic complexity. Our proof method relies on Lagrange interpolation and on recent results connecting the (boolean) counting hierarchy to algebraic complexity classes. As a byproduct we obtain two additional results: (i) the constant-free, degree-unbounded version of Valiant's hypothesis that VP and VNP differ implies the degree-bounded version (previously known to hold only for fields of positive characteristic); (ii) if exponential sums of easy-to-compute polynomials can be computed efficiently, then the same is true of exponential products. We point out an application of this result to the P = NP problem in the Blum-Shub-Smale model of computation over the field of complex numbers.
    Comment: 13 page
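    The interpolation tool named here is the classical one: a univariate polynomial of degree at most d is recovered exactly from its values at d + 1 distinct points:

    ```latex
    % Classical Lagrange interpolation: for distinct points a_0, ..., a_d,
    % a polynomial p of degree at most d satisfies
    p(X) \;=\; \sum_{i=0}^{d} p(a_i)
      \prod_{\substack{j=0 \\ j \neq i}}^{d} \frac{X - a_j}{a_i - a_j}
    ```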