
    Concatenated Polar Codes

    Polar codes have attracted much recent attention as the first codes with low computational complexity that provably achieve optimal rate-regions for a large class of information-theoretic problems. One significant drawback, however, is that for current constructions the probability of error decays sub-exponentially in the block-length (more detailed designs improve the probability of error at the cost of significantly increased computational complexity [KorUS09]). In this work we show how the classical idea of code concatenation -- using "short" polar codes as inner codes and a "high-rate" Reed-Solomon code as the outer code -- results in substantially improved performance. In particular, code concatenation with a careful choice of parameters boosts the rate of decay of the probability of error to almost exponential in the block-length with essentially no loss in computational complexity. We demonstrate such performance improvements for three sets of information-theoretic problems: a classical point-to-point channel coding problem, a class of multiple-input multiple-output channel coding problems, and some network source coding problems.
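
    As a rough illustration of the inner/outer structure the abstract describes (not the paper's actual construction), the sketch below carries chunks of an outer codeword on short inner polar codes. The names `outer_encode` and `frozen_mask` and the block sizes are hypothetical placeholders; the Reed-Solomon encoder and the entire decoding side are omitted.

    ```python
    import numpy as np

    def polar_transform(u):
        """Arikan polar transform over GF(2): u -> u * F^{(x)n} with F = [[1,0],[1,1]].
        len(u) must be a power of two."""
        n = len(u)
        if n == 1:
            return u.copy()
        left = polar_transform(np.bitwise_xor(u[: n // 2], u[n // 2 :]))
        right = polar_transform(u[n // 2 :])
        return np.concatenate([left, right])

    def concatenated_encode(message_bits, outer_encode, frozen_mask):
        """Concatenated encoding: an outer code (e.g. Reed-Solomon) protects the
        message, and each chunk of the outer codeword is carried by a short inner
        polar code whose frozen positions are fixed to zero."""
        n_inner = len(frozen_mask)
        k_inner = int((~frozen_mask).sum())
        outer_bits = outer_encode(message_bits)          # placeholder outer encoder
        assert len(outer_bits) % k_inner == 0, "pad the outer codeword to a multiple of k_inner"
        codewords = []
        for i in range(0, len(outer_bits), k_inner):
            u = np.zeros(n_inner, dtype=np.uint8)        # frozen positions stay 0
            u[~frozen_mask] = outer_bits[i : i + k_inner]  # information positions
            codewords.append(polar_transform(u))
        return codewords
    ```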

    Code Construction and Decoding Algorithms for Semi-Quantitative Group Testing with Nonuniform Thresholds

    We analyze a new group testing scheme, termed semi-quantitative group testing, which may be viewed as a concatenation of an adder channel and a discrete quantizer. Our focus is on non-uniform quantizers with arbitrary thresholds. For the most general semi-quantitative group testing model, we define three new families of sequences capturing the constraints on the code design imposed by the choice of the thresholds. The sequences represent extensions and generalizations of B_h sequences and certain types of super-increasing and lexicographically ordered sequences, and they lead to code structures amenable to efficient recursive decoding. We describe the decoding methods and provide an accompanying computational complexity and performance analysis.
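
    A minimal sketch of the measurement model the abstract describes: each test counts its defective members (adder channel) and reports only the quantization interval the count falls into. The pooling matrix, thresholds, and the convention for which interval a threshold value itself belongs to are assumptions made for illustration; the recursive decoder is not shown.

    ```python
    import numpy as np

    def sq_readout(defective, pools, thresholds):
        """Semi-quantitative group testing readout: pool counts passed through a
        discrete quantizer with (possibly nonuniform) thresholds."""
        counts = pools @ defective.astype(int)                      # adder channel
        return np.searchsorted(thresholds, counts, side="right")    # quantizer

    # Tiny example: 3 tests over 5 items, nonuniform thresholds.
    pools = np.array([[1, 1, 1, 0, 0],
                      [0, 1, 1, 1, 1],
                      [1, 0, 0, 0, 1]])
    defective = np.array([True, False, True, True, False])
    print(sq_readout(defective, pools, thresholds=[1, 2]))  # -> [2 2 1]
    ```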

    Some Applications of Coding Theory in Computational Complexity

    Error-correcting codes and related combinatorial constructs play an important role in several recent (and old) results in computational complexity theory. In this paper we survey results on locally-testable and locally-decodable error-correcting codes, and their applications to complexity theory and to cryptography. Locally decodable codes are error-correcting codes with sub-linear time error-correcting algorithms. They are related to private information retrieval (a type of cryptographic protocol), and they are used in average-case complexity and to construct "hard-core predicates" for one-way permutations. Locally testable codes are error-correcting codes with sub-linear time error-detection algorithms, and they are the combinatorial core of probabilistically checkable proofs.
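
    As a concrete, textbook (not paper-specific) example of local decodability, the Hadamard code allows any single message bit to be recovered with two queries to a corrupted codeword. The sketch below is a standard illustration of that idea; the function names are my own.

    ```python
    import numpy as np

    def hadamard_encode(x):
        """Hadamard code: codeword position a stores the inner product <x, a> mod 2."""
        k = len(x)
        word = np.zeros(1 << k, dtype=np.uint8)
        for a in range(1 << k):
            bits = [(a >> j) & 1 for j in range(k)]
            word[a] = sum(xi & bi for xi, bi in zip(x, bits)) % 2
        return word

    def local_decode_bit(query, k, i, rng):
        """2-query local decoder for message bit i: read positions r and r XOR e_i
        of the (possibly corrupted) codeword and XOR the answers.  If a delta
        fraction of positions is corrupted, each query is individually uniform,
        so the answer is correct with probability at least 1 - 2*delta."""
        r = int(rng.integers(0, 1 << k))
        return int(query(r)) ^ int(query(r ^ (1 << i)))

    # Example: encode, corrupt a few positions, locally decode bit 2.
    rng = np.random.default_rng(0)
    x = [1, 0, 1, 1]
    word = hadamard_encode(x)
    word[[3, 9]] ^= 1                                           # adversarial corruptions
    print(local_decode_bit(lambda a: word[a], len(x), 2, rng))  # usually prints 1
    ```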

    Communication Complexity and Secure Function Evaluation

    We suggest two new methodologies for the design of efficient secure protocols, which differ with respect to their underlying computational models. In one methodology we utilize the communication complexity tree (or branching program) of f and transform it into a secure protocol. In other words, "any function f that can be computed using communication complexity c can be computed securely using communication complexity that is polynomial in c and a security parameter". The second methodology uses the circuit computing f, enhanced with look-up tables, as its underlying computational model. It is possible to simulate any RAM machine in this model with polylogarithmic blowup. Hence it is possible to start with a computation of f on a RAM machine and transform it into a secure protocol. We show many applications of these new methodologies, resulting in protocols that are efficient either in communication or in computation. In particular, we exemplify a protocol for the "millionaires' problem", where two participants want to compare their values but reveal no other information. Our protocol is more efficient than previously known ones in either communication or computation.
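
    For context only, here is the plain, completely insecure bit-by-bit comparison protocol for the greater-than ("millionaires'") question. Its transcript is the kind of object the first methodology starts from; the cryptographic machinery that hides the exchanged bits is entirely omitted, and this generic protocol is not the paper's construction.

    ```python
    def gt_protocol(alice_value, bob_value, n_bits):
        """Insecure reference protocol for 'is Alice's value larger than Bob's?':
        compare bit by bit from the most significant bit and stop at the first
        disagreement.  The transcript length (communication complexity) is what a
        secure compilation would aim to preserve up to a polynomial factor."""
        transcript = []
        for j in reversed(range(n_bits)):
            a_bit = (alice_value >> j) & 1
            b_bit = (bob_value >> j) & 1
            transcript.extend([a_bit, b_bit])    # each party announces its next bit
            if a_bit != b_bit:
                return a_bit > b_bit, transcript
        return False, transcript                 # equal values: Alice is not richer

    result, transcript = gt_protocol(alice_value=13, bob_value=11, n_bits=4)
    print(result, len(transcript))  # True 4  (13 = 1101 and 11 = 1011 first differ at bit index 2)
    ```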

    Scalable k-Means Clustering via Lightweight Coresets

    Coresets are compact representations of data sets such that models trained on a coreset are provably competitive with models trained on the full data set. As such, they have been successfully used to scale up clustering models to massive data sets. While existing approaches generally only allow for multiplicative approximation errors, we propose a novel notion of lightweight coresets that allows for both multiplicative and additive errors. We provide a single algorithm to construct lightweight coresets for k-means clustering as well as soft and hard Bregman clustering. The algorithm is substantially faster than existing constructions, embarrassingly parallel, and the resulting coresets are smaller. We further show that the proposed approach naturally generalizes to statistical k-means clustering and that, compared to existing results, it can be used to compute smaller summaries for empirical risk minimization. In extensive experiments, we demonstrate that the proposed algorithm outperforms existing data summarization strategies in practice.

    Comment: To appear in the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD).
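
    A minimal sketch of a lightweight-coreset-style construction for k-means, under the assumption that the sampling distribution mixes a uniform term with a term proportional to the squared distance to the data mean; the exact distribution, weights, and guarantees are in the paper, and the function name is my own.

    ```python
    import numpy as np

    def lightweight_coreset(X, m, rng=None):
        """Sample m weighted points so that (weighted) k-means costs on the sample
        approximate costs on the full data, allowing multiplicative and additive
        error.  Sketch only: uniform term + squared-distance-to-mean term, then
        reweight by the inverse sampling probability."""
        rng = np.random.default_rng() if rng is None else rng
        n = X.shape[0]
        sq_dist = ((X - X.mean(axis=0)) ** 2).sum(axis=1)
        q = 0.5 / n + 0.5 * sq_dist / sq_dist.sum()   # assumes the points are not all identical
        idx = rng.choice(n, size=m, replace=True, p=q)
        weights = 1.0 / (m * q[idx])                  # makes the sampled cost unbiased
        return X[idx], weights

    # Usage: summarize a large data set, then run weighted k-means on the summary.
    X = np.random.default_rng(1).normal(size=(100_000, 10))
    C, w = lightweight_coreset(X, m=1_000)
    ```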