
    Efficient and Error-Correcting Data Structures for Membership and Polynomial Evaluation

    We construct efficient data structures that are resilient against a constant fraction of adversarial noise. Our model requires that the decoder answers most queries correctly with high probability, and that for the remaining queries it either answers correctly or declares "don't know," again with high probability. Furthermore, if there is no noise on the data structure, it answers all queries correctly with high probability. Our model is the common generalization of a model proposed recently by de Wolf and the notion of "relaxed locally decodable codes" developed in the PCP literature. We measure the efficiency of a data structure in terms of its length, measured by the number of bits in its representation, and its query-answering time, measured by the number of bit-probes to the (possibly corrupted) representation. In this work, we study two data structure problems: membership and polynomial evaluation. We show that both problems admit constructions that are simultaneously efficient and error-correcting.
    Comment: An abridged version of this paper appears in STACS 2010.
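
    To make the decoding model concrete, here is a deliberately naive Python sketch (not the paper's construction, which achieves far better length/probe trade-offs): the membership table is encoded with a simple repetition code, and a query majority-votes over a few probed bits, declaring "don't know" when the vote is too close to call. The reps and margin parameters are illustrative assumptions.

        import random

        DONT_KNOW = "don't know"

        def encode(table, reps=5):
            """Toy encoding: store reps copies of each bit of the
            membership table (bit i is 1 iff key i is in the set)."""
            return [b for b in table for _ in range(reps)]

        def query(codeword, i, reps=5, margin=1):
            """Probe the reps copies of bit i and majority-vote. If the
            vote is not decisive, declare "don't know" rather than risk
            a wrong answer -- the guarantee described in the abstract."""
            votes = codeword[i * reps:(i + 1) * reps]
            ones = sum(votes)
            if abs(2 * ones - reps) <= margin:  # too close to call
                return DONT_KNOW
            return 1 if 2 * ones > reps else 0

        # Adversarial noise stand-in: flip a constant fraction of bits.
        table = [1, 0, 1, 1, 0, 0, 1, 0]
        cw = encode(table)
        for j in random.sample(range(len(cw)), k=len(cw) // 10):
            cw[j] ^= 1
        print([query(cw, i) for i in range(len(table))])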

    A New Cross-Layer FPGA-Based Security Scheme for Wireless Networks

    This chapter presents a new cross-layer security scheme that deploys efficient coding techniques at the physical layer beneath a classical cryptographic protocol at the upper layers. The rationale for the new scheme is to improve the security-throughput trade-off in wireless networks, in contrast to existing schemes, which enhance security at the expense of data throughput or vice versa. The new scheme is implemented using the residue number system (RNS), non-linear convolutional coding, and subband coding at the physical layer, and RSA cryptography at the upper layers. The RNS reduces the large integers produced by RSA encryption to small parallel residues. To increase the security level, iterated wavelet-based subband coding splits the ciphertext into different levels of decomposition; at each subsequent level, the ciphertext from the preceding level serves as the data to be encrypted using convolutional codes. In addition, throughput is enhanced both by transmitting small parallel data and by the bit-error-correction capability of the non-linear convolutional code. It is shown that various passive and active attacks common to wireless networks can be circumvented. An FPGA implementation applied to CDMA can fit into a single Virtex-4 FPGA owing to the small parallel data sizes employed.
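
    Below is a minimal sketch of the RNS step only, assuming three hypothetical pairwise-coprime moduli (255, 256, 257); a real design would size the moduli to the FPGA datapath and the RSA block length. It splits a large integer, such as an RSA ciphertext block, into small parallel residues and recombines them with the Chinese Remainder Theorem (pow(x, -1, m) needs Python 3.8+).

        from math import prod

        MODULI = (255, 256, 257)  # pairwise coprime: 3*5*17, 2**8, prime

        def to_rns(x, moduli=MODULI):
            """Split a large integer (e.g. an RSA ciphertext block) into
            small residues that can be processed in parallel."""
            return tuple(x % m for m in moduli)

        def from_rns(residues, moduli=MODULI):
            """Recombine the residues via the Chinese Remainder Theorem."""
            M = prod(moduli)
            x = 0
            for r, m in zip(residues, moduli):
                Mi = M // m
                x += r * Mi * pow(Mi, -1, m)  # modular inverse of Mi mod m
            return x % M

        block = 9123456  # must be smaller than prod(MODULI) = 16776960
        assert from_rns(to_rns(block)) == block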

    Penalized Composite Quasi-Likelihood for Ultrahigh-Dimensional Variable Selection

    In high-dimensional model selection problems, penalized least-squares approaches have been extensively used. This paper addresses the question of both robustness and efficiency of penalized model selection methods and proposes a data-driven weighted linear combination of convex loss functions, together with a weighted $L_1$-penalty. The method is completely data-adaptive and does not require prior knowledge of the error distribution. The weighted $L_1$-penalty is used both to ensure the convexity of the penalty term and to ameliorate the bias caused by the $L_1$-penalty. In the setting where the dimensionality is much larger than the sample size, we establish a strong oracle property of the proposed method, which possesses both model selection consistency and estimation efficiency for the true non-zero coefficients. As specific examples, we introduce a robust composite $L_1$-$L_2$ method and an optimal composite quantile method, and evaluate their performance on both simulated and real data examples.
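
    As an illustrative sketch of the composite idea, the Python snippet below minimizes a weighted combination of the $L_1$ and $L_2$ losses plus a weighted $L_1$-penalty; the hand-picked weights, the heavy-tailed toy data, and the generic optimizer stand in for the paper's data-driven choices and are assumptions of the example.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        n, p = 200, 10
        X = rng.standard_normal((n, p))
        beta_true = np.array([2.0, -1.5, 0.0, 0.0, 1.0] + [0.0] * 5)
        y = X @ beta_true + rng.standard_t(df=3, size=n)  # heavy tails

        def objective(beta, w1=0.5, w2=0.5, lam=0.1, pen_w=np.ones(p)):
            """Composite loss: weighted sum of the L1 (robust) and L2
            (efficient) losses, plus a weighted L1 penalty. The weights
            are fixed by hand here; the paper selects them adaptively."""
            r = y - X @ beta
            loss = w1 * np.abs(r).mean() + w2 * (r ** 2).mean()
            return loss + lam * np.dot(pen_w, np.abs(beta))

        # The objective is convex but nonsmooth, so this toy example uses
        # a derivative-free method instead of a specialized solver.
        beta_hat = minimize(objective, np.zeros(p), method="Powell").x
        print(np.round(beta_hat, 2))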