    Error-Correcting Data Structures

    We study data structures in the presence of adversarial noise. We want to encode a given object in a succinct data structure that enables us to efficiently answer specific queries about the object, even if the data structure has been corrupted by a constant fraction of errors. This new model is the common generalization of (static) data structures and locally decodable error-correcting codes. The main issue is the tradeoff between the space used by the data structure and the time (number of probes) needed to answer a query about the encoded object. We prove a number of upper and lower bounds on various natural error-correcting data structure problems. In particular, we show that the optimal length of error-correcting data structures for the Membership problem (where we want to store subsets of size s from a universe of size n) is closely related to the optimal length of locally decodable codes for s-bit strings.
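
    A minimal sketch of the model (not the paper's construction): store the set's characteristic vector under a simple repetition code and answer a membership query with a few probes and a majority vote. Repetition only survives errors spread at random; an adversary can spend its entire error budget on the copies of a single bit, which is exactly why genuine locally decodable codes are needed in this model. The parameters below are arbitrary demo choices.

    ```python
    import random

    R = 5  # copies per bit (arbitrary choice for the demo)

    def encode(universe_size, subset):
        """Repetition-encode the characteristic vector of the set."""
        bits = [1 if i in subset else 0 for i in range(universe_size)]
        return [b for b in bits for _ in range(R)]

    def corrupt(word, fraction, rng):
        """Flip a `fraction` of positions at random (not adversarially)."""
        word = list(word)
        for i in rng.sample(range(len(word)), int(fraction * len(word))):
            word[i] ^= 1
        return word

    def member(word, i):
        probes = word[i * R:(i + 1) * R]   # R probes answer one query
        return 2 * sum(probes) > R          # majority vote

    rng = random.Random(0)
    word = corrupt(encode(10, {2, 3, 7}), 0.05, rng)
    print([i for i in range(10) if member(word, i)])  # expect [2, 3, 7]
    ```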

    Quantum data processing and error correction

    This paper investigates properties of noisy quantum information channels. We define a new quantity called coherent information, which measures the amount of quantum information conveyed through the noisy channel. This quantity can never be increased by quantum information processing, and it yields a simple necessary and sufficient condition for the existence of perfect quantum error correction.
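
    As a hedged numerical sketch of this quantity: the coherent information can be computed as I_c = S(N(rho)) - S_e, where the entropy exchange S_e is the entropy of the joint state obtained by sending one half of a purification of rho through the channel. The depolarizing channel and the maximally mixed input below are illustrative assumptions, not an example taken from the paper.

    ```python
    import numpy as np

    # Hedged sketch: I_c = S(N(rho)) - S_e for a qubit depolarizing
    # channel with the maximally mixed input rho = I/2. S_e is the
    # entropy of the joint output when the channel acts on one half
    # of a purification of rho (here, a Bell state).

    def entropy(rho):
        ev = np.linalg.eigvalsh(rho)
        ev = ev[ev > 1e-12]
        return float(-np.sum(ev * np.log2(ev)))

    I2 = np.eye(2)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    def coherent_info_depolarizing(p):
        kraus = [np.sqrt(1 - 3 * p / 4) * I2] + \
                [np.sqrt(p / 4) * K for K in (X, Y, Z)]
        bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
        joint = sum(np.kron(K, I2) @ np.outer(bell, bell.conj())
                    @ np.kron(K, I2).conj().T for K in kraus)
        rho_out = sum(K @ (I2 / 2) @ K.conj().T for K in kraus)
        return entropy(rho_out) - entropy(joint)

    for p in (0.0, 0.1, 0.3):
        print(f"p={p:.1f}  I_c={coherent_info_depolarizing(p):+.4f}")
    ```

    For p = 0 the channel is noiseless and I_c equals the input entropy S(rho) = 1, the regime in which perfect error correction is possible; I_c decreases as the noise grows.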

    Error Correction for Cooperative Data Exchange

    This paper considers the problem of error correction for a cooperative data exchange (CDE) system in which some clients are compromised or failed and send false messages. Assuming each client possesses a subset of the total messages, we analyze the error correction capability when every client is allowed to broadcast only one linearly-coded message. Our error correction capability bound determines the maximum number of clients that can be compromised or failed without jeopardizing the final decoding solution at each client. We show that deterministic, feasible linear codes exist that achieve the derived bound. We also evaluate random linear codes, in which the coding coefficients are drawn randomly, and derive the probability that a client withstands a certain number of compromised or failed peers and successfully deduces the complete message, for any network size and any initial message distribution.
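
    A hedged toy sketch of the error-free setting (over GF(2) rather than the general fields the paper works with, and without modeling compromised clients): each client broadcasts one random linear combination of the packets it holds, and a client can decode all m messages exactly when its own packets together with the broadcasts span GF(2)^m. The message sets and network size below are arbitrary demo choices.

    ```python
    import random

    def gf2_rank(vectors):
        """Rank over GF(2); vectors are bitmasks over message indices."""
        basis = {}                        # leading bit -> basis vector
        for v in vectors:
            while v:
                lead = v.bit_length() - 1
                if lead in basis:
                    v ^= basis[lead]      # eliminate the leading bit
                else:
                    basis[lead] = v
                    break
        return len(basis)

    rng = random.Random(7)
    m = 5
    holds = [{0, 1, 2, 3}, {1, 2, 3, 4}, {0, 2, 3, 4}, {0, 1, 3, 4}]

    # Every client broadcasts one random GF(2) combination of its packets.
    broadcasts = []
    for held in holds:
        combo = 0
        while combo == 0:                 # skip the useless zero combo
            combo = sum(1 << i for i in held if rng.random() < 0.5)
        broadcasts.append(combo)

    # Over GF(2) random coding fails with noticeable probability; the
    # paper analyzes such probabilities (and error tolerance) in general.
    for j, held in enumerate(holds):
        vecs = [1 << i for i in held] + broadcasts
        print(f"client {j} decodes: {gf2_rank(vecs) == m}")
    ```

    Tolerating t compromised broadcasts requires redundancy beyond this bare rank condition, which is what the paper's capability bound quantifies.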

    Prediction-error of Prediction Error (PPE)-based Reversible Data Hiding

    This paper presents a novel reversible data hiding (RDH) algorithm for gray-scale images, in which the prediction-error of the prediction error (PPE) of a pixel is used to carry the secret data. In the proposed method, the pixels to be embedded are first predicted from their neighboring pixels to obtain the corresponding prediction errors (PEs). Then, by exploiting the PEs of the neighboring pixels, a prediction of each pixel's PE is determined. A sorting technique based on the local complexity of a pixel is used to order the PPEs so that smaller PPEs are processed first for data embedding. By reversibly shifting the PPE histogram (PPEH) with optimized parameters, the pixels corresponding to the altered PPEH bins are finally modified to carry the secret data. Experimental results indicate that the proposed method benefits from the prediction of the PEs, the sorting technique, and the parameter selection, and therefore outperforms some state-of-the-art methods in terms of payload-distortion performance on different images.
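
    For orientation, here is a minimal sketch of the histogram-shifting step using plain prediction errors, one level shy of the paper's PPE construction (which additionally predicts the PEs themselves, sorts by local complexity, and optimizes parameters). The left-neighbor predictor, the even/odd-column split that keeps predictors untouched, and the single peak bin are simplifying assumptions, and overflow/underflow handling is omitted.

    ```python
    import numpy as np

    def embed(img, bits, peak=0):
        """Shift-and-embed on the prediction errors of odd columns;
        even columns stay untouched and serve as predictors, which is
        what keeps the scheme reversible."""
        out = img.astype(np.int64)
        k = 0
        for r in range(out.shape[0]):
            for c in range(1, out.shape[1], 2):
                e = out[r, c] - out[r, c - 1]
                if e == peak and k < len(bits):
                    out[r, c] += bits[k]; k += 1   # embed one bit
                elif e > peak:
                    out[r, c] += 1                 # shift to make room
        return out, k

    def extract(marked, n_bits, peak=0):
        img = marked.copy()
        bits = []
        for r in range(img.shape[0]):
            for c in range(1, img.shape[1], 2):
                e = img[r, c] - img[r, c - 1]
                if e == peak and len(bits) < n_bits:
                    bits.append(0)
                elif e == peak + 1 and len(bits) < n_bits:
                    bits.append(1); img[r, c] -= 1
                elif e > peak + 1:
                    img[r, c] -= 1                 # undo the shift
        return img, bits

    img = np.arange(64).reshape(8, 8) // 3    # a smooth test "image"
    marked, n = embed(img, [1, 0, 1, 1])
    restored, bits = extract(marked, n)
    print(bits, np.array_equal(restored, img))  # [1, 0, 1, 1] True
    ```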

    Model error and sequential data assimilation. A deterministic formulation

    Data assimilation schemes are confronted with model errors arising from the imperfect description of atmospheric dynamics. These errors are usually modeled on the basis of simple assumptions such as bias, white noise, or a first-order Markov process. In the present work, a formulation of the sequential extended Kalman filter is proposed, based on recent findings on the universal deterministic behavior of model errors (Nicolis, 2004), in contrast with previous approaches. This new scheme is applied to the spatially distributed system proposed by Lorenz (1996). It is found that (i) for short times, the estimation error is accurately approximated by an evolution law in which the variance of the model error (treated as a deterministic process) evolves according to a quadratic law, in agreement with the theory; moreover, the correlation with the initial-condition error plays only a secondary role in the short-time dynamics of the estimation error covariance; and (ii) the deterministic description of the model error evolution, incorporated into the classical extended Kalman filter equations, yields substantial improvements in filter accuracy compared with the classical white-noise assumption. The universal short-time quadratic law for the evolution of the model error covariance matrix seems very promising for modeling estimation-error dynamics in sequential data assimilation.
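
    A hedged numerical illustration of the short-time quadratic law (the forcing values, resolution, and step size below are demo assumptions, not the paper's exact experimental setup): integrate a Lorenz-96 "truth" and an imperfect model from identical initial conditions and fit the growth rate of the mean-square error.

    ```python
    import numpy as np

    # Lorenz-96: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F
    def l96(x, F):
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

    def rk4(x, F, dt):
        k1 = l96(x, F)
        k2 = l96(x + dt / 2 * k1, F)
        k3 = l96(x + dt / 2 * k2, F)
        k4 = l96(x + dt * k3, F)
        return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    n, dt = 40, 0.01
    x = np.random.default_rng(0).standard_normal(n)
    for _ in range(2000):                  # spin up onto the attractor
        x = rk4(x, 8.0, dt)

    truth, model = x.copy(), x.copy()      # identical initial conditions
    times, errs = [], []
    for step in range(1, 51):
        truth = rk4(truth, 8.0, dt)
        model = rk4(model, 7.6, dt)        # deterministic model error
        times.append(step * dt)
        errs.append(np.mean((truth - model) ** 2))

    slope = np.polyfit(np.log(times), np.log(errs), 1)[0]
    print(f"log-log slope ~ {slope:.2f} (quadratic law predicts 2)")
    ```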

    Optimization for L1-Norm Error Fitting via Data Aggregation

    We propose a data aggregation-based algorithm with monotonic convergence to a global optimum for a generalized version of the L1-norm error fitting model, under an assumption on the fitting function. The proposed algorithm generalizes a recent algorithm from the literature, aggregate and iterative disaggregate (AID), which solves three specific L1-norm error fitting problems. With the proposed algorithm, any L1-norm error fitting model can be solved optimally, provided it follows the form of the L1-norm error fitting problem and the fitting function satisfies the assumption. The algorithm can also solve multi-dimensional fitting problems with arbitrary constraints on the matrix of fitting coefficients. The generalized problem includes popular models such as regression and the orthogonal Procrustes problem. Computational experiments show that the proposed algorithm is faster than state-of-the-art benchmarks for L1-norm regression subset selection and L1-norm regression over a sphere; moreover, its relative performance improves as the data size increases.
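
    As a hedged illustration of the aggregate-and-disaggregate idea, here is a toy one-dimensional instance, not the paper's general algorithm: for the L1 location problem min_c sum_i |x_i - c|, whose exact solution is the median, solve a small weighted problem over cluster centroids and split only the clusters that straddle the current estimate. The clustering scheme and cluster count are arbitrary choices.

    ```python
    import numpy as np

    def weighted_median(vals, wts):
        order = np.argsort(vals)
        vals, wts = vals[order], wts[order]
        cum = np.cumsum(wts)
        return vals[np.searchsorted(cum, cum[-1] / 2)]

    def aid_median(x, n_clusters=4):
        """Toy AID-style loop: aggregate, solve, disaggregate."""
        clusters = np.array_split(np.sort(x), n_clusters)
        while True:
            cents = np.array([c.mean() for c in clusters])
            wts = np.array([len(c) for c in clusters], float)
            est = weighted_median(cents, wts)
            # a cluster is consistent if all its points are on one side
            bad = {i for i, c in enumerate(clusters)
                   if c.min() < est < c.max()}
            if not bad:
                return est, len(clusters)
            new = []                      # split straddling clusters
            for i, c in enumerate(clusters):
                new += [c[c <= est], c[c > est]] if i in bad else [c]
            clusters = new

    x = np.random.default_rng(1).standard_normal(10001)
    est, k = aid_median(x)
    print(est == np.median(x), k)   # exact optimum with few clusters
    ```

    The stopping rule mirrors AID's logic: a cluster whose points all lie on one side of the estimate contributes the same L1 subgradient as its weighted centroid, so only straddling clusters need to be disaggregated.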

    A zero-error operational video data compression system

    A data compression system has been operating since February 1972, using ATS spin-scan cloud cover data. With the launch of ITOS 3 in October 1972, this data compression system became the only source of near-realtime very high resolution radiometer (VHRR) image data at the data processing facility. The VHRR image data are compressed and transmitted over a 50-kilobit-per-second wideband ground link. The goal of the data compression experiment was to send data quantized to six bits at twice the rate possible without compression, while maintaining zero error between the transmitted and reconstructed data. All objectives of the data compression experiment were met, and a capability of doubling the data throughput of the system was thus achieved.
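
    A hedged sketch of the zero-error principle (the 1972 system's actual coder is not described here; the DPCM predictor and Rice code below are illustrative stand-ins): lossless predictive coding of 6-bit samples can roughly halve the bit volume on smooth imagery, which is how a fixed 50-kilobit-per-second link can carry data at twice the uncompressed rate.

    ```python
    import numpy as np

    def zigzag(e):                       # signed residual -> unsigned
        return 2 * e if e >= 0 else -2 * e - 1

    def rice_bits(samples, k=1):
        """Total Rice-code length: unary quotient + k remainder bits.
        DPCM (predict with the previous sample) is exactly invertible
        and the Rice code is prefix-free, so decoding is zero-error."""
        bits, prev = 0, 0
        for s in samples:
            u = zigzag(int(s) - prev)
            bits += (u >> k) + 1 + k
            prev = int(s)
        return bits

    rng = np.random.default_rng(0)
    # a smooth synthetic scan line, quantized to six bits
    line = np.clip(np.cumsum(rng.integers(-1, 2, 4096)) + 32, 0, 63)
    raw = 6 * line.size
    coded = rice_bits(line)
    print(f"raw {raw} b, coded {coded} b, ratio {raw / coded:.2f}")
    ```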