
    Network Codes for Real-Time Applications

    We consider the scenario of broadcasting for real-time applications and loss recovery via instantly decodable network coding. Past work focused on minimizing the completion delay, which is not the right objective for real-time applications with strict deadlines. In this work, we are interested in finding a code that is instantly decodable by the maximum number of users. First, we prove that this problem is NP-hard in the general case. Then we consider the practical probabilistic scenario, where users have i.i.d. loss probability and the number of packets is linear or polynomial in the number of users. In this scenario, we provide a polynomial-time (in the number of users) algorithm that finds the optimal coded packet. The proposed algorithm is evaluated using both simulation and real network traces of a real-time Android application. Both results show that the proposed coding scheme significantly outperforms the state-of-the-art baselines: an optimal repetition code and a COPE-like greedy scheme. Comment: ToN 2013 submission version.
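    To make the objective above concrete, the toy sketch below exhaustively scores every XOR combination of source packets by the number of users that could decode it instantly (a user qualifies when it already holds all but exactly one packet in the combination). This brute force is exponential and is not the paper's polynomial-time algorithm; the packet sets and helper names are purely illustrative.

```python
# Illustrative brute-force search for the coded packet (an XOR of source packets)
# that is instantly decodable by the most users. A user can instantly decode a
# combination iff it is missing exactly one of the XORed packets and has the rest.
from itertools import combinations

def best_idnc_combination(num_packets, has_sets):
    """has_sets[u] is the set of packets user u has already received."""
    best_combo, best_count = None, -1
    for size in range(1, num_packets + 1):
        for combo in combinations(range(num_packets), size):
            combo_set = set(combo)
            # count users for which exactly one packet of the combination is missing
            count = sum(1 for has in has_sets if len(combo_set - has) == 1)
            if count > best_count:
                best_combo, best_count = combo_set, count
    return best_combo, best_count

# Example: 3 packets, 3 users, each missing a different packet.
users = [{0, 1}, {0, 2}, {1, 2}]
print(best_idnc_combination(3, users))   # the XOR of {0, 1, 2} serves all 3 users
```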

    Finite-state codes

    A class of codes called finite-state (FS) codes is defined and investigated. The codes, which generalize both block and convolutional codes, are defined by their encoders, which are finite-state machines with parallel inputs and outputs. A family of upper bounds on the free distance of a given FS code is derived. A general construction for FS codes is given, and it is shown that in many cases the FS codes constructed in this way have a free distance that is the largest possible. Catastrophic error propagation (CEP) for FS codes is also discussed. It is found that to avoid CEP one must solve the graph-theoretic problem of finding a uniquely decodable edge labeling of the state diagram.
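    To make the encoder model concrete, here is a minimal table-driven sketch of such a finite-state encoder: at each step it consumes one input symbol, emits one output symbol, and moves to a new state. The two-state transition and output tables below are invented for illustration and are not a code from the paper.

```python
# Generic finite-state encoder driven by transition and output tables.
def fs_encode(input_symbols, next_state, output, start_state=0):
    """next_state[s][x] -> new state, output[s][x] -> emitted output symbol."""
    state, out = start_state, []
    for x in input_symbols:
        out.append(output[state][x])
        state = next_state[state][x]
    return out

# A 2-state encoder over binary inputs emitting 2-bit output symbols
# (roughly a rate-1/2 convolutional-style example, chosen only for illustration).
next_state = {0: {0: 0, 1: 1}, 1: {0: 0, 1: 1}}
output     = {0: {0: (0, 0), 1: (1, 1)}, 1: {0: (1, 0), 1: (0, 1)}}
print(fs_encode([1, 0, 1, 1], next_state, output))
```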

    Optimal modeling for complex system design

    The article begins with a brief introduction to the theory describing optimal data compression systems and their performance. A brief outline is then given of a representative algorithm that employs these lessons for optimal data compression system design. The implications of rate-distortion theory for practical data compression system design are then described, followed by a description of the tensions between theoretical optimality and system practicality and a discussion of common tools used in current algorithms to resolve these tensions. Next, the generalization of rate-distortion principles to the design of optimal collections of models is presented. The discussion focuses initially on data compression systems, but later widens to describe how rate-distortion theory principles generalize to model design for a wide variety of modeling applications. The article ends with a discussion of the performance benefits to be achieved using the multiple-model design algorithms.
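    As one concrete instance of rate-distortion-motivated design (chosen here only for illustration; the article is a survey and this is not necessarily the algorithm it outlines), the sketch below runs the classic Lloyd iteration to design a fixed-rate scalar quantizer, trading rate (number of levels) against mean squared distortion.

```python
# Lloyd iteration for fixed-rate scalar quantizer design: alternate between
# assigning samples to the nearest codeword and re-centering each codeword.
import numpy as np

def lloyd_quantizer(samples, num_levels, iters=50):
    codebook = np.quantile(samples, np.linspace(0.05, 0.95, num_levels))
    for _ in range(iters):
        # nearest-codeword assignment (minimum squared-error partition)
        idx = np.argmin(np.abs(samples[:, None] - codebook[None, :]), axis=1)
        # centroid update; keep the old codeword if a cell is empty
        for j in range(num_levels):
            cell = samples[idx == j]
            if cell.size:
                codebook[j] = cell.mean()
    distortion = np.mean((samples - codebook[idx]) ** 2)
    return codebook, distortion

rng = np.random.default_rng(0)
data = rng.normal(size=10_000)
print(lloyd_quantizer(data, num_levels=4))
```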

    Optimal rate list decoding via derivative codes

    The classical family of $[n,k]_q$ Reed-Solomon codes over a field $\mathbb{F}_q$ consists of the evaluations of polynomials $f \in \mathbb{F}_q[X]$ of degree $< k$ at $n$ distinct field elements. In this work, we consider a closely related family of codes, called (order $m$) derivative codes and defined over fields of large characteristic, which consist of the evaluations of $f$ as well as its first $m-1$ formal derivatives at $n$ distinct field elements. For large enough $m$, we show that these codes can be list-decoded in polynomial time from an error fraction approaching $1-R$, where $R = k/(nm)$ is the rate of the code. This gives an alternate construction to folded Reed-Solomon codes for achieving the optimal trade-off between rate and list error-correction radius. Our decoding algorithm is linear-algebraic, and involves solving a linear system to interpolate a multivariate polynomial, and then solving another structured linear system to retrieve the list of candidate polynomials $f$. The algorithm for derivative codes offers some advantages compared to a similar one for folded Reed-Solomon codes in terms of efficient unique decoding in the presence of side information. Comment: 11 pages.
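    The encoding map described above is easy to state concretely: each codeword position carries the values of $f$ and its first $m-1$ formal derivatives at one evaluation point. The sketch below implements only that encoder over a small prime field (the field size, polynomial, and parameters are placeholders); the linear-algebraic list decoder from the abstract is not reproduced here.

```python
# Encoder for an order-m derivative code over a prime field F_p: position a holds
# (f(a), f'(a), ..., f^{(m-1)}(a)).
def formal_derivative(coeffs, p):
    """coeffs[i] is the coefficient of X^i; returns the coefficients of f'."""
    return [(i * c) % p for i, c in enumerate(coeffs)][1:]

def poly_eval(coeffs, x, p):
    acc = 0
    for c in reversed(coeffs):          # Horner's rule
        acc = (acc * x + c) % p
    return acc

def derivative_encode(coeffs, points, m, p):
    derivs = [coeffs]
    for _ in range(m - 1):
        derivs.append(formal_derivative(derivs[-1], p))
    return [tuple(poly_eval(d, a, p) for d in derivs) for a in points]

p = 101                                  # prime, with characteristic > deg f
f = [3, 1, 4, 1, 5]                      # f(X) = 3 + X + 4X^2 + X^3 + 5X^4
print(derivative_encode(f, points=range(1, 8), m=2, p=p))   # k=5, n=7, m=2
```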

    Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data

    We provide formal definitions and efficient secure techniques for turning noisy information into keys usable for any cryptographic application and, in particular, for reliably and securely authenticating biometric data. Our techniques apply not just to biometric information, but to any keying material that, unlike traditional cryptographic keys, is (1) not reproducible precisely and (2) not distributed uniformly. We propose two primitives: a "fuzzy extractor" reliably extracts nearly uniform randomness R from its input; the extraction is error-tolerant in the sense that R will be the same even if the input changes, as long as it remains reasonably close to the original. Thus, R can be used as a key in a cryptographic application. A "secure sketch" produces public information about its input w that does not reveal w, and yet allows exact recovery of w given another value that is close to w. Thus, it can be used to reliably reproduce error-prone biometric inputs without incurring the security risk inherent in storing them. We define the primitives to be both formally secure and versatile, generalizing much prior work. In addition, we provide nearly optimal constructions of both primitives for various measures of "closeness" of input data, such as Hamming distance, edit distance, and set difference. Comment: 47 pp., 3 figures. Preliminary version in Eurocrypt 2004, Springer LNCS 3027, pp. 523-540. Differences from version 3: minor edits for grammar, clarity, and typos.
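    For the Hamming-distance case, the sketch below illustrates a code-offset-style secure sketch, one standard way to realize the recovery property described above: publish s = w XOR c for a random codeword c, and later decode w' XOR s back to c to recover w exactly from any close reading w'. A 3-fold repetition code stands in for a proper error-correcting code purely for readability; it is not a secure or efficient parameter choice, and this is a toy sketch rather than the paper's construction.

```python
# Code-offset secure sketch over bit vectors, using a toy 3-fold repetition code.
import secrets

REP = 3  # each message bit repeated REP times; corrects (REP-1)//2 flips per block

def rep_encode(bits):
    return [b for b in bits for _ in range(REP)]

def rep_decode(bits):
    return [int(sum(bits[i:i + REP]) > REP // 2) for i in range(0, len(bits), REP)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def sketch(w):
    msg = [secrets.randbelow(2) for _ in range(len(w) // REP)]
    return xor(w, rep_encode(msg))                    # public helper string s = w XOR c

def recover(w_prime, s):
    c_hat = rep_encode(rep_decode(xor(w_prime, s)))   # decode noisy offset back to c
    return xor(s, c_hat)                              # w = s XOR c

w = [1, 0, 1, 1, 0, 1, 0, 0, 1]          # "biometric" reading: 9 bits = 3 blocks
s = sketch(w)
w_noisy = w.copy()
w_noisy[4] ^= 1                          # one bit flips on re-reading
assert recover(w_noisy, s) == w
print("recovered the original reading exactly")
```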