
    Ramanujan Graphs in Polynomial Time

    The recent work by Marcus, Spielman and Srivastava proves the existence of bipartite Ramanujan (multi)graphs of all degrees and all sizes. However, that paper did not provide a polynomial time algorithm to actually compute such graphs. Here, we provide a polynomial time algorithm to compute certain expected characteristic polynomials related to this construction. This leads to a deterministic polynomial time algorithm to compute bipartite Ramanujan (multi)graphs of all degrees and all sizes.
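    Not the paper's construction, but a minimal sketch (assuming NumPy) of the property it targets: a d-regular bipartite graph is Ramanujan when every adjacency eigenvalue other than ±d lies in [-2√(d-1), 2√(d-1)].

```python
# Minimal check of the Ramanujan condition for a d-regular bipartite graph:
# all adjacency eigenvalues other than +/- d must lie in [-2*sqrt(d-1), 2*sqrt(d-1)].
# This only verifies the condition; it is not the construction from the paper.
import numpy as np

def is_bipartite_ramanujan(adj: np.ndarray, d: int, tol: float = 1e-9) -> bool:
    eigs = np.linalg.eigvalsh(adj)                      # adjacency spectrum (symmetric matrix)
    nontrivial = eigs[np.abs(np.abs(eigs) - d) > tol]   # drop the trivial eigenvalues +/- d
    return bool(np.all(np.abs(nontrivial) <= 2 * np.sqrt(d - 1) + tol))

# Example: the complete bipartite graph K_{d,d} is d-regular and Ramanujan.
d = 3
adj = np.block([[np.zeros((d, d)), np.ones((d, d))],
                [np.ones((d, d)), np.zeros((d, d))]])
print(is_bipartite_ramanujan(adj, d))  # True
```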

    Factorizing the Stochastic Galerkin System

    Recent work has explored solver strategies for the linear system of equations arising from a spectral Galerkin approximation of the solution of PDEs with parameterized (or stochastic) inputs. We consider the related problem of a matrix equation whose matrix and right-hand side depend on a set of parameters (e.g. a PDE with stochastic inputs semidiscretized in space) and examine the linear system arising from a similar Galerkin approximation of the solution. We derive a useful factorization of this system of equations, which yields bounds on the eigenvalues, clues to preconditioning, and a flexible implementation method for a wide array of problems. We complement this analysis with (i) a numerical study of preconditioners on a standard elliptic PDE test problem and (ii) a fluids application using existing CFD codes; the MATLAB codes used in the numerical studies are available online. Comment: 13 pages, 4 figures, 2 tables.
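    For context, stochastic Galerkin systems typically carry a Kronecker/matrix-equation structure. The sketch below uses small random stand-in matrices (not the factorization derived in the paper) to show the equivalence between the assembled linear system and its matrix-equation form.

```python
# Sketch of the Kronecker structure typical of stochastic Galerkin systems: the
# linear system  sum_k (G_k kron K_k) vec(U) = vec(F)  is equivalent to the matrix
# equation  sum_k K_k U G_k^T = F.  All matrices here are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
n, m, terms = 5, 4, 3                      # spatial dim, stochastic dim, number of terms

K = [rng.standard_normal((n, n)) for _ in range(terms)]   # "stiffness-like" factors
G = [rng.standard_normal((m, m)) for _ in range(terms)]   # "Galerkin/moment" factors
F = rng.standard_normal((n, m))

# Assemble and solve the Kronecker-structured system directly (fine at this size).
A = sum(np.kron(Gk, Kk) for Gk, Kk in zip(G, K))
u = np.linalg.solve(A, F.flatten(order="F"))   # column-major vec to match the kron identity
U = u.reshape((n, m), order="F")

# Verify the equivalent matrix-equation form.
residual = sum(Kk @ U @ Gk.T for Gk, Kk in zip(G, K)) - F
print(np.linalg.norm(residual))   # close to zero (round-off level)
```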

    The Random Oracle Methodology, Revisited

    We take a critical look at the relationship between the security of cryptographic schemes in the Random Oracle Model, and the security of the schemes that result from implementing the random oracle by so-called "cryptographic hash functions". The main result of this paper is a negative one: There exist signature and encryption schemes that are secure in the Random Oracle Model, but for which any implementation of the random oracle results in insecure schemes. In the process of devising the above schemes, we consider possible definitions for the notion of a "good implementation" of a random oracle, pointing out limitations and challenges. Comment: 31 pages.
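    As an illustration of what "implementing the random oracle" means in practice (this is not the paper's counterexample schemes), the sketch below contrasts a lazily sampled ideal oracle with a fixed, publicly computable hash function such as SHA-256.

```python
# Illustration only: the ideal object is a lazily sampled random function; an
# "implementation" replaces it with a fixed hash such as SHA-256.
import hashlib
import os

class LazyRandomOracle:
    """Ideal random oracle: a fresh uniform output is sampled for each new query."""
    def __init__(self, out_len: int = 32):
        self.table, self.out_len = {}, out_len

    def query(self, x: bytes) -> bytes:
        if x not in self.table:
            self.table[x] = os.urandom(self.out_len)
        return self.table[x]

def sha256_oracle(x: bytes) -> bytes:
    """A concrete instantiation of the oracle by a cryptographic hash function."""
    return hashlib.sha256(x).digest()

ro = LazyRandomOracle()
print(ro.query(b"m") == ro.query(b"m"))   # True: consistent on repeated queries
print(sha256_oracle(b"m").hex()[:16])     # fixed, publicly computable value
```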

    Noise-Tolerant Learning, the Parity Problem, and the Statistical Query Model

    We describe a slightly sub-exponential time algorithm for learning parity functions in the presence of random classification noise. This results in a polynomial-time algorithm for the case of parity functions that depend on only the first O(log n log log n) bits of input. This is the first known instance of an efficient noise-tolerant algorithm for a concept class that is provably not learnable in the Statistical Query model of Kearns. Thus, we demonstrate that the set of problems learnable in the statistical query model is a strict subset of those problems learnable in the presence of noise in the PAC model. In coding-theory terms, what we give is a poly(n)-time algorithm for decoding linear k by n codes in the presence of random noise for the case of k = c log n log log n for some c > 0. (The case of k = O(log n) is trivial since one can just individually check each of the 2^k possible messages and choose the one that yields the closest codeword.) A natural extension of the statistical query model is to allow queries about statistical properties that involve t-tuples of examples (as opposed to single examples). The second result of this paper is to show that any class of functions learnable (strongly or weakly) with t-wise queries for t = O(log n) is also weakly learnable with standard unary queries. Hence this natural extension to the statistical query model does not increase the set of weakly learnable functions.
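    The parenthetical remark about the trivial k = O(log n) case can be made concrete: the sketch below enumerates all 2^k messages of a random binary linear code and returns the nearest codeword under Hamming distance. The paper's slightly sub-exponential algorithm for larger k is not shown.

```python
# Brute-force nearest-codeword decoding for a k x n binary linear code, feasible
# when k = O(log n): try all 2^k messages and keep the closest codeword.
import itertools
import numpy as np

def brute_force_decode(G: np.ndarray, received: np.ndarray) -> np.ndarray:
    """G is a k x n generator matrix over GF(2); received is a noisy length-n word."""
    k, _ = G.shape
    best_msg, best_dist = None, np.inf
    for bits in itertools.product([0, 1], repeat=k):
        msg = np.array(bits, dtype=np.uint8)
        codeword = (msg @ G) % 2
        dist = int(np.sum(codeword != received))   # Hamming distance
        if dist < best_dist:
            best_msg, best_dist = msg, dist
    return best_msg

rng = np.random.default_rng(1)
k, n = 4, 30
G = rng.integers(0, 2, size=(k, n), dtype=np.uint8)
msg = rng.integers(0, 2, size=k, dtype=np.uint8)
noise = (rng.random(n) < 0.1).astype(np.uint8)       # random classification noise
received = ((msg @ G) % 2) ^ noise
print(np.array_equal(brute_force_decode(G, received), msg))   # True with high probability
```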

    Cryptography from tensor problems

    We describe a new proposal for a trap-door one-way function. The new proposal belongs to the "multivariate quadratic" family, but the trap-door is different from existing methods, and is simpler.
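    For orientation only, the sketch below shows the "easy" forward direction of a generic multivariate quadratic map over GF(2) with random placeholder coefficients; it is not the proposal's trap-door construction.

```python
# Generic member of the "multivariate quadratic" family: the public key is a system
# of quadratic polynomials over a small field (here GF(2)); evaluating it is easy,
# while inverting it without the trap-door is believed to be hard.
import numpy as np

rng = np.random.default_rng(2)
n_vars, n_polys = 8, 8
A = rng.integers(0, 2, size=(n_polys, n_vars, n_vars), dtype=np.uint8)  # quadratic forms
b = rng.integers(0, 2, size=(n_polys, n_vars), dtype=np.uint8)          # linear parts

def mq_evaluate(x: np.ndarray) -> np.ndarray:
    """Evaluate the public quadratic map y_i = x^T A_i x + b_i . x over GF(2)."""
    quad = np.array([x @ Ai @ x for Ai in A], dtype=np.uint8)
    lin = b @ x
    return (quad + lin) % 2

x = rng.integers(0, 2, size=n_vars, dtype=np.uint8)
print(mq_evaluate(x))   # the "easy" forward direction of the one-way function
```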

    Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data

    We provide formal definitions and efficient secure techniques for turning noisy information into keys usable for any cryptographic application and, in particular, for reliably and securely authenticating biometric data. Our techniques apply not just to biometric information, but to any keying material that, unlike traditional cryptographic keys, is (1) not reproducible precisely and (2) not distributed uniformly. We propose two primitives: a "fuzzy extractor" reliably extracts nearly uniform randomness R from its input; the extraction is error-tolerant in the sense that R will be the same even if the input changes, as long as it remains reasonably close to the original. Thus, R can be used as a key in a cryptographic application. A "secure sketch" produces public information about its input w that does not reveal w, and yet allows exact recovery of w given another value that is close to w. Thus, it can be used to reliably reproduce error-prone biometric inputs without incurring the security risk inherent in storing them. We define the primitives to be both formally secure and versatile, generalizing much prior work. In addition, we provide nearly optimal constructions of both primitives for various measures of "closeness" of input data, such as Hamming distance, edit distance, and set difference. Comment: 47 pp., 3 figures. Preliminary version in Eurocrypt 2004, Springer LNCS 3027, pp. 523-540. Differences from version 3: minor edits for grammar, clarity, and typos.
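    A toy sketch of a code-offset secure sketch for Hamming distance, in the spirit of the constructions described above: publish s = w XOR c for a random codeword c, and later decode w' XOR s to recover c and hence w exactly. A 3x repetition code stands in for a real error-correcting code; parameters are illustrative only.

```python
# Code-offset secure sketch over GF(2): the public value s = w ^ c hides w behind a
# random codeword c; a nearby reading w' lets us decode w' ^ s back to c and recover w.
import numpy as np

REP = 3                                   # each bit repeated REP times; corrects 1 flip per block

def encode(msg: np.ndarray) -> np.ndarray:
    return np.repeat(msg, REP)

def decode(word: np.ndarray) -> np.ndarray:
    blocks = word.reshape(-1, REP)
    return (blocks.sum(axis=1) > REP // 2).astype(np.uint8)   # majority vote per block

def sketch(w: np.ndarray, rng) -> np.ndarray:
    c = encode(rng.integers(0, 2, size=len(w) // REP, dtype=np.uint8))  # random codeword
    return w ^ c                                                        # public sketch s

def recover(w_noisy: np.ndarray, s: np.ndarray) -> np.ndarray:
    c = encode(decode(w_noisy ^ s))   # decode back to the nearest codeword
    return s ^ c                      # reproduce the original w exactly

rng = np.random.default_rng(3)
w = rng.integers(0, 2, size=12, dtype=np.uint8)      # e.g. a quantized biometric reading
s = sketch(w, rng)
w_noisy = w.copy()
w_noisy[4] ^= 1                                      # one bit flipped on re-reading
print(np.array_equal(recover(w_noisy, s), w))        # True
```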