285 research outputs found

    Double Character Sums over Subgroups and Intervals

    We estimate the double sums $$S_\chi(a, I, G) = \sum_{x \in I} \sum_{\lambda \in G} \chi(x + a\lambda), \qquad 1\le a < p-1,$$ with a multiplicative character $\chi$ modulo $p$, where $I = \{1, \ldots, H\}$ and $G$ is a subgroup of order $T$ of the multiplicative group of the finite field of $p$ elements. A nontrivial upper bound on $S_\chi(a, I, G)$ can be derived from the Burgess bound if $H \ge p^{1/4+\varepsilon}$ and from some standard elementary arguments if $T \ge p^{1/2+\varepsilon}$, where $\varepsilon > 0$ is arbitrary. We obtain a nontrivial estimate in a wider range of parameters $H$ and $T$. We also estimate the double sums $$T_\chi(a, G) = \sum_{\lambda, \mu \in G} \chi(a + \lambda + \mu), \qquad 1\le a < p-1,$$ and give an application to primitive roots modulo $p$ with 3 non-zero binary digits.
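
    The sums above are easy to evaluate directly when the parameters are small. The following brute-force sketch (an illustration only, not the paper's method; the prime p = 1009, interval length H, subgroup order T, shift a, and the use of the quadratic character for χ are all arbitrary example assumptions) computes S_χ(a, I, G) and compares it with the trivial bound HT:

    ```python
    # Brute-force evaluation of the double character sum for a small prime p,
    # taking chi to be the quadratic (Legendre) character. Illustrative parameters only.

    def chi(x, p):
        """Quadratic character modulo p, with chi(0) = 0."""
        x %= p
        if x == 0:
            return 0
        return 1 if pow(x, (p - 1) // 2, p) == 1 else -1

    def element_of_order(T, p):
        """Find an element of exact multiplicative order T modulo p (assumes T divides p-1)."""
        for h in range(2, p):
            if pow(h, T, p) == 1 and all(pow(h, d, p) != 1 for d in range(1, T) if T % d == 0):
                return h
        raise ValueError("no element of order T")

    def S(a, H, G, p):
        """S_chi(a, I, G) = sum over x in {1..H} and lambda in G of chi(x + a*lambda)."""
        return sum(chi(x + a * lam, p) for x in range(1, H + 1) for lam in G)

    p, H, T, a = 1009, 50, 16, 3                 # 16 divides p - 1 = 1008
    G = {pow(element_of_order(T, p), k, p) for k in range(T)}
    print(abs(S(a, H, G, p)), "<<", H * T)       # visible cancellation against the trivial bound HT
    ```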

    Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer

    A digital computer is generally believed to be an efficient universal computing device; that is, it is believed able to simulate any physical computing device with an increase in computation time of at most a polynomial factor. This may not be true when quantum mechanics is taken into consideration. This paper considers factoring integers and finding discrete logarithms, two problems which are generally thought to be hard on a classical computer and have been used as the basis of several proposed cryptosystems. Efficient randomized algorithms are given for these two problems on a hypothetical quantum computer. These algorithms take a number of steps polynomial in the input size, e.g., the number of digits of the integer to be factored. Comment: 28 pages, LaTeX. This is an expanded version of a paper that appeared in the Proceedings of the 35th Annual Symposium on Foundations of Computer Science, Santa Fe, NM, Nov. 20--22, 1994. Minor revisions made January, 1996.
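
    As a hedged illustration of the classical reduction underlying the factoring algorithm: finding a nontrivial factor of N reduces to finding the multiplicative order r of a random base a modulo N. In the toy sketch below the order is computed by brute force, standing in for the quantum period-finding step, so it only works for very small N; the function names are illustrative.

    ```python
    # Classical half of the factoring-to-order-finding reduction. The order() step is the
    # part a quantum computer does efficiently; here it is brute-forced, so keep N tiny.
    import math
    import random

    def order(a, N):
        """Smallest r > 0 with a^r = 1 (mod N); assumes gcd(a, N) = 1."""
        r, x = 1, a % N
        while x != 1:
            x = (x * a) % N
            r += 1
        return r

    def factor_via_order_finding(N, tries=20):
        for _ in range(tries):
            a = random.randrange(2, N)
            g = math.gcd(a, N)
            if g > 1:                              # lucky draw: a already shares a factor with N
                return g
            r = order(a, N)
            if r % 2 == 0:
                f = math.gcd(pow(a, r // 2, N) - 1, N)
                if 1 < f < N:                      # a^(r/2) is a nontrivial square root of 1 mod N
                    return f
        return None

    print(factor_via_order_finding(15))            # e.g. 3 or 5
    ```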

    Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data

    We provide formal definitions and efficient secure techniques for turning noisy information into keys usable for any cryptographic application, and, in particular, for reliably and securely authenticating biometric data. Our techniques apply not just to biometric information, but to any keying material that, unlike traditional cryptographic keys, is (1) not reproducible precisely and (2) not distributed uniformly. We propose two primitives: a "fuzzy extractor" reliably extracts nearly uniform randomness R from its input; the extraction is error-tolerant in the sense that R will be the same even if the input changes, as long as it remains reasonably close to the original. Thus, R can be used as a key in a cryptographic application. A "secure sketch" produces public information about its input w that does not reveal w, and yet allows exact recovery of w given another value that is close to w. Thus, it can be used to reliably reproduce error-prone biometric inputs without incurring the security risk inherent in storing them. We define the primitives to be both formally secure and versatile, generalizing much prior work. In addition, we provide nearly optimal constructions of both primitives for various measures of "closeness" of input data, such as Hamming distance, edit distance, and set difference. Comment: 47 pp., 3 figures. Prelim. version in Eurocrypt 2004, Springer LNCS 3027, pp. 523-540. Differences from version 3: minor edits for grammar, clarity, and typos.
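
    A toy example may help fix ideas. The code-offset construction over Hamming distance is one standard way to build a secure sketch; the snippet below instantiates it with a 3-fold repetition code (an illustrative choice, not the paper's exact scheme), publishes s = w XOR c for a random codeword c, and recovers w from a noisy reading by majority-vote decoding.

    ```python
    # Toy code-offset secure sketch over Hamming distance with a 3x repetition code.
    # Illustrative only: a real construction uses a proper error-correcting code.
    import secrets

    REP = 3  # each data bit is repeated 3 times, so one bit-flip per block is corrected

    def encode(bits):                        # repetition-code encoder
        return [b for b in bits for _ in range(REP)]

    def decode(bits):                        # majority-vote decoder
        return [1 if sum(bits[i:i + REP]) * 2 > REP else 0 for i in range(0, len(bits), REP)]

    def sketch(w):
        """SS(w): publish s = w XOR c for a random codeword c."""
        c = encode([secrets.randbelow(2) for _ in range(len(w) // REP)])
        return [wi ^ ci for wi, ci in zip(w, c)]

    def recover(w_noisy, s):
        """Rec(w', s): decode w' XOR s back to the codeword c, then return c XOR s = w."""
        c = encode(decode([wi ^ si for wi, si in zip(w_noisy, s)]))
        return [ci ^ si for ci, si in zip(c, s)]

    w = [1, 1, 1, 0, 0, 0, 1, 0, 1]          # "biometric" reading, length a multiple of REP
    s = sketch(w)
    w_noisy = w.copy()
    w_noisy[4] ^= 1                          # one bit flipped on re-reading
    assert recover(w_noisy, s) == w          # exact recovery despite the noise
    ```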

    On Karatsuba's Problem Concerning the Divisor Function $\tau(n)$

    We study the asymptotic behavior of the sum $\sum_{n\le x}\frac{\tau(n)}{\tau(n+a)}$. Here $\tau(n)$ denotes the number of divisors of $n$ and $a\ge 1$ is a fixed integer. Comment: 32 pages.
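
    For a concrete feel for the quantity being studied, the short sketch below (illustrative only; the cutoffs and the choice a = 1 are arbitrary) sieves the divisor counts τ(n) and prints the partial sums Σ_{n≤x} τ(n)/τ(n+a) for a few values of x:

    ```python
    # Numerical look at the partial sums of tau(n)/tau(n+a), using a divisor-count sieve.

    def divisor_counts(limit):
        """tau(n) for 1 <= n <= limit: add 1 to every multiple of each d."""
        tau = [0] * (limit + 1)
        for d in range(1, limit + 1):
            for m in range(d, limit + 1, d):
                tau[m] += 1
        return tau

    a = 1
    x_max = 10_000
    tau = divisor_counts(x_max + a)
    for x in (100, 1_000, 10_000):
        s = sum(tau[n] / tau[n + a] for n in range(1, x + 1))
        print(x, round(s, 2), round(s / x, 4))   # partial sum and its average per term
    ```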

    An efficient algorithm for accelerating the convergence of oscillatory series, useful for computing the polylogarithm and Hurwitz zeta functions

    This paper sketches a technique for improving the rate of convergence of a general oscillatory sequence, and then applies this series acceleration algorithm to the polylogarithm and the Hurwitz zeta function. As such, it may be taken as an extension of the techniques given by Borwein's "An efficient algorithm for computing the Riemann zeta function", to more general series. The algorithm provides a rapid means of evaluating Li_s(z) for general values of complex s and the region of complex z values given by |z^2/(z-1)|<4. Alternatively, the Hurwitz zeta can be very rapidly evaluated by means of an Euler-Maclaurin series. The polylogarithm and the Hurwitz zeta are related, in that two evaluations of the one can be used to obtain a value of the other; thus, either algorithm can be used to evaluate either function. The Euler-Maclaurin series is a clear performance winner for the Hurwitz zeta, while the Borwein algorithm is superior for evaluating the polylogarithm in the kidney-shaped region. Both algorithms are superior to the simple Taylor's series or direct summation. The primary, concrete result of this paper is an algorithm that allows the exploration of the Hurwitz zeta in the critical strip, where fast algorithms are otherwise unavailable. A discussion of the monodromy group of the polylogarithm is included. Comment: 37 pages, 6 graphs, 14 full-color phase plots. v3: Added discussion of a fast Hurwitz algorithm; expanded development of the monodromy. v4: Correction and clarification of monodromy.
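
    As a point of reference for the Hurwitz-zeta route, the snippet below is a minimal, generic Euler-Maclaurin evaluation of ζ(s, a) (a textbook sketch with my own parameter choices, not the paper's tuned implementation); it uses a handful of Bernoulli-number correction terms and is checked against ζ(2, 1) = π²/6.

    ```python
    # Minimal Euler-Maclaurin evaluation of the Hurwitz zeta function, valid for s != 1.
    # N and the number of Bernoulli correction terms control the accuracy.
    import math

    BERNOULLI = [1/6, -1/30, 1/42, -1/30, 5/66]        # B_2, B_4, ..., B_10

    def hurwitz_zeta(s, a, N=25):
        total = sum((a + k) ** (-s) for k in range(N))  # direct head of the series
        total += (a + N) ** (1 - s) / (s - 1)           # integral tail
        total += (a + N) ** (-s) / 2                    # boundary correction
        rising = 1.0                                    # (s)(s+1)...(s+2j-2), updated incrementally
        for j, B in enumerate(BERNOULLI, start=1):
            rising *= s if j == 1 else (s + 2*j - 3) * (s + 2*j - 2)
            total += B / math.factorial(2 * j) * rising * (a + N) ** (-s - 2*j + 1)
        return total

    print(hurwitz_zeta(2, 1), math.pi ** 2 / 6)         # zeta(2, 1) = pi^2/6
    print(hurwitz_zeta(0.5 + 14.1347j, 1))              # usable in the critical strip as well
    ```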

    Generalized Elliptic Integrals and the Legendre M-function

    We study monotonicity and convexity properties of functions arising in the theory of elliptic integrals, and in particular in the case of a Schwarz-Christoffel conformal mapping from a half-plane to a trapezoid. We obtain sharp monotonicity and convexity results for combinations of these functions, as well as functional inequalities and a linearization property. Comment: 28 pages.
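
    The generalized integrals studied here extend the classical complete elliptic integrals. As a small, self-contained reference point (a standard classical identity, not a construction from the paper), K(r) can be computed from the arithmetic-geometric mean:

    ```python
    # Classical complete elliptic integral of the first kind via the AGM:
    # K(r) = pi / (2 * AGM(1, sqrt(1 - r^2))).
    import math

    def agm(x, y, tol=1e-15):
        """Arithmetic-geometric mean of x, y > 0."""
        while abs(x - y) > tol * x:
            x, y = (x + y) / 2, math.sqrt(x * y)
        return x

    def K(r):
        return math.pi / (2 * agm(1.0, math.sqrt(1.0 - r * r)))

    print(K(0.0), math.pi / 2)   # K(0) = pi/2
    print(K(0.8))                # grows toward infinity as r -> 1
    ```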

    On the Solvability of Bilinear Equations in Finite Fields


    How Fast Can We Multiply Large Integers on an Actual Computer?

    We provide two complexity measures that can be used to measure the running time of algorithms to compute multiplications of long integers. The random access machine with unit or logarithmic cost is not adequate for measuring the complexity of a task like multiplication of long integers. The Turing machine is more useful here, but fails to take into account the multiplication instruction for short integers, which is available on physical computing devices. An interesting outcome is that the proposed refined complexity measures do not rank the well-known multiplication algorithms the same way as the Turing machine model. Comment: To appear in the proceedings of LATIN 2014, Springer LNCS 8392.
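
    For context, the sketch below is a plain Karatsuba multiplication of non-negative integers, one of the well-known algorithms whose relative ranking depends on the chosen cost model; it is an illustrative implementation (the threshold and names are arbitrary), not one taken from the paper.

    ```python
    # Karatsuba multiplication: split each operand into high and low halves and use
    # three recursive multiplications instead of four. Illustrative, not tuned.

    def karatsuba(x, y):
        if x < 1024 or y < 1024:                  # small operands: use the built-in multiply
            return x * y
        n = max(x.bit_length(), y.bit_length()) // 2
        hi_x, lo_x = x >> n, x & ((1 << n) - 1)
        hi_y, lo_y = y >> n, y & ((1 << n) - 1)
        a = karatsuba(hi_x, hi_y)
        b = karatsuba(lo_x, lo_y)
        c = karatsuba(hi_x + lo_x, hi_y + lo_y) - a - b
        return (a << (2 * n)) + (c << n) + b

    x, y = 3 ** 200, 7 ** 150
    assert karatsuba(x, y) == x * y
    print("ok")
    ```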