
    Fast Numerical Multivariate Multipoint Evaluation

    We design nearly-linear time numerical algorithms for the problem of multivariate multipoint evaluation over the fields of rational, real and complex numbers. We consider both exact and approximate versions of the algorithm. The input to the algorithms consists of (1) the coefficients of an $m$-variate polynomial $f$ with degree $d$ in each variable, and (2) points $a_1, \ldots, a_N$, each of whose coordinates has value bounded by one and bit-complexity $s$.

    * Approximate version: given additionally an accuracy parameter $t$, the algorithm computes rational numbers $\beta_1, \ldots, \beta_N$ such that $|f(a_i) - \beta_i| \leq \frac{1}{2^t}$ for all $i$, and has a running time of $((Nm + d^m)(s + t))^{1 + o(1)}$ for all $m$ and all sufficiently large $d$.
    * Exact version (over the rationals): given additionally a bound $c$ on the bit-complexity of all evaluations, the algorithm computes the rational numbers $f(a_1), \ldots, f(a_N)$ in time $((Nm + d^m)(s + c))^{1 + o(1)}$ for all $m$ and all sufficiently large $d$.

    Prior to this work, a nearly-linear time algorithm for multivariate multipoint evaluation (exact or approximate) over any infinite field appears to have been known only for the case of univariate polynomials, discovered in a recent work of Moroz (FOCS 2021). In this work, we extend this result from the univariate to the multivariate setting. However, our algorithm is based on ideas that seem conceptually different from those of Moroz (FOCS 2021), and it crucially relies on a recent algorithm of Bhargava, Ghosh, Guo, Kumar & Umans (FOCS 2022) for multivariate multipoint evaluation over finite fields, as well as known efficient algorithms for rational number reconstruction and fast Chinese remaindering in computational number theory.
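    For orientation, here is a minimal naive Python sketch of the problem being solved (not the paper's algorithm): evaluating a dense $m$-variate polynomial, given by its coefficients, exactly at $N$ rational points. This baseline costs roughly $N \cdot d^m$ operations; the paper's contribution is bringing the total cost down to nearly linear in $Nm + d^m$. The representation and names below are illustrative assumptions.

```python
from fractions import Fraction

def evaluate_naive(coeffs, points):
    """Naively evaluate an m-variate polynomial at many points.

    coeffs: dict mapping exponent tuples (e_1, ..., e_m) to Fraction
            coefficients, with 0 <= e_j <= d in each variable.
    points: list of m-tuples of Fractions (the points a_i).
    Returns the exact rational values [f(a_1), ..., f(a_N)].
    """
    results = []
    for pt in points:
        val = Fraction(0)
        for exps, c in coeffs.items():
            term = c
            for x, e in zip(pt, exps):
                term *= x ** e
            val += term
        results.append(val)
    return results

# Example: f(x, y) = 1 + 2*x*y + y^2, evaluated at two points.
f = {(0, 0): Fraction(1), (1, 1): Fraction(2), (0, 2): Fraction(1)}
pts = [(Fraction(1, 2), Fraction(1, 3)), (Fraction(0), Fraction(3, 4))]
print(evaluate_naive(f, pts))   # [Fraction(13, 9), Fraction(25, 16)]
```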

    The complexity of class polynomial computation via floating point approximations

    We analyse the complexity of computing class polynomials, which are an important ingredient in CM constructions of elliptic curves, via complex floating point approximations of their roots. The heart of the algorithm is the evaluation of modular functions in several arguments. The fastest of the presented approaches uses a technique devised by Dupont to evaluate modular functions by Newton iterations on an expression involving the arithmetic-geometric mean. It runs in time $O(|D| \log^5 |D| \log\log |D|) = O(|D|^{1 + \epsilon}) = O(h^{2 + \epsilon})$ for any $\epsilon > 0$, where $D$ is the CM discriminant and $h$ is the degree of the class polynomial. Another fast algorithm uses multipoint evaluation techniques known from symbolic computation; its asymptotic complexity is worse by a factor of $\log |D|$. Up to logarithmic factors, this running time matches the size of the constructed polynomials. The estimate also relies on a new result concerning the complexity of enumerating the class group of an imaginary-quadratic order and on a rigorously proven upper bound for the height of class polynomials.
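    Not part of the abstract, but as context: a minimal sketch of the arithmetic-geometric mean iteration underlying Dupont's Newton-based evaluation of modular functions. Its quadratic convergence (the number of correct digits roughly doubles per step) is what enables the quasi-linear running time. This is only the basic AGM, not Dupont's full algorithm.

```python
import cmath

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean of two positive reals (or suitable
    complex numbers): iterate (a, b) -> ((a+b)/2, sqrt(a*b)) until the
    two values agree to within a relative tolerance tol."""
    while abs(a - b) > tol * abs(a):
        a, b = (a + b) / 2, cmath.sqrt(a * b)
    return a

# Example: M(1, sqrt(2)) ~ 1.19814..., related to the lemniscate constant.
print(agm(1.0, 2 ** 0.5))
```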

    Counting Short Vector Pairs by Inner Product and Relations to the Permanent


    Harnessing the power of GPUs for problems in real algebraic geometry

    This thesis presents novel parallel algorithms that leverage the power of GPUs (Graphics Processing Units) for exact computations with polynomials having large integer coefficients. The significance of such computations, especially in real algebraic geometry, is hard to overstate. On massively parallel architectures such as GPUs, the degree of data-level parallelism exposed by an algorithm is the main performance factor. We attain high efficiency through the use of structured matrix theory to realize the relevant polynomial operations on the graphics hardware. A detailed complexity analysis, assuming the PRAM model, also confirms that our approach achieves substantially better parallel complexity than the classical algorithms used for symbolic computation. Aside from the theoretical considerations, a large portion of this work is dedicated to actual algorithm development and optimization techniques, paying close attention to the specifics of the graphics hardware. As a byproduct of this work, we have developed high-throughput modular arithmetic which we expect to be useful for other GPU applications, in particular public-key cryptography. We further discuss algorithms for the solution of systems of polynomial equations, topology computation of algebraic curves, and curve visualization, all of which can profit to the full extent from GPU acceleration. Extensive benchmarking on real data demonstrates the superiority of our algorithms over several state-of-the-art approaches available to date.
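    As a rough illustration of the modular-arithmetic approach that exact GPU computations of this kind typically rely on (a generic sketch, not the thesis's actual implementation): reduce the computation modulo several word-sized primes, work independently in each residue channel, and reconstruct the exact integer result with the Chinese remainder theorem. On a GPU, each residue channel would run in parallel; the moduli and helper names below are illustrative.

```python
from math import prod

# Small illustrative moduli; a real implementation would use many
# machine-word-sized primes, one residue channel per GPU thread.
PRIMES = [10007, 10009, 10037, 10039]

def crt(residues, moduli):
    """Reconstruct x mod prod(moduli) from its residues (Chinese remainder)."""
    M = prod(moduli)
    x = 0
    for r, p in zip(residues, moduli):
        Mp = M // p
        x += r * Mp * pow(Mp, -1, p)   # pow(..., -1, p) is the modular inverse
    return x % M

def exact_product(a, b):
    """Multiply two non-negative integers via independent modular channels."""
    residues = [(a % p) * (b % p) % p for p in PRIMES]
    return crt(residues, PRIMES)

a, b = 123456, 789012
assert exact_product(a, b) == a * b   # valid while the result stays below prod(PRIMES)
print(exact_product(a, b))
```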

    Faster Geometric Algorithms via Dynamic Determinant Computation

    The computation of determinants or their signs is the core procedure in many important geometric algorithms, such as convex hull, volume and point location computations. As the dimension of the computation space grows, a higher percentage of the total computation time is consumed by these computations. In this paper we study the sequences of determinants that appear in geometric algorithms. The computation of a single determinant is accelerated by using information from the previous computations in that sequence. We propose two dynamic determinant algorithms with quadratic arithmetic complexity when employed in convex hull and volume computations, and with linear arithmetic complexity when used in point location problems. We implement the proposed algorithms and perform an extensive experimental analysis. On the one hand, our analysis serves as a performance study of state-of-the-art determinant algorithms and implementations. On the other hand, we demonstrate the superiority of our methods over state-of-the-art implementations of determinant and geometric algorithms. Our experimental results include 20- and 78-fold speed-ups in volume and point location computations in dimensions 6 and 11, respectively.
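    One standard way to update a determinant after a single-column change, via the matrix determinant lemma and the Sherman-Morrison formula, is sketched below. It is in this spirit (though not necessarily the paper's exact update scheme) that a sequence of related determinants can be maintained with quadratic, rather than cubic, arithmetic work per step.

```python
import numpy as np

def replace_column(A, A_inv, det_A, j, u):
    """Replace column j of A by the vector u, updating det and inverse.

    Uses the matrix determinant lemma and Sherman-Morrison, so one
    update costs O(n^2) arithmetic instead of recomputing from scratch.
    Returns (A_new, A_new_inv, det_new).
    """
    v = u - A[:, j]                    # rank-one change: A_new = A + v e_j^T
    w = A_inv @ v
    factor = 1.0 + w[j]                # 1 + e_j^T A^{-1} v
    det_new = det_A * factor
    A_new = A.copy()
    A_new[:, j] = u
    # Sherman-Morrison: (A + v e_j^T)^{-1} = A^{-1} - (A^{-1} v)(e_j^T A^{-1}) / factor
    A_new_inv = A_inv - np.outer(w, A_inv[j, :]) / factor
    return A_new, A_new_inv, det_new

# Example: maintain the determinant while swapping one column of a 4x4 matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A_inv, det_A = np.linalg.inv(A), np.linalg.det(A)
u = rng.standard_normal(4)
_, _, det_new = replace_column(A, A_inv, det_A, 2, u)
B = A.copy(); B[:, 2] = u
assert np.isclose(det_new, np.linalg.det(B))
```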

    Sparse Polynomial Interpolation and Testing

    Interpolation is the process of learning an unknown polynomial f from some set of its evaluations. We consider the interpolation of a sparse polynomial, i.e., where f is composed of a small, bounded number of terms. Sparse interpolation dates back to work in the late 18th century by the French mathematician Gaspard de Prony, and was revitalized in the 1980s due to advancements by Ben-Or and Tiwari, Blahut, and Zippel, amongst others. Sparse interpolation has applications to learning theory, signal processing, error-correcting codes, and symbolic computation.

    Closely related to sparse interpolation are two decision problems. Sparse polynomial identity testing is the problem of testing whether a sparse polynomial f is zero from its evaluations. Sparsity testing is the problem of testing whether f is in fact sparse.

    We present effective probabilistic algebraic algorithms for the interpolation and testing of sparse polynomials. These algorithms assume black-box evaluation access, whereby the algorithm may specify the evaluation points. We measure algorithmic costs with respect to the number and types of queries to a black-box oracle.

    Building on previous work by Garg–Schost and Giesbrecht–Roche, we present two methods for the interpolation of a sparse polynomial modelled by a straight-line program (SLP): a sequence of arithmetic instructions. We present probabilistic algorithms for the sparse interpolation of an SLP, with cost softly linear in the sparsity of the interpolant: its number of nonzero terms. As an application of these techniques, we give a multiplication algorithm for sparse polynomials, with cost that is sensitive to the size of the output.

    Multivariate interpolation reduces to univariate interpolation by way of Kronecker substitution, which maps an n-variate polynomial f to a univariate image with degree exponential in n. We present an alternative method of randomized Kronecker substitutions, whereby one can more efficiently reconstruct a sparse interpolant f from multiple univariate images of considerably reduced degree.

    In error-correcting interpolation, we suppose that some bounded number of evaluations may be erroneous. We present an algorithm for error-correcting interpolation of polynomials that are sparse under the Chebyshev basis. In addition, we give a method which reduces sparse Chebyshev-basis interpolation to monomial-basis interpolation.

    Lastly, we study the class of Boolean functions that admit a sparse Fourier representation. We give an analysis of Levin's Sparse Fourier Transform algorithm for such functions. Moreover, we give a new algorithm for testing whether a Boolean function is Fourier-sparse. This method reduces sparsity testing to homomorphism testing, which in turn may be solved by the Blum–Luby–Rubinfeld linearity test.
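    To illustrate the standard Kronecker substitution mentioned above (not the thesis's randomized variant): an n-variate term x_1^{e_1}···x_n^{e_n} with per-variable degree below D is mapped to the univariate term x^{e_1 + e_2 D + e_3 D^2 + ...}, so distinct multivariate terms remain distinct while the univariate degree grows exponentially in n. A small sketch under those assumptions:

```python
def kronecker(coeffs, D):
    """Map a sparse n-variate polynomial to a univariate one.

    coeffs: dict from exponent tuples (e_1, ..., e_n), with 0 <= e_i < D,
            to coefficients.
    Returns a dict from univariate exponents to coefficients, using the
    base-D encoding e_1 + e_2*D + e_3*D**2 + ...  The encoding is
    injective, which is why univariate interpolation of the image
    recovers the multivariate polynomial.
    """
    out = {}
    for exps, c in coeffs.items():
        e = sum(ei * D ** i for i, ei in enumerate(exps))
        out[e] = out.get(e, 0) + c
    return out

# Example: f(x, y, z) = 3*x^2*y + 5*z^4, with D = 8.
f = {(2, 1, 0): 3, (0, 0, 4): 5}
print(kronecker(f, 8))   # {10: 3, 256: 5}
```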

    Simultaneous Diagonalization of Incomplete Matrices and Applications

    We consider the problem of recovering the entries of diagonal matrices $\{U_a\}_a$ for $a = 1, \ldots, t$ from multiple "incomplete" samples $\{W_a\}_a$ of the form $W_a = P U_a Q$, where $P$ and $Q$ are unknown matrices of low rank. We devise practical algorithms for this problem depending on the ranks of $P$ and $Q$. This problem finds its motivation in cryptanalysis: we show how to significantly improve previous algorithms for solving the approximate common divisor problem and breaking CLT13 cryptographic multilinear maps.
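    As a toy illustration of the core linear-algebra idea in the simplest setting, where $P$ and $Q$ are square and invertible (the paper treats the harder low-rank case): $W_b^{-1} W_a = Q^{-1} U_b^{-1} U_a Q$ is a similarity transform of a diagonal matrix, so its eigenvalues are the ratios of the diagonal entries of $U_a$ and $U_b$. The sketch below is only this simplified special case.

```python
import numpy as np

rng = np.random.default_rng(1)
n, t = 5, 3

# Hidden data: square invertible P, Q and diagonal matrices U_a.
P, Q = rng.standard_normal((n, n)), rng.standard_normal((n, n))
U = [np.diag(rng.uniform(1.0, 2.0, size=n)) for _ in range(t)]
W = [P @ Ua @ Q for Ua in U]            # the observed samples

# Eigenvalues of W_1^{-1} W_0 equal the ratios diag(U_0)/diag(U_1),
# since W_1^{-1} W_0 = Q^{-1} U_1^{-1} U_0 Q.
ratios = np.linalg.eigvals(np.linalg.solve(W[1], W[0]))
expected = np.diag(U[0]) / np.diag(U[1])
print(np.allclose(sorted(ratios.real), sorted(expected)))   # True
```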