
    The complexity of class polynomial computation via floating point approximations

    We analyse the complexity of computing class polynomials, which are an important ingredient for CM constructions of elliptic curves, via complex floating point approximations of their roots. The heart of the algorithm is the evaluation of modular functions in several arguments. The fastest of the presented approaches uses a technique devised by Dupont to evaluate modular functions by Newton iterations on an expression involving the arithmetic-geometric mean. It runs in time $O(|D| \log^5 |D| \log \log |D|) = O(|D|^{1 + \epsilon}) = O(h^{2 + \epsilon})$ for any $\epsilon > 0$, where $D$ is the CM discriminant and $h$ is the degree of the class polynomial. Another fast algorithm uses multipoint evaluation techniques known from symbolic computation; its asymptotic complexity is worse by a factor of $\log |D|$. Up to logarithmic factors, this running time matches the size of the constructed polynomials. The estimate also relies on a new result concerning the complexity of enumerating the class group of an imaginary-quadratic order and on a rigorously proven upper bound for the height of class polynomials.
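    As a concrete illustration of the primitive at the heart of Dupont's technique, here is a minimal Python sketch of the complex arithmetic-geometric mean (AGM) iteration, with the standard "good" square-root branch choice. This is purely illustrative: the paper's algorithm couples such AGM evaluations with Newton iterations at a working precision growing with $|D|$, none of which is reproduced here; fixed-precision floats are used only for brevity.

```python
# Minimal sketch: the complex arithmetic-geometric mean (AGM), the primitive
# underlying Dupont's fast evaluation of modular functions. Convergence is
# quadratic: the number of correct digits roughly doubles per iteration.
import cmath

def agm(a: complex, b: complex, tol: float = 1e-15) -> complex:
    """AGM of a and b, choosing the square-root branch closer to the
    arithmetic mean (the standard 'good' choice for the complex AGM)."""
    while abs(a - b) > tol * abs(a):
        am = (a + b) / 2            # arithmetic mean
        gm = cmath.sqrt(a * b)      # geometric mean, principal branch
        if abs(am - gm) > abs(am + gm):
            gm = -gm                # switch to the branch nearer am
        a, b = am, gm
    return a

# Example: M(1, sqrt(2)) = 1.19814... (Gauss's lemniscatic constant pi/omega)
print(agm(1.0, 2 ** 0.5))
```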

    PTHash: Revisiting FCH Minimal Perfect Hashing

    Given a set S of n distinct keys, a function f that bijectively maps the keys of S into the range {0, ..., n-1} is called a minimal perfect hash function for S. Algorithms that find such functions when n is large and retain constant evaluation time are of practical interest; for instance, search engines and databases typically use minimal perfect hash functions to quickly assign identifiers to static sets of variable-length keys such as strings. The challenge is to design an algorithm which is efficient in three different aspects: time to find f (construction time), time to evaluate f on a key of S (lookup time), and space of representation for f. Several algorithms have been proposed to trade off between these aspects. In 1992, Fox, Chen, and Heath (FCH) presented an algorithm at SIGIR providing very fast lookup evaluation. However, the approach received little attention because of its large construction time and higher space consumption compared to other subsequent techniques. Almost thirty years later we revisit their framework and present an improved algorithm that scales well to large sets and reduces space consumption, without compromising the lookup time. We conduct an extensive experimental assessment and show that the algorithm finds functions that are competitive in space with state-of-the-art techniques and provide 2-4x better lookup time.
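    To make the FCH-style "bucket the keys, then search a pilot per bucket" construction concrete, here is a toy Python sketch. All names, the bucket count, and the hash construction are hypothetical; nothing here reflects PTHash's actual encoding or engineering, only the high-level idea.

```python
# Toy sketch of the bucket-and-pilot idea: keys are split into buckets, and
# each bucket (largest first) searches for a "pilot" seed under which all of
# its keys land in still-free slots of a table of size n. A lookup then only
# needs the pilot of the key's bucket, giving constant evaluation time.
import hashlib

def h(key: str, seed: int, mod: int) -> int:
    d = hashlib.blake2b(key.encode(), key=seed.to_bytes(8, "little")).digest()
    return int.from_bytes(d[:8], "little") % mod

def build(keys, num_buckets=None):
    n = len(keys)
    m = num_buckets or max(1, n // 3)                 # hypothetical choice
    buckets = [[] for _ in range(m)]
    for k in keys:
        buckets[h(k, 0, m)].append(k)
    taken, pilots = [False] * n, [0] * m
    for b in sorted(range(m), key=lambda b: -len(buckets[b])):  # large first
        seed = 1
        while True:                                   # search a working pilot
            pos = {h(k, seed, n) for k in buckets[b]}
            if len(pos) == len(buckets[b]) and not any(taken[p] for p in pos):
                for p in pos:
                    taken[p] = True
                pilots[b] = seed
                break
            seed += 1
    return pilots, m, n

def lookup(key, pilots, m, n):
    return h(key, pilots[h(key, 0, m)], n)

keys = [f"key{i}" for i in range(100)]
pilots, m, n = build(keys)
assert sorted(lookup(k, pilots, m, n) for k in keys) == list(range(n))
```

    Processing large buckets first keeps the expensive searches early, while the table is still mostly empty; the space/lookup trade-off then hinges on how compactly the pilots are encoded.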

    A (Slightly Less Brutal) Method for Numerically Evaluating Structure Functions

    A fast numerical algorithm for the evolution of parton distributions in x space is described. The method is close in spirit to 'brute force' techniques: the necessary integrals are performed by summing the approximate contributions from small steps of the integration region. Because it is a numerical evaluation, it shares with brute-force numerical integration the advantage that no restrictions are placed on the functional form of the distributions to be evolved. However, an improvement in the approximation technique results in a significant reduction in the number of integration steps and a savings in time on the order of a factor of 350. The method has been implemented for the structure functions F_2 and g_1 at next-to-leading order.
    Comment: 21 pages, LaTeX, 11 epsf figures included
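    To illustrate why a better per-step approximation buys such a large speedup, here is a toy Python comparison on a DGLAP-like convolution $\int_x^1 (dz/z)\, P(z)\, q(x/z)$. The kernel and distribution below are hypothetical smooth stand-ins (real splitting kernels have plus-prescription singularities this toy omits); the point is only that a higher-order rule reaches a given accuracy with far fewer integration steps.

```python
# Brute-force step sums for a toy convolution (P ⊗ q)(x) = ∫_x^1 dz/z P(z) q(x/z):
# the midpoint rule is O(h^2), Simpson's rule O(h^4), so the latter needs far
# fewer steps for the same accuracy.
def P(z): return 1.0 + z * z                 # toy regular kernel
def q(x): return x ** -0.5 * (1.0 - x) ** 3  # toy parton distribution

def convolve(x, n, rule="midpoint"):
    f = lambda z: P(z) * q(x / z) / z
    h = (1.0 - x) / n
    s = 0.0
    for i in range(n):
        z0, z1 = x + i * h, x + (i + 1) * h
        if rule == "midpoint":
            s += h * f(0.5 * (z0 + z1))
        else:  # Simpson's rule on the same step
            s += h / 6 * (f(z0) + 4 * f(0.5 * (z0 + z1)) + f(z1))
    return s

x = 0.1
ref = convolve(x, 50_000, "simpson")         # well-converged reference value
for n in (50, 500, 5000):
    print(n, abs(convolve(x, n, "midpoint") - ref),
             abs(convolve(x, n, "simpson") - ref))
```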

    Moments of spectral functions: Monte Carlo evaluation and verification

    The subject of the present study is the Monte Carlo path-integral evaluation of the moments of spectral functions. Such moments can be computed by formal differentiation of certain estimating functionals that are infinitely differentiable with respect to time whenever the potential function is arbitrarily smooth. Here, I demonstrate that the numerical differentiation of the estimating functionals can be more successfully implemented by means of pseudospectral methods (e.g., exact differentiation of a Chebyshev polynomial interpolant), which utilize information from the entire interval $(-\beta \hbar / 2, \beta \hbar / 2)$. The algorithmic detail that leads to robust numerical approximations is the fact that the path-integral action, and not the actual estimating functional, is interpolated. Although the resulting approximation to the estimating functional is non-linear, the derivatives can be computed from it in a fast and stable way by contour integration in the complex plane, with the help of the Cauchy integral formula (e.g., by Lyness' method). An interesting aspect of the present development is that Hamburger's conditions for a finite sequence of numbers to be a moment sequence provide the necessary and sufficient criteria for the computed data to be compatible with the existence of an inversion algorithm. Finally, the issue of the appearance of the sign problem in the computation of moments, albeit in a milder form than for other quantities, is addressed.
    Comment: 13 pages, 2 figures
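    A minimal sketch of the contour-differentiation step (Lyness' method) follows, applied to a hypothetical analytic stand-in for the interpolated functional. Derivatives at the origin come from the Cauchy integral formula, discretized by the trapezoidal rule on a circle in the complex plane and evaluated with a single FFT; this is the fast and stable differentiation route the abstract refers to.

```python
# Lyness' method in miniature: f^(n)(0) = n!/(2πi) ∮ f(z)/z^(n+1) dz, with the
# contour a circle of radius r and the integral done by the trapezoidal rule,
# i.e. an FFT over m equispaced points. Valid for f analytic on |z| <= r.
import math
import numpy as np

def derivatives_at_zero(f, n_max, r=0.5, m=64):
    k = np.arange(m)
    z = r * np.exp(2j * np.pi * k / m)       # equispaced points on the contour
    fhat = np.fft.fft(f(z)) / m              # ≈ Taylor coefficients times r^n
    n = np.arange(n_max + 1)
    fact = np.array([math.factorial(j) for j in n], dtype=float)
    return fact / r ** n * fhat[: n_max + 1].real  # real-analytic f assumed

f = lambda z: np.exp(z) * np.cos(z)          # stand-in with known derivatives
print(derivatives_at_zero(f, 6))             # ≈ [1, 1, 0, -2, -4, -4, 0]
```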

    Energy-Optimal Routes for Electric Vehicles

    We study the problem of electric vehicle route planning, where an important aspect is computing paths that minimize energy consumption. Any such method must cope with specific properties, such as recuperation, battery constraints (over- and under-charging), and frequently changing cost functions (e.g., due to weather conditions). This work presents a practical algorithm that quickly computes energy-optimal routes for networks of continental scale. Exploiting multi-level overlay graphs [26, 31], we extend the Customizable Route Planning approach [8] to our scenario in a sound manner. This includes the efficient computation of profile queries and the adaptation of bidirectional search to battery constraints. Our experimental study uses detailed consumption data measured from a production vehicle (Peugeot iOn). It reveals for the network of Europe that a new cost function can be incorporated in about five seconds, after which we answer random queries within 0.3 ms on average. Additional evaluation on an artificial but realistic [22, 36] vehicle model with unlimited range demonstrates the excellent scalability of our algorithm: even for long-range queries across Europe it achieves query times below 5 ms on average, fast enough for interactive applications. Altogether, our algorithm exhibits faster query times than previous approaches, while improving (metric-dependent) preprocessing time by three orders of magnitude.
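    A minimal sketch of the battery-constrained relaxation underlying such searches is shown below, on a toy graph with hypothetical numbers. Negative edge costs model recuperation; charge is clamped at battery capacity (over-charging) and labels that would drain the battery below zero are discarded (under-charging). The paper's actual machinery, multi-level overlays, customizable preprocessing, and profile queries, is not reproduced.

```python
# Label-correcting search maximizing the state of charge at the target.
# Assumes physically consistent costs (no cycle can gain energy), so the
# bounded, monotonically improving labels guarantee termination.
import heapq

def max_charge_at_target(graph, source, target, init_charge, capacity):
    """graph: {u: [(v, cost_kwh), ...]}; cost < 0 means recuperation."""
    best = {source: init_charge}
    heap = [(-init_charge, source)]          # explore higher charge first
    while heap:
        neg_b, u = heapq.heappop(heap)
        b = -neg_b
        if b < best.get(u, -1.0):            # stale label, skip
            continue
        for v, cost in graph.get(u, []):
            nb = min(b - cost, capacity)     # clamp: no over-charging
            if nb < 0:                       # infeasible: battery would be empty
                continue
            if nb > best.get(v, -1.0):
                best[v] = nb
                heapq.heappush(heap, (-nb, v))
    return best.get(target)                  # None if target unreachable

# Toy instance: s -> a climbs (4 kWh), a -> t descends and recuperates 1 kWh.
graph = {"s": [("a", 4.0)], "a": [("t", -1.0)]}
print(max_charge_at_target(graph, "s", "t", init_charge=5.0, capacity=5.0))  # 2.0
```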

    Lattice Green's Functions for High Order Finite Difference Stencils

    Lattice Green's Functions (LGFs) are fundamental solutions to discretized linear operators, and as such they are a useful tool for solving discretized elliptic PDEs on domains that are unbounded in one or more directions. The majority of existing numerical solvers that make use of LGFs rely on a second-order discretization and operate on domains with free-space boundary conditions in all directions. Under these conditions, fast expansion methods are available that enable precomputation of 2D or 3D LGFs in linear time, avoiding the need for brute-force multi-dimensional quadrature of numerically unstable integrals. Here we focus on higher-order discretizations of the Laplace operator on domains with more general boundary conditions, by (1) providing an algorithm for fast and accurate evaluation of the LGFs associated with high-order dimension-split centered finite differences on unbounded domains, and (2) deriving closed-form expressions for the LGFs associated with both dimension-split and Mehrstellen discretizations on domains with one unbounded dimension. Through numerical experiments we demonstrate that these techniques provide LGF evaluations with near machine-precision accuracy, and that the resulting LGFs allow for numerically consistent solutions to high-order discretizations of Poisson's equation on fully or partially unbounded 3D domains.
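    For concreteness, the sketch below evaluates the LGF of the plain second-order 5-point Laplacian on $\mathbb{Z}^2$ by exactly the kind of brute-force Fourier quadrature that fast methods like the ones above are designed to replace (and which degrades for high-order stencils and large separations). The regularized LGF has the classical integral form quoted in the comments; the spot checks against the known values $g(1,0) = 1/4$ and $g(1,1) = 1/\pi$ are standard.

```python
# Brute-force quadrature of the regularized 2D lattice Green's function
#   g(m,n) = G(m,n) - G(0,0)
#          = 1/(4 pi^2) ∬ (1 - cos(m θ + n φ)) / (4 - 2 cos θ - 2 cos φ) dθ dφ
# over [-π, π]^2, for the second-order 5-point Laplacian (L G = δ).
import numpy as np

def lgf_2d(m, n, N=1000):
    h = 2 * np.pi / N
    t = -np.pi + (np.arange(N) + 0.5) * h     # midpoint grid avoids (0, 0)
    theta, phi = np.meshgrid(t, t, indexing="ij")
    num = 1 - np.cos(m * theta + n * phi)
    den = 4 - 2 * np.cos(theta) - 2 * np.cos(phi)
    return (num / den).sum() * h * h / (4 * np.pi ** 2)

print(lgf_2d(1, 0), 0.25)        # ≈ 1/4
print(lgf_2d(1, 1), 1 / np.pi)   # ≈ 0.31831
```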

    Efficient solution of two-dimensional wave propagation problems by CQ-Wavelet BEM: Algorithm and applications

    In this paper we consider wave propagation problems in two-dimensional unbounded domains, including dissipative effects, reformulated in terms of space-time boundary integral equations. For their solution, we employ a convolution quadrature (CQ) for the temporal discretization and a Galerkin boundary element method (BEM) for the spatial one. It is known that one of the main advantages of CQ-BEMs is the use of the FFT algorithm to retrieve the discrete time integral operators with an optimal linear complexity in time, up to a logarithmic term. It is also known that a key ingredient for the success of such methods is the efficient and accurate evaluation of all the integrals that define the matrix entries associated with the full space-time discretization. This topic has been successfully addressed when standard Lagrangian basis functions are considered for the space discretization. However, for such a choice of basis, the BEM matrices are in general fully populated, a drawback that prevents the application of CQ-BEMs to large-scale problems. In this paper, as a possible remedy to reduce the global complexity of the method, we consider approximant functions of wavelet type. In particular, we propose a numerical procedure that, by taking advantage of the fast wavelet transform, allows us on the one hand to compute the matrix entries associated with the wavelet basis functions while maintaining the accuracy obtained with the Lagrangian ones and, on the other hand, to generate sparse matrices without the need of storing the fully populated ones a priori. Such an approach allows, in principle, the use of wavelet bases of any type and order, combined with a CQ based on any stable ordinary differential equation solver. Several numerical results, showing the accuracy of the solution and the gain in terms of computer memory savings, are presented and discussed.
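    As a toy demonstration of why the wavelet choice pays off, the sketch below applies a 2D orthonormal Haar transform to a dense matrix built from a hypothetical log-singular kernel (a stand-in for a BEM matrix; Haar likewise stands in for whatever wavelet basis is used). The print statements report the fraction of entries surviving a threshold and the negligible error this incurs, which is the mechanism behind the memory savings reported in the paper.

```python
# Dense "BEM-like" matrix -> 2D orthonormal Haar transform -> threshold:
# smooth kernels with a diagonal singularity compress well in wavelet bases.
import numpy as np

def haar_rows(A):
    """Full multilevel orthonormal Haar transform of each row (length 2^k)."""
    A = A.copy()
    n = A.shape[1]
    while n > 1:
        half = n // 2
        even, odd = A[:, 0:n:2].copy(), A[:, 1:n:2].copy()
        A[:, :half] = (even + odd) / np.sqrt(2)    # smooth (scaling) part
        A[:, half:n] = (even - odd) / np.sqrt(2)   # detail (wavelet) part
        n = half
    return A

def haar_2d(A):
    return haar_rows(haar_rows(A).T).T             # rows first, then columns

n = 256
x = np.arange(n)
K = np.log(np.abs(x[:, None] - x[None, :]) / n + 1.0 / n)  # stand-in kernel

W = haar_2d(K)
eps = 1e-6 * np.abs(W).max()
W_sparse = np.where(np.abs(W) > eps, W, 0.0)
print("fraction of entries kept:", (W_sparse != 0).mean())
print("relative Frobenius error:", np.linalg.norm(W - W_sparse) / np.linalg.norm(W))
```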