
    Constrained correlation functions from the Millennium Simulation

    Context. In previous work, we developed a quasi-Gaussian approximation for the likelihood of correlation functions, which, in contrast to the usual Gaussian approach, incorporates fundamental mathematical constraints on correlation functions. The analytical computation of these constraints is only feasible in the case of correlation functions of one-dimensional random fields. Aims. In this work, we aim to obtain corresponding constraints in the case of higher-dimensional random fields and test them in a more realistic context. Methods. We develop numerical methods to compute the constraints on correlation functions, which are also applicable to two- and three-dimensional fields. In order to test the accuracy of the numerically obtained constraints, we compare them to the analytical results for the one-dimensional case. Finally, we compute correlation functions from the halo catalog of the Millennium Simulation, check whether they obey the constraints, and examine the performance of the transformation used in the construction of the quasi-Gaussian likelihood. Results. We find that our numerical methods of computing the constraints are robust and that the correlation functions measured from the Millennium Simulation obey them. Despite the fact that the measured correlation functions lie well inside the allowed region of parameter space, i.e. far away from the boundaries of the allowed volume defined by the constraints, we find strong indications that the quasi-Gaussian likelihood yields a substantially more accurate description than the Gaussian one. Comment: 11 pages, 13 figures, updated to match version accepted by A&A
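
    These constraints ultimately stem from the requirement that a valid correlation function correspond to a positive semi-definite covariance (equivalently, a non-negative power spectrum). As a rough illustration only, and not the paper's numerical method, the following Python sketch checks this necessary condition for a correlation function measured on a regular one-dimensional grid of lags; the array xi_measured is hypothetical.

        import numpy as np
        from scipy.linalg import toeplitz

        def satisfies_psd_constraint(xi, tol=1e-10):
            """Necessary constraint on a correlation function measured on a regular
            1D grid of lags (with xi[0] the variance): the implied covariance matrix
            C_ij = xi[|i - j|] must be positive semi-definite."""
            eigenvalues = np.linalg.eigvalsh(toeplitz(xi))
            return eigenvalues.min() >= -tol

        # Hypothetical measurement at lags 0, 1, ..., 4
        xi_measured = np.array([1.0, 0.6, 0.3, 0.1, 0.02])
        print(satisfies_psd_constraint(xi_measured))   # True: inside the allowed region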

    Integer Polynomial Optimization in Fixed Dimension

    We classify, according to their computational complexity, integer optimization problems whose constraints and objective functions are polynomials with integer coefficients, when the number of variables is fixed. For the optimization of an integer polynomial over the lattice points of a convex polytope, we present an algorithm that computes sequences of lower and upper bounds for the optimal value. For polynomials that are non-negative over the polytope, these sequences of bounds lead to a fully polynomial-time approximation scheme for the optimization problem. Comment: In this revised version we include a stronger complexity bound on our algorithm. Our algorithm is in fact an FPTAS (fully polynomial-time approximation scheme) to maximize a non-negative integer polynomial over the lattice points of a polytope.
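
    To make the bounding idea concrete: if f is non-negative on the N lattice points of the polytope, the power sums s_k = sum_x f(x)^k satisfy (s_k/N)^(1/k) <= max f <= s_k^(1/k), and both bounds tighten as k grows. The Python sketch below (our own simplification, with a hypothetical objective and box) enumerates the lattice points explicitly; the algorithm of the paper computes such bounds without explicit enumeration, which is what makes it efficient in fixed dimension.

        from itertools import product

        def bounds_via_power_sums(f, lattice_points, k):
            """Bounds on max f over lattice_points for non-negative f, from the
            k-th power sum s_k = sum_x f(x)^k:
                (s_k / N)^(1/k)  <=  max f  <=  s_k^(1/k)."""
            values = [f(x) for x in lattice_points]
            s_k = sum(v ** k for v in values)
            return (s_k / len(values)) ** (1.0 / k), s_k ** (1.0 / k)

        # Hypothetical example: f(x, y) = x^2 * y + 3 over the lattice points of [0, 4]^2
        f = lambda p: p[0] ** 2 * p[1] + 3
        points = list(product(range(5), repeat=2))
        for k in (1, 2, 8, 32):
            lo, hi = bounds_via_power_sums(f, points, k)
            print(k, round(lo, 2), round(hi, 2))   # both tighten toward the maximum f(4, 4) = 67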

    Reversibility and Adiabatic Computation: Trading Time and Space for Energy

    Future miniaturization and mobilization of computing devices require energy-parsimonious 'adiabatic' computation. This is contingent on logical reversibility of computation. An example is the idea of quantum computations, which are reversible except for the irreversible observation steps. We propose to study quantitatively the exchange of computational resources like time and space for irreversibility in computations. Reversible simulations of irreversible computations are memory intensive. Such (polynomial time) simulations are analysed here in terms of 'reversible' pebble games. We show that Bennett's pebbling strategy uses the least additional space for the greatest number of simulated steps. We derive a trade-off for storage space versus irreversible erasure. Next we consider reversible computation itself. An alternative proof is provided for the precise expression of the ultimate irreversibility cost of an otherwise reversible computation without restrictions on time and space use. A time-irreversibility trade-off hierarchy in the exponential time region is exhibited. Finally, extreme time-irreversibility trade-offs for reversible computations in the thoroughly unrealistic range of computable versus noncomputable time-bounds are given. Comment: 30 pages, LaTeX. Lemma 2.3 should be replaced by the slightly better "There is a winning strategy with n+2 pebbles and m-1 erasures for pebble games G with T_G = m 2^n, for all m ≥ 1" with appropriate further changes (as pointed out by Wim van Dam). This and further work on reversible simulations as in Section 2 appears in quant-ph/970300
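
    Bennett's pebbling strategy has a short recursive description: to reach twice as far along the chain with one extra pebble, pebble the midpoint, play the same strategy again starting from the midpoint, then undo the first half to reclaim its pebbles. The Python sketch below (the function name and move encoding are ours) generates that move sequence; with k pebbles it reaches node 2^(k-1) in 3^(k-1) moves, whereas the amended Lemma 2.3 quoted in the comment additionally trades erasures for reach.

        def bennett_moves(k, base=0):
            """Bennett-style reversible pebbling of the chain base+1, ..., base+2^(k-1),
            using at most k pebbles at any time. A move ('P', i) places a pebble on
            node i and ('U', i) removes it; the game allows (un)pebbling node i > 1
            only while node i-1 is pebbled. Ends with one pebble on base + 2^(k-1)."""
            if k == 1:
                yield ('P', base + 1)
                return
            half = 2 ** (k - 2)
            first = list(bennett_moves(k - 1, base))        # pebble node base + half
            yield from first
            yield from bennett_moves(k - 1, base + half)    # pebble node base + 2*half
            for op, node in reversed(first):                # undo the first half,
                yield ('U' if op == 'P' else 'P', node)     # freeing its pebbles

        pebbled = set()
        for op, node in bennett_moves(4):   # 4 pebbles, chain of length 2^3 = 8
            if op == 'P':
                pebbled.add(node)
            else:
                pebbled.remove(node)
        print(sorted(pebbled))              # [8]: a single pebble remains, on the last node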

    Computation of generalized matrix functions

    We develop numerical algorithms for the efficient evaluation of quantities associated with generalized matrix functions [J. B. Hawkins and A. Ben-Israel, Linear and Multilinear Algebra 1(2), 1973, pp. 163-171]. Our algorithms are based on Gaussian quadrature and Golub-Kahan bidiagonalization. Block variants are also investigated. Numerical experiments are performed to illustrate the effectiveness and efficiency of our techniques in computing generalized matrix functions arising in the analysis of networks. Comment: 25 pages, 2 figures
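
    For reference, the generalized matrix function of Hawkins and Ben-Israel is defined through the compact singular value decomposition: if A = U Sigma V^T with positive singular values, the generalized matrix function induced by f is U f(Sigma) V^T, with f applied to the singular values. The Python sketch below is only a dense, SVD-based implementation of this definition; the algorithms of the paper instead estimate quantities such as bilinear forms involving the generalized matrix function via Golub-Kahan bidiagonalization and Gauss quadrature, precisely to avoid a full SVD of a large sparse matrix.

        import numpy as np

        def generalized_matrix_function(A, f, tol=1e-12):
            """Dense reference evaluation of f^{diamond}(A) = U f(Sigma) V^T,
            where A = U Sigma V^T is the compact SVD and f acts on the
            nonzero singular values (Hawkins and Ben-Israel)."""
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            keep = s > tol                                  # act only on nonzero singular values
            return (U[:, keep] * f(s[keep])) @ Vt[keep, :]

        # Small hypothetical example: here the nonzero entries 1 and 2 of A are
        # simply replaced by sinh(1) and sinh(2), since the singular vectors of
        # this particular A are coordinate vectors.
        A = np.array([[0., 1., 0.], [0., 0., 2.], [0., 0., 0.]])
        print(generalized_matrix_function(A, np.sinh))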

    Accelerating the CM method

    Given a prime q and a negative discriminant D, the CM method constructs an elliptic curve E/F_q by obtaining a root of the Hilbert class polynomial H_D(X) modulo q. We consider an approach based on a decomposition of the ring class field defined by H_D, which we adapt to a CRT setting. This yields two algorithms, each of which obtains a root of H_D mod q without necessarily computing any of its coefficients. Heuristically, our approach uses asymptotically less time and space than the standard CM method for almost all D. Under the GRH, and reasonable assumptions about the size of log q relative to |D|, we achieve a space complexity of O((m+n) log q) bits, where mn = h(D), which may be as small as O(|D|^(1/4) log q). The practical efficiency of the algorithms is demonstrated using |D| > 10^16 and q ~ 2^256, and also |D| > 10^15 and q ~ 2^33220. These examples are both an order of magnitude larger than the best previous results obtained with the CM method. Comment: 36 pages, minor edits, to appear in the LMS Journal of Computation and Mathematics
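
    For orientation, the classical final step of the CM method is easy to show in the simplest case, the class-number-one discriminant D = -7, where H_{-7}(X) = X + 3375: reduce its root j = -3375 modulo q and build a curve with that j-invariant via k = j/(1728 - j). The prime in the Python sketch below is illustrative, and the sketch deliberately omits the paper's actual contribution, namely obtaining a root of H_D mod q for very large D without computing the coefficients of H_D at all.

        # Textbook CM construction for D = -7, where H_{-7}(X) = X + 3375.
        q = 23                               # illustrative prime with 4q = 8^2 + 7 * 2^2
        j = (-3375) % q                      # the unique root of H_{-7} modulo q
        k = j * pow(1728 - j, -1, q) % q     # k = j / (1728 - j) mod q
        a, b = 3 * k % q, 2 * k % q          # E: y^2 = x^3 + a*x + b has j-invariant j

        def count_points(a, b, q):
            """Brute-force #E(F_q), including the point at infinity."""
            squares = {}
            for y in range(q):
                squares[y * y % q] = squares.get(y * y % q, 0) + 1
            return 1 + sum(squares.get((x ** 3 + a * x + b) % q, 0) for x in range(q))

        # CM theory predicts #E = q + 1 - t or q + 1 + t with t = 8 (the curve or its twist)
        print(a, b, count_points(a, b, q))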

    Gradient-Bounded Dynamic Programming with Submodular and Concave Extensible Value Functions

    We consider dynamic programming problems with finite, discrete-time horizons and with discrete state spaces whose dimension is prohibitively high for direct computation of the value function from the Bellman equation. For the case that the value function of the dynamic program is concave extensible and submodular in its state space, we present a new algorithm that computes deterministic upper and stochastic lower bounds of the value function, similar to dual dynamic programming. We then show that the proposed algorithm terminates after a finite number of iterations. Finally, we demonstrate the efficacy of our approach on a high-dimensional numerical example from delivery slot pricing in attended home delivery. Comment: 6 pages, 2 figures, accepted for IFAC World Congress 2020
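
    The deterministic upper bound referred to here is, as in dual dynamic programming, an outer approximation of a concave value function by the pointwise minimum of affine cuts, while simulating a feasible policy supplies the stochastic lower bound. The Python sketch below only illustrates that generic cut-based over-approximation on a hypothetical one-dimensional concave function; it does not reproduce the paper's handling of submodular, concave extensible value functions on discrete state spaces.

        import numpy as np

        def cut_upper_bound(x, cuts):
            """Outer approximation of a concave value function V by the pointwise
            minimum of affine cuts (c_i, g_i): V(x) <= min_i (c_i + g_i @ x).
            Adding a cut can only tighten this deterministic upper bound."""
            return min(c + g @ x for c, g in cuts)

        # Hypothetical stand-in for an unknown concave value function and its gradient
        V = lambda x: -np.sum((x - 2.0) ** 2)
        grad_V = lambda x: -2.0 * (x - 2.0)

        # Cuts collected at a few sampled states: V(x) <= V(s) + grad_V(s) @ (x - s)
        samples = [np.array([0.0]), np.array([1.0]), np.array([3.5])]
        cuts = [(V(s) - grad_V(s) @ s, grad_V(s)) for s in samples]

        x = np.array([2.5])
        print(V(x), cut_upper_bound(x, cuts))   # true value (-0.25) <= cut bound (0.75)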

    First steps towards an imprecise Poisson process

    The Poisson process is the most elementary continuous-time stochastic process that models a stream of repeating events. It is uniquely characterised by a single parameter called the rate. Instead of a single value for this rate, we here consider a rate interval and let it characterise two nested sets of stochastic processes. We call these two sets of stochastic processes imprecise Poisson processes, explain why this is justified, and study the corresponding lower and upper (conditional) expectations. Besides a general theoretical framework, we also provide practical methods to compute lower and upper (conditional) expectations of functions that depend on the number of events at a single point in time.
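
    As a simple numerical point of reference, one can bracket the expectation of a function of the event count N_t by scanning over constant-rate Poisson processes with rates in the interval; because the paper's imprecise Poisson processes are larger sets of processes, their lower and upper expectations can only lie outside (or coincide with) this envelope. The rate interval, horizon and test function in the Python sketch below are hypothetical.

        import math

        def poisson_expectation(f, rate, t, n_max=100):
            """E[f(N_t)] for a precise Poisson process with the given rate,
            truncating the Poisson(rate * t) distribution of N_t at n_max."""
            mu = rate * t
            term, total = math.exp(-mu), 0.0    # term = P(N_t = 0)
            for n in range(n_max + 1):
                total += f(n) * term
                term *= mu / (n + 1)            # P(N_t = n + 1) from P(N_t = n)
            return total

        def envelope(f, rate_interval, t, grid=101):
            """Min and max of E[f(N_t)] over constant-rate Poisson processes whose
            rate lies in rate_interval, via a simple scan over a grid of rates."""
            lo, hi = rate_interval
            rates = [lo + (hi - lo) * i / (grid - 1) for i in range(grid)]
            values = [poisson_expectation(f, r, t) for r in rates]
            return min(values), max(values)

        # Hypothetical example: probability that at most 3 events occur by time t = 2
        f = lambda n: 1.0 if n <= 3 else 0.0
        print(envelope(f, (0.8, 1.5), t=2.0))   # roughly (0.65, 0.92)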