
    How to Integrate a Polynomial over a Simplex

This paper settles the computational complexity of the problem of integrating a polynomial function f over a rational simplex. We prove that the problem is NP-hard for arbitrary polynomials via a generalization of a theorem of Motzkin and Straus. On the other hand, if the polynomial depends only on a fixed number of variables, while its degree and the dimension of the simplex are allowed to vary, we prove that integration can be done in polynomial time. As a consequence, for polynomials of fixed total degree, there is a polynomial time algorithm as well. We conclude the article with extensions to other polytopes, discussion of other available methods and experimental results. (Comment: Tables added with new experimental results. References added.)
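
Since the polynomial-time result for fixed degree rests on the fact that monomials integrate in closed form over a simplex, here is a minimal Python sketch using the classical formula for the standard simplex; the function names are ours, and this is a textbook identity rather than the paper's algorithm for general rational simplices.

```python
# Exact integration of a monomial over the standard simplex
# Delta_d = {x : x_i >= 0, sum x_i <= 1}, via the classical closed form
#   integral of x_1^{m_1} ... x_d^{m_d}  =  (prod m_i!) / (d + sum m_i)!.
# Illustrative sketch only, not the paper's algorithm.
from math import factorial, prod

def monomial_integral(m):
    """Integral of x^m over the standard simplex in len(m) dimensions."""
    return prod(factorial(mi) for mi in m) / factorial(len(m) + sum(m))

def polynomial_integral(terms):
    """terms: iterable of (coefficient, exponent-tuple) pairs."""
    return sum(c * monomial_integral(m) for c, m in terms)

# f(x, y) = 3*x*y + x^2 over the triangle x, y >= 0, x + y <= 1:
# 3 * (1!*1!/4!) + 2!/4! = 1/8 + 1/12 = 5/24
print(polynomial_integral([(3, (1, 1)), (1, (2, 0))]))  # 0.20833...
```

Summing this formula over the monomials of a fixed-total-degree polynomial takes polynomially many factorial evaluations, which is the intuition behind the fixed-degree tractability result.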

    Businessmen's Expectations Are Neither Rational nor Adaptive

A framework which allows for the joint testing of the adaptive and rational expectations hypotheses is presented. We assume joint normality of expectations, realizations and variables in the information set, allowing for a parsimonious interpretation of the data; conditional first moments are linear in the conditioning variables, so we can easily recover regression coefficients from them and test simple hypotheses by imposing zero restrictions on these coefficients. The nature of the data, which are responses to business surveys and are all categorical, requires simulation techniques to obtain full information maximum likelihood estimates. We use a latent variable model which allows for the construction of a simple likelihood function. However, this likelihood contains multidimensional (four-dimensional) integrals, which require simulators to evaluate. Simulated maximum-likelihood estimation is carried out using the Geweke-Hajivassiliou-Keane (GHK) method, which is consistent and has low variance. The latter is crucial when maximizing the log-likelihood directly. Identification of the parameters is achieved by placing restrictions on the response thresholds and/or the variances. We find that we can reject both hypotheses.
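
For illustration, a minimal sketch of the GHK simulator named above, estimating the probability that a multivariate normal falls in a box via sequential truncated-normal draws; the interface and parameter choices are ours, not the authors' estimation code.

```python
# Sketch of the GHK (Geweke-Hajivassiliou-Keane) simulator for the
# probability that a zero-mean multivariate normal lands in a box [a, b].
# Hedged illustration of the simulator, not the paper's likelihood code.
import numpy as np
from scipy.stats import norm

def ghk_probability(Sigma, a, b, n_draws=10_000, seed=0):
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(Sigma)              # Sigma = L @ L.T
    d = len(a)
    weights = np.ones(n_draws)
    eta = np.zeros((n_draws, d))               # sequential standard-normal draws
    for j in range(d):
        shift = eta[:, :j] @ L[j, :j]          # contribution of earlier draws
        lo = norm.cdf((a[j] - shift) / L[j, j])
        hi = norm.cdf((b[j] - shift) / L[j, j])
        weights *= hi - lo                     # conditional interval probability
        u = rng.uniform(size=n_draws)
        eta[:, j] = norm.ppf(lo + u * (hi - lo))  # inverse-CDF truncated draw
    return weights.mean()                      # smooth, low-variance estimator

# Bivariate normal with correlation 0.5: P(Y1 > 0, Y2 > 0) = 1/3 exactly.
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
print(ghk_probability(Sigma, a=[0.0, 0.0], b=[np.inf, np.inf]))  # ~0.333
```

Because each draw contributes a product of interval probabilities rather than a 0/1 indicator, the estimator is smooth in the parameters, which is why it is well suited to direct maximization of the simulated log-likelihood.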

    Kira - A Feynman Integral Reduction Program

In this article, we present a new implementation of the Laporta algorithm to reduce the scalar multi-loop integrals that appear in quantum field theoretic calculations to a set of master integrals. We extend existing approaches by using an additional algorithm based on modular arithmetic to remove linearly dependent equations from the system of equations arising from integration-by-parts and Lorentz identities. Furthermore, the algebraic manipulations required in the back substitution are optimized. We describe in detail the implementation as well as the usage of the program. In addition, we show benchmarks for concrete examples and compare the performance to Reduze 2 and FIRE 5. In our benchmarks we find that Kira is highly competitive with these existing tools. (Comment: 37 pages, 3 figures.)
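
To make the modular-arithmetic idea concrete, here is a hedged toy sketch: row-reducing integer coefficient vectors modulo a large prime detects, with high probability, equations that are linear combinations of earlier ones. This illustrates the principle only; it is not Kira's implementation.

```python
# Filtering linearly dependent equations over a prime field. Working mod a
# large prime keeps coefficients small and cheap, while dependence survives
# the reduction except with negligible probability. Illustrative only.
P = 2_147_483_647  # the Mersenne prime 2^31 - 1

def independent_rows(rows):
    """Return indices of rows that are linearly independent mod P."""
    pivots = {}   # pivot column -> normalized reduced row
    keep = []
    for idx, row in enumerate(rows):
        r = [x % P for x in row]
        for col in sorted(pivots):           # eliminate known pivots in order
            if r[col]:
                factor, pivot_row = r[col], pivots[col]
                r = [(a - factor * b) % P for a, b in zip(r, pivot_row)]
        lead = next((c for c, x in enumerate(r) if x), None)
        if lead is not None:                 # row carries new information
            inv = pow(r[lead], P - 2, P)     # Fermat inverse normalizes pivot
            pivots[lead] = [x * inv % P for x in r]
            keep.append(idx)
    return keep

# The third equation is the sum of the first two, so it is discarded.
print(independent_rows([[1, 2, 0], [0, 1, 3], [1, 3, 3]]))  # [0, 1]
```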

    Scattering AMplitudes from Unitarity-based Reduction Algorithm at the Integrand-level

SAMURAI is a tool for the automated numerical evaluation of one-loop corrections to arbitrary scattering amplitudes within the dimensional-regularization scheme. It is based on the decomposition of the integrand according to the OPP approach, extended to accommodate an implementation of the generalized d-dimensional unitarity-cuts technique, and uses a polynomial interpolation exploiting the Discrete Fourier Transform. SAMURAI can process integrands written either as numerators of Feynman diagrams or as products of tree-level amplitudes. We discuss some applications, among which the 6- and 8-photon scattering in QED, and the 6-quark scattering in QCD. SAMURAI has been implemented as a publicly available Fortran90 library, and it could be a useful module for the systematic evaluation of the virtual corrections oriented towards automating next-to-leading order calculations relevant for LHC phenomenology. (Comment: 35 pages, 7 figures.)
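
As a hedged illustration of the Discrete Fourier Transform interpolation mentioned above: sampling a polynomial at the roots of unity turns coefficient recovery into a single FFT. The toy below is ours and is not the SAMURAI integrand decomposition itself.

```python
# DFT interpolation: evaluate an unknown degree-(N-1) polynomial at the
# N-th roots of unity, then one FFT recovers its coefficients exactly.
# Toy illustration of the interpolation trick, not the SAMURAI code.
import numpy as np

def coefficients_from_samples(poly_eval, N):
    """Recover N coefficients from black-box values at the roots of unity."""
    k = np.arange(N)
    omega = np.exp(2j * np.pi * k / N)   # the N-th roots of unity
    values = poly_eval(omega)            # black-box evaluations of the polynomial
    return np.fft.fft(values) / N        # inversion: c_n = (1/N) * FFT(values)_n

# Example: p(x) = 2 + 5*x^2, sampled at the cube roots of unity.
coeffs = coefficients_from_samples(lambda x: 2 + 5 * x**2, N=3)
print(np.round(coeffs.real, 10))         # [2. 0. 5.]
```

In the integrand-reduction setting the same idea applies cut by cut: the residue of the integrand on a cut is polynomial in the free loop-momentum parameters, so sampling it at roots of unity determines the coefficients of the decomposition.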

    Modern Feynman Diagrammatic One-Loop Calculations

In this talk we present techniques for calculating one-loop amplitudes for multi-leg processes using Feynman diagrammatic methods in a semi-algebraic context. Our approach combines the advantages of the different methods, allowing for a fast evaluation of the amplitude while monitoring the numerical stability of the calculation. In phase space regions close to singular kinematics we use a method which avoids spurious Gram determinants in the calculation. As an application of our approach we report on the status of the calculation of the amplitude for the process $pp \to b\bar{b}b\bar{b} + X$. (Comment: 10 pages, 2 figures; contribution to the proceedings of the CPP2010 Workshop, 23-25 Sep. 2010, KEK, Tsukuba, Japan.)
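
As a rough illustration of why Gram determinants matter here (an assumption-laden toy, not the authors' method): the Gram matrix G_ij = 2 p_i . p_j of the external momenta appears with inverse powers in standard tensor reduction, so kinematic configurations where det G approaches zero are numerically delicate.

```python
# Gram determinant of external four-momenta, G_ij = 2 p_i . p_j with the
# Minkowski product. det(G) -> 0 for degenerate (e.g. collinear) kinematics,
# and inverse powers of det(G) from tensor reduction then amplify roundoff.
import numpy as np

METRIC = np.diag([1.0, -1.0, -1.0, -1.0])   # mostly-minus Minkowski metric

def gram_determinant(momenta):
    """det of G_ij = 2 p_i . p_j for a list of four-momenta."""
    p = np.array(momenta)
    return np.linalg.det(2.0 * p @ METRIC @ p.T)

p1 = [1.0, 0.0, 0.0, 1.0]
p2 = [1.0, 0.0, 1.0, 0.0]
print(gram_determinant([p1, p2]))            # -4.0: well-separated kinematics

# Nearly collinear momenta: the determinant collapses toward zero.
p2_coll = [1.0, 0.0, 1e-4, (1 - 1e-8) ** 0.5]
print(gram_determinant([p1, p2_coll]))       # ~ -1e-16: near-degenerate
```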

    Prediction based task scheduling in distributed computing

    Certified Roundoff Error Bounds Using Semidefinite Programming

Roundoff errors cannot be avoided when implementing numerical programs with finite precision. The ability to reason about rounding is especially important if one wants to explore a range of potential representations, for instance for FPGAs or custom hardware implementations. This problem becomes challenging when the program does not employ solely linear operations, as non-linearities are inherent to many interesting computational problems in real-world applications. Existing approaches to this reasoning are limited in the presence of nonlinear correlations between variables, leading to either imprecise bounds or high analysis time. Furthermore, while it is easy to implement a straightforward method such as interval arithmetic, sophisticated techniques are less straightforward to implement in a formal setting. Thus there is a need for methods which output certificates that can be formally validated inside a proof assistant. We present a framework to provide upper bounds on absolute roundoff errors. This framework is based on optimization techniques employing semidefinite programming and sums of squares certificates, which can be formally checked inside the Coq theorem prover. Our tool covers a wide range of nonlinear programs, including polynomials and transcendental operations as well as conditional statements. We illustrate the efficiency and precision of this tool on non-trivial programs coming from biology, optimization and space control. Our tool produces more precise error bounds for 37 percent of all programs and yields better performance for 73 percent of all programs.
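
For contrast with the SDP-based certificates, here is a minimal sketch of the straightforward interval-arithmetic baseline mentioned above, with a crude one-ulp widening per operation; the error model and class are ours and deliberately naive, to show how nonlinear correlations between variables are lost.

```python
# Naive interval arithmetic with a per-operation roundoff widening. Each
# result is padded by one unit roundoff of its magnitude, and intervals are
# combined as if independent, so correlations between variables are lost:
# the over-approximation that SDP/sums-of-squares methods tighten.
EPS = 2.0 ** -53                             # double-precision unit roundoff

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def _widen(self, lo, hi):
        m = max(abs(lo), abs(hi)) * EPS      # roundoff pad for this operation
        return Interval(lo - m, hi + m)
    def __add__(self, other):
        return self._widen(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        prods = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
        return self._widen(min(prods), max(prods))
    def __repr__(self):
        return f"[{self.lo:.17g}, {self.hi:.17g}]"

x = Interval(0.0, 1.0)
y = Interval(0.0, 1.0)   # stands for (1 - x), but the link to x is lost
print(x * y + x)         # ~[0, 2], yet the true range of x*(1-x) + x is [0, 1]
```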
