
    How well can we guess theoretical uncertainties?

    The problem of estimating the effect of missing higher orders in perturbation theory is analyzed, with emphasis on the application to Higgs production in gluon-gluon fusion. Well-known mathematical methods for an approximate completion of the perturbative series are applied with the goal of not truncating the series, but completing it in a well-defined way, so as to increase the accuracy - if not the precision - of theoretical predictions. The uncertainty arising from the use of the completion procedure is discussed, and a recipe for constructing a corresponding probability distribution function is proposed.
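
    The paper's completion procedure is tailored to the QCD series for Higgs production, so the sketch below is only a rough illustration of the general idea, not the authors' method: it completes a truncated series by geometrically extrapolating the tail from the last known terms, the simplest possible "complete rather than truncate" recipe.

    ```python
    # Illustrative sketch only (not the paper's procedure): estimate the
    # remainder of a truncated series by assuming the tail is roughly
    # geometric, then report the completed sum.

    def complete_series(terms):
        """terms: known terms t_0, ..., t_k. Returns (completed_sum, tail)."""
        partial = sum(terms)
        r = terms[-1] / terms[-2]        # assumed common ratio of the tail
        if abs(r) >= 1:
            raise ValueError("terms not decreasing; cannot extrapolate")
        tail = terms[-1] * r / (1 - r)   # t_{k+1} + t_{k+2} + ... if geometric
        return partial + tail, tail

    # Toy check: truncated geometric series for 1/(1 - 0.3) = 1.428571...
    terms = [0.3**n for n in range(4)]
    total, tail = complete_series(terms)
    print(sum(terms), "+ estimated tail", tail, "=", total)
    ```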

    Bounding Cache Miss Costs of Multithreaded Computations Under General Schedulers

    We analyze the caching overhead incurred by a class of multithreaded algorithms when scheduled by an arbitrary scheduler. We obtain bounds that match or improve upon the well-known O(Q + S·(M/B)) caching cost for the randomized work stealing (RWS) scheduler, where S is the number of steals, Q is the sequential caching cost, and M and B are the cache size and block (or cache line) size, respectively. Comment: Extended abstract in Proceedings of ACM Symp. on Parallel Alg. and Architectures (SPAA) 2017, pp. 339-350. This revision has a few small updates, including a missing citation and the replacement of some big-Oh terms with precise constants.
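
    The bound is easy to unpack numerically: each steal can force the thief to reload at most a cache-ful of blocks, i.e. M/B extra transfers on top of the sequential cost Q. A back-of-the-envelope evaluation (all parameter values below are made up, not from the paper):

    ```python
    # Back-of-the-envelope evaluation of the Q + S*(M/B) caching bound for
    # randomized work stealing; the numbers are illustrative only.
    M = 32 * 1024 * 1024   # cache size in bytes (32 MiB)
    B = 64                 # cache line size in bytes
    Q = 10_000_000         # sequential caching cost, in block transfers
    S = 2_000              # steals observed in a run

    extra_per_steal = M // B        # up to one cache-ful of refills per steal
    print("extra misses per steal <=", extra_per_steal)      # 524288
    print("total caching cost     <=", Q + S * extra_per_steal)
    ```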

    On the Complexity of the Equivalence Problem for Probabilistic Automata

    Checking two probabilistic automata for equivalence has been shown to be a key problem for efficiently establishing various behavioural and anonymity properties of probabilistic systems. In recent experiments a randomised equivalence test based on polynomial identity testing outperformed deterministic algorithms. In this paper we show that polynomial identity testing yields efficient algorithms for various generalisations of the equivalence problem. First, we provide a randomised NC procedure that also outputs a counterexample trace in case of inequivalence. Second, we show how to check two probabilistic automata with (cumulative) rewards for equivalence; our algorithm runs in deterministic polynomial time if the number of reward counters is fixed. Finally, we show that the equivalence problem for probabilistic visibly pushdown automata is logspace equivalent to the Arithmetic Circuit Identity Testing problem, which is to decide whether a polynomial represented by an arithmetic circuit is identically zero. Comment: technical report for a FoSSaCS'12 paper.
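
    The randomised test alluded to above can be sketched concretely for weighted automata: by a classical result of Tzeng, two automata with n1 and n2 states are equivalent iff they agree on all words of length below n1 + n2, and drawing fresh random values per input position turns each length's word-weight sum into a polynomial whose coefficients are exactly the word weights, so the Schwartz-Zippel lemma bounds the failure probability of a single random evaluation. The sketch below is a generic version of this idea, not the paper's NC procedure or its counterexample extraction:

    ```python
    import random

    P = (1 << 61) - 1        # large prime; arithmetic is over GF(P)

    def frac(num, den):
        """Embed the rational num/den into GF(P)."""
        return num * pow(den, P - 2, P) % P

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v)) % P

    def step(v, trans, x):
        """One input position: v <- v * (sum_a x[a] * M_a), over GF(P)."""
        n = len(v)
        return [sum(v[i] * trans[a][i][j] * x[a] for a in trans
                    for i in range(n)) % P for j in range(n)]

    def probably_equivalent(aut1, aut2, alphabet):
        """aut = (alpha, {letter: matrix}, eta); the weight of a word
        w_1...w_k is alpha * M_{w_1} * ... * M_{w_k} * eta."""
        (a1, t1, e1), (a2, t2, e2) = aut1, aut2
        v1, v2 = list(a1), list(a2)
        for _ in range(len(a1) + len(a2)):   # word lengths 0 .. n1+n2-1
            if dot(v1, e1) != dot(v2, e2):
                return False                 # definitely inequivalent
            x = {a: random.randrange(1, P) for a in alphabet}
            v1, v2 = step(v1, t1, x), step(v2, t2, x)
        return True                          # equivalent with high probability

    # A 1-state automaton (emit 'a' w.p. 1/2, stop w.p. 1/2) against an
    # equivalent 2-state copy with the initial weight split in half:
    h = frac(1, 2)
    aut1 = ([frac(1, 1)], {'a': [[h]]}, [h])
    aut2 = ([h, h], {'a': [[h, 0], [0, h]]}, [h, h])
    print(probably_equivalent(aut1, aut2, ['a']))   # True
    ```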

    Implementation of the Combined Nonlinear-Condensation Transformation

    We discuss several applications of the recently proposed combined nonlinear-condensation transformation (CNCT) for the evaluation of slowly convergent, nonalternating series. These include certain statistical distributions which are of importance in linguistics, statistical-mechanics theory, and biophysics (statistical analysis of DNA sequences). We also discuss applications of the transformation in experimental mathematics, and we briefly expand on further applications in theoretical physics. Finally, we discuss a related Mathematica program for the computation of Lerch's transcendent. Comment: 23 pages, 1 table, 1 figure (Comput. Phys. Commun., in press).
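
    The CNCT pairs two classical ingredients: Van Wijngaarden's condensation, which rewrites a slowly convergent nonalternating series as an alternating one, followed by a nonlinear sequence transformation that accelerates the alternating result. The sketch below keeps that two-stage structure but substitutes the Cohen-Rodriguez Villegas-Zagier acceleration for the delta transformation actually used with the CNCT, so it illustrates the idea rather than reproduces the paper's implementation:

    ```python
    import math

    def condensed(a, j, tol=1e-16):
        """Van Wijngaarden condensation: A_j = sum_{m>=0} 2^m * a(2^m * j).
        The inner series converges geometrically for a(k) ~ k**(-s), s > 1."""
        total, m = 0.0, 0
        while True:
            term = 2.0**m * a(2**m * j)
            total += term
            if term < tol * max(total, 1.0):
                return total
            m += 1

    def cvz_alternating(a, n):
        """Cohen-Rodriguez Villegas-Zagier acceleration of
        sum_{k>=0} (-1)**k * a(k), using n terms."""
        d = (3.0 + math.sqrt(8.0))**n
        d = (d + 1.0 / d) / 2.0
        b, c, s = -1.0, -d, 0.0
        for k in range(n):
            c = b - c
            s += c * a(k)
            b = (k + n) * (k - n) * b / ((k + 0.5) * (k + 1.0))
        return s / d

    def two_stage_sum(a, n=18):
        """Condense sum_{k>=1} a(k) into the alternating series
        sum_{j>=1} (-1)**(j+1) * A_j, then accelerate it."""
        return cvz_alternating(lambda k: condensed(a, k + 1), n)

    # Example: zeta(2) = sum 1/k^2, painfully slow if summed term by term.
    print(two_stage_sum(lambda k: 1.0 / k**2))   # ~1.644934066848 = pi^2/6
    print(math.pi**2 / 6)
    ```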

    Affine functions and series with co-inductive real numbers

    We extend the work of A. Ciaffaglione and P. Di Gianantonio on the mechanical verification of algorithms for exact computation on real numbers, using infinite streams of digits implemented as co-inductive types. Four aspects are studied. The first concerns the proof that digit streams can be related to the real numbers that are already axiomatized in the proof system (axiomatized, but with no fixed representation). The second revisits the definition of an addition function, looking at techniques to let the proof search mechanism perform the effective construction of an algorithm that is correct by construction. The third concerns the definition of a function to compute affine formulas with positive rational coefficients; this should be understood as a testbed for a technique that combines co-recursion and recursion to obtain a model of an algorithm that appears at first sight to be outside the expressive power allowed by the proof system. The fourth concerns the definition of a function to compute series, with an application to the series used to compute Euler's number e. All these experiments should be reproducible in any proof system that supports co-inductive types, co-recursion, and general forms of terminating recursion, but we performed them with the Coq system [12, 3, 14].
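
    Coq's co-inductive streams have a rough analogue in lazy generators, and the toy below (Python rather than Coq, and far simpler than the paper's constructions) shows the basic pattern: an infinite digit stream produced by long division, where every step is guaranteed to emit a digit. That productivity guarantee is exactly what Coq demands of co-recursive definitions:

    ```python
    from itertools import islice

    def digits(num, den):
        """Lazily yield the decimal digits of num/den in [0, 1):
        an infinite stream, the generator analogue of a co-inductive type."""
        assert 0 <= num < den
        while True:              # productive co-recursion: a digit every step
            num *= 10
            yield num // den     # emit the next digit...
            num %= den           # ...and keep the remainder for the rest

    # First 12 digits of 1/7 = 0.142857142857...
    print(list(islice(digits(1, 7), 12)))
    ```

    The part the paper actually tackles - defining addition, affine maps, and series directly on such streams, where a digit can only be emitted after inspecting enough of the inputs - is where mixing recursion with co-recursion becomes delicate.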

    Out-of-sample forecast tests robust to the choice of window size

    This paper proposes new methodologies, robust to the choice of the estimation window size, for evaluating out-of-sample forecasting performance. The methodologies involve evaluating the predictive ability of forecasting models over a wide range of window sizes. The authors show that the tests proposed in the literature may lack the power to detect predictive ability and might be subject to data snooping across different window sizes if used repeatedly. An empirical application shows the usefulness of the methodologies for evaluating exchange rate models' forecasting ability.
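
    The core idea translates into a simple evaluation loop: re-estimate and forecast for every admissible window size, compute a test statistic per window, and base inference on the whole trajectory (for instance its supremum) rather than on one arbitrary window. The sketch below uses a simplified Diebold-Mariano-style statistic and made-up models; it shows the mechanics only, not the authors' tests, whose sup-type statistics have nonstandard critical values:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    y = np.cumsum(rng.normal(size=500)) * 0.1 + rng.normal(size=500)  # toy data

    def oos_stat(y, R):
        """Out-of-sample comparison with estimation window R: model A is a
        rolling-mean forecast, model B a no-change benchmark. Returns a
        Diebold-Mariano-style t-statistic of the loss differential."""
        eA, eB = [], []
        for t in range(R, len(y)):
            fA = y[t - R:t].mean()       # forecast from the last R points
            fB = y[t - 1]                # random-walk forecast
            eA.append((y[t] - fA) ** 2)
            eB.append((y[t] - fB) ** 2)
        d = np.array(eA) - np.array(eB)  # loss differential
        return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

    # Evaluate across a whole range of window sizes instead of one choice,
    # then summarize the trajectory with a sup-type statistic.
    windows = range(20, 201, 10)
    stats = [oos_stat(y, R) for R in windows]
    print(max(abs(s) for s in stats))
    ```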

    An Efficient Probabilistic Context-Free Parsing Algorithm that Computes Prefix Probabilities

    We describe an extension of Earley's parser for stochastic context-free grammars that computes the following quantities, given a stochastic context-free grammar and an input string: a) probabilities of successive prefixes being generated by the grammar; b) probabilities of substrings being generated by the nonterminals, including the entire string being generated by the grammar; c) the most likely (Viterbi) parse of the string; d) the posterior expected number of applications of each grammar production, as required for reestimating rule probabilities. Quantities (a) and (b) are computed incrementally in a single left-to-right pass over the input. Our algorithm compares favorably to standard bottom-up parsing methods for SCFGs in that it works efficiently on sparse grammars by making use of Earley's top-down control structure. It can process any context-free rule format without conversion to a normal form, and it combines the computations for (a) through (d) in a single algorithm. Finally, the algorithm has simple extensions for processing partially bracketed inputs, and for finding partial parses and their likelihoods on ungrammatical inputs. Comment: 45 pages. Slightly shortened version to appear in Computational Linguistics 21.
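
    For intuition, the prefix probability in (a) can be defined without any Earley machinery: it is the total probability of all complete strings that start with the given prefix. The brute-force evaluator below expands leftmost derivations with probability pruning, in place of the paper's incremental left-to-right computation; the grammar and threshold are illustrative, and a consistent SCFG without probability-1 non-consuming cycles is assumed:

    ```python
    def prefix_prob(rules, start, prefix, eps=1e-12):
        """Total probability that the SCFG derives a string with the given
        prefix. rules: {nonterminal: [(probability, [symbols])]}. Expands
        leftmost derivations, pruning branches below eps, so the result is
        exact up to ~eps."""
        total = 0.0
        stack = [((start,), tuple(prefix), 1.0)]
        while stack:
            form, rest, p = stack.pop()
            # consume leading terminals, which must match the prefix
            while form and rest and form[0] not in rules:
                if form[0] != rest[0]:
                    p = 0.0              # mismatch: dead derivation
                    break
                form, rest = form[1:], rest[1:]
            if p == 0.0:
                continue
            if not rest:
                total += p               # prefix consumed; all completions count
            elif form and p > eps:       # form[0] is a nonterminal here
                for q, rhs in rules[form[0]]:
                    stack.append((tuple(rhs) + form[1:], rest, p * q))
        return total

    # Toy grammar: S -> 'a' S (0.4) | 'b' (0.6)
    rules = {'S': [(0.4, ['a', 'S']), (0.6, ['b'])]}
    print(prefix_prob(rules, 'S', ['a', 'a']))   # 0.4 * 0.4 = 0.16
    ```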

    Tight polynomial worst-case bounds for loop programs

    In 2008, Ben-Amram, Jones and Kristiansen showed that for a simple programming language - representing non-deterministic imperative programs with bounded loops, and arithmetic limited to addition and multiplication - it is possible to decide precisely whether a program has certain growth-rate properties, in particular whether a computed value, or the program's running time, has a polynomial growth rate. A natural and intriguing problem was to move from answering the decision problem to giving a quantitative result, namely a tight polynomial upper bound. This paper shows how to obtain asymptotically tight, multivariate, disjunctive polynomial bounds for this class of programs. This is a complete solution: whenever a polynomial bound exists, it will be found. A pleasant surprise is that the algorithm is quite simple, but it relies on some subtle reasoning. An important ingredient in the proof is the forest factorization theorem, a strong structural result on homomorphisms into a finite monoid.
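
    The flavor of the language and of the bounds is easy to make concrete. The toy program below is written in the spirit of the core language (bounded loops, addition and multiplication only, loops that may run up to the bound), and the asserted polynomial is a hand-derived tight bound for this particular program, not output of the paper's algorithm:

    ```python
    # A bounded-loop program in the spirit of the core language:
    #
    #     loop X1 { X3 := X3 + X2 }
    #     loop X1 { X2 := X2 + X3 }
    #
    # Hand analysis: the first loop gives X3 <= x3 + x1*x2, and the second
    # then gives X2 <= x2 + x1*(x3 + x1*x2) - a multivariate polynomial
    # bound of exactly the kind the paper's algorithm computes.

    def run(x1, x2, x3):
        X1, X2, X3 = x1, x2, x3
        for _ in range(X1):      # 'loop X1': at most X1 iterations
            X3 = X3 + X2
        for _ in range(X1):
            X2 = X2 + X3
        return X2

    def bound(x1, x2, x3):
        return x2 + x1 * (x3 + x1 * x2)   # hand-derived tight bound

    for x1 in range(6):
        for x2 in range(6):
            for x3 in range(6):
                assert run(x1, x2, x3) <= bound(x1, x2, x3)
    print("bound holds on all tested inputs")
    ```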