555 research outputs found

    Genetic Improvement of Data gives Binary Logarithm from sqrt

    Get PDF
    Automated search in the form of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), combined with manual code changes, transforms the 512 Newton-Raphson floating-point start values of the table-driven square root function from glibc, the open-source GNU C library, into a new bespoke implementation of the double-precision binary logarithm, log2, for C in seconds.
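
    As a minimal illustration of the "evolve only the data" idea, the sketch below fixes a small code skeleton (table lookup plus linear interpolation) and lets a simple evolution strategy tune the table entries so the skeleton approximates log2 on [1, 2). The 33-entry table and the (1+1)-ES are assumed stand-ins for the paper's 512-entry glibc table and CMA-ES.

        import math
        import random

        TABLE_SIZE = 32  # toy stand-in for the paper's 512-entry table (assumption)

        def skeleton(x, table):
            # Fixed code skeleton: table lookup with linear interpolation on [1, 2).
            t = (x - 1.0) * TABLE_SIZE
            i = min(int(t), TABLE_SIZE - 1)
            frac = t - i
            return table[i] + frac * (table[i + 1] - table[i])

        def error(table, xs):
            # Worst-case deviation of the skeleton from the true binary logarithm.
            return max(abs(skeleton(x, table) - math.log2(x)) for x in xs)

        def evolve(generations=5000, sigma=0.01):
            xs = [1.0 + i / 256 for i in range(256)]
            # Start from identity-like data (x - 1) and mutate only the table.
            table = [i / TABLE_SIZE for i in range(TABLE_SIZE + 1)]
            best = error(table, xs)
            for _ in range(generations):
                child = [v + random.gauss(0.0, sigma) for v in table]
                e = error(child, xs)
                if e <= best:
                    table, best = child, e
            return table, best

        table, err = evolve()
        print(f"max |skeleton - log2| after evolution: {err:.2e}")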

    Evaluation of genetic improvement tools for improvement of non-functional properties of software

    Get PDF
    Genetic improvement (GI) improves both functional properties of software, such as bug repair, and non-functional properties, such as execution time, energy consumption, or source code size. There are studies summarising and comparing GI tools for improving functional properties of software; however, there is no such study for the improvement of its non-functional properties. This research therefore surveys and reports on the existing GI tools for improvement of non-functional properties of software. We conducted a literature review of available GI tools and ran multiple experiments on the open-source tools we found to examine their usability. We applied a cross-testing strategy to check whether the available tools can work on different programs. Overall, we found 63 GI papers that use a GI tool to improve non-functional properties of software, of which 31 are accompanied by open-source code. We were able to successfully run eight GI tools, and found that ultimately only two, Gin and PyGGI, can be readily applied to new general software.

    Detecting Floating-Point Errors via Atomic Conditions

    Get PDF
    This paper tackles the important, difficult problem of detecting program inputs that trigger large floating-point errors in numerical code. It introduces a novel, principled dynamic analysis that leverages the mathematically rigorous condition numbers of atomic numerical operations, which we call atomic conditions, to effectively guide the search for large floating-point errors. Compared with existing approaches, our work based on atomic conditions has several distinctive benefits: (1) it does not rely on high-precision implementations to act as approximate oracles, which are difficult to obtain in general and computationally costly; and (2) atomic conditions provide accurate, modular search guidance. In combination, these benefits lead to a highly effective approach that detects more significant errors in real-world code (e.g., widely used numerical library functions) and achieves speedups of several orders of magnitude over the state of the art, making error analysis significantly more practical. We expect the methodology and principles behind our approach to benefit other floating-point program analysis tasks such as debugging, repair and synthesis. To facilitate reproduction of our work, we have made our implementation, evaluation data and results publicly available on GitHub at https://github.com/FP-Analysis/atomic-condition. ISSN: 2475-142
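
    A minimal sketch of the condition-guided idea as we read it (not the paper's implementation): the atomic condition of a unary operation f at input x is |x * f'(x) / f(x)|, and maximizing this quantity flags inputs where rounding errors are strongly amplified. The simple random search below is an assumption standing in for the paper's guided dynamic analysis.

        import math
        import random

        def cond_sin(x):
            # Atomic condition of sin at x: |x * cos(x) / sin(x)|.
            s = math.sin(x)
            return abs(x * math.cos(x) / s) if s != 0.0 else math.inf

        def search_large_condition(cond, lo, hi, iters=100000):
            # Mix global restarts with local refinement around the incumbent,
            # keeping the input that maximizes the atomic condition number.
            best_x, best_c = lo, -1.0
            for _ in range(iters):
                if random.random() < 0.5:
                    x = random.uniform(lo, hi)
                else:
                    x = best_x + random.gauss(0.0, (hi - lo) * 1e-4)
                c = cond(x)
                if c > best_c:
                    best_x, best_c = x, c
            return best_x, best_c

        # Inputs near multiples of pi make sin ill-conditioned.
        x, c = search_large_condition(cond_sin, 1.0, 100.0)
        print(f"x = {x!r}, atomic condition ~ {c:.3e}")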

    Quality-diversity in dissimilarity spaces

    Full text link
    The theory of magnitude provides a mathematical framework for quantifying and maximizing diversity. We apply this framework to formulate quality-diversity algorithms in generic dissimilarity spaces. In particular, we instantiate and demonstrate a very general version of Go-Explore with promising performance.
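
    The core quantity is straightforward to compute for a finite dissimilarity space: form the similarity matrix Z with entries exp(-d(i, j)), solve Z w = 1 for the weighting w, and the magnitude is the sum of the weights. The sketch below computes magnitude and uses it in a greedy subset selection; the greedy loop is our illustrative assumption, not the paper's Go-Explore variant.

        import numpy as np

        def magnitude(D):
            # Magnitude of a finite metric space with distance matrix D.
            Z = np.exp(-D)
            w = np.linalg.solve(Z, np.ones(len(D)))
            return w.sum()

        def greedy_diverse_subset(D, k):
            # Grow a subset, at each step adding the point that most
            # increases the magnitude of the selected sub-space.
            chosen = [0]
            while len(chosen) < k:
                rest = [i for i in range(len(D)) if i not in chosen]
                best = max(rest, key=lambda i: magnitude(
                    D[np.ix_(chosen + [i], chosen + [i])]))
                chosen.append(best)
            return chosen

        # Example: 50 points on a line; the chosen subset spreads out.
        pts = np.linspace(0.0, 5.0, 50)
        D = np.abs(pts[:, None] - pts[None, :])
        print(greedy_diverse_subset(D, 5))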

    Stochastic volatility models: calibration, pricing and hedging

    Get PDF
    Stochastic volatility models have long provided a popular alternative to the Black-Scholes-Merton framework. They provide, in a self-consistent way, an explanation for the presence of implied volatility smiles/skews seen in practice. Incorporating jumps into the stochastic volatility framework gives further freedom to financial mathematicians to fit both the short and long end of the implied volatility surface. We present three stochastic volatility models here: the Heston model, the Bates model and the SVJJ model. The latter two models incorporate jumps in the stock price process and, in the case of the SVJJ model, jumps in the volatility process. We analyse the effects that the different model parameters have on the implied volatility surface as well as the returns distribution. We also present pricing techniques for determining vanilla European option prices under the dynamics of the three models. These include the fast Fourier transform (FFT) framework of Carr and Madan as well as two Monte Carlo pricing methods. Making use of the FFT pricing framework, we present calibration techniques for fitting the models to option data. Specifically, we examine the use of the genetic algorithm, adaptive simulated annealing and a MATLAB optimisation routine for fitting the models to option data via a least-squares calibration routine. We favour the genetic algorithm and make use of it in fitting the three models to ALSI and S&P 500 option data. The last section of the dissertation provides hedging techniques for the models via the calculation of option price sensitivities. We find that a delta, vega and gamma hedging scheme provides the best results for the Heston model. The inclusion of jumps in the stock price and volatility processes, however, worsens the performance of this scheme. MATLAB code for some of the routines implemented is provided in the appendix.
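
    For a flavour of the pricing machinery, here is a minimal Monte Carlo sketch of a European call under Heston dynamics using a full-truncation Euler scheme, one common variant of Monte Carlo pricing for this model. The dissertation's routines are in MATLAB; this Python version and all parameter values below are illustrative assumptions, not its calibrated results.

        import numpy as np

        def heston_call_mc(S0, K, T, r, v0, kappa, theta, xi, rho,
                           n_paths=100000, n_steps=200, seed=0):
            rng = np.random.default_rng(seed)
            dt = T / n_steps
            S = np.full(n_paths, float(S0))
            v = np.full(n_paths, float(v0))
            for _ in range(n_steps):
                z1 = rng.standard_normal(n_paths)
                z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
                vp = np.maximum(v, 0.0)                     # full truncation
                S *= np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
                v += kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2
            payoff = np.maximum(S - K, 0.0)
            return np.exp(-r * T) * payoff.mean()

        # Illustrative parameters (assumed, not the dissertation's calibration):
        price = heston_call_mc(S0=100, K=100, T=1.0, r=0.02,
                               v0=0.04, kappa=1.5, theta=0.04, xi=0.3, rho=-0.7)
        print(f"Heston MC call price: {price:.4f}")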

    Automated Discovery of Numerical Approximation Formulae Via Genetic Programming

    Get PDF
    This thesis describes the use of genetic programming to automate the discovery of numerical approximation formulae. Results are presented involving the rediscovery of known approximations for Harmonic numbers and the discovery of rational polynomial approximations for functions of one or more variables, the latter compared against Padé approximations obtained through a symbolic mathematics package. For functions of a single variable, we show that evolved solutions can be considered superior to Padé approximations, a powerful technique from numerical analysis, given certain trade-offs between approximation cost and accuracy. For functions of more than one variable, we are able to evolve rational polynomial approximations where no Padé approximation can be computed. Furthermore, we show that evolved approximations can be iteratively improved through the evolution of approximations to their error function. Based on these results, we consider genetic programming to be a powerful and effective technique for the automated discovery of numerical approximation formulae.
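
    The error-iteration idea is simple to demonstrate: take an initial approximation to f, fit a second approximation to its error function, and add it as a correction. In the sketch below a degree-2 Taylor polynomial and a least-squares fit stand in for the thesis's evolved rational polynomials (a simplifying assumption).

        import numpy as np

        xs = np.linspace(0.0, 1.0, 400)
        f = np.exp(xs)

        # Stage 1: crude initial approximation (degree-2 Taylor series of exp at 0).
        approx1 = 1.0 + xs + 0.5 * xs**2
        print(f"stage-1 max error: {np.max(np.abs(f - approx1)):.3e}")  # ~2e-01

        # Stage 2: approximate the error function itself, add it as a correction.
        err_fit = np.polynomial.Polynomial.fit(xs, f - approx1, deg=2)
        approx2 = approx1 + err_fit(xs)
        print(f"stage-2 max error: {np.max(np.abs(f - approx2)):.3e}")  # ~1e-02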

    Multivariate Statistical Machine Learning Methods for Genomic Prediction

    Get PDF
    This book is open access under a CC BY 4.0 license. It brings together the latest genome-based prediction models currently used by statisticians, breeders and data scientists. It provides an accessible way to understand the theory behind each statistical learning tool, the required pre-processing, the basics of model building, how to train statistical learning methods, the basic R scripts needed to implement each tool, and the output of each tool. For each tool the book provides the background theory, the relevant elements of the R statistical software for its implementation, the conceptual underpinnings, and at least two illustrative examples with data from real-world genomic selection experiments. Worked-out examples help readers check their own comprehension. The book will greatly appeal to readers in plant and animal breeding, genetics and statistics, as it provides in a very accessible way the necessary theory, the appropriate R code, and illustrative examples for a complete understanding of each statistical learning tool. In addition, it weighs the advantages and disadvantages of each tool.
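
    As a small taste of the workflow, genome-based prediction boils down to regressing phenotypes on a marker matrix and scoring predictive accuracy on held-out lines. The book's worked examples use R; this Python ridge-regression stand-in for GBLUP on synthetic marker data is our assumption.

        import numpy as np

        rng = np.random.default_rng(1)
        n_lines, n_markers = 300, 1000

        # Synthetic 0/1/2 allele counts and phenotypes = signal + noise.
        X = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)
        beta = rng.normal(0.0, 0.05, n_markers)       # true marker effects
        y = X @ beta + rng.normal(0.0, 1.0, n_lines)

        # Center markers; hold out the last 50 lines for validation.
        X -= X.mean(axis=0)
        train, test = np.arange(250), np.arange(250, 300)

        # Ridge regression (closed form), the workhorse behind GBLUP.
        lam = 50.0
        A = X[train].T @ X[train] + lam * np.eye(n_markers)
        b_hat = np.linalg.solve(A, X[train].T @ y[train])

        acc = np.corrcoef(X[test] @ b_hat, y[test])[0, 1]
        print(f"predictive correlation on held-out lines: {acc:.2f}")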

    Novel Computational Methods for Censored Data and Regression

    Get PDF
    This dissertation can be divided into three topics. In the first topic, we derived a recursive algorithm for the constrained Kaplan-Meier estimator, which speeds up computation by up to fifty times compared with the current method based on the EM algorithm. We also showed how this leads to a vast improvement of empirical likelihood analysis with right-censored data. After a brief review of regularized regressions, we investigated the computational problems in the parametric/non-parametric hybrid accelerated failure time models and their regularization in a high-dimensional setting. We also illustrated that, as the number of pieces increases, the discussed models approach a non-parametric one. In the last topic, we discussed a semi-parametric approach to a hypothesis testing problem in the binary choice model. The major tools used are a Buckley-James-type algorithm and empirical likelihood. The essential idea, similar to that of the first topic, is to iteratively compute the linearly constrained empirical likelihood using optimization algorithms, including the EM and iterative convex minorant algorithms.
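
    For orientation, the sketch below implements the plain (unconstrained) Kaplan-Meier product-limit estimator that the dissertation's recursive, constrained version builds on; the toy data are assumptions for illustration.

        import numpy as np

        def kaplan_meier(times, events):
            # times: observed times; events: 1 = event observed, 0 = censored.
            times, events = np.asarray(times), np.asarray(events)
            uniq = np.unique(times[events == 1])
            surv, s = [], 1.0
            for t in uniq:
                at_risk = np.sum(times >= t)            # still under observation
                deaths = np.sum((times == t) & (events == 1))
                s *= 1.0 - deaths / at_risk             # product-limit update
                surv.append(s)
            return uniq, np.array(surv)

        t = [3, 5, 5, 7, 8, 10, 12, 12, 15]
        e = [1, 1, 0, 1, 0, 1, 1, 0, 1]
        for time, s in zip(*kaplan_meier(t, e)):
            print(f"S({time}) = {s:.3f}")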