
    Compilers that learn to optimise: a probabilistic machine learning approach

    Compiler optimisation is the process of making a compiler produce better code, i.e. code that, for example, runs faster on a target architecture. Although numerous program transformations for optimisation have been proposed in the literature, these transformations are not always beneficial and they can interact in very complex ways. Traditional approaches adopted by compiler writers fix the order of the transformations and decide when and how these transformations should be applied to a program by using hard-coded heuristics. However, these heuristics require a lot of time and effort to construct and may sacrifice performance on programs they have not been tuned for.

    This thesis proposes a probabilistic machine learning solution to the compiler optimisation problem that automatically determines "good" optimisation strategies for programs. This approach uses predictive modelling to search the space of compiler transformations. Unlike most previous work, which learns when/how to apply a single transformation in isolation or a fixed-order set of transformations, the techniques proposed in this thesis tackle the general problem of predicting "good" sequences of compiler transformations. This is achieved by exploiting transference across programs with two different techniques: Predictive Search Distributions (PSD) and multi-task Gaussian process prediction (multi-task GP). While the former directly addresses the problem of predicting "good" transformation sequences, the latter learns regression models (or proxies) of program performance in order to rapidly scan the space of transformation sequences.

    Both methods, PSD and multi-task GP, are formulated as general machine learning techniques. In particular, the PSD method is proposed in order to speed up search in combinatorial optimisation problems by learning a distribution over good solutions on a set of problem instances and using that distribution to search the optimisation space of a problem that has not been seen before. Likewise, multi-task GP is proposed as a general method for multi-task learning that directly models the correlation between several machine learning tasks, exploiting the information shared across the tasks.

    Additionally, this thesis presents an extension to the well-known analysis of variance (ANOVA) methodology to deal with sequence data. This extension is used to address the problem of optimisation space characterisation by identifying and quantifying the main effects of program transformations and their interactions.

    Finally, the proposed machine learning methods are successfully applied to a data set generated by applying source-to-source transformations to 12 C programs from the UTDSP benchmark suite.
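    As a rough illustration of the multi-task GP idea summarised above, the sketch below implements the intrinsic coregionalisation form of multi-task Gaussian process regression, in which the covariance between any two (input, task) pairs factorises into a task covariance matrix B and an input kernel. The kernel choice, the task covariance, the feature representation and the toy data are assumptions made for this example only; they do not reproduce the models, features or benchmarks used in the thesis.

    import numpy as np

    # Minimal multi-task GP regression sketch (intrinsic coregionalisation form).
    # All data below is synthetic and illustrative only.

    def rbf_kernel(X1, X2, lengthscale=1.0):
        """Squared-exponential kernel over input (e.g. sequence-feature) vectors."""
        d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2.0 * X1 @ X2.T
        return np.exp(-0.5 * d2 / lengthscale**2)

    def multitask_gp_predict(X, Y, X_star, B, noise=1e-2):
        """Joint GP over (input, task) pairs with covariance Kx (inputs) x B (tasks).

        X      : (n, d) shared inputs, e.g. transformation-sequence features
        Y      : (n, t) observed responses, one column per task (program)
        X_star : (m, d) test inputs
        B      : (t, t) task covariance capturing inter-program correlation
        """
        n, t = Y.shape
        Kx = rbf_kernel(X, X)
        Ks = rbf_kernel(X_star, X)
        # Full covariance over all (input, task) pairs via a Kronecker product.
        K_full = np.kron(Kx, B) + noise * np.eye(n * t)
        K_star = np.kron(Ks, B)                          # (m*t, n*t)
        alpha = np.linalg.solve(K_full, Y.reshape(-1))   # Y stacked input-major
        return (K_star @ alpha).reshape(-1, t)           # (m, t) predictive means

    # Toy usage: two correlated tasks (programs), one-dimensional inputs.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(20, 1))
    Y = np.column_stack([np.sin(X[:, 0]), 0.8 * np.sin(X[:, 0]) + 0.1])
    B = np.array([[1.0, 0.8], [0.8, 1.0]])               # assumed task correlation
    X_star = np.linspace(-3, 3, 5)[:, None]
    print(multitask_gp_predict(X, Y, X_star, B))

    Under this construction, observations from one task (program) inform predictions for correlated tasks, which is the mechanism that allows a regression proxy to scan a new program's transformation space cheaply.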

    Improving Market Risk Management with Heuristic Algorithms

    Recent changes in the regulatory framework for banking supervision increase the regulatory oversight and minimum capital requirements for financial institutions. In this thesis, we research active portfolio optimisation techniques with heuristic algorithms to manage the new regulatory challenges faced in risk management.

    We first study whether heuristic algorithms can support risk management in finding globally optimal solutions that reduce the regulatory capital requirements. In a benchmark comparison of variance, Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) objective functions combined with different optimisation routines, we show that the Threshold Accepting (TA) heuristic algorithm reduces the capital requirements compared with the Trust-Region (TR) local search algorithm.

    Secondly, we introduce a new risk management approach based on the Unconditional Coverage test to optimally manage the regulatory capital requirements while avoiding over- or underestimating the portfolio risk. In an empirical analysis with TA and TR optimisation, we show that our new approach successfully optimises the portfolio risk-return profile and reduces the capital requirements.

    Next, we analyse the effect of different estimation techniques on the capital requirements. More specifically, empirical and analytical VaR and CVaR estimation is compared with a simulation-based approach using a multivariate GARCH process. The optimisation is performed using the Population-Based Incremental Learning (PBIL) algorithm. We find that the parametric and empirical distribution assumptions generate similar results and neither clearly outperforms the other. However, portfolios optimised with the simulation approach reduce the capital requirements by about 11%.

    Finally, we introduce a global VaR and CVaR hedging approach with a multivariate GARCH process and PBIL optimisation. Our hedging framework provides a self-financing hedge that reduces transaction costs by using standardised derivatives. The empirical study shows that the new approach increases the stability of the portfolio while avoiding high transaction costs. The results are compared with benchmark portfolios optimised with a Genetic Algorithm.
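    To make the Threshold Accepting idea concrete, the sketch below minimises an empirical (historical) Value-at-Risk estimate over long-only portfolio weights. TA accepts any neighbouring solution whose deterioration stays below a slowly decreasing threshold, which lets the search escape local optima that can trap a purely greedy local search. The scenario returns, threshold schedule, step size and VaR level are illustrative assumptions for this example, not the data or parameters used in the thesis.

    import numpy as np

    # Minimal Threshold Accepting (TA) sketch for portfolio risk minimisation.

    def empirical_var(weights, scenarios, alpha=0.99):
        """Historical Value-at-Risk: the alpha-quantile of portfolio losses."""
        losses = -(scenarios @ weights)
        return np.quantile(losses, alpha)

    def neighbour(weights, rng, step=0.02):
        """Shift a small amount of weight from one asset to another (long-only)."""
        w = weights.copy()
        i, j = rng.choice(len(w), size=2, replace=False)
        delta = min(step, w[i])            # keep weights non-negative
        w[i] -= delta
        w[j] += delta
        return w                           # weights still sum to one

    def threshold_accepting(scenarios, rounds=10, steps=500, seed=0):
        rng = np.random.default_rng(seed)
        n_assets = scenarios.shape[1]
        w = np.full(n_assets, 1.0 / n_assets)      # start from equal weights
        risk = empirical_var(w, scenarios)
        best_w, best_risk = w, risk
        # Decreasing threshold sequence: early rounds tolerate mild deteriorations,
        # the final round behaves like a pure local search.
        thresholds = np.linspace(0.005, 0.0, rounds)
        for tau in thresholds:
            for _ in range(steps):
                w_new = neighbour(w, rng)
                risk_new = empirical_var(w_new, scenarios)
                if risk_new - risk < tau:           # accept if not much worse
                    w, risk = w_new, risk_new
                    if risk < best_risk:
                        best_w, best_risk = w.copy(), risk
        return best_w, best_risk

    # Toy usage with simulated daily returns for five assets.
    rng = np.random.default_rng(1)
    scenarios = rng.normal(0.0005, 0.01, size=(1000, 5))
    weights, var99 = threshold_accepting(scenarios)
    print(weights.round(3), round(var99, 4))

    The same loop structure accommodates other objectives, such as CVaR or a capital-requirement measure, by swapping the objective function passed to the search.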