
    Comparing Machine Learning and Interpolation Methods for Loop-Level Calculations

    The need to approximate functions is ubiquitous in science, either due to empirical constraints or the high computational cost of accessing the function. In high-energy physics, the precise computation of the scattering cross-section of a process requires the evaluation of computationally intensive integrals. A wide variety of methods in machine learning have been used to tackle this problem, but the motivation for using one method over another is often lacking. Comparing these methods is typically highly dependent on the problem at hand, so we specialise to the case where the function can be evaluated a large number of times up front, after which quick and accurate evaluation can take place. We consider four interpolation and three machine learning techniques and compare their performance on three toy functions, the four-point scalar Passarino-Veltman D_0 function, and the two-loop self-energy master integral M. We find that in low dimensions (d = 3), traditional interpolation techniques like the Radial Basis Function perform very well, but in higher dimensions (d = 5, 6, 9) multi-layer perceptrons (a.k.a. neural networks) do not suffer as much from the curse of dimensionality and provide the fastest and most accurate predictions.
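    The train-once, evaluate-often workflow described in this abstract can be sketched as follows: sample an expensive function many times, fit both an RBF interpolant and a small MLP, and compare held-out accuracy. The toy target function, sample sizes, and hyperparameters below are illustrative choices, not those used in the paper.

    ```python
    # Sketch: compare RBF interpolation against an MLP on a cheap stand-in
    # for an expensive-to-evaluate function (here sin of the coordinate sum).
    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    d = 3                                     # low-dimensional case
    f = lambda x: np.sin(x.sum(axis=1))       # toy target function

    X_train = rng.uniform(-1, 1, size=(2000, d))
    y_train = f(X_train)
    X_test = rng.uniform(-1, 1, size=(500, d))
    y_test = f(X_test)

    rbf = RBFInterpolator(X_train, y_train)   # exact at the training nodes
    mlp = MLPRegressor(hidden_layer_sizes=(64, 64),
                       max_iter=2000, random_state=0).fit(X_train, y_train)

    rbf_err = np.max(np.abs(rbf(X_test) - y_test))
    mlp_err = np.max(np.abs(mlp.predict(X_test) - y_test))
    print(f"RBF max error: {rbf_err:.2e}, MLP max error: {mlp_err:.2e}")
    ```

    In this low-dimensional setting the interpolant is typically very accurate, consistent with the paper's finding that RBFs dominate at d = 3; the trade-off reverses only as the dimension grows.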

    Tumour growth: An approach to calibrate parameters of a multiphase porous media model based on in vitro observations of Neuroblastoma spheroid growth in a hydrogel microenvironment

    To unravel the processes that lead to the growth of solid tumours, it is necessary to link knowledge of cancer biology with the physical properties of the tumour and its interaction with the surrounding microenvironment. Our understanding of the underlying mechanisms is, however, still imprecise. We therefore developed computational physics-based models, which incorporate the interaction of the tumour with its surroundings based on the theory of porous media. However, the experimental validation of such models represents a challenge to their clinical use as a prognostic tool. This study combines a physics-based model with in vitro experiments based on microfluidic devices used to mimic a three-dimensional tumour microenvironment. By conducting a global sensitivity analysis, we identify the most influential input parameters and infer their posterior distribution based on Bayesian calibration. The resulting probability density is in agreement with the scattering of the experimental data and thus validates the proposed workflow. This study demonstrates the considerable challenge of determining precise parameters when only limited data are available for such complex processes and models, but also shows in general how to indirectly characterise the mechanical properties of neuroblastoma spheroids that cannot feasibly be measured experimentally.
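    The Bayesian calibration step described in this abstract can be illustrated with a toy example: given noisy observations of a growth curve, compute a posterior over a single model parameter on a grid. The growth model, prior, noise level, and data below are illustrative stand-ins, not the multiphase porous-media model of the paper.

    ```python
    # Toy Bayesian calibration: grid posterior for one parameter k of a
    # saturating growth model, under a uniform prior and Gaussian noise.
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.linspace(0, 5, 20)
    k_true, sigma = 0.8, 0.05
    model = lambda k: 1 - np.exp(-k * t)          # toy growth model
    data = model(k_true) + rng.normal(scale=sigma, size=t.size)

    k_grid = np.linspace(0.1, 2.0, 400)           # uniform prior over the grid
    log_post = np.array([-0.5 * np.sum((data - model(k)) ** 2) / sigma**2
                         for k in k_grid])
    post = np.exp(log_post - log_post.max())
    post /= post.sum() * (k_grid[1] - k_grid[0])  # normalised posterior density

    k_map = k_grid[post.argmax()]
    print(f"MAP estimate: {k_map:.2f}")
    ```

    The same logic scales to the multi-parameter setting of the paper, where a global sensitivity analysis first screens out parameters the data cannot constrain, and sampling methods replace the grid.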

    Scalable Hierarchical Gaussian Process Models for Regression and Pattern Classification

    Gaussian processes, which are distributions over functions, are powerful nonparametric tools for the two major machine learning tasks: regression and classification. Both tasks are concerned with learning input-output mappings from example input-output pairs. In Gaussian process (GP) regression and classification, such mappings are modeled by Gaussian processes. In GP regression, the likelihood is Gaussian for continuous outputs, and hence closed-form solutions for prediction and model selection can be obtained. In GP classification, the likelihood is non-Gaussian for discrete/categorical outputs, and hence closed-form solutions are not available and approximate inference methods must be resorted to.
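    The closed-form GP regression equations alluded to in this abstract can be written out in a few lines of NumPy: with a Gaussian likelihood, the posterior mean and variance at test points follow directly from linear algebra. The kernel choice and hyperparameters here are illustrative.

    ```python
    # Minimal GP regression sketch: posterior mean K*ᵀ(K + σ²I)⁻¹y and
    # posterior covariance K** − K*ᵀ(K + σ²I)⁻¹K*.
    import numpy as np

    def rbf_kernel(A, B, ell=1.0, sf=1.0):
        """Squared-exponential covariance sf² exp(−|a−b|² / (2 ell²))."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sf**2 * np.exp(-0.5 * d2 / ell**2)

    def gp_predict(X, y, Xs, noise=1e-2):
        """Closed-form GP posterior mean and pointwise variance at Xs."""
        K = rbf_kernel(X, X) + noise * np.eye(len(X))
        Ks = rbf_kernel(X, Xs)
        Kss = rbf_kernel(Xs, Xs)
        alpha = np.linalg.solve(K, y)
        mean = Ks.T @ alpha
        cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
        return mean, np.diag(cov)

    X = np.linspace(0, 2 * np.pi, 20)[:, None]
    y = np.sin(X).ravel()
    mean, var = gp_predict(X, y, np.array([[np.pi / 2]]))
    print(mean, var)   # mean ≈ sin(π/2) = 1, small posterior variance
    ```

    The non-Gaussian classification likelihood breaks exactly this step: the solve against K no longer yields the posterior, which is why approximate inference (e.g. Laplace or expectation propagation) is required there.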

    Interacting with scientific workflows


    Emulation of Stochastic Computer Models with an Application to Building Design

    Computer models are becoming increasingly common in many areas of science and engineering, aiding in the understanding of scientific processes and providing critical information for important decisions. These models often rely on complex mathematics, and they can use a large amount of computing power to perform a single simulation. This greatly limits the usefulness of these models, because many simulations are needed for most real-world problems. A common solution to this is to train a second, statistical, model using a small set of initial simulations. This statistical model is simpler and quicker to run (and is often known as an emulator, with the computer model known as a simulator). This may initially seem like a convoluted approach, but it has shown great promise and continues to gain use in practice. Properly designing emulators often depends on properties of the simulator in question, and so tailoring emulators to the specific problem at hand is essential. Stochastic simulators are computer models that give randomly different outputs each time they are run, even if they are run for the exact same scenario (i.e. the exact same inputs). This thesis deals primarily with stochastic simulators, and how to build and use emulators for these. Such simulators can be very difficult to build emulators for, as the emulator needs to learn both the underlying trends and the structure of the randomness. This thesis also uses the engineering design of buildings to exemplify some of the issues and the developed techniques. Before a building is made, engineers can simulate different properties of the building (such as its internal temperature), and use that to make modifications to the design. We argue in this thesis that this process should be done stochastically, modifying the simulators to produce random outputs as a result of the random nature of weather (which affects the internal properties of any building).
This then provides motivation for the stochastic emulation techniques and also acts as an interesting case study. Outside of these two guiding objectives, the first three chapters in this thesis (after the introduction) can generally be read independently: we develop techniques for checking the quality of a stochastic emulator; we develop a methodology for improved stochastic emulation by using deterministic (non-stochastic) simulations; and we propose a framework for deciding on an acceptable building design. The remaining chapters then discuss some attributes of specific emulation techniques and provide concluding remarks.
    Engineering and Physical Sciences Research Council (EPSRC)
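    One standard approach to the stochastic emulation problem described in this abstract (not necessarily the method developed in the thesis) is to run the simulator several times at each design point, then emulate the replicate means and variances separately, so the emulator captures both the trend and the noise structure. The toy "simulator" and the polynomial surrogates below are illustrative stand-ins; in practice Gaussian-process emulators are the usual choice.

    ```python
    # Sketch: mean/variance emulation of a toy stochastic simulator with
    # an input-dependent noise level, using replicated runs at each design point.
    import numpy as np

    rng = np.random.default_rng(1)

    def simulator(x):
        """Toy stochastic simulator: a smooth trend plus input-dependent noise."""
        return np.sin(x) + rng.normal(scale=0.1 + 0.05 * x)

    design = np.linspace(0, 3, 15)           # design points
    reps = 50                                # replicates per design point
    runs = np.array([[simulator(x) for _ in range(reps)] for x in design])

    means = runs.mean(axis=1)                # estimates of the underlying trend
    variances = runs.var(axis=1, ddof=1)     # estimates of the noise structure

    # Cheap surrogates for mean and log-variance; polynomials stand in for
    # the Gaussian-process emulators typically used in practice.
    mean_emu = np.polynomial.Polynomial.fit(design, means, deg=5)
    var_emu = np.polynomial.Polynomial.fit(design, np.log(variances), deg=2)

    x_new = 1.5
    pred_mean = mean_emu(x_new)
    pred_sd = np.exp(var_emu(x_new)) ** 0.5
    print(pred_mean, pred_sd)    # ≈ sin(1.5) and ≈ 0.1 + 0.05·1.5
    ```

    Fitting the log of the variance keeps the variance surrogate positive, and the replicate structure is exactly what a deterministic simulator lacks, which is why emulating stochastic simulators requires this extra layer.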