6 research outputs found

    Comparing Machine Learning and Interpolation Methods for Loop-Level Calculations

    The need to approximate functions is ubiquitous in science, either because of empirical constraints or because of the high computational cost of accessing the function. In high-energy physics, the precise computation of the scattering cross-section of a process requires the evaluation of computationally intensive integrals. A wide variety of methods in machine learning have been used to tackle this problem, but the motivation for using one method over another is often lacking. Comparing these methods is typically highly dependent on the problem at hand, so we restrict ourselves to the case where the function can be evaluated a large number of times up front, after which quick and accurate evaluation must take place. We consider four interpolation and three machine learning techniques and compare their performance on three toy functions, the four-point scalar Passarino-Veltman $D_0$ function, and the two-loop self-energy master integral $M$. We find that in low dimensions ($d = 3$) traditional interpolation techniques such as the Radial Basis Function perform very well, but in higher dimensions ($d = 5, 6, 9$) multi-layer perceptrons (a.k.a. neural networks) do not suffer as much from the curse of dimensionality and provide the fastest and most accurate predictions.
    Comment: 30 pages, 17 figures; v2: added a few references; v3: new title, added a few references
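    As a rough illustration of the comparison described above, the sketch below fits both a thin-plate-spline RBF interpolant and a small multi-layer perceptron to samples of a synthetic toy function and compares their test errors. The toy function, sample sizes and network architecture are illustrative assumptions, not the functions or settings used in the paper; SciPy's RBFInterpolator and scikit-learn's MLPRegressor stand in for the authors' implementations.

        # Hedged sketch: an RBF interpolant vs. an MLP surrogate on a synthetic
        # toy function. The function, sizes and architecture are illustrative
        # assumptions, not those used in the paper.
        import numpy as np
        from scipy.interpolate import RBFInterpolator
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        d = 3                                   # input dimension
        n_train, n_test = 2000, 500

        def toy(x):
            # stand-in for an expensive loop-integral evaluation
            return np.sin(x.sum(axis=1)) + 0.5 * np.prod(np.cos(x), axis=1)

        X_train = rng.uniform(-1.0, 1.0, size=(n_train, d))
        y_train = toy(X_train)
        X_test = rng.uniform(-1.0, 1.0, size=(n_test, d))
        y_test = toy(X_test)

        # Interpolation baseline: thin-plate-spline radial basis function
        rbf = RBFInterpolator(X_train, y_train, kernel="thin_plate_spline")
        rbf_err = np.abs(rbf(X_test) - y_test).mean()

        # Machine-learning baseline: multi-layer perceptron regressor
        mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
        mlp.fit(X_train, y_train)
        mlp_err = np.abs(mlp.predict(X_test) - y_test).mean()

        print(f"mean |error|  RBF: {rbf_err:.2e}   MLP: {mlp_err:.2e}")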

    Quasi-optimization of Neuro-fuzzy Expert Systems using Asymptotic Least-squares and Modified Radial Basis Function Models: Intelligent Planning of Operational Research Problems

    The uncertainty found in many industrial systems poses a significant challenge, particularly in modelling production planning and optimizing manufacturing flow. In aggregate production planning, a key requirement is the ability to accurately predict demand from a range of influencing factors, such as consumption. Accurately building such causal models can be problematic if significant uncertainties are present, such as when the data are fuzzy, uncertain, fluctuating and non-linear. AI models, such as Adaptive Neuro-Fuzzy Inference Systems (ANFIS), can cope with this better than most, but even these well-established approaches fail if the data are scarce, poorly scaled and noisy. ANFIS is a combination of two approaches: a Sugeno-type Fuzzy Inference System (FIS) and Artificial Neural Networks (ANN). Two sets of parameters are required to define the model: premise parameters and consequent parameters. Together, they ensure that the correct number and shape of membership functions are used and combined to produce reliable outputs. However, values for these parameters can only be determined optimally if there are enough data samples representing the problem space for the method to converge. Mitigation strategies are suggested in the literature, such as fixing the premise parameters to avoid over-fitting, but for many practitioners this is not an adequate solution, as their expertise lies in the application domain, not in the AI domain. The work presented here is motivated by a real-world challenge in modelling and predicting demand for the gasoline industry in Iraq, an application where both the quality and quantity of the training data can significantly affect prediction accuracy. To overcome data scarcity, we propose novel data expansion algorithms that augment the original data with new samples drawn from the same distribution. By using a combination of carefully chosen and suitably modified radial basis function models, we show how robust methods can overcome problems of over-smoothing at boundary values and turning points. We further show how a transformed least-squares (TLS) approximation of the data can be constructed to asymptotically bound the effect of outliers, enabling accurate data expansion to take place. Although the problem of scaling/normalization is well understood in some AI applications, we assess the impact of two specific scaling techniques on model accuracy. By comparing and contrasting a range of data scaling and data expansion methods, we evaluate their effectiveness in reducing prediction error. Throughout this work, the various methods are explained and expanded upon using a case study drawn from the oil and gas industry in Iraq, which focuses on the accurate prediction of yearly gasoline consumption. This case study, and others, are used to demonstrate empirically the effectiveness of the approaches presented when compared to the current state of the art. Finally, we present a tool developed in MATLAB that allows practitioners to experiment with all of the methods and options presented in this work.
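    A minimal sketch of the data-expansion idea described above is given below: fit a radial basis function model to a scarce one-dimensional series and query it at intermediate points to augment the training set. It uses a plain SciPy RBF interpolant with min-max scaling of the inputs; the thesis' modified RBF models and transformed least-squares (TLS) safeguards against outliers and boundary over-smoothing are not reproduced, and the yearly consumption values are invented for illustration.

        # Hedged sketch of data expansion via an RBF model. The consumption
        # figures are invented for illustration; the modified RBF and TLS
        # techniques from the thesis are not reproduced here.
        import numpy as np
        from scipy.interpolate import RBFInterpolator

        years = np.arange(2005, 2015, dtype=float)            # scarce original samples
        consumption = np.array([4.1, 4.4, 4.3, 4.9, 5.2,
                                5.0, 5.6, 6.1, 6.0, 6.4])     # illustrative values

        # Min-max scaling of the inputs (one of the scaling options compared in the work)
        x = (years - years.min()) / (years.max() - years.min())

        # Smooth RBF model fitted through the scarce data
        model = RBFInterpolator(x[:, None], consumption, kernel="thin_plate_spline")

        # Data expansion: query the model at intermediate points to augment the set
        x_new = np.linspace(0.0, 1.0, 37)
        augmented_years = x_new * (years.max() - years.min()) + years.min()
        augmented_values = model(x_new[:, None])
        print(np.column_stack([augmented_years, augmented_values])[:5])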