    Predicting regression test failures using genetic algorithm-selected dynamic performance analysis metrics

    A novel framework for predicting regression test failures is proposed. The basic principle embodied in the framework is to use performance analysis tools to capture the runtime behaviour of a program as it executes each test in a regression suite. The performance information is then used to build a dynamically predictive model of test outcomes. Our framework is evaluated using a genetic algorithm for dynamic metric selection in combination with state-of-the-art machine learning classifiers. We show that if a program is modified and some tests subsequently fail, then it is possible to predict with considerable accuracy which of the remaining tests will also fail, and that this can be used to help prioritise tests in time-constrained testing environments.
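
    The abstract pairs a genetic algorithm for selecting dynamic performance metrics with a machine learning classifier. Below is a minimal sketch of that idea, assuming a feature matrix X of per-test performance metrics and binary pass/fail labels y; the GA operators, population settings, and the choice of RandomForestClassifier are illustrative assumptions, not the authors' implementation.

        # Hedged sketch: GA-based selection of dynamic metrics feeding a classifier.
        # X (tests x metrics) and y (pass/fail) are assumed inputs; all GA settings are illustrative.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)

        def fitness(mask, X, y):
            # Score a metric subset by cross-validated accuracy of the classifier.
            if not mask.any():
                return 0.0
            clf = RandomForestClassifier(n_estimators=100, random_state=0)
            return cross_val_score(clf, X[:, mask], y, cv=5).mean()

        def ga_select_metrics(X, y, pop_size=20, generations=30, mutation_rate=0.05):
            # Evolve a binary mask over metric columns:
            # tournament selection, uniform crossover, bit-flip mutation.
            n_metrics = X.shape[1]
            population = rng.random((pop_size, n_metrics)) < 0.5
            for _ in range(generations):
                scores = np.array([fitness(ind, X, y) for ind in population])
                def tournament():
                    a, b = rng.choice(pop_size, 2, replace=False)
                    return population[a] if scores[a] >= scores[b] else population[b]
                children = []
                for _ in range(pop_size):
                    p1, p2 = tournament(), tournament()
                    cross = rng.random(n_metrics) < 0.5
                    child = np.where(cross, p1, p2)
                    child ^= rng.random(n_metrics) < mutation_rate   # bit-flip mutation
                    children.append(child)
                population = np.array(children)
            scores = np.array([fitness(ind, X, y) for ind in population])
            return population[scores.argmax()]   # boolean mask of selected metrics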

    Overview of Remaining Useful Life prediction techniques in Through-life Engineering Services

    Through-life Engineering Services (TES) are essential in the manufacture and servicing of complex engineering products. TES improves support services by providing run-to-failure and time-to-failure prognosis data on demand for better decision making. The concept of Remaining Useful Life (RUL) is utilised to predict the life-span of components (of a service system) with the purpose of minimising catastrophic failure events in both the manufacturing and service sectors. The purpose of this paper is to identify failure mechanisms and emphasise failure event prediction approaches that can effectively reduce uncertainties. It demonstrates the classification of techniques used in RUL prediction for optimising the future use of products, based on current products in service, with regard to predictability, availability and reliability. It presents a mapping of degradation mechanisms against techniques for knowledge acquisition, with the objective of showing designers and manufacturers ways to improve the life-span of components.
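
    As a concrete, hedged illustration of the RUL concept reviewed above, the sketch below fits a degradation trend to condition-monitoring data and extrapolates it to a failure threshold; the data, the threshold, and the linear degradation model are assumptions for illustration only and are not drawn from the paper.

        # Hedged RUL illustration: extrapolate a fitted degradation trend to a failure threshold.
        # The health-indicator data, threshold, and linear model are illustrative assumptions.
        import numpy as np

        hours = np.array([0.0, 100.0, 200.0, 300.0, 400.0, 500.0])   # operating hours
        wear = np.array([0.02, 0.11, 0.19, 0.31, 0.42, 0.50])        # hypothetical health indicator
        failure_threshold = 1.0                                       # indicator level treated as failure

        slope, intercept = np.polyfit(hours, wear, 1)                 # simple linear degradation model
        hours_at_failure = (failure_threshold - intercept) / slope
        remaining_useful_life = hours_at_failure - hours[-1]
        print(f"Estimated RUL: {remaining_useful_life:.0f} hours")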

    Predictive modeling of die filling of the pharmaceutical granules using the flexible neural tree

    In this work, a computational intelligence (CI) technique named the flexible neural tree (FNT) was developed to predict the die filling performance of pharmaceutical granules and to identify significant die filling process variables. The FNT resembles a feedforward neural network and creates a tree-like structure by using genetic programming. To improve accuracy, the FNT parameters were optimized using a differential evolution algorithm. The performance of the FNT-based CI model was evaluated and compared with other CI techniques: multilayer perceptron, Gaussian process regression, and reduced error pruning tree. The accuracy of the CI model was evaluated experimentally using die filling as a case study. The die filling experiments were performed using a model shoe system and three different grades of microcrystalline cellulose (MCC) powders (MCC PH 101, MCC PH 102, and MCC DG). The feed powders were roll-compacted and milled into granules. The granules were then sieved into samples of various size classes, and the mass of granules deposited into the die at different shoe speeds was measured. From these experiments, a dataset was generated consisting of true density, mean diameter (d50), granule size, and shoe speed as the inputs and the deposited mass as the output. Cross-validation (CV) methods, namely 10FCV and 5x2FCV, were applied to develop and validate the predictive models. It was found that the FNT-based CI model (for both CV methods) performed much better than the other CI models. Additionally, it was observed that process variables such as granule size and shoe speed had a higher impact on predictability than powder properties such as d50. Furthermore, validation of the model predictions against experimental data showed that the die filling behavior of coarse granules could be better predicted than that of fine granules.
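
    The flexible neural tree itself is not available in standard libraries, so the hedged sketch below only illustrates the cross-validated comparison against the baseline CI models named in the abstract (a multilayer perceptron and Gaussian process regression); the dataset file and column names are assumptions for illustration.

        # Hedged sketch of the 10-fold CV comparison of baseline CI models.
        # "die_filling.csv" and its column names are hypothetical placeholders.
        import pandas as pd
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.model_selection import cross_val_score
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        data = pd.read_csv("die_filling.csv")
        X = data[["true_density", "d50", "granule_size", "shoe_speed"]]
        y = data["deposited_mass"]

        models = {
            "MLP": make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000)),
            "GPR": make_pipeline(StandardScaler(), GaussianProcessRegressor(normalize_y=True)),
        }
        for name, model in models.items():
            r2 = cross_val_score(model, X, y, cv=10, scoring="r2")   # 10-fold CV as in the study
            print(f"{name}: mean R^2 = {r2.mean():.3f}")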

    Which heuristics can aid financial decision-making?

    © 2015 Elsevier Inc. We evaluate the contribution of Nobel Prize-winner Daniel Kahneman, often in association with his late co-author Amos Tversky, to the development of our understanding of financial decision-making and the evolution of behavioural finance as a school of thought within Finance. Whilst a general evaluation of the work of Kahneman would be a massive task, we constrain ourselves to a narrower discussion of his vision of financial decision-making compared with a possible alternative advanced by Gerd Gigerenzer along with numerous co-authors. Both Kahneman and Gigerenzer agree on the centrality of heuristics in decision making. However, for Kahneman heuristics often appear as a fallback when the standard von Neumann-Morgenstern axioms of rational decision-making do not describe investors' choices. In contrast, for Gigerenzer heuristics are simply a more effective way of evaluating choices in the rich and changing decision-making environment investors must face. Gigerenzer challenges Kahneman to move beyond substantiating the presence of heuristics towards a more tangible, testable description of their use and disposal within the ever-changing decision-making environment financial agents inhabit. Here we see the emphasis placed by Gigerenzer on how context and cognition interact to form new schemata for fast and frugal reasoning as offering a productive vein of new research. We illustrate how the interaction between cognition and context already characterises much empirical research, and it appears that the fast and frugal reasoning perspective of Gigerenzer can provide a framework to enhance our understanding of how financial decisions are made.

    An investigation of machine learning based prediction systems

    Traditionally, researchers have used either off-the-shelf models such as COCOMO, or developed local models using statistical techniques such as stepwise regression, to obtain software effort estimates. More recently, attention has turned to a variety of machine learning methods such as artificial neural networks (ANNs), case-based reasoning (CBR) and rule induction (RI). This paper outlines some comparative research into the use of these three machine learning methods to build software effort prediction systems. We briefly describe each method and then apply the techniques to a dataset of 81 software projects derived from a Canadian software house in the late 1980s. We compare the prediction systems in terms of three factors: accuracy, explanatory value and configurability. We show that ANN methods have superior accuracy and that RI methods are least accurate. However, this view is somewhat counteracted by problems with explanatory value and configurability. For example, we found that considerable effort was required to configure the ANN and that this compared very unfavourably with the other techniques, particularly CBR and least squares regression (LSR). We suggest that further work be carried out, both to further explore the interaction between the end-user and the prediction system, and also to facilitate configuration, particularly of ANNs.
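
    A hedged sketch of this kind of comparison is given below, approximating CBR with k-nearest-neighbour analogy and including least squares regression as a baseline, with MMRE as the accuracy measure; the project dataset and its columns are illustrative assumptions, and rule induction is omitted because no standard regression-oriented RI implementation exists in scikit-learn.

        # Hedged sketch: compare effort prediction systems by cross-validated MMRE.
        # "projects.csv" and its columns are hypothetical; CBR is approximated by k-NN analogy.
        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_predict
        from sklearn.neighbors import KNeighborsRegressor
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        projects = pd.read_csv("projects.csv")
        X, y = projects.drop(columns="effort"), projects["effort"]

        estimators = {
            "ANN": make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000)),
            "CBR": make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=3)),
            "LSR": LinearRegression(),
        }
        for name, est in estimators.items():
            pred = cross_val_predict(est, X, y, cv=10)
            mmre = np.mean(np.abs(pred - y) / y)   # mean magnitude of relative error
            print(f"{name}: MMRE = {mmre:.2f}")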

    Update on the ICUD-SIU consultation on multi-parametric magnetic resonance imaging in localised prostate cancer

    Introduction: Prostate cancer (PCa) imaging is a rapidly evolving field. Dramatic improvements in prostate MRI during the last decade will probably change the accuracy of diagnosis. This chapter reviews current evidence on MRI diagnostic performance and its impact on PCa management. Materials and methods: The International Consultation on Urological Diseases nominated a committee to review the literature on prostate MRI. A search of the PubMed database was conducted to identify articles focussed on MP-MRI detection and staging protocols, reporting and scoring systems, and the role of MP-MRI in diagnosing PCa prior to biopsy, in active surveillance, in focal therapy and in detecting local recurrence after treatment. Results: Differences of opinion were reported regarding magnet strength [1.5 Tesla (T) vs. 3T] and the choice of coils. More agreement was found regarding the choice of pulse sequences; diffusion-weighted MRI (DW-MRI), dynamic contrast-enhanced MRI (DCE-MRI), and/or MR spectroscopy imaging (MRSI) are recommended in addition to conventional T2-weighted anatomical sequences. In 2015, the Prostate Imaging Reporting and Data System (PI-RADS version 2) was introduced to standardize image acquisition and interpretation. MP-MRI improves detection of clinically significant PCa (csPCa) in the repeat biopsy setting or before the confirmatory biopsy in patients considering active surveillance. It is useful to guide focal treatment and to detect local recurrences after treatment. Its role in biopsy-naive patients or during the course of active surveillance remains debated. Conclusion: MP-MRI is increasingly used to improve detection of csPCa and to guide the selection of a suitable therapeutic approach.

    Learning to Predict with Highly Granular Temporal Data: Estimating individual behavioral profiles with smart meter data

    Big spatio-temporal datasets, available through both open and administrative data sources, offer significant potential for social science research. The magnitude of the data allows for increased resolution and analysis at the individual level. While there are recent advances in forecasting techniques for highly granular temporal data, little attention has been given to segmenting the time series and finding homogeneous patterns. In this paper, it is proposed to estimate behavioral profiles of individuals' activities over time using Gaussian Process-based models. In particular, the aim is to investigate how individuals or groups may be clustered according to the model parameters. This Bayesian non-parametric method is then tested by examining the predictability of the segments, using a combination of models to fit different parts of the temporal profiles. Model validity is assessed on a set of holdout data. The dataset consists of half-hourly energy consumption records from smart meters from more than 100,000 households in the UK and covers the period from 2015 to 2016. The methodological approach developed in the paper may be easily applied to datasets of similar structure and granularity, for example social media data, and may lead to improved accuracy in the prediction of social dynamics and behavior.
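
    A hedged sketch of the general approach described above follows: a Gaussian Process with a daily-periodic kernel is fitted to each household's half-hourly load, and households are then clustered on the fitted kernel hyperparameters. The synthetic data, kernel choice, and number of clusters are illustrative assumptions, not the paper's configuration.

        # Hedged sketch: per-household GP profiles from half-hourly load, clustered on kernel parameters.
        # Synthetic data stands in for smart meter records; kernel and cluster count are assumptions.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

        rng = np.random.default_rng(0)
        t = np.arange(0.0, 24 * 7, 0.5)                     # one week of half-hourly timestamps (hours)
        households = [1.0 + 0.5 * np.sin(2 * np.pi * (t - rng.uniform(0, 24)) / 24)
                      + 0.1 * rng.standard_normal(t.size)
                      for _ in range(20)]                    # synthetic consumption series (kWh)

        def profile(kwh):
            # Fit a GP with a daily-periodic component and return its log hyperparameters.
            kernel = (ExpSineSquared(length_scale=5.0, periodicity=24.0) * RBF(length_scale=100.0)
                      + WhiteKernel(noise_level=0.1))
            gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
            gp.fit(t.reshape(-1, 1), kwh)
            return gp.kernel_.theta

        profiles = np.array([profile(kwh) for kwh in households])
        segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(profiles)
        print(segments)                                      # cluster label per household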

    Characterisation of large changes in wind power for the day-ahead market using a fuzzy logic approach

    Wind power has become one of the renewable resources with major growth in the electricity market. However, due to its inherent variability, forecasting techniques are necessary for the optimum scheduling of the electric grid, especially during ramp events. These large changes in wind power may not be captured by wind power point forecasts, even with very high resolution Numerical Weather Prediction (NWP) models. In this paper, a fuzzy approach for wind power ramp characterisation is presented. The main benefit of this technique is that it avoids the binary definition of a ramp event, allowing the identification of changes in power output that can potentially turn into ramp events when the total percentage of change required to be considered a ramp event is not met. To study the application of this technique, wind power forecasts were obtained and their corresponding errors estimated using Genetic Programming (GP) and Quantile Regression Forests. The error distributions were incorporated into the characterisation process, which, according to the results, significantly improves ramp capture. Results are presented using colour maps, which provide a useful way to interpret the characteristics of the ramp events.
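
    As a hedged illustration of the fuzzy idea outlined above, the sketch below replaces a binary ramp definition with a membership function that grades each change in power over a window; the thresholds, window length, and example series are illustrative assumptions, not the paper's configuration.

        # Hedged sketch: fuzzy membership for ramp events instead of a hard binary threshold.
        # Thresholds, window length, and the normalised power series are illustrative.
        import numpy as np

        def ramp_membership(delta_p, low=0.3, high=0.5):
            # Membership of |delta_p| (normalised power change) in the "ramp" set:
            # 0 below `low`, 1 above `high`, linear in between, so near-threshold
            # changes are kept with a partial degree rather than discarded.
            x = np.abs(np.asarray(delta_p, dtype=float))
            return np.clip((x - low) / (high - low), 0.0, 1.0)

        power = np.array([0.10, 0.15, 0.20, 0.55, 0.70, 0.72, 0.40, 0.12])   # example normalised forecast
        window = 4                                                            # steps between compared points
        delta = power[window:] - power[:-window]
        print(ramp_membership(delta))   # degree to which each window change is a ramp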