742 research outputs found

    Evaluation of physics constrained data-driven methods for turbulence model uncertainty quantification

    In order to achieve a virtual certification process and robust designs for turbomachinery, the uncertainty bounds of Computational Fluid Dynamics simulations have to be known. The formulation of turbulence closure models is a major source of the overall uncertainty of Reynolds-averaged Navier-Stokes simulations. We discuss the common practice of applying a physics-constrained eigenspace perturbation of the Reynolds stress tensor in order to account for the model-form uncertainty of turbulence models. Since the basic methodology often leads to overly generous uncertainty estimates, we extend a recent approach that adds a machine learning strategy. The data-driven method is motivated by the goal of detecting flow regions that are prone to poor turbulence-model prediction accuracy; in this way, any user input related to choosing the degree of uncertainty is supposed to become obsolete. In particular, this work investigates an approach that determines an a priori estimate of prediction confidence when no accurate data are available to judge the prediction. The flow around the NACA 4412 airfoil at near-stall conditions demonstrates the successful application of the data-driven eigenspace perturbation framework. Finally, we highlight the objectives and limitations of the underlying methodology.
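    The eigenspace perturbation the abstract refers to shifts the eigenvalues of the Reynolds stress anisotropy tensor toward a limiting turbulence state while keeping the eigenvectors. A minimal numpy sketch of that operation follows; the tensor values, the perturbation magnitude `delta`, and the choice of the one-component limiting state are illustrative assumptions, not values from the paper.

```python
import numpy as np

def perturb_anisotropy(a, delta, target):
    """Perturb the eigenvalues of a symmetric, trace-free anisotropy
    tensor `a` toward the limiting-state eigenvalues `target` by a
    relative amount `delta` in [0, 1], keeping the eigenvectors."""
    # Eigendecomposition: a = V diag(lam) V^T (eigh sorts ascending)
    lam, V = np.linalg.eigh(a)
    # Shift eigenvalues a fraction `delta` of the way to the limit state
    lam_star = lam + delta * (np.sort(target) - lam)
    # Reassemble the perturbed tensor with the original eigenvectors
    return V @ np.diag(lam_star) @ V.T

# Illustrative example: a mildly anisotropic state, perturbed 30% of the
# way toward the one-component limit with eigenvalues (2/3, -1/3, -1/3).
a = np.diag([0.1, 0.05, -0.15])
one_component = np.array([2 / 3, -1 / 3, -1 / 3])
a_star = perturb_anisotropy(a, 0.3, one_component)
```

    Because both the original eigenvalues and the limiting-state eigenvalues sum to zero, the perturbed tensor remains trace-free, which is the physical constraint the method preserves.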

    Towards Personalized Medicine Using Systems Biology And Machine Learning

    The rate of acquiring biological data has greatly surpassed our ability to interpret it. At the same time, we have started to understand that the evolution of many diseases, such as cancer, is the result of the interplay between the disease itself and the immune system of the host. It is now well accepted that cancer is not a single disease, but a “complex collection of distinct genetic diseases united by common hallmarks”. Understanding the differences between such disease subtypes is key not only to providing adequate treatments for known subtypes but also to identifying new ones. These unforeseen disease subtypes are one of the main reasons high-profile clinical trials fail. To identify such cases, we proposed a classification technique, based on Support Vector Machines, that is able to automatically identify samples that are dissimilar from the classes used for training. We assessed the performance of this approach both with artificial data and with data from the UCI machine learning repository. Moreover, we showed in a leukemia experiment that our method is able to identify 65% of the MLL patients when it was trained only on AML vs. ALL. In addition, to augment our ability to understand the disease mechanism in each subgroup, we proposed a systems biology approach able to consider all measured gene expression changes, thus eliminating the possibility that small but important gene changes (e.g. in transcription factors) are omitted from the analysis. We showed that this approach provides consistent results that do not depend on the choice of an arbitrary threshold for differential regulation. We also showed in a multiple sclerosis study that this approach obtains consistent results across multiple experiments performed by different groups on different technologies, which could not be achieved using differential expression alone. The cut-off free impact analysis was released as part of the ROntoTools Bioconductor package.
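    The core idea of classifying with the option to reject samples dissimilar from all training classes can be sketched in a self-contained way. The dissertation's method is SVM-based; the sketch below is a simplified analogue using class centroids and a per-class distance threshold, and all data, thresholds, and function names are illustrative assumptions rather than the author's implementation.

```python
import numpy as np

def fit_rejecting_classifier(X, y, quantile=0.95):
    """Fit per-class centroids plus a distance threshold per class.
    The threshold is the chosen quantile of training-sample distances
    to the class centroid, so unusually distant samples can be rejected."""
    classes = np.unique(y)
    centroids = {c: X[y == c].mean(axis=0) for c in classes}
    thresholds = {
        c: np.quantile(np.linalg.norm(X[y == c] - centroids[c], axis=1), quantile)
        for c in classes
    }
    return centroids, thresholds

def predict_with_rejection(x, centroids, thresholds):
    """Return the nearest class label, or None when the sample is
    dissimilar from every class seen during training."""
    dists = {c: np.linalg.norm(x - mu) for c, mu in centroids.items()}
    best = min(dists, key=dists.get)
    return best if dists[best] <= thresholds[best] else None

# Illustrative data: two tight training classes; a far-away query
# should be flagged as belonging to neither known class.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)), rng.normal(5.0, 0.1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
centroids, thresholds = fit_rejecting_classifier(X, y)
```

    This mirrors the scenario described in the abstract: a model trained only on AML vs. ALL can still flag MLL samples as not fitting either training class, rather than forcing a misleading label.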

    Interpretable Machine Learning Model for Clinical Decision Making

    Despite machine learning models being increasingly used in medical decision-making and meeting classification accuracy standards, they remain untrusted black boxes due to decision-makers' lack of insight into their complex logic. Therefore, it is necessary to develop interpretable machine learning models that will engender trust in the knowledge they generate and contribute to clinical decision-makers' intention to adopt them in the field. The goal of this dissertation was to systematically investigate the applicability of interpretable model-agnostic methods to explain predictions of black-box machine learning models for medical decision-making. As proof of concept, this study addressed the problem of predicting the risk of emergency readmission within 30 days of discharge for heart failure patients. Using a benchmark data set, supervised classification models of differing complexity were trained to perform the prediction task. More specifically, Logistic Regression (LR), Random Forests (RF), Decision Trees (DT), and Gradient Boosting Machines (GBM) models were constructed using the Healthcare Cost and Utilization Project (HCUP) Nationwide Readmissions Database (NRD). The precision, recall, and area under the ROC curve of each model were used to measure predictive accuracy. Local Interpretable Model-Agnostic Explanations (LIME) was used to generate explanations from the underlying trained models. LIME explanations were empirically evaluated using explanation stability and local fit (R²). The results demonstrated that local explanations generated by LIME created better estimates for Decision Tree (DT) classifiers.
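    The LIME procedure the abstract relies on fits a simple, weighted linear model around one prediction of a black box: perturb the instance, query the black box on the perturbations, weight them by proximity, and read the local feature influences off the linear coefficients. Below is a minimal numpy-only sketch of that idea, not the `lime` package itself; the stand-in black box, kernel width, and perturbation scale are all illustrative assumptions.

```python
import numpy as np

def lime_explain(predict_fn, x, n_samples=1000, width=0.75, seed=0):
    """Minimal LIME-style local explanation: perturb `x`, weight the
    perturbations by an exponential proximity kernel, and fit a weighted
    linear model whose coefficients score each feature's local influence."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    Z = x + rng.normal(scale=0.5, size=(n_samples, d))      # local perturbations
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / width ** 2)  # proximity kernel
    p = predict_fn(Z)                                       # black-box outputs
    # Weighted least squares via sqrt-weight row scaling, with intercept
    A = np.hstack([np.ones((n_samples, 1)), Z])
    sw = np.sqrt(w)
    coef = np.linalg.lstsq(A * sw[:, None], p * sw, rcond=None)[0]
    return coef[1:]  # per-feature local weights (intercept dropped)

def black_box(Z):
    """Stand-in black box (assumed, for illustration): a logistic model
    where feature 0 pushes the risk up and feature 1 pushes it down."""
    return 1.0 / (1.0 + np.exp(-(2.0 * Z[:, 0] - 1.0 * Z[:, 1])))

weights = lime_explain(black_box, np.zeros(2))
```

    For this stand-in model, the local weights recover the qualitative behavior around the instance: a positive weight for feature 0 and a negative one for feature 1, which is exactly the kind of per-prediction insight the dissertation evaluates for stability and local fit.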