5,850 research outputs found

    Computational methods for prediction of in vitro effects of new chemical structures

    Background: With a constant increase in the number of new chemicals synthesized every year, it becomes important to employ the most reliable and fast in silico screening methods to predict their safety and activity profiles. In recent years, in silico prediction methods have received great attention as a way to reduce animal experiments in the evaluation of various toxicological endpoints, complementing the principle of replace, reduce and refine. Various computational approaches have been proposed for the prediction of compound toxicity, ranging from quantitative structure–activity relationship (QSAR) modeling to molecular similarity-based methods and machine learning. Within the “Toxicology in the 21st Century” (Tox21) screening initiative, a crowd-sourcing platform was established for the development and validation of computational models to predict the interference of chemical compounds with nuclear receptor and stress response pathways, based on a training set containing more than 10,000 compounds tested in high-throughput screening assays. Results: Here, we present the results of various molecular similarity-based and machine learning-based methods on an independent evaluation set containing 647 compounds, as provided by the Tox21 Data Challenge 2014. The Random Forest approach, based on MACCS molecular fingerprints and a subset of 13 molecular descriptors selected through statistical and literature analysis, performed best in terms of the area under the receiver operating characteristic curve. Further, we compared the individual and combined performance of the different methods. In retrospect, we also discuss the reasons behind the superior performance of an ensemble approach, combining a similarity search method with the Random Forest algorithm, compared to the individual methods, and explain the intrinsic limitations of the latter.
Conclusions: Our results suggest that, although the prediction methods were optimized individually for each modelled target, an ensemble of similarity and machine learning approaches provides promising performance, indicating its broad applicability in toxicity prediction.
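As a rough sketch of the ensemble idea described above, the snippet below fuses a nearest-neighbour Tanimoto-similarity score with Random Forest probabilities over binary fingerprints. The data is synthetic (random 166-bit vectors standing in for MACCS keys, with a toy activity label), so it illustrates score-level fusion only, not the challenge submission itself.

```python
# Sketch of similarity + Random Forest fusion on binary fingerprints.
# Synthetic data: real MACCS keys would come from a cheminformatics
# toolkit such as RDKit; the labels here are a toy construction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_bits = 166                                   # MACCS keys use 166 bits
X = rng.integers(0, 2, size=(300, n_bits))
y = (X[:, :10].sum(axis=1) > 5).astype(int)    # toy "activity" label
X_train, X_test = X[:200], X[200:]
y_train, y_test = y[:200], y[200:]

def tanimoto_score(query, actives):
    """Max Tanimoto similarity of a query fingerprint to known actives."""
    inter = (actives & query).sum(axis=1)
    union = (actives | query).sum(axis=1)
    return (inter / np.maximum(union, 1)).max()

actives = X_train[y_train == 1]
sim = np.array([tanimoto_score(q, actives) for q in X_test])

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)
rf_prob = rf.predict_proba(X_test)[:, 1]

ensemble = 0.5 * sim + 0.5 * rf_prob           # simple score-level fusion
print(round(roc_auc_score(y_test, ensemble), 3))
```

Averaging the two scores is the simplest fusion rule; the abstract does not specify how the similarity and Random Forest outputs were combined.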

    Panic, irrationality, herding: Three ambiguous terms in crowd dynamics research

    Background: The three terms “panic”, “irrationality” and “herding” are ubiquitous in the crowd dynamics literature and have a strong influence on both modelling and management practices. The terms are also commonly shared between the scientific and non-scientific domains. These terms are so pervasive that their underlying assumptions have often been treated as common knowledge by both experts and lay persons. Yet, at the same time, the literature on crowd dynamics presents ample debate, contradiction and inconsistency on these topics. Method: This review is the first to systematically revisit these three terms in a unified study to highlight the scope of this debate. We extracted from peer-reviewed journal articles direct quotes that offer a definition, conceptualisation or supporting/contradicting evidence on these terms and/or their underlying theories. To further examine the suitability of the term herding, a secondary and more detailed analysis was also conducted on studies that have specifically investigated this phenomenon in empirical settings. Results: The review shows that (i) there is no consensus on the definition of the terms panic and irrationality; and that (ii) the literature is highly divided along discipline lines on how accurate these theories/terminologies are for describing human escape behaviour. The review reveals a complete division and disconnection between studies published by social scientists and those from the physical science domain, and also between studies whose main focus is on numerical simulation and those with an empirical focus. (iii) Despite the ambiguity of the definitions and the missing consensus in the literature, these terms are still increasingly and persistently mentioned in crowd evacuation studies. (iv) Unlike panic and irrationality, there is relative consistency in definitions of the term herding, with the term usually being associated with ‘(blind) imitation’.
However, based on the findings of empirical studies, we argue why, despite the relative consistency in meaning, (v) the term herding itself lacks adequate nuance and accuracy for describing the role of ‘social influence’ in escape behaviour. Our conclusions also emphasise the importance of distinguishing between the social influence on various aspects of evacuation behaviour and of avoiding generalisation across behavioural layers. Conclusions: We argue that the use of these three terms in the scientific literature does not contribute constructively to extending knowledge or to improving modelling capabilities in the field of crowd dynamics. This is largely due to the ambiguity of these terms, the overly simplistic nature of their assumptions, or the fact that the theories they represent are not readily verifiable. Recommendations: We suggest that, to advance this research field, the phenomena related to these three terms should be defined in more tangible and quantifiable terms and formulated as verifiable hypotheses, so that they can be operationalised for empirical testing.

    Credit scoring: comparison of non‐parametric techniques against logistic regression

    Dissertation presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Knowledge Management and Business Intelligence. Over the past decades, financial institutions have been giving increased importance to credit risk management as a critical tool to control their profitability. More than ever, it has become crucial for these institutions to discriminate well between good and bad clients, so as to accept only the credit applications that are unlikely to default. To calculate the probability of default of a particular client, most financial institutions have credit scoring models based on parametric techniques. Logistic regression is the current industry-standard technique in credit scoring models, and it is one of the techniques under study in this dissertation. Although it is regarded as a robust and intuitive technique, it is not free from criticism concerning the assumptions it makes, which can compromise its predictions. This dissertation evaluates the gains in performance from using more modern non-parametric techniques instead of logistic regression, performing a model comparison over four different real-life credit datasets. Specifically, the techniques compared against logistic regression in this study consist of two single classifiers (decision tree and SVM with RBF kernel) and two ensemble methods (random forest and stacking with cross-validation). The literature review shows that heterogeneous ensemble approaches have a weaker presence in credit scoring studies and, for that reason, stacking with cross-validation was considered in this study. The results demonstrate that logistic regression outperforms the decision tree classifier, performs similarly to the SVM, and slightly underperforms both ensemble approaches to a similar extent.
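The comparison described above can be sketched with scikit-learn on a synthetic stand-in for a credit dataset (the four real-life datasets are not named in this text); the hyperparameters below are illustrative assumptions, not those of the dissertation.

```python
# Sketch: logistic regression vs. two single classifiers and two ensembles,
# including stacking with internal cross-validation, on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Imbalanced toy data: most applicants are "good" clients.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8],
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "tree":     DecisionTreeClassifier(max_depth=5, random_state=42),
    "svm_rbf":  SVC(kernel="rbf", probability=True, random_state=42),
    "forest":   RandomForestClassifier(n_estimators=200, random_state=42),
    # Stacking builds meta-features for the final estimator via 5-fold CV.
    "stacking": StackingClassifier(
        estimators=[("tree", DecisionTreeClassifier(max_depth=5, random_state=42)),
                    ("svm", SVC(kernel="rbf", probability=True, random_state=42))],
        final_estimator=LogisticRegression(max_iter=1000), cv=5),
}
for name, model in models.items():
    auc = roc_auc_score(y_te, model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
    print(f"{name:9s} AUC = {auc:.3f}")
```

Comparing models by AUC on a held-out split mirrors the dissertation's setup in spirit; the real study used four credit datasets rather than one synthetic sample.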

    Artificial intelligence, bias and clinical safety

    This is the final version. Available on open access from BMJ Publishing Group via the DOI in this record. Engineering and Physical Sciences Research Council (EPSRC)

    Risk Analytics in Econometrics

    [eng] This thesis addresses the framework of risk analytics as a compendium of four main pillars: (i) big data, (ii) intensive programming, (iii) advanced analytics and machine learning, and (iv) risk analysis. Under the latter mainstay, this PhD dissertation reviews potential hazards known as “extreme events” that can negatively impact the wellbeing of people, the profitability of firms, or the economic stability of a country, and which have been underestimated or incorrectly treated by traditional modelling techniques. The objective of this thesis is to develop econometric and machine learning algorithms that improve the predictive capacity for those extreme events and improve comprehension of the phenomena, in contrast to some modern advanced methods that are black boxes in terms of interpretation. This thesis presents seven chapters that provide a methodological contribution to the existing literature by building techniques that transform the valuable insights of big data into more accurate predictions that support decisions under risk, and that increase robustness for more reliable and realistic results. This thesis focuses on extreme events that are encoded as a binary variable, commonly known as class-imbalanced data or rare events in binary response; in other words, data whose classes are not equally distributed. The research tackles real case studies in the field of risk and insurance, where it is highly important to specify the claim level of an event in order to foresee its impact and to provide personalized treatment. After Chapter 1, the introduction, Chapter 2 proposes a weighting mechanism to be incorporated into the weighted likelihood estimation of a generalized linear model to improve the predictive performance in the highest and lowest deciles of prediction. Chapter 3 proposes two different weighting procedures for a logistic regression model with complex survey data or specifically sampled data.
Their objective is to control the randomness of the data and to increase the sensitivity of the estimated model. Chapter 4 provides a rigorous review of trials with modern and classical predictive methods to uncover and discuss the efficiency of certain methods over others, and to identify which gaps in the machine learning literature can be addressed efficiently, and how. Chapter 5 proposes a novel boosting-based method that outperforms certain existing methods in terms of predictive accuracy and also recovers some interpretability of the model with imbalanced data. Chapter 6 develops another boosting-based algorithm that improves the predictive capacity for rare events and can be approximated as a generalized linear model in terms of interpretation. Finally, Chapter 7 presents the conclusions and final remarks. The thesis highlights the importance of developing alternative modelling algorithms that reduce uncertainty, especially when potential limitations prevent knowing all the prior factors that influence the occurrence of a rare event or an imbalanced-data phenomenon. This thesis merges two important approaches in the predictive modelling literature: “econometrics” and “machine learning”. All in all, it contributes to enhancing the methodology of empirical analysis as practised in many experimental and non-experimental sciences.
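The general idea behind the weighting mechanisms of Chapters 2 and 3, re-weighting observations in the likelihood of a GLM so that rare events carry more influence, can be sketched as follows; the inverse-frequency weights used here are an illustrative assumption, not the thesis's actual scheme.

```python
# Sketch: weighted likelihood estimation of a logistic GLM for rare events.
# The inverse-frequency weighting is illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# ~5% rare positive events, mimicking class-imbalanced data.
X, y = make_classification(n_samples=2000, weights=[0.95], flip_y=0.02,
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

plain = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Inverse-frequency weights enter the weighted likelihood via sample_weight:
# each rare positive counts as many observations as the class ratio implies.
w = np.where(y_tr == 1, (y_tr == 0).sum() / (y_tr == 1).sum(), 1.0)
weighted = LogisticRegression(max_iter=1000).fit(X_tr, y_tr, sample_weight=w)

print("recall plain   :", round(recall_score(y_te, plain.predict(X_te)), 3))
print("recall weighted:", round(recall_score(y_te, weighted.predict(X_te)), 3))
```

Up-weighting the rare class typically raises recall on the minority class at the cost of more false positives, which is the trade-off such weighting schemes are designed to control.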

    Searching for rules to detect defective modules: A subgroup discovery approach

    Data mining methods in software engineering are becoming increasingly important as they can support several aspects of the software development life-cycle, such as quality. In this work, we present a data mining approach to induce rules, extracted from static software metrics, characterising fault-prone modules. Due to the special characteristics of defect prediction data (imbalance, inconsistency, redundancy), not all classification algorithms can deal with this task conveniently. To deal with these problems, Subgroup Discovery (SD) algorithms can be used to find groups of statistically different data given a property of interest. We propose EDER-SD (Evolutionary Decision Rules for Subgroup Discovery), an SD algorithm based on evolutionary computation that induces rules describing only fault-prone modules. Rules are a well-known model representation that can be easily understood and applied by project managers and quality engineers, and can thus help them to develop software systems that can be justifiably trusted. Contrary to other approaches in SD, our algorithm has the advantage of working with continuous variables, as the conditions of the rules are defined using intervals. We describe the rules obtained by applying our algorithm to seven publicly available datasets from the PROMISE repository, showing that they are capable of characterising subgroups of fault-prone modules. We also compare our results with three other well-known SD algorithms; the EDER-SD algorithm performs well in most cases.
Ministerio de Educación y Ciencia TIN2007-68084-C02-00; Ministerio de Educación y Ciencia TIN2010-21715-C02-0
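An interval-based subgroup rule of the kind EDER-SD induces can be evaluated with simple coverage and precision statistics. The rule, the synthetic metric data and the quality measures below are illustrative, not the paper's actual output.

```python
# Sketch: evaluating one interval-conditioned subgroup rule on toy defect data.
import numpy as np

rng = np.random.default_rng(7)
loc = rng.integers(10, 500, size=200)      # lines of code per module
cc = rng.integers(1, 30, size=200)         # cyclomatic complexity
# Synthetic "defective" label, correlated with size and complexity.
defective = ((loc > 300) & (cc > 10)) | (rng.random(200) < 0.05)

def rule(loc, cc):
    """Illustrative rule with interval conditions: loc >= 250 and 8 <= cc <= 30."""
    return (loc >= 250) & (cc >= 8) & (cc <= 30)

covered = rule(loc, cc)
support = covered.mean()                   # fraction of modules the rule covers
confidence = defective[covered].mean()     # precision within the subgroup
print(f"support={support:.2f} confidence={confidence:.2f}")
```

A subgroup discovery algorithm searches for interval bounds like these that maximise such quality measures; the evolutionary search itself is beyond this sketch.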

    Clinical prediction modelling in oral health: A review of study quality and empirical examples of model development

    Background: Substantial efforts have been made to improve the reproducibility and reliability of scientific findings in health research. These efforts include the development of guidelines for the design, conduct and reporting of preclinical studies (ARRIVE), clinical trials (CONSORT), observational studies (STROBE, ROBINS-I), and systematic reviews and meta-analyses (PRISMA). In recent years, the use of prediction modelling has increased in the health sciences. Clinical prediction models use information at the individual patient level to estimate the probability of one or more health outcomes. Such models offer the potential to assist in clinical decision-making and to improve medical care. Guidelines such as PROBAST (Prediction model Risk Of Bias Assessment Tool) have recently been published to further inform the conduct of prediction modelling studies. Related guidelines for the reporting of these studies, such as the TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) statement, have also been developed. Since the early 2000s, oral health prediction models have been used to predict the risk of various oral conditions, including dental caries, periodontal diseases and oral cancers. However, there is a lack of information on the methodological quality and reporting transparency of published oral health prediction modelling studies. As a consequence, and given the unknown quality and reliability of these studies, it remains unclear to what extent their findings can be generalised and their derived models replicated. Moreover, there remains a need to demonstrate the conduct of prediction modelling studies in the oral health field following contemporary guidelines. This doctoral project addresses these issues through two systematic reviews and two empirical analyses.
This thesis is the first comprehensive and systematic project to review study quality and to demonstrate the use of registry data and longitudinal cohorts to develop clinical prediction models in oral health. Aims:
• To identify and examine the quality of existing prediction modelling studies in the major fields of oral health.
• To demonstrate the conduct and reporting of a prediction modelling study following current guidelines, incorporating machine learning algorithms and accounting for multiple sources of bias.
Methods: As one of the most prevalent oral conditions, chronic periodontitis was chosen as the exemplar pathology for the first part of this thesis. A systematic review was conducted to investigate existing prediction models for the incidence and progression of this condition. Building on this initial overview, a more comprehensive critical review was conducted to assess the methodological quality and completeness of reporting of prediction modelling studies in the field of oral health. The risk of bias in the existing literature was assessed using the PROBAST criteria, and the quality of study reporting was measured in accordance with the TRIPOD guidelines. Following these two reviews, this research project demonstrated the conduct and reporting of a clinical prediction modelling study using two empirical examples. Two types of analyses, commonly used for two different types of outcome data, were adopted: survival analysis for censored outcomes and logistic regression analysis for binary outcomes.
Models were developed 1) to predict the three- and five-year disease-specific survival of patients with oral and pharyngeal cancers, based on 21,154 cases collected by a large US cancer registry program, the Surveillance, Epidemiology and End Results (SEER) program, and 2) to predict the occurrence of acute and persistent pain following root canal treatment, based on the electronic dental records of 708 adult patients collected by the National Practice-Based Research Network. In these two case studies, all prediction models were developed in five steps: (i) framing the research question; (ii) data acquisition and pre-processing; (iii) model generation; (iv) model validation and performance evaluation; and (v) model presentation and reporting. In accordance with the PROBAST recommendations, the risk of bias during the modelling process was reduced in the following ways:
• In the first case study, three types of bias were taken into account: (i) bias due to missing data was reduced by adopting compatible imputation methods; (ii) bias due to unmeasured predictors was tested by sensitivity analysis; and (iii) bias due to the initial choice of modelling approach was addressed by comparing tree-based machine learning algorithms (survival tree, random survival forest and conditional inference forest) with the traditional statistical model (Cox regression).
• In the second case study, the following strategies were employed: (i) missing data were addressed by multiple imputation with missing-indicator methods; (ii) a multilevel logistic regression approach was adopted for model development in order to fit the hierarchical structure of the data; (iii) model complexity was reduced using the Least Absolute Shrinkage and Selection Operator (LASSO) for predictor selection; (iv) the models’ predictive performance was evaluated comprehensively using the Area Under the Precision-Recall Curve (AUPRC) in addition to the Area Under the Receiver Operating Characteristic curve (AUROC); and (v) finally, and most importantly, given existing criticism in the research community concerning gender-based and racial bias in risk prediction models, we compared the predictive performance of models built with different sets of predictors (a clinical set, a sociodemographic set, and a combination of both, the ‘general’ set). Results: The first and second review studies indicated that, in the field of oral health, the popularity of multivariable prediction models has increased in recent years. Bias and variance are the two components of the uncertainty (e.g., the mean squared error) in model estimation. However, the majority of existing studies did not account for various sources of bias, such as measurement error and inappropriate handling of missing data. Moreover, non-transparent reporting and a lack of reproducibility of the models were also identified in existing oral health prediction modelling studies. These findings provided the motivation to conduct two case studies demonstrating adherence to contemporary guidelines and best practice. In the third study, comparable predictive capabilities were observed between Cox regression and the non-parametric tree-based machine learning algorithms for predicting the survival of patients with oral and pharyngeal cancers.
For example, the C-indices for a Cox model and a random survival forest in predicting three-year survival were 0.82 and 0.84, respectively. A novelty of this study was the development of an online calculator designed to provide an open and transparent estimate of patients’ survival probability for up to five years after diagnosis. This calculator has clinical translational potential and could aid in patient stratification and treatment planning, at least in the context of ongoing research. In addition, transparent reporting of this study was achieved by following the TRIPOD checklist and sharing all data and code. In the fourth study, LASSO regression suggested that pre-treatment clinical factors were important in the development of one-week and six-month postoperative pain following root canal treatment. Among all the multilevel logistic models developed, models with a clinical set of predictors yielded predictive performance similar to that of models with a general set of predictors, while models with sociodemographic predictors showed the weakest predictive ability. For example, for predicting one-week postoperative pain, the AUROCs for models with clinical, sociodemographic and general predictors were 0.82, 0.68 and 0.84, respectively, and the AUPRCs were 0.66, 0.40 and 0.72, respectively. Conclusion: The significance of this research project is twofold. First, prediction models have been developed for potential clinical use in the context of various oral conditions. Second, this research represents the first attempt to standardise the conduct of this type of study in oral health research. This thesis presents three conclusions: 1) Adherence to contemporary best-practice guidelines such as PROBAST and TRIPOD is limited in the field of oral health research. In response, this PhD project disseminates these guidelines and leverages their advantages to develop effective prediction models for use in dentistry and oral health.
2) Use of appropriate procedures, accounting for and adapting to multiple sources of bias in model development, produces predictive tools of increased reliability and accuracy that hold the potential to be implemented in clinical practice. Therefore, for future prediction modelling research, it is important that data analysts work towards eliminating bias, regardless of the areas in which the models are employed. 3) Machine learning algorithms provide alternatives to traditional statistical models for clinical prediction purposes. Additionally, in the presence of clinical factors, sociodemographic characteristics contribute little to improving the models’ predictive performance or to providing cogent explanations of the variance in the models, regardless of the modelling approach. Therefore, it is timely to reconsider the use of sociodemographic characteristics in clinical prediction modelling research. This is suggested as a proportionate and evidence-based strategy for reducing biases in healthcare risk prediction that may derive from gender and racial characteristics inherent in sociodemographic data sets.
Thesis (Ph.D.) -- University of Adelaide, School of Public Health, 202
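Steps (iii) and (iv) of the second case study, LASSO-based predictor selection evaluated by both AUROC and AUPRC, can be sketched as follows on synthetic data. The single-level model below ignores the multilevel structure of the dental records, and all parameters are illustrative assumptions.

```python
# Sketch: L1-penalised (LASSO) logistic regression for predictor selection,
# evaluated with both AUROC and AUPRC on an imbalanced synthetic dataset.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1500, n_features=30, n_informative=8,
                           weights=[0.75], random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=3)

# The L1 penalty shrinks uninformative coefficients exactly to zero.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(X_tr, y_tr)
kept = int((lasso.coef_ != 0).sum())       # predictors surviving the penalty
prob = lasso.predict_proba(X_te)[:, 1]
print(f"{kept} of 30 predictors kept; "
      f"AUROC={roc_auc_score(y_te, prob):.2f}, "
      f"AUPRC={average_precision_score(y_te, prob):.2f}")
```

Reporting AUPRC alongside AUROC matters for imbalanced outcomes because AUROC can look optimistic when negatives dominate, which is the rationale the thesis gives for using both.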