
    Using Conditional Inference Forests to Identify the Factors Affecting Crash Severity on Arterial Corridors

    Introduction The study aims to identify traffic, highway-design, and driver-vehicle factors significantly associated with fatal/severe crashes on urban arterials for different crash types. Since the data used in this study are observational (i.e., collected outside the purview of a designed experiment), an information-discovery approach is adopted. Method Random Forests, ensembles of individual trees grown by the CART (Classification and Regression Tree) algorithm, are widely applied for this purpose. Specifically, conditional inference forests have been implemented. In each tree of a conditional inference forest, splits are chosen by the strength of association between predictor and response, measured with chi-square test statistics. Apart from identifying the variables that improve classification accuracy, the methodology also clearly identifies the variables that are neutral to accuracy and those that decrease it. Results The methodology is quite insightful in identifying the variables of interest in the database (e.g., alcohol/drug use and higher posted speed limits contribute to severe crashes). Failure to use safety equipment by all passengers and the presence of a driver/passenger in a vulnerable age group (more than 55 years or less than 3 years) increased the severity of injuries given that a crash had occurred. A new variable, 'element', has been used in this study, which assigns crashes to segments, intersections, or access points based on information from site location, traffic control, and presence of signals. Impact The authors were able to identify roadway locations where severe crashes tend to occur. For example, segments and access points were found to be riskier for single-vehicle crashes. Higher skid resistance and k-factor also contributed toward increased severity of injuries in crashes.
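The split criterion described in this abstract can be sketched with a toy example (the data and variable names below are invented for illustration, not taken from the paper): a candidate predictor is scored against crash severity with a chi-square test, and predictors with stronger association, i.e. smaller p-values, are preferred as split variables.

```python
# Illustrative sketch of chi-square-based split scoring in a conditional
# inference tree. The counts are hypothetical, not the study's data.
from scipy.stats import chi2_contingency

# Rows: posted speed limit (low / high); columns: crash outcome (non-severe / severe)
contingency = [[80, 20],   # low posted speed limit
               [45, 55]]   # high posted speed limit

chi2, p_value, dof, expected = chi2_contingency(contingency)

# A smaller p-value indicates a stronger predictor-response association,
# so this variable would be preferred as a split over a weaker one.
print(f"chi2={chi2:.2f}, p={p_value:.6f}, dof={dof}")
```

In the actual method, this test is applied to every candidate predictor at every node, and splitting stops when no association is significant, which is what makes the forest's variable selection unbiased.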

    Predicting continuous conflict perception with Bayesian Gaussian processes

    Conflict is one of the most important phenomena of social life, but it is still largely neglected by the computing community. This work proposes an approach that detects common conversational social signals (loudness, overlapping speech, etc.) and predicts the conflict level perceived by human observers in continuous, non-categorical terms. The proposed regression approach is fully Bayesian and adopts Automatic Relevance Determination to identify the social signals that most influence the outcome of the prediction. The experiments are performed over the SSPNet Conflict Corpus, a publicly available collection of 1430 clips extracted from televised political debates (roughly 12 hours of material for 138 subjects in total). The results show that it is possible to achieve a correlation close to 0.8 between actual and predicted conflict perception.
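A minimal sketch of the modeling idea, assuming synthetic data and scikit-learn rather than the authors' implementation: a Gaussian process regressor with one RBF length-scale per input feature acts as Automatic Relevance Determination, since features that do not influence the target are driven toward large length-scales during fitting.

```python
# Hedged sketch: GP regression with an anisotropic (ARD) RBF kernel.
# The synthetic "social signal" features are illustrative only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                         # e.g. loudness, overlap, pitch var.
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=200)   # only feature 0 matters here

# One length-scale per dimension = Automatic Relevance Determination
kernel = RBF(length_scale=np.ones(3))
gp = GaussianProcessRegressor(kernel=kernel, alpha=0.01).fit(X, y)

# Irrelevant features end up with large learned length-scales (weak influence)
print(gp.kernel_.length_scale)
```

Inspecting the fitted length-scales then plays the role described in the abstract: ranking which conversational signals drive the predicted conflict level.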

    Exploring the association of the discharge medicines review with patient hospital readmissions through national routine data linkage in Wales: a retrospective cohort study

    Objective To evaluate the association of the discharge medicines review (DMR) community pharmacy service with hospital readmissions through linking National Health Service data sets. Design Retrospective cohort study. Setting All hospitals and 703 community pharmacies across Wales. Participants Inpatients meeting the referral criteria for a community pharmacy DMR. Interventions Information related to the patient's medication and hospital stay is provided on discharge to the community pharmacist, who undertakes a two-part service involving medicines reconciliation and a medicines use review. To investigate the association of the DMR service with hospital readmission, a data-linking process was undertaken across six national databases. Primary outcome Rate of hospital readmission within 90 days for patients with and without a DMR part 1 started. Secondary outcome Strength of association of age decile, sex, deprivation decile, diagnostic grouping, and DMR type (started or not started) with reduction in readmission within 90 days. Results 1923 patients were referred for a DMR over a 13-month period (February 2017–April 2018). Provision of a DMR was found to be the most significant contributing factor in reducing the likelihood of 90-day readmission using χ2 testing and classification methods. Cox regression survival analysis demonstrated that those receiving the intervention had a lower hospital readmission rate at 40 days (p<0.001, HR 0.5974, CI 0.5043 to 0.7076). Conclusions A DMR after hospital discharge is associated with a reduced risk of hospital readmission within 40 days. Linking data across disparate national records is feasible but requires a complex processual architecture. Integrated informatics has significant value for improving the continuity and coherency of care, and for facilitating service optimisation, evaluation, and evidence-based practice.
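The χ2 comparison of readmission rates described above can be sketched as follows; the counts are invented purely for illustration and are not the study's data.

```python
# Hedged sketch: chi-square test of 90-day readmission by DMR status,
# on hypothetical counts (not the study's figures).
from scipy.stats import chi2_contingency

#                  readmitted  not readmitted
dmr_started     = [120,        880]
dmr_not_started = [190,        810]

chi2, p_value, dof, _ = chi2_contingency([dmr_started, dmr_not_started])

# Crude relative risk of readmission (DMR vs no DMR), for orientation only;
# the study's adjusted effect comes from Cox regression, not this ratio.
rr = (120 / 1000) / (190 / 1000)
print(f"chi2={chi2:.2f}, p={p_value:.6f}, relative risk={rr:.2f}")
```

The study's hazard ratio of 0.5974 is the survival-analysis analogue of this kind of ratio, adjusted for follow-up time and censoring.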

    Observer-biased bearing condition monitoring: from fault detection to multi-fault classification

    Bearings are simultaneously a fundamental component and one of the principal causes of failure in rotary machinery. This work focuses on the use of fuzzy clustering for bearing condition monitoring, i.e., fault detection and classification. The output of a clustering algorithm is a data partition (a set of clusters), which is merely a hypothesis on the structure of the data; this hypothesis requires validation by domain experts. In general, clustering algorithms allow only limited use of domain knowledge in the cluster-formation process. In this study, a novel method allowing interactive clustering in bearing fault diagnosis is proposed. The method resorts to shrinkage to generalize an otherwise unbiased clustering algorithm into a biased one. In this way, the method provides a natural and intuitive way to control the cluster-formation process, allowing domain knowledge to guide it. The domain expert can select a desirable level of granularity, ranging from fault detection to classification of a variable number of faults, and can select a specific region of the feature space for detailed analysis. Moreover, experimental results under realistic conditions show that the adopted algorithm outperforms the corresponding unbiased algorithm (fuzzy c-means), which is widely used for this type of problem.
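The unbiased baseline named in the abstract, fuzzy c-means, can be sketched as below. This is only the plain algorithm on synthetic data; the paper's contribution, biasing it via shrinkage toward expert-supplied structure, is not reproduced here.

```python
# Minimal sketch of plain (unbiased) fuzzy c-means on synthetic data.
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Return cluster centers and the fuzzy membership matrix U (n x c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per sample
    for _ in range(n_iter):
        W = U ** m                               # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        d2 = np.maximum(d2, 1e-12)               # numerical floor
        inv = d2 ** (-1.0 / (m - 1))             # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Example: two well-separated groups, e.g. healthy vs one fault condition
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
               rng.normal(5.0, 0.1, (50, 2))])
centers, U = fuzzy_c_means(X, c=2)
print(np.round(np.sort(centers[:, 0]), 2))
```

The fuzzy memberships in U, rather than hard labels, are what give the expert room to inspect borderline samples; the shrinkage-based bias in the paper then pulls this update toward a partition the expert specifies.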

    Applications of Supervised Machine Learning in Autism Spectrum Disorder Research: A Review

    Autism spectrum disorder (ASD) research has yet to leverage big data on the same scale as other fields; however, advancements in easy, affordable data collection and analysis may soon make this a reality. Indeed, there has been a notable increase in research literature evaluating the effectiveness of machine learning for diagnosing ASD, exploring its genetic underpinnings, and designing effective interventions. This paper provides a comprehensive review of 45 papers utilizing supervised machine learning in ASD, including algorithms for classification and text analysis. The goal of the paper is to identify and describe supervised machine-learning trends in the ASD literature, as well as to inform and guide researchers interested in expanding the body of clinically, computationally, and statistically sound approaches for mining ASD data.

    Fault Analysis of Electromechanical Systems using Information Entropy Concepts

    Fault analysis of mechanical and electromechanical systems has been a subject of considerable interest in the systems and control research community. Entropy, under its various formulations, is an important quantity, unrivaled when it comes to measuring order (organization) and disorder (disorganization). Researchers have successfully used entropy-based concepts to solve challenging problems in engineering, mathematics, meteorology, biotechnology, medicine, statistics, etc. This research analyzes faults in electromechanical systems using information-entropy concepts. The objectives are to develop a method for evaluating the signal entropy of a dynamical system using only input/output measurements, and to use this entropy measure to analyze faults within the system. Given discrete-time signals corresponding to the three-phase voltages and currents of a monitored electromechanical system, the problem is to determine whether or not the system is healthy. The concepts of Shannon entropy and relative entropy come from the field of information theory; they measure the degree of uncertainty that exists in a system. The main idea behind this approach is that the system's dynamics may have regularities hidden in measurements that are not obvious to see. The Shannon entropy and relative entropy measures are calculated from probability distribution functions (PDFs) formed by sampling the time-series currents and voltages of the system. The system's health is monitored by, first, sampling the currents and voltages at certain time intervals, then generating the corresponding PDFs, and, finally, calculating the information-entropy measures. If the system dynamics are unchanged, i.e., the system remains healthy, the relative entropy measures will remain consistently low or constant. But if the system dynamics change due to damage, the corresponding relative entropy and Shannon entropy measures will increase relative to the entropy of the less-damaged system.
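The monitoring pipeline described above, histogram a sampled signal into a PDF, then compute Shannon entropy and the relative entropy against a healthy baseline, can be sketched as follows. The signals here are synthetic sine waves standing in for the three-phase measurements, and the "fault" is simulated as added noise.

```python
# Sketch of the entropy-based health measures: PDFs from histograms of
# sampled signals, then Shannon entropy and KL (relative) entropy.
import numpy as np

def histogram_pdf(signal, bins, range_):
    counts, _ = np.histogram(signal, bins=bins, range=range_)
    return (counts + 1e-12) / (counts.sum() + bins * 1e-12)  # avoid zero bins

def shannon_entropy(p):
    return -np.sum(p * np.log2(p))

def relative_entropy(p, q):
    return np.sum(p * np.log2(p / q))   # KL(p || q)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 5000)
healthy = np.sin(2 * np.pi * 50 * t) + 0.05 * rng.normal(size=t.size)
faulty = healthy + 0.5 * rng.normal(size=t.size)   # damage adds disorder

p = histogram_pdf(healthy, 64, (-3.0, 3.0))
q = histogram_pdf(faulty, 64, (-3.0, 3.0))

# Damage broadens the PDF: Shannon entropy rises, and the divergence
# from the healthy baseline becomes clearly nonzero.
print(shannon_entropy(p), shannon_entropy(q), relative_entropy(q, p))
```

In an actual deployment these measures would be recomputed over successive time windows, and a sustained rise relative to the healthy baseline would flag a change in the system's dynamics.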