
    Feature engineering and a proposed decision-support system for systematic reviewers of medical evidence

    Objectives: Evidence-based medicine depends on the timely synthesis of research findings. An important source of synthesized evidence resides in systematic reviews. However, a bottleneck in review production involves dual screening of citations with titles and abstracts to find eligible studies. For this research, we tested the effect of various kinds of textual information (features) on the performance of a machine learning classifier. Based on our findings, we propose an automated system to reduce screening burden, as well as offer quality assurance. Methods: We built a database of citations from 5 systematic reviews that varied with respect to domain, topic, and sponsor. Consensus judgments regarding eligibility were inferred from published reports. We extracted 5 feature sets from citations: alphabetic, alphanumeric+, indexing, features mapped to concepts in systematic reviews, and topic models. To simulate a two-person team, we divided the data into random halves. We optimized the parameters of a Bayesian classifier, then trained and tested models on alternate data halves. Overall, we conducted 50 independent tests. Results: All tests of summary performance (mean F3) surpassed the corresponding baseline, P<0.0001. The ranks for mean F3, precision, and classification error were statistically different across feature sets averaged over reviews; P-values for Friedman's test were .045, .002, and .002, respectively. Differences in ranks for mean recall were not statistically significant. Alphanumeric+ features were associated with the best performance; the mean reduction in screening burden for this feature type ranged from 88% to 98% for the second pass through citations and from 38% to 48% overall. Conclusions: A computer-assisted decision-support system based on our methods could substantially reduce the burden of screening citations for systematic review teams and solo reviewers. Additionally, such a system could deliver quality assurance both by confirming concordant decisions and by naming studies associated with discordant decisions for further consideration. © 2014 Bekhuis et al
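    As a rough illustration of the evaluation described in this abstract, the sketch below trains a naive Bayes classifier on one random half of labelled citations and scores the other half with F3, which weights recall nine times more heavily than precision. It assumes scikit-learn; the TF-IDF bag-of-words stands in for the alphanumeric+ feature set, and the citations and labels are hypothetical placeholders, not data from the study.

```python
# Minimal sketch of the split-half screening evaluation (hypothetical data).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import fbeta_score

citations = [
    "randomized trial of intervention X for condition Y ...",  # placeholder text
    "protocol for an unrelated cohort study ...",
    "case report outside the review scope ...",
    "systematic review of intervention X ...",
]
labels = np.array([1, 0, 0, 1])  # 1 = eligible for the review, 0 = excluded

# Extract text features from titles and abstracts.
X = TfidfVectorizer(stop_words="english").fit_transform(citations)

# Split citations into random halves to simulate a two-person screening team.
X_a, X_b, y_a, y_b = train_test_split(
    X, labels, test_size=0.5, random_state=0, stratify=labels
)

# Train on one half, test on the other, then swap the halves.
for X_tr, y_tr, X_te, y_te in [(X_a, y_a, X_b, y_b), (X_b, y_b, X_a, y_a)]:
    model = MultinomialNB().fit(X_tr, y_tr)
    pred = model.predict(X_te)
    # F3 weights recall 9x more than precision, reflecting the high cost
    # of missing an eligible study during screening.
    print("F3 =", fbeta_score(y_te, pred, beta=3))
```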

    Alaska Results First Initiative: Adult Criminal Justice Program Benefit Cost Analysis

    Results of the Alaska Results First Initiative show that most of Alaska’s evidence-based adult criminal justice programs yield a positive return on the state’s investment. Notably, all but one of those programs are shown to measurably reduce recidivism — the likelihood that an inmate will re-offend when released — which not only improves public safety, but saves the state the costs associated with criminal activity. The State of Alaska annually invests in Alaska’s adult criminal justice system to provide services and programs to eligible offenders, including domestic violence treatment, vocational and general education, and re-entry services. The study estimates that approximately $20.58 million in state funds were invested annually in the 19 evidence-based adult criminal justice programs that are shown — by academic studies and rigorous reviews — to yield results. The report is the result of a multi-year project, with the support and participation of all three branches of Alaska state government, and in partnership with the Pew-MacArthur Results First Initiative. The Alaska Justice Information Center is jointly funded by the State of Alaska and the Alaska Mental Health Trust Authority and is housed within the Justice Center at University of Alaska Anchorage. Contents: Executive Summary / Chapter 1. Introduction / PART I. COMPILE / Chapter 2. Alaska’s Adult Criminal Justice Program Inventory / PART II. COST / Chapter 3. Programmatic Costs / Chapter 4. Programmatic Benefits: Baseline Recidivism / Chapter 5. Programmatic Benefits: Avoided Future Costs / PART III. COMPARE / Chapter 6. Benefit Cost Analysis / Chapter 7. Using Alaska’s Results First Model to Inform Strategic Adult Criminal Justice Programming / References / APPENDICES
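    As a simple illustration of the per-program comparison such a benefit-cost analysis performs, the sketch below computes a benefit-cost ratio and net benefit for each program; the program names and dollar figures are hypothetical placeholders, not values from the Alaska report.

```python
# Hypothetical sketch of a per-program benefit-cost comparison.
# All names and figures below are placeholders, not Alaska Results First values.
programs = {
    # program name: (annual cost to the state, annual avoided future costs)
    "hypothetical_program_a": (1_200_000, 3_400_000),
    "hypothetical_program_b": (800_000, 650_000),
}

for name, (cost, benefit) in programs.items():
    ratio = benefit / cost   # benefit-cost ratio: > 1 means a positive return
    net = benefit - cost     # net benefit (or loss) to the state
    print(f"{name}: B/C ratio = {ratio:.2f}, net benefit = ${net:,.0f}")
```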

    2011 Strategic roadmap for Australian research infrastructure

    The 2011 Roadmap articulates the priority research infrastructure areas of a national scale (capability areas) to further develop Australia’s research capacity and improve innovation and research outcomes over the next five to ten years. The capability areas have been identified through considered analysis of input provided by stakeholders, in conjunction with specialist advice from Expert Working Groups. It is intended that the Strategic Framework will provide a high-level policy framework, which will include principles to guide the development of policy advice and the design of programs related to the funding of research infrastructure by the Australian Government. Roadmapping has been identified in the Strategic Framework Discussion Paper as the most appropriate prioritisation mechanism for national, collaborative research infrastructure. The strategic identification of capability areas through a consultative roadmapping process was also validated in the report of the 2010 NCRIS Evaluation. The 2011 Roadmap is primarily concerned with medium to large-scale research infrastructure. However, any landmark infrastructure requirements (typically involving an investment in excess of $100 million over five years from the Australian Government) identified in this process will be noted. NRIC has also developed a ‘Process to identify and prioritise Australian Government landmark research infrastructure investments’, which is currently under consideration by the government as part of broader deliberations relating to research infrastructure. NRIC will have strategic oversight of the development of the 2011 Roadmap as part of its overall policy view of research infrastructure.

    Deep Risk Prediction and Embedding of Patient Data: Application to Acute Gastrointestinal Bleeding

    Acute gastrointestinal bleeding is a common and costly condition, accounting for over 2.2 million hospital days and 19.2 billion dollars of medical charges annually. Risk stratification is a critical part of the initial assessment of patients with acute gastrointestinal bleeding. Although all national and international guidelines recommend the use of risk-assessment scoring systems, they are not commonly used in practice, have sub-optimal performance, may be applied incorrectly, and are not easily updated. With the advent of widespread electronic health record adoption, longitudinal clinical data captured during the clinical encounter is now available. However, this data is often noisy, sparse, and heterogeneous. Unsupervised machine learning algorithms may be able to identify structure within electronic health record data while accounting for key issues with the data generation process: measurements missing-not-at-random and information captured in unstructured clinical note text. Deep learning tools can create electronic health record-based models that perform better than clinical risk scores for gastrointestinal bleeding and are well-suited for learning from new data. Furthermore, these models can be used to predict risk trajectories over time, leveraging the longitudinal nature of the electronic health record. The foundation of creating relevant tools is the definition of a relevant outcome measure; in acute gastrointestinal bleeding, a composite outcome of red blood cell transfusion, hemostatic intervention, and all-cause 30-day mortality is a relevant, actionable outcome that reflects the need for hospital-based intervention. However, epidemiological trends may affect the relevance and effectiveness of the outcome measure when applied across multiple settings and patient populations. Understanding the trends in practice, potential areas of disparities, and the value proposition for using risk stratification in patients presenting to the Emergency Department with acute gastrointestinal bleeding is important in understanding how to best implement a robust, generalizable risk stratification tool. Key findings include a decrease in the rate of red blood cell transfusion since 2014 and disparities in access to upper endoscopy for patients with upper gastrointestinal bleeding by race/ethnicity across urban and rural hospitals. Projected accumulated savings from consistent implementation of risk stratification tools for upper gastrointestinal bleeding total approximately $1 billion five years after implementation. Most current risk scores were designed for use based on the location of the bleeding source: upper or lower gastrointestinal tract. However, the location of the bleeding source is not always clear at presentation. I develop and validate electronic health record-based deep learning and machine learning tools for patients presenting with symptoms of acute gastrointestinal bleeding (e.g., hematemesis, melena, hematochezia), which is more relevant and useful in clinical practice. I show that they outperform leading clinical risk scores for upper and lower gastrointestinal bleeding, the Glasgow Blatchford Score and the Oakland score. While the best performing gradient boosted decision tree model has equivalent overall performance to the fully connected feedforward neural network model, at the very low risk threshold of 99% sensitivity, the deep learning model identifies more very low risk patients.
Using another deep learning model that can capture longitudinal risk, the long short-term memory recurrent neural network, the need for transfusion of red blood cells can be predicted at every 4-hour interval in the first 24 hours of intensive care unit stay for high-risk patients with acute gastrointestinal bleeding. Finally, for implementation it is important to find patients with symptoms of acute gastrointestinal bleeding in real time and characterize patients by risk using available data in the electronic health record. A decision-rule-based electronic health record phenotype has performance equivalent, as measured by positive predictive value, to deep learning and natural language processing-based models, and after live implementation it appears to have increased use of the Acute Gastrointestinal Bleeding Clinical Care pathway. Patients with acute gastrointestinal bleeding can be differentiated from patients with other groups of disease concepts by directly mapping unstructured clinical text to a common ontology, treating the vector of concepts as signals on a knowledge graph, and comparing patients with unbalanced diffusion earth mover’s distances on the graph. For electronic health record data with values missing not at random, MURAL, an unsupervised random-forest-based method, handles missing values and generates visualizations that characterize patients with gastrointestinal bleeding. This thesis forms a basis for understanding the potential for machine learning and deep learning tools to characterize risk for patients with acute gastrointestinal bleeding. In the future, these tools may be critical in implementing integrated risk assessment to keep low risk patients out of the hospital and guide resuscitation and timely endoscopic procedures for patients at higher risk for clinical decompensation.
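    As a sketch of how the very low risk cutoff at 99% sensitivity mentioned above could be chosen from model output, the snippet below selects the most permissive probability threshold that still retains 99% sensitivity. It assumes scikit-learn; the labels and scores are hypothetical placeholders, not outputs of the thesis models.

```python
# Sketch: pick the probability cutoff at which a risk model keeps >= 99%
# sensitivity, so patients scoring below it can be labelled very low risk.
# Hypothetical labels and scores; not data from the thesis.
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])  # 1 = composite outcome occurred
y_score = np.array([0.10, 0.20, 0.90, 0.05, 0.70, 0.30, 0.15, 0.85, 0.40, 0.10])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
ok = tpr >= 0.99             # operating points meeting the sensitivity target
cutoff = thresholds[ok][0]   # largest threshold that still meets it
very_low_risk = y_score < cutoff
print(f"cutoff = {cutoff:.2f}, very-low-risk patients = {int(very_low_risk.sum())}")
```

    Similarly, a minimal sketch of per-interval risk prediction with a long short-term memory network, assuming PyTorch and synthetic placeholder inputs rather than the intensive care unit data described above:

```python
# Sketch: an LSTM that emits a transfusion-risk estimate at each 4-hour
# interval of the first 24 hours (6 steps). Inputs are synthetic placeholders.
import torch
import torch.nn as nn

class IntervalRiskLSTM(nn.Module):
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                     # x: (batch, intervals, features)
        out, _ = self.lstm(x)                 # hidden state at every interval
        return torch.sigmoid(self.head(out))  # per-interval risk in [0, 1]

model = IntervalRiskLSTM(n_features=20)
x = torch.randn(4, 6, 20)                     # 4 patients x 6 intervals x 20 features
risk = model(x)                               # shape (4, 6, 1)
print(risk.squeeze(-1))
```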

    Large-scale inference in the focally damaged human brain

    Clinical outcomes in focal brain injury reflect the interactions between two distinct anatomically distributed patterns: the functional organisation of the brain and the structural distribution of injury. The challenge of understanding the functional architecture of the brain is familiar; that of understanding the lesion architecture is barely acknowledged. Yet, models of the functional consequences of focal injury are critically dependent on our knowledge of both. The studies described in this thesis seek to show how machine learning-enabled high-dimensional multivariate analysis powered by large-scale data can enhance our ability to model the relation between focal brain injury and clinical outcomes across an array of modelling applications. All studies are conducted on the largest internationally available set of MR imaging data of focal brain injury in the context of acute stroke (N=1333) and employ kernel machines as the principal modelling architecture. First, I examine lesion-deficit prediction, quantifying the ceiling on achievable predictive fidelity for high-dimensional and low-dimensional models, and demonstrating the former to be substantially higher than the latter. Second, I determine the marginal value of adding unlabelled imaging data to predictive models within a semi-supervised framework, quantifying the benefit of assembling unlabelled collections of clinical imaging. Third, I compare high- and low-dimensional approaches to modelling response to therapy in two contexts: quantifying the effect of treatment at the population level (therapeutic inference) and predicting the optimal treatment in an individual patient (prescriptive inference). I demonstrate the superiority of the high-dimensional approach in both settings.
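    As a rough sketch of the kernel-machine modelling approach described above (assuming scikit-learn and random synthetic lesion masks, not the N=1333 stroke cohort):

```python
# Sketch: a kernel machine (support vector classifier) predicting a binary
# deficit from high-dimensional voxelwise lesion masks. Synthetic data only.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_voxels = 200, 5000
lesions = rng.integers(0, 2, size=(n_patients, n_voxels))  # binary lesion maps
deficit = rng.integers(0, 2, size=n_patients)              # presence of a deficit

# An RBF kernel lets the model work directly in the high-dimensional input space.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
scores = cross_val_score(clf, lesions, deficit, cv=5)
print("cross-validated accuracy:", scores.mean())
```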

    Curriculum Committee Report - January 21, 2010


    Prediction and control in human neuromusculoskeletal models

    Computational neuromusculoskeletal modelling enables the generation and testing of hypotheses about human movement on a large scale, in silico. Humanoid models, which increasingly aim to replicate the full complexity of the human nervous and musculoskeletal systems, are built on extensive prior knowledge, extracted from anatomical imaging and kinematic and kinetic measurement, and codified as model description. Where inverse dynamic analysis is applied, its basis is in Newton's laws of motion, and in solving for muscular redundancy it is necessary to invoke knowledge of central nervous motor strategy. This epistemological approach contrasts strongly with the models of machine learning, which are generally over-parameterised and largely data-driven. Even as spectacular performance has been delivered by the application of these models in a number of discrete domains of artificial intelligence, work towards general human-level intelligence has faltered, leading many to wonder if the data-driven approach is fundamentally limited, and spurring efforts to combine machine learning with knowledge-based modelling. Through a series of five studies, this thesis explores the combination of neuromusculoskeletal modelling with machine learning in order to enhance the core tasks of prediction and control. Several principles for the development of clinically useful artificially intelligent systems emerge: stability, computational efficiency and incorporation of prior knowledge. The first study concerns the use of neural network function approximators for the prediction of internal forces during human movement, an important task with many clinical applications, but one for which the standard tools of modelling are slow and cumbersome. By training on a large dataset of motions and their corresponding forces, state-of-the-art performance is demonstrated, with many-fold increases in inference speed enabling the deployment of trained models for use in a real time biofeedback system. Neural networks trained in this way, to imitate some optimal controller, encode a mapping from high-level movement descriptors to actuator commands, and may thus be deployed in simulation as policies to control the actions of humanoid models. Unfortunately, the high complexity of realistic simulation makes stable control a challenging task, beyond the capabilities of such naively trained models. The objective of the second study was to improve performance and stability of policy-based controllers for humanoid models in simulation. A novel technique was developed, borrowing from established unsupervised adversarial methods in computer vision. This technique enabled significant gains in performance relative to a neural network baseline, without the need for additional access to the optimal controller. For the third study, increases in the capabilities of these policy-based controllers were sought. Reinforcement learning is widely considered the most powerful means of optimising such policies, but it is computationally inefficient, and this inefficiency limits its clinical utility. To mitigate this problem, a novel framework, making use of domain-specific knowledge present in motion data, and in an inverse model of the biomechanical system, was developed. 
Training on simple desktop hardware, this framework enabled rapid initialisation of humanoid models that were able to move naturally through a 3-dimensional simulated environment, with 900-fold improvements in sample efficiency relative to a related technique based on pure reinforcement learning. After training with subject-specific anatomical parameters and motion data, learned policies represent personalised models of motor control that may be further interrogated to test hypotheses about movement. For the fourth study, subject-specific controllers were taken and used as the substrate for transfer learning, by removing kinematic constraints and optimising with respect to the magnitude of the medial knee joint reaction force, an important biomechanical variable in osteoarthritis of the knee. Models learned new kinematic strategies for the reduction of this biomarker, which were subsequently validated by their use, in the real world, to construct subject-specific routines for real time gait retraining. Six out of eight subjects were able to reduce medial knee joint loading by pursuing the personalised kinematic targets found in simulation. Personalisation of assistive devices, such as limb prostheses, is another area of growing interest, and one for which computational frameworks promise cost-effective solutions. Reinforcement learning provides powerful techniques for this task, but the expansion of the scope of optimisation, to include previously static elements of a prosthesis, is problematic because of its complexity and resulting sample inefficiency. The fifth and final study demonstrates a new algorithm that leverages the methods described in the previous studies, and additional techniques for variance control, to surmount this problem, improving sample efficiency and simultaneously, through the use of prior knowledge encoded in motion data, providing a rational means of determining optimality in the prosthesis. Trained models were able to jointly optimise motor control and prosthesis design to enable improved performance in a walking task, and optimised designs were robust to both random seed and reward specification. This algorithm could be used to speed the design and production of real personalised prostheses, representing a potent realisation of the potential benefits of combined reinforcement learning and realistic neuromusculoskeletal modelling.
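    As a minimal sketch of the first study's core idea, the snippet below trains a feedforward network to imitate a slower model-based solver that maps movement descriptors to internal forces; it assumes PyTorch, and the input dimensions and data are hypothetical placeholders rather than the motion dataset described above.

```python
# Sketch: a feedforward approximator from movement descriptors (e.g. joint
# kinematics) to internal forces, trained to imitate a model-based solver.
# All dimensions and data below are synthetic placeholders.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(30, 128), nn.ReLU(),   # 30 hypothetical kinematic inputs
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 8),               # 8 hypothetical muscle/joint force outputs
)

x = torch.randn(1024, 30)            # placeholder movement descriptors
y = torch.randn(1024, 8)             # placeholder "ground-truth" solver forces

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(200):                 # brief imitation-learning loop
    opt.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()
    opt.step()

# After training, a single forward pass is fast enough for real-time biofeedback.
print("final training loss:", float(loss))
```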

    Augmented Reality

    Augmented Reality (AR) is a natural development from virtual reality (VR), which was developed several decades earlier. AR complements VR in many ways. Because the user can see both real and virtual objects simultaneously, AR is far more intuitive, although it is not free from human factors and other restrictions. AR applications also demand less time and effort, because the entire virtual scene and environment do not have to be constructed. In this book, several new and emerging application areas of AR are presented and divided into three sections. The first section contains applications in outdoor and mobile AR, such as construction, restoration, security and surveillance. The second section deals with AR applications in medicine, biology, and the human body. The third and final section contains a number of new and useful applications in daily living and learning.