
    Development of a Composite Health Index in Children with Cystic Fibrosis: A Pipeline for Data Processing, Machine Learning, and Model Implementation using Electronic Health Records

    Cystic Fibrosis (CF) is a heterogeneous, multi-faceted genetic condition that primarily affects the lungs and digestive system. For children and young people living with CF, timely management is necessary to prevent the establishment of severe disease. Modern data capture through electronic health records (EHR) has created an opportunity to use machine learning algorithms to classify subgroups of disease to understand health status and prognosis. The overall aim of this thesis was to develop a composite health index in children with CF. An iterative approach to unsupervised cluster analysis was developed to identify homogeneous clusters of children with CF in a pre-existing encounter-based CF database from Toronto, Canada. An external validation of the model was carried out in a historical CF dataset from Great Ormond Street Hospital (GOSH) in London, UK. The clusters were also re-created and validated using EHR data from GOSH when it first became accessible in 2021. The interpretability and sensitivity of the GOSH EHR model were explored. Lastly, a scoping review was carried out to investigate common barriers to the implementation of prognostic machine learning algorithms in paediatric respiratory care. A cluster model was identified that detailed four clusters associated with time to future hospitalisation, pulmonary exacerbation, and lung function. The clusters were also associated with disease-related variables such as comorbidities, anthropometrics, microbiological infections, and treatment history. An app was developed to display individualised cluster assignment, which will be a useful way to interpret the cluster model clinically. The review of prognostic machine learning algorithms identified a lack of reproducibility and validation as the major limitations in model reporting that impair clinical translation. EHR systems facilitate point-of-care access to individualised data and integrated machine learning models. However, there is a gap in translating machine learning models into clinical implementation. With appropriate regulatory frameworks, the health index developed for children with CF could be implemented in CF care.
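    The iterative unsupervised clustering step could, in outline, look like the following minimal sketch. The abstract does not name the algorithm, so k-means with four clusters on standardised synthetic features is assumed here; the inputs are placeholders, not the actual CF registry variables.

```python
# Hypothetical sketch: unsupervised clustering of encounter-level features.
# Assumes k-means and 4 clusters (the abstract reports a four-cluster model);
# real inputs would be CF variables (lung function, anthropometrics, ...).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # synthetic stand-in for encounter features

X_scaled = StandardScaler().fit_transform(X)   # standardise before clustering
model = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_scaled)
labels = model.predict(X_scaled)       # individualised cluster assignment
```

    An iterative variant would re-fit over a grid of cluster counts and feature subsets, keeping the solution with the best internal validity (e.g. silhouette score).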

    A Robust and Efficient Three-Layered Dialogue Component for a Speech-to-Speech Translation System

    We present the dialogue component of the speech-to-speech translation system VERBMOBIL. In contrast to conventional dialogue systems, it mediates the dialogue while processing at most 50% of the dialogue in depth. Special requirements such as robustness and efficiency lead to a three-layered hybrid architecture for the dialogue module, using statistics, an automaton, and a planner. A dialogue memory is constructed incrementally.
    Comment: Postscript file, compressed and uuencoded, 15 pages, to appear in Proceedings of EACL-95, Dublin
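    The automaton layer of such a three-layered architecture can be illustrated with a toy finite-state model over dialogue acts. The act and state names below are invented for illustration; VERBMOBIL's actual dialogue-act inventory is far richer.

```python
# Toy finite-state automaton over dialogue acts (names are hypothetical).
# Robustness is approximated by ignoring acts with no defined transition.
transitions = {
    ("start", "greet"): "opened",
    ("opened", "suggest_date"): "negotiating",
    ("negotiating", "reject_date"): "negotiating",
    ("negotiating", "accept_date"): "closing",
    ("closing", "bye"): "done",
}

def follow(state, acts):
    """Advance the automaton act by act, staying put on unknown input."""
    for act in acts:
        state = transitions.get((state, act), state)
    return state

final = follow("start", ["greet", "suggest_date", "accept_date", "bye"])
```

    Staying in the current state on unexpected input is one simple way to keep the dialogue tracker from derailing when the statistical layer mis-recognises an act.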

    Enhancing Operation of a Sewage Pumping Station for Inter Catchment Wastewater Transfer by Using Deep Learning and Hydraulic Model

    This paper presents a novel Inter Catchment Wastewater Transfer (ICWT) method for mitigating sewer overflow. The ICWT aims at balancing the spatial mismatch between sewer flow and the treatment capacity of the Wastewater Treatment Plant (WWTP) through collaborative operation of sewer system facilities. Using a hydraulic model, the effectiveness of ICWT is investigated in a sewer system in Drammen, Norway. Concerning whole-system performance, we found that the Søren Lemmich pump station plays a vital role in the ICWT framework. To enhance the operation of this pump station, it is imperative to construct a multi-step-ahead water level prediction model. Hence, one of the most promising artificial intelligence techniques, Long Short-Term Memory (LSTM), is employed to undertake this task. Experiments demonstrated that LSTM is superior to Gated Recurrent Unit (GRU), Recurrent Neural Network (RNN), Feed-forward Neural Network (FFNN) and Support Vector Regression (SVR).
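    The multi-step-ahead prediction task can be framed as in the sketch below: a sliding window of past water levels predicts the next few readings. The window sizes and the sinusoidal stand-in series are assumptions, not the paper's settings; the actual LSTM would then be trained on the reshaped (samples, timesteps, 1) tensors.

```python
# Hypothetical sketch: supervised windowing for multi-step-ahead prediction.
import numpy as np

def make_windows(series, n_in=24, n_out=6):
    """Each sample: n_in past readings as input, the next n_out as targets."""
    X, y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        X.append(series[i:i + n_in])
        y.append(series[i + n_in:i + n_in + n_out])
    return np.array(X), np.array(y)

levels = np.sin(np.linspace(0, 20, 500))   # stand-in for pump-station levels
X, y = make_windows(levels)
# X has shape (471, 24); reshape to (471, 24, 1) before feeding an LSTM
```

    The same windowed pairs can be fed unchanged to the GRU, RNN, FFNN and SVR baselines, which is what makes the comparison in the abstract like-for-like.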

    Applying digital content management to support localisation

    The retrieval and presentation of digital content such as that on the World Wide Web (WWW) is a substantial area of research. While recent years have seen huge expansion in the size of web-based archives that can be searched efficiently by commercial search engines, the presentation of potentially relevant content is still limited to ranked document lists represented by simple text snippets or image keyframe surrogates. There is expanding interest in techniques to personalise the presentation of content to improve the richness and effectiveness of the user experience. One of the most significant challenges to achieving this is the increasingly multilingual nature of this data, and the need to provide suitably localised responses to users based on this content. The Digital Content Management (DCM) track of the Centre for Next Generation Localisation (CNGL) is seeking to develop technologies to support advanced personalised access and presentation of information by combining elements from the existing research areas of Adaptive Hypermedia and Information Retrieval. The combination of these technologies is intended to produce significant improvements in the way users access information. We review key features of these technologies and introduce early ideas for how they can support localisation and localised content, before concluding with some impressions of future directions in DCM.

    Earth orbital teleoperator system man-machine interface evaluation

    This teleoperator system man-machine interface evaluation developed and implemented a program to determine human performance requirements in teleoperator systems.

    Standardization of electroencephalography for multi-site, multi-platform and multi-investigator studies: Insights from the canadian biomarker integration network in depression

    Subsequent to global initiatives in mapping the human brain and investigations of neurobiological markers for brain disorders, the number of multi-site studies involving the collection and sharing of large volumes of brain data, including electroencephalography (EEG), has been increasing. Among the complexities of conducting multi-site studies and increasing the shelf life of biological data beyond the original study are timely standardization and documentation of relevant study parameters. We present the insights gained and guidelines established within the EEG working group of the Canadian Biomarker Integration Network in Depression (CAN-BIND). CAN-BIND is a multi-site, multi-investigator, and multi-project network supported by the Ontario Brain Institute with access to Brain-CODE, an informatics platform that hosts a multitude of biological data across a growing list of brain pathologies. We describe our approaches and insights on documenting and standardizing parameters across the study design, data collection, monitoring, analysis, integration, knowledge-translation, and data-archiving phases of CAN-BIND projects. We introduce a custom-built EEG toolbox to track data preprocessing, with open access for the scientific community. We also evaluate the impact of variation in equipment setup on the accuracy of acquired data. Collectively, this work is intended to inspire the establishment of comprehensive and standardized guidelines for multi-site studies.
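    One concrete parameter such cross-site guidelines would pin down is the preprocessing filter applied identically at every site. Below is a minimal sketch, assuming a 1 to 40 Hz zero-phase Butterworth band-pass and a 250 Hz sampling rate; both are illustrative values, not CAN-BIND's documented settings.

```python
# Hypothetical sketch: one standardised EEG preprocessing step, applied
# identically across sites (cut-offs and sampling rate are assumed values).
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(eeg, fs, lo=1.0, hi=40.0, order=4):
    """Zero-phase Butterworth band-pass; eeg has shape (channels, samples)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

fs = 250.0
eeg = np.random.default_rng(1).normal(size=(4, 1000))  # synthetic 4-channel data
clean = bandpass(eeg, fs)
```

    Fixing the function, its parameters, and their documented defaults in a shared toolbox is what lets preprocessing be tracked and reproduced across sites, as the abstract advocates.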