102 research outputs found

    Correlation and real time classification of physiological streams for critical care monitoring.

    This thesis presents a framework for deploying algorithms that support the correlation and real-time classification of physiological data streams, producing clinically meaningful alerts through a blend of domain expert knowledge and pattern recognition based on clinical rules. Its relevance is demonstrated via a real-world case study in neonatal intensive care providing real-time classification of neonatal spells. Events are first detected in individual streams independently, then synchronized by timestamp, and finally assessed to determine the start and end of a multi-signal episode. The episode is then processed through a classifier based on clinical rules to determine a classification. In a single-patient case study with 24 hours of data, the output of the algorithms detected clinically significant relative changes in heart rate, blood oxygen saturation levels, and pauses in breathing in the respiratory impedance signal, with accuracies of 97.8%, 98.3% and 98.9% respectively. The accuracy for correlating the streams and determining spell classifications is 98.9%. Future research will focus on the clinical validation of these algorithms and the application of the framework to the detection and classification of signals in other clinical contexts.
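    The multi-stage pipeline the abstract describes (per-stream event detection, timestamp-based correlation into episodes, rule-based classification) can be sketched as follows. This is an illustrative toy, not the thesis implementation: the relative thresholds, gap tolerance, and the rule that a "spell" requires all three signals are assumptions chosen purely for demonstration.

```python
from dataclasses import dataclass

@dataclass
class Event:
    signal: str   # "HR", "SpO2" or "RESP"
    start: float  # seconds
    end: float

def detect_drops(samples, threshold, signal, fs=1.0):
    """Stage 1: detect runs where a value falls below a relative fraction of baseline."""
    baseline = samples[0]
    events, run_start = [], None
    for i, v in enumerate(samples):
        below = v < baseline * threshold
        if below and run_start is None:
            run_start = i / fs
        elif not below and run_start is not None:
            events.append(Event(signal, run_start, i / fs))
            run_start = None
    if run_start is not None:
        events.append(Event(signal, run_start, len(samples) / fs))
    return events

def correlate(events, max_gap=5.0):
    """Stage 2: merge events from different streams into multi-signal episodes
    when their time spans overlap or fall within max_gap seconds."""
    events = sorted(events, key=lambda e: e.start)
    episodes, current = [], []
    for e in events:
        if current and e.start > max(x.end for x in current) + max_gap:
            episodes.append(current)
            current = []
        current.append(e)
    if current:
        episodes.append(current)
    return episodes

def classify(episode):
    """Stage 3: rule-based classification on which signals participate."""
    signals = {e.signal for e in episode}
    if {"HR", "SpO2", "RESP"} <= signals:
        return "spell"
    return "single-signal event"
```

A bradycardia, a desaturation and an apnoea that overlap in time would then be merged into one episode and classified as a spell, mirroring the detect-sync-classify flow described above.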

    A method to detect and represent temporal patterns from time series data and its application for analysis of physiological data streams

    In critical care, complex systems and sensors continuously monitor patients' physiological features such as heart rate and respiratory rate, generating significant amounts of data every second. This results in more than 2 million records generated per patient in an hour. It is an immense challenge for anyone trying to utilize this data when making critical decisions about patient care. Temporal abstraction and data mining are two research fields that have tried to synthesize time-oriented data to detect hidden relationships that may exist in the data. Various researchers have looked at techniques for generating abstractions from clinical data. However, the variety and speed of the data streams generated often overwhelm current systems, which are not designed to handle such data. Other attempts have been made to understand the complexity in time series data using mining techniques; however, existing models are not designed to detect temporal relationships that might exist in time series data (Inibhunu & McGregor, 2016). To address this challenge, this thesis proposes a method that extends existing knowledge discovery frameworks to include components for detecting and representing temporal relationships in time series data. The developed method is instantiated within the knowledge discovery component of Artemis, a cloud-based platform for processing physiological data streams. This is a unique approach that utilizes pattern recognition principles to facilitate functions for: (a) temporal representation of time series data with abstractions, (b) temporal pattern generation and quantification, (c) frequent pattern identification, and (d) building a classification system. This method is applied to a neonatal intensive care case study with the motivating problem that discovery of specific patterns from patient data could be crucial for making improved decisions within patient care.
Another application is in chronic care, to detect temporal relationships in ambulatory patient data before the occurrence of an adverse event. The research premise is that discovery of hidden relationships and patterns in data would be valuable in building a classification system that automatically characterizes physiological data streams. Such characterization could aid in the detection of new normal and abnormal behaviors in patients who may have life-threatening conditions.
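The abstract's four functions, temporal abstraction, pattern generation, frequent-pattern identification, and classification, can be illustrated with a minimal sketch. The symbol ranges ("L"/"N"/"H") and the simple "followed-by" relation below are assumptions for demonstration only, not the thesis's actual abstraction scheme.

```python
from collections import Counter

def abstract(series, low, high):
    """(a) Temporal abstraction: collapse a numeric series into runs of symbols,
    each run recorded as (symbol, start_index, end_index)."""
    def sym(v):
        return "L" if v < low else "H" if v > high else "N"
    runs = []
    for i, v in enumerate(series):
        s = sym(v)
        if runs and runs[-1][0] == s:
            runs[-1][2] = i          # extend the current run
        else:
            runs.append([s, i, i])   # start a new run
    return [tuple(r) for r in runs]

def patterns(runs):
    """(b) Pattern generation: ordered pairs of adjacent abstractions
    ('X followed by Y')."""
    return [(a[0], b[0]) for a, b in zip(runs, runs[1:])]

def frequent(all_patterns, min_support):
    """(c) Frequent-pattern identification: keep patterns whose count
    meets a minimum support threshold."""
    counts = Counter(all_patterns)
    return {p: c for p, c in counts.items() if c >= min_support}
```

Frequent patterns mined this way (e.g. "normal followed by high" recurring across patients) would then become input features for the classification system in step (d).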

    A flexible, longitudinal and surrogate consent model: Consent of Infants for Neonatal Secondary-use research (CoINS) Model

    Documenting healthcare, along with technology enabling capture of streaming patient telemetry, can deliver large datasets offering opportunities to discover new insights, primarily identified through retrospective secondary-use research. Research involving health data requires the consent of the subject patient or someone with the power to speak on that patient's behalf. Flexible consent models that capture consent preferences while allowing updates as preferences change are needed. This research proposes and demonstrates one solution in a case study collecting surrogate consent from parents for the physiological data of infant inpatients in the Neonatal Intensive Care Unit (NICU) and attaching this consent as a wrapper controlling access to their data. 145 parents were approached and 134 provided consent, with 78 percent of infants consented during their first week of life. This research supports the contention that using a flexible consent approach enhances willingness to consent to the use of infants' health data for secondary research purposes.

    Time-Series Embedded Feature Selection Using Deep Learning: Data Mining Electronic Health Records for Novel Biomarkers

    As health information technologies continue to advance, routine collection and digitisation of patient health records in the form of electronic health records present an ideal opportunity for data mining and exploratory analysis of biomarkers and risk factors indicative of a potentially diverse domain of patient outcomes. Patient records have become more widely available through various initiatives enabling open access whilst maintaining critical patient privacy. In spite of such progress, health records remain not widely adopted within the current clinical statistical analysis domain due to challenging issues derived from such “big data”. Deep learning-based temporal modelling approaches present an ideal solution to health record challenges through automated self-optimisation of representation learning, able to manageably compose the high-dimensional domain of patient records into data representations able to model complex data associations. Such representations can serve to condense and reduce dimensionality to emphasise feature sparsity and importance through novel embedded feature selection approaches. Accordingly, application to patient records enables complex modelling and analysis of the full domain of clinical features to select biomarkers of predictive relevance. Firstly, we propose a novel entropy-regularised neural network ensemble able to highlight risk factors associated with hospitalisation risk of individuals with dementia. Its application reduced a large domain of unique medical events to a small set of relevant risk factors that maintain hospitalisation discrimination. Following on, we continue our work on ensemble architectures with a novel cascading LSTM ensemble to predict severe sepsis onset in critical patients in an ICU critical care centre. We demonstrate state-of-the-art performance, outperforming that of current related literature. Finally, we propose a novel embedded feature selection application dubbed 1D convolution feature selection using sparsity regularisation. Said methodology was evaluated on both the dementia and sepsis prediction objectives to highlight model capability and generalisability. We further report a selection of potential biomarkers for the aforementioned case study objectives, highlighting clinical relevance and potential novelty value for future clinical analysis. Accordingly, we demonstrate the effective capability of embedded feature selection approaches through the application of temporal deep learning architectures in the discovery of effective biomarkers across a variety of challenging clinical applications.
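The core idea of embedded feature selection via sparsity regularisation can be illustrated with a deliberately simplified sketch: an L1 penalty on a plain logistic model drives the weights of uninformative features towards zero, so selection falls out of training itself. The thesis method operates on 1D convolutions over temporal records; this toy replaces that with a single linear layer, and the data, penalty strength, and selection threshold are all assumptions for demonstration.

```python
import math

def train_l1(X, y, lam=0.1, lr=0.1, epochs=500):
    """Logistic regression trained with an L1 (sparsity) penalty via
    subgradient descent; uninformative weights are shrunk towards zero."""
    n_feat = len(X[0])
    w = [0.0] * n_feat
    for _ in range(epochs):
        grad = [0.0] * n_feat
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            for j in range(n_feat):
                grad[j] += (p - yi) * xi[j]
        for j in range(n_feat):
            # L1 subgradient: sign(w_j), taken as 0 at exactly zero
            sign = 1 if w[j] > 0 else -1 if w[j] < 0 else 0
            w[j] -= lr * (grad[j] / len(X) + lam * sign)
    return w

def select(w, tol=0.05):
    """Embedded selection: keep only features whose weight survives the penalty."""
    return [j for j, wj in enumerate(w) if abs(wj) > tol]
```

Trained on data where only the first feature carries signal, the penalty leaves the noise feature's weight at (near) zero, so `select` returns just the informative index, which is the mechanism behind reducing a large event domain to a small set of relevant risk factors.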

    Technologies and Applications for Big Data Value

    This open access book explores cutting-edge solutions and best practices for big data and data-driven AI applications for the data-driven economy. It provides the reader with a basis for understanding how technical issues can be overcome to offer real-world solutions to major industrial areas. The book starts with an introductory chapter that provides an overview of the book by positioning the following chapters in terms of their contributions to technology frameworks which are key elements of the Big Data Value Public-Private Partnership and the upcoming Partnership on AI, Data and Robotics. The remainder of the book is then arranged in two parts. The first part, “Technologies and Methods”, contains horizontal contributions of technologies and methods that enable data value chains to be applied in any sector. The second part, “Processes and Applications”, details experience reports and lessons from using big data and data-driven approaches in processes and applications. Its chapters are co-authored with industry experts and cover domains including health, law, finance, retail, manufacturing, mobility, and smart cities. Contributions emanate from the Big Data Value Public-Private Partnership and the Big Data Value Association, which have acted as the European data community's nucleus to bring together businesses with leading researchers to harness the value of data to benefit society, business, science, and industry. The book is of interest to two primary audiences: first, undergraduate and postgraduate students and researchers in various fields, including big data, data science, data engineering, and machine learning and AI; second, practitioners and industry experts engaged in data-driven systems, software design and deployment projects who are interested in employing these advanced methods to address real-world problems.

    Informatics for Health 2017 : advancing both science and practice

    Conference report. Introduction: The Informatics for Health congress, 24-26 April 2017, in Manchester, UK, brought together the Medical Informatics Europe (MIE) conference and the Farr Institute International Conference. This special issue of the Journal of Innovation in Health Informatics contains 113 presentation abstracts and 149 poster abstracts from the congress. Discussion: The twin programmes of “Big Data” and “Digital Health” are not always joined up by coherent policy and investment priorities. Substantial global investment in health IT and data science has led to sound progress but highly variable outcomes. Society needs an approach that brings together the science and the practice of health informatics. The goal is multi-level Learning Health Systems that consume and intelligently act upon both patient data and organizational intervention outcomes. Conclusions: Informatics for Health demonstrated the art of the possible, seen in the breadth and depth of our contributions. We call upon policy makers, research funders and programme leaders to learn from this joined-up approach.

    University of South Alabama College of Medicine Annual Report for 2016-2017

    This Annual Report of the College of Medicine catalogues accomplishments of our faculty, students, residents, fellows and staff in teaching, research, scholarly and community service during the 2016-2017 academic year.

    Informatics for Health 2017: Advancing both science and practice
