1,814 research outputs found

    Clustering of clinical multivariate time-series utilizing recent advances in machine-learning

    The purpose of this thesis is to lay the groundwork for future research on a machine-learning-based anomaly detection system for hospitalized patients. Our first step was to study and analyze the project’s needs and background and to review literature with similar criteria. In the second step, we interviewed medical experts and researchers. Based on our research and the suggestions received in these interviews, we explored methods that could be applied to the data we collected, and then discussed the results of these approaches. According to the results, the K-means algorithm, clustering on principal components, produced the highest-quality clusters. We then discussed how the other algorithms were influenced more by the shape of the data than by its values. Finally, we offer suggestions on how this research could be approached in the future.
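The pipeline the thesis settles on (K-means over principal components, with cluster quality scored afterwards) can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the thesis code; the component count, cluster count, and choice of silhouette as the quality metric are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Synthetic stand-in for per-patient feature vectors derived from vital-sign series
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 10)) for c in (0.0, 3.0, 6.0)])

# Project onto a few principal components, then cluster in that space
components = PCA(n_components=3).fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(components)

# Score cluster quality (one of several possible internal metrics)
quality = silhouette_score(components, labels)
```

Clustering in the reduced space, as described, makes the result depend on the dominant directions of variance rather than on every raw measurement.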

    Data-Driven Representation Learning in Multimodal Feature Fusion

    Modern machine learning systems leverage data and features from multiple modalities to gain more predictive power. In most scenarios the modalities are vastly different, and the acquired data are heterogeneous in nature. Consequently, building highly effective fusion algorithms is central to achieving improved model robustness and inference performance. This dissertation focuses on representation learning approaches as the fusion strategy. Specifically, the objective is to learn a shared latent representation that jointly exploits the structural information encoded in all modalities, such that a straightforward learning model can be adopted to obtain the prediction. We first consider sensor fusion, a typical multimodal fusion problem critical to building a pervasive computing platform. A systematic fusion technique is described to support both multiple sensors and descriptors for activity recognition. Multiple Kernel Learning (MKL) algorithms, which learn the optimal combination of kernels, have been successfully applied to numerous fusion problems in computer vision and related areas. Using the MKL formulation, we next describe an auto-context algorithm for learning image context via fusion with low-level descriptors. Furthermore, a principled fusion algorithm using deep learning to optimize kernel machines is developed. By bridging deep architectures with kernel optimization, this approach leverages the benefits of both paradigms and is applied to a wide variety of fusion problems. In many real-world applications the modalities exhibit highly specific data structures, such as time sequences and graphs, so special design of the learning architecture is needed. To improve temporal modeling for multivariate sequences, we developed two architectures centered around attention models. A novel clinical time series analysis model is proposed for several critical problems in healthcare. Another model, coupled with a triplet ranking loss as a metric learning framework, is described to better solve speaker diarization. Compared to state-of-the-art recurrent networks, these attention-based multivariate analysis tools achieve improved performance with lower computational complexity. Finally, to perform community detection on multilayer graphs, a fusion algorithm is described that derives node embeddings from word embedding techniques and exploits the complementary relational information contained in each layer of the graph.
    Doctoral Dissertation, Electrical Engineering
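The kernel-fusion idea at the heart of MKL can be illustrated, in a heavily simplified form, by combining two modality-specific kernels into one Gram matrix and training a kernel machine on it. Real MKL learns the combination weights; the data, features, and fixed 0.5/0.5 weighting below are stand-ins, not the dissertation's algorithm.

```python
import numpy as np
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X1 = rng.normal(size=(100, 5))   # features from modality 1
X2 = rng.normal(size=(100, 8))   # features from modality 2
y = (X1[:, 0] + X2[:, 0] > 0).astype(int)   # label depends on both modalities

# Fuse per-modality kernels with fixed weights (true MKL learns these weights)
K = 0.5 * rbf_kernel(X1) + 0.5 * linear_kernel(X2)

# Train a kernel machine on the fused Gram matrix
clf = SVC(kernel="precomputed").fit(K, y)
acc = clf.score(K, y)   # training accuracy on the fused kernel
```

Because any convex combination of valid kernels is itself a valid kernel, the fused matrix can be handed to a standard SVM unchanged; the fusion happens entirely in kernel space.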

    Identifying TBI Physiological States by Clustering Multivariate Clinical Time-Series Data

    Determining clinically relevant physiological states from multivariate time series data with missing values is essential for providing appropriate treatment for acute conditions such as Traumatic Brain Injury (TBI), respiratory failure, and heart failure. Using non-temporal clustering or data imputation and aggregation techniques may lead to loss of valuable information and biased analyses. In our study, we apply the SLAC-Time algorithm, a self-supervision-based approach that maintains data integrity by avoiding imputation or aggregation, offering a more useful representation of acute patient states. By using SLAC-Time to cluster data in a large research dataset, we identified three distinct TBI physiological states and their specific feature profiles. We employed various clustering evaluation metrics and incorporated input from a clinical domain expert to validate and interpret the identified physiological states. Further, we discovered how specific clinical events and interventions can influence patient states and state transitions.
    Comment: 10 pages, 7 figures, 2 tables
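The abstract mentions validating the identified states with several clustering evaluation metrics. A generic sketch of that kind of internal validation, comparing candidate state counts under multiple metrics, might look like the following; the data, the specific metrics, and the candidate counts are assumptions, and this is not SLAC-Time itself.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (calinski_harabasz_score, davies_bouldin_score,
                             silhouette_score)

rng = np.random.default_rng(3)
# Synthetic stand-in for learned patient-state representations
X = np.vstack([rng.normal(loc=c, scale=0.6, size=(40, 6)) for c in (0.0, 4.0, 8.0)])

# Compare candidate numbers of states under several internal metrics
scores = {}
for k in (2, 3, 4, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = {
        "silhouette": silhouette_score(X, labels),                # higher is better
        "calinski_harabasz": calinski_harabasz_score(X, labels),  # higher is better
        "davies_bouldin": davies_bouldin_score(X, labels),        # lower is better
    }

best_k = max(scores, key=lambda k: scores[k]["silhouette"])
```

In practice, as the abstract notes, such internal metrics are complemented by clinical domain expertise before a state count is accepted.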

    Deep learning methods for improving diabetes management tools

    Diabetes is a chronic disease characterised by a lack of regulation of blood glucose concentration in the body, and thus elevated blood glucose levels. Consequently, affected individuals can experience extreme variations in their blood glucose levels with exogenous insulin treatment. This has associated debilitating short-term and long-term complications that affect quality of life and can result in death in the worst instance. The development of technologies such as glucose meters and, more recently, continuous glucose monitors has offered the opportunity to develop systems for improving clinical outcomes for individuals with diabetes through better glucose control. Data-driven methods can enable the development of the next generation of diabetes management tools focused on i) informativeness, ii) safety, and iii) easing the burden of management. This thesis proposes deep learning methods for improving the functionality of the variety of diabetes technology tools available for self-management. In pursuit of these goals, a number of deep learning methods are developed and geared towards improving the functionality of the existing diabetes technology tools, generally classified as i) self-monitoring of blood glucose, ii) decision support systems, and iii) the artificial pancreas. These frameworks are primarily based on the prediction of glucose concentration levels. The first deep learning framework we propose is geared towards improving the artificial pancreas and decision support systems that rely on continuous glucose monitors. We first propose a convolutional recurrent neural network (CRNN) to forecast glucose concentration levels over both short-term and long-term horizons. The predictive accuracy of this model outperforms that of traditional data-driven approaches.
The feasibility of this proposed approach for ambulatory use is then demonstrated with the implementation of a decision support system in a smartphone application. We further extend CRNNs to the multitask setting to explore the effectiveness of leveraging population data for developing personalised models with limited individual data. We show that this enables earlier deployment of applications without significantly compromising performance and safety. The next challenge focuses on easing the burden of management by proposing a deep learning framework for automatic meal detection and estimation. The framework employs multitask learning and quantile regression to safely detect and estimate the size of unannounced meals with high precision. We also demonstrate that this facilitates automated insulin delivery for the artificial pancreas system, improving glycaemic control without significantly increasing the risk or incidence of hypoglycaemia. Finally, the focus shifts to improving self-monitoring of blood glucose (SMBG) with glucose meters. We propose an uncertainty-aware deep learning model based on a joint Gaussian process and deep learning framework to provide end users with more dynamic and continuous information similar to continuous glucose sensors. Consequently, we show significant improvement in hyperglycaemia detection compared to standard SMBG. We hope that through these methods we can achieve a more equitable improvement in usability and clinical outcomes for individuals with diabetes.
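The quantile-regression element of the meal-estimation framework can be sketched generically: fit lower and upper conditional quantiles so that downstream insulin dosing can act on a cautious interval rather than a single point estimate. The sketch below uses gradient-boosted quantile regression on synthetic data as a stand-in for the thesis's deep multitask model; the features, target, and quantile levels are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, size=(500, 3))             # stand-in glucose-derived features
y = 40.0 * X[:, 0] + rng.normal(0.0, 5.0, size=500)  # stand-in meal size (g carbohydrate)

# Fit lower and upper conditional quantiles instead of a single point estimate
lo = GradientBoostingRegressor(loss="quantile", alpha=0.1, random_state=0).fit(X, y)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.9, random_state=0).fit(X, y)

# Fraction of targets falling inside the predicted [q10, q90] band
coverage = np.mean((lo.predict(X) <= y) & (y <= hi.predict(X)))
```

Acting on the lower quantile of the estimated meal size is one way a dosing algorithm can err on the side of avoiding hypoglycaemia.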

    The Application of Computer Techniques to ECG Interpretation

    This book presents some of the latest available information on automated ECG analysis, written by many of the leading researchers in the field. It contains a historical introduction, an outline of the latest international standards for signal processing and communications, and then an exciting variety of studies on electrophysiological modelling, ECG imaging, artificial intelligence applied to resting and ambulatory ECGs, body surface mapping, big data in ECG-based prediction, enhanced reliability of patient monitoring, and atrial abnormalities on the ECG. It provides an extremely valuable contribution to the field.

    Modelling Irregularly Sampled Time Series Without Imputation

    Modelling irregularly sampled time series (ISTS) is challenging because of missing values. Most existing methods handle ISTS by converting irregularly sampled data into regularly sampled data via imputation. These models assume an underlying missingness mechanism, which leads to unwanted bias and sub-optimal performance. We present SLAN (Switch LSTM Aggregate Network), which uses a pack of LSTMs to model ISTS without imputation, eliminating any assumption about the underlying process. It dynamically adapts its architecture on the fly based on which sensors are measured. SLAN exploits the irregularity information to capture each sensor's local summary explicitly and maintains a global summary state throughout the observation period. We demonstrate the efficacy of SLAN on publicly available datasets, namely MIMIC-III, PhysioNet 2012, and PhysioNet 2019. The code is available at https://github.com/Rohit102497/SLAN
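SLAN's bookkeeping, a per-sensor local summary updated only when that sensor is actually measured plus a running global summary, with no imputation step, can be caricatured with exponential moving averages standing in for the LSTM cells. This is a toy illustration of the data flow only, not the SLAN architecture.

```python
# Toy stand-in for SLAN's bookkeeping: a per-sensor summary updated only at
# that sensor's observation times, plus a global summary, with no imputation.
# Exponential moving averages replace the LSTM cells of the real model.
def summarize(events, alpha=0.3):
    local = {}          # per-sensor local summaries
    global_state = 0.0  # global summary across all sensors
    for t, sensor, value in sorted(events):   # events as (time, sensor, value)
        prev = local.get(sensor, value)       # first observation seeds the summary
        local[sensor] = (1 - alpha) * prev + alpha * value
        global_state = (1 - alpha) * global_state + alpha * local[sensor]
    return local, global_state

events = [(0.0, "hr", 80.0), (0.4, "spo2", 97.0), (1.1, "hr", 88.0)]
local, global_state = summarize(events)
```

Note that no value is ever filled in for a sensor between its observations: each summary simply waits for the next real measurement, which is the property the paper emphasises.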

    Time Series as Images: Vision Transformer for Irregularly Sampled Time Series

    Irregularly sampled time series are increasingly prevalent, particularly in medical domains. While various specialized methods have been developed to handle these irregularities, effectively modeling their complex dynamics and pronounced sparsity remains a challenge. This paper introduces a novel perspective: converting irregularly sampled time series into line-graph images and then applying powerful pre-trained vision transformers to time series classification in the same way as image classification. This method not only largely simplifies specialized algorithm design but also has the potential to serve as a universal framework for time series modeling. Remarkably, despite its simplicity, our approach outperforms state-of-the-art specialized algorithms on several popular healthcare and human activity datasets. In particular, in the rigorous leave-sensors-out setting, where a portion of variables is omitted during testing, our method exhibits strong robustness against varying degrees of missing observations, achieving an impressive improvement of 42.8% in absolute F1 points over leading specialized baselines even with half the variables masked. Code and data are available at https://github.com/Leezekun/ViTST
    Comment: Accepted to NeurIPS 2023
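The core transformation in this paper, rendering an irregularly sampled series as an image so a pretrained vision model can classify it, can be sketched with a minimal rasterizer. The grid size and point-marking scheme below are assumptions, the paper plots line graphs rather than isolated points, and the vision-transformer step is omitted.

```python
import numpy as np

def to_image(times, values, size=32):
    """Rasterize an irregularly sampled series onto a size x size grid."""
    img = np.zeros((size, size))
    # Normalize timestamps and values into [0, 1]
    t = (np.asarray(times) - min(times)) / (max(times) - min(times) + 1e-9)
    v = (np.asarray(values) - min(values)) / (max(values) - min(values) + 1e-9)
    cols = np.clip((t * (size - 1)).astype(int), 0, size - 1)
    rows = np.clip(((1 - v) * (size - 1)).astype(int), 0, size - 1)
    for r, c in zip(rows, cols):
        img[r, c] = 1.0   # mark observed points; unobserved regions stay blank
    return img

img = to_image([0.0, 0.7, 1.9, 3.2], [5.0, 7.5, 6.0, 9.0])
```

Because gaps simply remain blank pixels, irregular sampling needs no imputation before the image is handed to a vision backbone.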