
    Machine Learning for the Early Detection of Acute Episodes in Intensive Care Units

    In Intensive Care Units (ICUs), mere seconds might define whether a patient lives or dies. Predictive models capable of detecting acute events in advance may allow for anticipated interventions, which could mitigate the consequences of those events and save more lives. Several predictive models developed for this purpose have failed to meet the high requirements of ICUs. This might be due to the complexity of anomaly prediction tasks and the inefficient use of ICU data. Moreover, some essential intensive care demands, such as continuous monitoring, are often not considered when developing these solutions, making them unfit for real contexts. This work approaches two topics within this problem: the relevance of the ICU data used to predict acute episodes, and the benefits of applying Layered Learning (LL) techniques to counter the complexity of these tasks. The first topic was addressed through a study on the relevance of information retrieved from physiological signals and clinical data for the early detection of Acute Hypotensive Episodes (AHE) in ICUs. Then, the potential of LL was assessed through an in-depth analysis of the applicability of a recently proposed approach to the same topic. Furthermore, different optimization strategies enabled by LL configurations were proposed, including a new approach aimed at false alarm reduction. The results regarding data relevance might contribute to a paradigm shift in the information retrieved for AHE prediction: most of the information commonly used in the literature might be wrongly perceived as valuable, since only three features related to blood pressure measurements presented truly distinctive traits. On another note, the different LL-based strategies developed confirm the versatile possibilities offered by this paradigm. Although these methodologies did not yield significant performance improvements in this specific context, they can be further explored and adapted to other domains.
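    As a rough sketch of the layered idea described above (not the thesis's actual models), the following chains two scikit-learn classifiers: the first layer solves a simpler subtask on blood-pressure-derived features, and the second layer consumes the first layer's predicted probability as an extra feature, with a confidence threshold acting as a crude false-alarm filter. The toy data, feature meanings, and threshold are all hypothetical.

```python
# Minimal sketch of a two-layer (Layered Learning) classifier for AHE
# prediction. Data, feature meanings, and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: per-window blood-pressure summary features.
X = rng.normal(size=(1000, 3))            # e.g. MAP mean, slope, variance
y_sub = (X[:, 1] < -0.5).astype(int)      # layer-1 subtask: falling trend
y_ahe = ((X[:, 0] < 0) & (X[:, 1] < -0.5)).astype(int)  # final AHE label

# Layer 1: learn the simpler subtask (trend detection).
layer1 = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y_sub)

# Layer 2: append layer 1's probability as an extra feature, then
# learn the final AHE label on the augmented representation.
X_aug = np.hstack([X, layer1.predict_proba(X)[:, [1]]])
layer2 = LogisticRegression().fit(X_aug, y_ahe)

# A simple false-alarm filter: only raise an alarm when layer 2 is confident.
proba = layer2.predict_proba(X_aug)[:, 1]
alarms = proba > 0.8                      # hypothetical confidence threshold
```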

    Intelligent Biosignal Processing in Wearable and Implantable Sensors

    This reprint provides a collection of papers illustrating the state of the art in smart processing of data coming from wearable, implantable, or portable sensors. Each paper presents the design, databases used, methodological background, obtained results, and their interpretation for biomedical applications. Illustrative examples include brain–machine interfaces for medical rehabilitation, the evaluation of sympathetic nerve activity, a novel automated diagnostic tool based on ECG data to diagnose COVID-19, machine-learning-based hypertension risk assessment by means of photoplethysmography and electrocardiography signals, Parkinsonian gait assessment using machine learning tools, a thorough analysis of compressive sensing of ECG signals, development of a nanotechnology application for decoding vagus-nerve activity, detection of liver dysfunction using a wearable electronic nose system, prosthetic hand control using surface electromyography, epileptic seizure detection using a CNN, and premature ventricular contraction detection using deep metric learning. Thus, this reprint presents significant clinical applications as well as valuable new research issues, illustrating this new field of research by addressing the promises, challenges, and hurdles associated with the synergy of biosignal processing and AI through 16 pertinent studies. Covering a wide range of research and application areas, this book is an excellent resource for researchers, physicians, academics, and PhD or master's students working on (bio)signal and image processing, AI, biomaterials, biomechanics, and biotechnology with applications in medicine.

    Basic Science to Clinical Research: Segmentation of Ultrasound and Modelling in Clinical Informatics

    The world of basic science is a world of minutiae; it boils down to improving even a fraction of a percent over the baseline standard. It is a domain of peer-reviewed fractions of seconds, and of squeezing every last ounce of efficiency from a processor, a storage medium, or an algorithm. The field of health data is based on extracting knowledge from segments of data that may improve some clinical process or practice guideline, thereby improving the time and quality of care. Clinical informatics and knowledge translation provide this information in order to reveal insights that improve patient treatments, regimens, and overall outcomes. In my world of minutiae, or basic science, the movement of blood served an integral role. The novel detection of sound reverberations maps out the landscape for my research. I have applied my algorithms to the various anatomical structures of the heart and the arterial system. This serves as a basis for segmentation, active contouring, and shape priors. The algorithms presented leverage novel applications in segmentation by using anatomical features of the heart as shape priors and by integrating optical flow models to improve tracking. The presented techniques show improvements over traditional methods in the estimation of left ventricular size and function, along with plaque estimation in the carotid artery. In my clinical world of data understanding, I have endeavoured to decipher trends in Alzheimer’s disease, sepsis in hospital patients, and the burden of melanoma using mathematical modelling methods. The use of decision trees, Markov models, and various clustering techniques provides insights into data sets that are otherwise hidden. Finally, I demonstrate how efficient data capture from providers can achieve rapid results and actionable information on patient medical records. This culminated in studies on the burden of illness and the associated costs. A selection of published works from my research, from the world of basic sciences to clinical informatics, has been included in this thesis to detail my transition. This is my journey from one contented realm to a turbulent one.
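    To make the active-contouring step concrete, here is a minimal snake-segmentation sketch in Python with scikit-image; the synthetic frame, circular initialization, and parameter values are illustrative assumptions, not the thesis's ultrasound pipeline.

```python
# Minimal active-contour (snake) segmentation sketch, in the spirit of
# the left-ventricle contouring described above. Image and parameters
# are synthetic placeholders.
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

# Synthetic "ultrasound" frame: a bright elliptical cavity on noise.
img = np.random.rand(200, 200) * 0.2
rr, cc = np.mgrid[0:200, 0:200]
img[((rr - 100) ** 2 / 60 ** 2 + (cc - 100) ** 2 / 40 ** 2) < 1] += 0.8

# Initial contour: a circle around the expected structure (a crude
# stand-in for the anatomical shape prior mentioned in the text).
s = np.linspace(0, 2 * np.pi, 100)
init = np.column_stack([100 + 80 * np.sin(s), 100 + 80 * np.cos(s)])

# Evolve the snake toward the edges of the smoothed image.
snake = active_contour(gaussian(img, sigma=3), init,
                       alpha=0.015, beta=10, gamma=0.001)
print(snake.shape)  # (100, 2) array of contour coordinates
```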

    Multimodal Signal Processing for Diagnosis of Cardiorespiratory Disorders

    This thesis addresses the use of multimodal signal processing to develop algorithms for the automated processing of two cardiorespiratory disorders. The aim of the first application was to reduce the false alarm rate in an intensive care unit. The goal was to detect five critical arrhythmias using processing of multimodal signals including photoplethysmography, arterial blood pressure, Lead II and augmented right arm electrocardiogram (ECG). A hierarchical approach was used to process the signals, along with a custom signal processing technique for each arrhythmia type. Sleep disorders are a prevalent health issue, currently costly and inconvenient to diagnose, as they normally require an overnight hospital stay by the patient. In the second application of this project, we designed automated signal processing algorithms for the diagnosis of sleep apnoea, with a main focus on ECG signal processing. We estimated the ECG-derived respiratory (EDR) signal using different methods: QRS-complex area, principal component analysis (PCA) and kernel PCA. We proposed two algorithms (segmented PCA and approximated PCA) for EDR estimation to enable applying the PCA method to overnight recordings and to rectify the computational and memory requirements. We compared the EDR information against the chest respiratory effort signals. The performance was evaluated using three machine learning algorithms, linear discriminant analysis (LDA), extreme learning machine (ELM) and support vector machine (SVM), on two databases: the MIT PhysioNet database and the St. Vincent’s database. The results showed that the QRS-area method for EDR estimation combined with the LDA classifier performed best, and that the EDR signals contain respiratory information useful for discriminating sleep apnoea. As a final step, heart rate variability (HRV) and cardiopulmonary coupling (CPC) features were extracted, combined with the EDR features, and temporal optimisation techniques were applied. The cross-validation results of the minute-by-minute apnoea classification achieved an accuracy of 89%, a sensitivity of 90%, a specificity of 88%, and an AUC of 0.95, which is comparable to the best results reported in the literature.
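    As an illustration of the QRS-area idea behind EDR estimation (on synthetic data, with an assumed sampling rate and window size, rather than the thesis's recordings), the sketch below detects R peaks and integrates a short window around each one; the resulting beat-by-beat series tracks the respiratory modulation.

```python
# Sketch of the QRS-area approach to an ECG-derived respiration (EDR)
# signal: the area under each QRS complex is modulated by breathing.
# The toy signal, sampling rate, and window sizes are assumptions.
import numpy as np
from scipy.signal import find_peaks

fs = 250                                   # sampling rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)
resp = 0.3 * np.sin(2 * np.pi * 0.25 * t)  # 15 breaths/min modulation
ecg = np.zeros_like(t)
beat_idx = np.arange(0, len(t), int(0.8 * fs))   # ~75 bpm
ecg[beat_idx] = 1.0 + resp[beat_idx]       # toy amplitude-modulated R peaks

# Detect R peaks, then sum a short window around each as the "QRS area".
peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * fs))
half = int(0.05 * fs)                      # 50 ms half-window around QRS
edr = np.array([np.abs(ecg[p - half:p + half]).sum()
                for p in peaks if half <= p < len(ecg) - half])

# 'edr', sampled once per beat, now tracks the respiratory modulation
# and could feed an LDA/ELM/SVM apnoea classifier as described above.
```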

    Interactive Exploration of Temporal Event Sequences

    Life can often be described as a series of events. These events contain rich information that, when put together, can reveal history, expose facts, or lead to discoveries. Therefore, many leading organizations are increasingly collecting databases of event sequences: Electronic Medical Records (EMRs), transportation incident logs, student progress reports, web logs, sports logs, etc. Heavy investments were made in data collection and storage, but difficulties still arise when it comes to making use of the collected data. Analyzing millions of event sequences is a non-trivial task that is gaining more attention and requires better support due to its complex nature. Therefore, I aimed to use information visualization techniques to support exploratory data analysis---an approach to analyzing data to formulate hypotheses worth testing---for event sequences. By working with the domain experts who were analyzing event sequences, I identified two important scenarios that guided my dissertation. First, I explored how to provide an overview of multiple event sequences. Lengthy reports often have an executive summary to provide an overview of the report; unfortunately, no equivalent existed for event sequences. Therefore, I designed LifeFlow, a compact overview visualization that summarizes multiple event sequences, along with interaction techniques that support users' exploration. Second, I examined how to support users in querying for event sequences when they are uncertain about what they are looking for. To support this task, I developed similarity measures (the M&M measure 1-2) and user interfaces (Similan 1-2) for querying event sequences based on similarity, allowing users to search for event sequences that are similar to the query. After that, I ran a controlled experiment comparing exact-match and similarity search interfaces, and learned the advantages and disadvantages of both. These lessons inspired me to develop Flexible Temporal Search (FTS), which combines the benefits of both interfaces: FTS gives confident and countable results, and also ranks results by similarity. I continued to work with domain experts as partners, involving them in the iterative design and constantly using their feedback to guide my research directions. As the research progressed, several short-term user studies were conducted to evaluate particular features of the user interfaces, with both quantitative and qualitative results reported. To address the limitations of short-term evaluations, I included several multi-dimensional in-depth long-term case studies with domain experts in various fields to evaluate deeper benefits, validate the generalizability of the ideas, and demonstrate the practicability of this research in non-laboratory environments. The experience from these long-term studies was distilled into a set of design guidelines for temporal event sequence exploration. My contributions from this research are LifeFlow, a visualization that compactly displays summaries of multiple event sequences, along with interaction techniques for users' explorations; similarity measures (the M&M measure 1-2) and similarity search interfaces (Similan 1-2) for querying event sequences; Flexible Temporal Search (FTS), a hybrid query approach that combines the benefits of exact match and similarity search; and case study evaluations that result in a process model and a set of design guidelines for temporal event sequence exploration. Finally, this research has revealed new directions for exploring event sequences.
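    To give a flavor of similarity-based querying over event sequences, the sketch below ranks records against a query using a plain edit distance on event types; this is a generic stand-in for illustration only, not the M&M measure defined in the dissertation, and the query and records are invented.

```python
# Illustrative event-sequence similarity via edit distance on event
# types; a generic stand-in, not the M&M measure itself.
def edit_distance(a: list[str], b: list[str]) -> int:
    """Classic Levenshtein distance between two event-type sequences."""
    dp = list(range(len(b) + 1))
    for i, ea in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, eb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ea != eb))  # substitution
    return dp[-1]

# Rank records by similarity to a query sequence, most similar first.
query = ["admit", "icu", "discharge"]
records = {"p1": ["admit", "icu", "icu", "discharge"],
           "p2": ["admit", "discharge"],
           "p3": ["er", "admit", "icu", "death"]}
ranked = sorted(records, key=lambda k: edit_distance(query, records[k]))
print(ranked)  # ['p1', 'p2', 'p3']
```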

    Thirty years of artificial intelligence in medicine (AIME) conferences: A review of research themes

    Over the past 30 years, the international conference on Artificial Intelligence in MEdicine (AIME) has been organized at different venues across Europe every two years, establishing a forum for scientific exchange and creating an active research community. The Artificial Intelligence in Medicine journal has published theme issues with extended versions of selected AIME papers since 1998.

    Efficient Process Data Warehousing

    This dissertation presents a data processing architecture for efficient data warehousing from historical data sources. The present work has three primary contributions. The first is the development of a generalized process data warehousing (PDW) architecture that includes multilayer data processing steps to transform raw data streams into useful information that facilitates data-driven decision making. The second is exploring the applicability of the proposed architecture to the case of sparse process data. We have tested the proposed approach in a medical monitoring system, which takes physiological data and predicts the clinical setting in which the data is most likely to be seen. We have performed a set of experiments with real clinical data (from Children’s Hospital of Pittsburgh) that demonstrate the high utility of the present approach. The third is exploring the applicability of the proposed PDW architecture to the case of redundant process data. We have designed and developed a conflict-aware data fusion strategy for the efficient aggregation of historical data. We have conducted a simulation-based study of the tradeoffs between the data fusion solutions and data accuracy, and have also evaluated the solutions on a large-scale integrated framework (Tycho data) that includes historical data from heterogeneous sources in different subject areas. Finally, we propose and evaluate a state sequence recovery (SSR) framework, which integrates the work from the two previous studies on sparse and redundant data. Our experimental results are based on several algorithms that have been developed and tested in different simulation setups under both normal and exponential data distributions.
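    As a toy illustration of conflict-aware fusion of redundant records (not the dissertation's strategy; the source names, reliability weights, and tie-breaking rule are invented for the example), the sketch below resolves disagreements by reliability-weighted voting, with recency as a tie-breaker.

```python
# Sketch of a conflict-aware fusion rule for redundant historical
# records: when sources disagree on a value, resolve by total source
# reliability, breaking ties by recency. Weights are hypothetical.
from collections import defaultdict

def fuse(records, reliability):
    """records: list of (source, timestamp, value) for one attribute."""
    score = defaultdict(float)
    latest = {}
    for src, ts, val in records:
        score[val] += reliability.get(src, 0.5)   # weighted vote
        latest[val] = max(latest.get(val, ts), ts)
    # Highest total reliability wins; the more recent value breaks ties.
    return max(score, key=lambda v: (score[v], latest[v]))

reports = [("lab_a", 1, "measles"), ("lab_b", 2, "rubella"),
           ("registry", 3, "measles")]
weights = {"lab_a": 0.9, "lab_b": 0.6, "registry": 0.8}
print(fuse(reports, weights))  # 'measles' (total weight 1.7 vs 0.6)
```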