
    False alarm reduction in critical care

    High false alarm rates in the ICU decrease quality of care by slowing staff response times while increasing patient delirium through noise pollution. The 2015 PhysioNet/Computing in Cardiology Challenge provides a set of 1250 multi-parameter ICU data segments associated with critical arrhythmia alarms, and challenges the general research community to address the issue of false alarm suppression using all available signals. Each data segment was 5 minutes long (for real-time analysis), ending at the time of the alarm. For retrospective analysis, we provided a further 30 seconds of data after the alarm was triggered. A total of 750 data segments were made available for training and 500 were held back for testing. Each alarm was reviewed by expert annotators, at least two of whom agreed that the alarm was either true or false. Challenge participants were invited to submit a complete, working algorithm to distinguish true from false alarms, and received a score based on their program's performance on the hidden test set. This score was based on the percentage of alarms classified correctly, with a penalty that weights the suppression of true alarms five times more heavily than the acceptance of false alarms. We provided three example entries based on well-known, open-source signal processing algorithms, to serve as a basis for comparison and as a starting point for participants to develop their own code. A total of 38 teams submitted 215 entries in this year's Challenge. This editorial reviews the background issues for this challenge, the design of the challenge itself, the key achievements, and the follow-up research generated as a result of the Challenge, published in the concurrent special issue of Physiological Measurement. Additionally, we make some recommendations for future changes in the field of patient monitoring as a result of the Challenge.
    Funding: National Institutes of Health (U.S.) (Grant R01-GM104987); National Institute of General Medical Sciences (U.S.) (Grant U01-EB-008577); National Institutes of Health (U.S.) (Grant R01-EB-001659)
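
    The scoring rule described above can be made concrete with a small sketch. A commonly cited form of the 2015 Challenge score counts correct decisions in the numerator and penalises each suppressed true alarm (false negative) five times more heavily than an accepted false alarm (false positive); the exact constant and normalisation used for official scoring may differ, so treat this as illustrative.

        def challenge_score(tp, fp, fn, tn):
            """Illustrative score: correct decisions count positively, while each
            suppressed true alarm (false negative) is penalised five times more
            heavily than an accepted false alarm (false positive)."""
            return 100.0 * (tp + tn) / (tp + tn + fp + 5 * fn)

        # Example: 80 true alarms kept, 2 true alarms suppressed,
        # 60 false alarms suppressed, 15 false alarms accepted.
        print(challenge_score(tp=80, fp=15, fn=2, tn=60))  # ~84.8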

    Multimodal Signal Processing for Diagnosis of Cardiorespiratory Disorders

    This thesis addresses the use of multimodal signal processing to develop algorithms for the automated processing of two cardiorespiratory disorders. The aim of the first application of this thesis was to reduce the false alarm rate in an intensive care unit. The goal was to detect five critical arrhythmias using processing of multimodal signals including photoplethysmography, arterial blood pressure, and Lead II and augmented right arm electrocardiogram (ECG). A hierarchical approach was used to process the signals, along with a custom signal processing technique for each arrhythmia type. Sleep disorders are a prevalent health issue that is currently costly and inconvenient to diagnose, as diagnosis normally requires an overnight hospital stay by the patient. In the second application of this project, we designed automated signal processing algorithms for the diagnosis of sleep apnoea, with a main focus on ECG signal processing. We estimated the ECG-derived respiratory (EDR) signal using different methods: QRS-complex area, principal component analysis (PCA) and kernel PCA. We proposed two algorithms (segmented PCA and approximated PCA) for EDR estimation to enable applying the PCA method to overnight recordings and to address the computational and memory requirements. We compared the EDR information against the chest respiratory effort signals. The performance was evaluated using three machine learning classifiers, linear discriminant analysis (LDA), extreme learning machine (ELM) and support vector machine (SVM), on two databases: the MIT PhysioNet database and the St. Vincent's database. The results showed that the QRS-area method for EDR estimation combined with the LDA classifier performed best, and that the EDR signals contain respiratory information useful for discriminating sleep apnoea. As a final step, heart rate variability (HRV) and cardiopulmonary coupling (CPC) features were extracted, combined with the EDR features, and temporal optimisation techniques were applied. The cross-validation results of the minute-by-minute apnoea classification achieved an accuracy of 89%, a sensitivity of 90%, a specificity of 88%, and an AUC of 0.95, which is comparable to the best results reported in the literature.
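
    As a hedged illustration of the QRS-area approach to EDR estimation mentioned above, the sketch below derives a beat-by-beat respiratory surrogate from the area of each QRS complex; the function and parameter names, window length, and resampling step are assumptions rather than the thesis's exact settings.

        import numpy as np

        def edr_qrs_area(ecg, r_peaks, fs, half_window=0.05):
            """Beat-by-beat EDR series from QRS area.
            ecg: baseline-corrected 1-D ECG; r_peaks: sample indices of R peaks;
            fs: sampling frequency in Hz. Returns (beat_times_s, qrs_areas)."""
            w = int(half_window * fs)                     # samples on either side of the R peak
            r_peaks = np.asarray(r_peaks)
            areas = []
            for r in r_peaks:
                seg = ecg[max(r - w, 0):r + w]
                areas.append(np.trapz(np.abs(seg)) / fs)  # area of the rectified QRS complex
            return r_peaks / fs, np.asarray(areas)

        # The unevenly sampled (beat_times, areas) series is then interpolated onto a
        # regular grid (e.g. 4 Hz) before spectral analysis or per-minute classification.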

    Assessing ECG signal quality indices to discriminate ECGs with artefacts from pathologically different arrhythmic ECGs

    False and non-actionable alarms in critical care can be reduced by developing algorithms which assess whether an arrhythmia alarm from a bedside monitor is true. Computational approaches that automatically identify artefacts in ECG signals are an important branch of physiological signal processing that tries to address this issue. Signal quality indices (SQIs), derived from differences between artefacts that occur in ECG signals and normal QRS morphology, have the potential to classify pathologically different arrhythmic ECG segments as artefacts. Using ECG signals from the PhysioNet/Computing in Cardiology Challenge 2015 training set, we studied ECG SQIs previously reported in the scientific literature for their ability to differentiate ECG segments with artefacts from arrhythmic ECG segments. We found that the ability of SQIs to discriminate between ECG artefacts and arrhythmic ECG varies with arrhythmia type, since the pathology of each arrhythmic ECG waveform is different. Therefore, to reduce the risk of SQIs classifying arrhythmic events as noise, it is important to validate and test SQIs with databases that include arrhythmias. Arrhythmia-specific SQIs may also minimize the risk of misclassifying arrhythmic events as noise.
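
    One widely used index in this family quantifies the agreement between two independent QRS detectors over a segment (often called bSQI). The sketch below is a minimal, assumed formulation for illustration, and is not necessarily one of the specific SQIs evaluated in the paper.

        import numpy as np

        def bsqi(beats_a, beats_b, fs, tol=0.15):
            """Fraction of beats on which two independent QRS detectors agree
            (matched within `tol` seconds). Values near 1 suggest a clean segment;
            low values suggest noise -- or, as discussed above, an arrhythmia that
            defeats one of the detectors."""
            a = np.asarray(beats_a) / fs
            b = np.asarray(beats_b) / fs
            if len(a) == 0 and len(b) == 0:
                return 1.0
            if len(a) == 0 or len(b) == 0:
                return 0.0
            matched = sum(np.min(np.abs(b - t)) <= tol for t in a)
            return matched / max(len(a), len(b))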

    The Application of Computer Techniques to ECG Interpretation

    This book presents some of the latest available information on automated ECG analysis, written by many of the leading researchers in the field. It contains a historical introduction, an outline of the latest international standards for signal processing and communications, and then an exciting variety of studies on electrophysiological modelling, ECG imaging, artificial intelligence applied to resting and ambulatory ECGs, body surface mapping, big data in ECG-based prediction, enhanced reliability of patient monitoring, and atrial abnormalities on the ECG. It provides an extremely valuable contribution to the field.

    Advanced analyses of physiological signals and their role in Neonatal Intensive Care

    Preterm infants admitted to the neonatal intensive care unit (NICU) face an array of life-threatening diseases requiring procedures such as resuscitation and invasive monitoring, and other risks related to exposure to the hospital environment, all of which may have lifelong implications. This thesis examined a range of applications for advanced signal analyses in the NICU, from identifying physiological patterns associated with neonatal outcomes to evaluating the impact of certain treatments on physiological variability. Firstly, the thesis examined the potential to identify infants at risk of developing intraventricular haemorrhage, which is often interrelated with factors leading to preterm birth, mechanical ventilation, hypoxia and prolonged apnoeas. The thesis then characterised the cardiovascular impact of caffeine therapy, which is often administered to prevent and treat apnoea of prematurity, finding greater pulse pressure variability and enhanced responsiveness of the autonomic nervous system. Cerebral autoregulation maintains cerebral blood flow despite fluctuations in arterial blood pressure and is an important consideration for preterm infants, who are especially vulnerable to brain injury. Using various time- and frequency-domain correlation techniques, the thesis found acute changes in the cerebral autoregulation of preterm infants following caffeine therapy. Nutrition in early life may also affect neurodevelopment and morbidity in later life; this thesis developed models for identifying malnutrition risk using anthropometry and near-infrared interactance features. This thesis has presented a range of ways in which advanced analyses, including time series analysis, feature selection and model development, can be applied to neonatal intensive care. There is a clear role for such analyses in the early detection of clinical outcomes, in characterising the effects of relevant treatments or pathologies, and in identifying infants at risk of later morbidity.
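
    As a hedged illustration of the time-domain correlation techniques mentioned above, the sketch below computes a moving-window Pearson correlation between mean arterial pressure and a cerebral oxygenation signal; the window length, step size and signal names are assumptions for illustration, not the thesis's exact settings.

        import numpy as np

        def moving_correlation(map_signal, cerebral_signal, fs, window_s=300, step_s=60):
            """Moving-window Pearson correlation between mean arterial pressure and a
            cerebral signal (e.g. NIRS-derived oxygenation). Values near zero are
            consistent with intact autoregulation; sustained positive values suggest
            pressure-passive cerebral blood flow."""
            win, step = int(window_s * fs), int(step_s * fs)
            out = []
            for start in range(0, len(map_signal) - win + 1, step):
                a = map_signal[start:start + win]
                c = cerebral_signal[start:start + win]
                out.append(np.corrcoef(a, c)[0, 1])
            return np.asarray(out)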

    Analysis of physiological signals using machine learning methods

    Technological advances in data collection enable scientists to suggest novel approaches, such as machine learning algorithms, to process and make sense of this information. However, during this process of collection, data loss and damage can occur for reasons such as faulty device sensors or miscommunication. In the context of time-series data such as multi-channel bio-signals, there is a possibility of losing a whole channel. In such cases, existing research suggests imputing the missing parts when the majority of the data is available. One way of understanding and classifying complex signals is by using deep neural networks. The hyper-parameters of such models have been optimised using the process of backpropagation. Over time, improvements have been suggested to enhance this algorithm; however, an essential drawback of backpropagation can be its sensitivity to noisy data. This thesis proposes two novel approaches to address the missing data challenge and the drawbacks of backpropagation. First, it suggests a gradient-free model to discover the optimal hyper-parameters of a deep neural network. The complexity of deep networks and the high-dimensional optimisation space present challenges in finding a suitable network structure and hyper-parameter configuration. This thesis proposes the use of a minimalist swarm optimiser, Dispersive Flies Optimisation (DFO), to enable the selected model to achieve better results than the traditional backpropagation algorithm under certain conditions, such as a limited number of training samples. The DFO algorithm offers a robust search process for finding and determining hyper-parameter configurations. Second, it imputes whole missing bio-signals within a multi-channel sample. This approach comprises two experiments, namely the two-signal and five-signal imputation models. The first experiment implements and evaluates the performance of a model mapping bio-signals from A to B and vice versa; conceptually, this is an extension of transfer learning using Cycle Generative Adversarial Networks (CycleGANs). The second experiment suggests a mechanism for imputing missing signals in instances where multiple data channels are available for each sample; the capability to map to a target signal through multiple source domains achieves a more accurate estimate for the target domain. The results of the experiments performed indicate that in certain circumstances, such as having a limited number of samples, finding the optimal hyper-parameters of a neural network using gradient-free algorithms outperforms traditional gradient-based algorithms, leading to more accurate classification results. In addition, Generative Adversarial Networks can be used to impute the missing data channels in multi-channel bio-signals, and the generated data can be used for further analysis and classification tasks.
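
    The DFO update itself is compact, which is part of its appeal as a minimalist optimiser. The sketch below follows the commonly described rule (each fly moves towards its best ring-topology neighbour with a stochastic pull towards the swarm best, and individual dimensions are occasionally re-initialised with a small disturbance probability); the parameter values and the exact variant used in the thesis are assumptions.

        import numpy as np

        def dfo_minimise(fitness, dim, bounds, n_flies=20, n_iter=100, delta=0.001, rng=None):
            """Minimal Dispersive Flies Optimisation sketch. `fitness` maps a vector of
            `dim` values within `bounds` (e.g. encoded hyper-parameters) to a scalar
            loss to be minimised."""
            rng = rng or np.random.default_rng()
            lo, hi = bounds
            flies = rng.uniform(lo, hi, size=(n_flies, dim))
            for _ in range(n_iter):
                scores = np.array([fitness(f) for f in flies])
                best = flies[scores.argmin()].copy()
                new_flies = np.empty_like(flies)
                for i in range(n_flies):
                    # best of the two ring-topology neighbours
                    l, r = (i - 1) % n_flies, (i + 1) % n_flies
                    nb = flies[l] if scores[l] < scores[r] else flies[r]
                    u = rng.uniform(size=dim)
                    new_flies[i] = nb + u * (best - flies[i])
                    # disturbance: occasionally re-initialise single dimensions
                    reset = rng.uniform(size=dim) < delta
                    new_flies[i][reset] = rng.uniform(lo, hi, size=reset.sum())
                flies = np.clip(new_flies, lo, hi)
            scores = np.array([fitness(f) for f in flies])
            return flies[scores.argmin()], scores.min()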

    Deep Neuroevolution: Training Deep Neural Networks for False Alarm Detection in Intensive Care Units

    We present a neuroevolution-based approach for training neural networks with genetic algorithms, applied to the problem of detecting false alarms in Intensive Care Units (ICUs) based on physiological data. Typically, optimisation in neural networks is performed via backpropagation (BP) with stochastic gradient-based learning. Nevertheless, recent works have shown promising results using gradient-free, population-based genetic algorithms, suggesting that in certain cases gradient-based optimisation is not the best approach to follow. In this paper, we empirically show that utilising evolutionary and swarm intelligence algorithms can improve the performance of deep neural networks in problems such as the detection of false alarms in the ICU. In more detail, we present results that improve the state-of-the-art accuracy on the corresponding PhysioNet challenge, while reducing the number of suppressed true alarms, by deploying and adapting Dispersive Flies Optimisation (DFO).
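
    A hedged sketch of how such a gradient-free scheme can be wired to a network: the model's weights are flattened into a single vector, and the fitness reflects the Challenge's asymmetric cost (a suppressed true alarm counted five times as heavily as an accepted false alarm). The helper below uses standard PyTorch utilities; the model, validation split, cost weighting and decision threshold are illustrative assumptions rather than the paper's exact setup.

        import torch
        from torch.nn.utils import parameters_to_vector, vector_to_parameters

        def make_fitness(model, features, labels):
            """Wrap a fixed network as a fitness function over a flat weight vector,
            so a population-based optimiser (e.g. the dfo_minimise sketch above) can
            evolve the weights. `features`/`labels` are a hypothetical validation split."""
            def fitness(weights):
                vector_to_parameters(torch.as_tensor(weights, dtype=torch.float32),
                                     model.parameters())
                with torch.no_grad():
                    pred = (model(features).squeeze(-1) > 0).long()  # logits > 0 -> "true alarm"
                fn = ((pred == 0) & (labels == 1)).sum().item()      # suppressed true alarms
                fp = ((pred == 1) & (labels == 0)).sum().item()      # accepted false alarms
                return 5 * fn + fp                                   # lower is better
            return fitness

        # dim = parameters_to_vector(model.parameters()).numel()
        # best_w, cost = dfo_minimise(make_fitness(model, X_val, y_val), dim, (-1.0, 1.0))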

    Towards better heartbeat segmentation with deep learning classification

    Confidence in medical equipment is intimately related to false alarms: the higher the number of false events, the less trustworthy the equipment. In this sense, reducing (or suppressing) false positive alarms is highly desirable. In this work, we propose a feasible, real-time approach that works as a validation method for a third-party heartbeat segmentation algorithm. The approach is based on convolutional neural networks (CNNs), which may be embedded in dedicated hardware. Our proposal aims to detect the pattern of a single heartbeat and classify candidates into two classes: a heartbeat and not a heartbeat. For this, a seven-layer convolutional network is employed for both data representation and classification. We evaluate our approach on the raw heartbeat signal from two well-established databases in the literature: the first is a conventional on-the-person database called MIT-BIH, and the second is a less controlled, off-the-person database known as CYBHi. To evaluate the feasibility and performance of the proposed approach, we use as a baseline the Pan-Tompkins algorithm, which is a well-known method in the literature and is still used in industry. We compare the baseline against the proposed approach: a CNN model validating the heartbeats detected by a third-party algorithm. In this work, the third-party algorithm is the same as the baseline for comparison purposes. The results support the feasibility of our approach, showing that our method can enhance the positive predictivity of the Pan-Tompkins algorithm from 97.84%/90.28% to 100.00%/96.77% while slightly decreasing the sensitivity from 95.79%/96.95% to 92.98%/95.71% on the MIT-BIH/CYBHi databases.
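
    A hedged sketch of a beat-validation network of this kind is shown below; the layer sizes, window length and classifier head are assumptions for illustration, not the seven-layer architecture reported in the paper.

        import torch
        import torch.nn as nn

        class BeatValidator(nn.Module):
            """Small 1-D CNN that decides whether a fixed-length window centred on a
            candidate R peak contains a genuine heartbeat."""
            def __init__(self, window_len=200):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
                    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
                    nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),
                )
                self.classifier = nn.Linear(64, 2)   # classes: heartbeat / not a heartbeat

            def forward(self, x):                    # x: (batch, 1, window_len)
                return self.classifier(self.features(x).flatten(1))

        # Usage: windows around candidate peaks proposed by a third-party detector
        # (e.g. Pan-Tompkins) are passed through the network; detections classified as
        # "not a heartbeat" are discarded, trading a little sensitivity for precision.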