
    Heart Rate Variability: A possible machine learning biomarker for mechanical circulatory device complications and heart recovery

    Get PDF
    Cardiovascular disease continues to be the number one cause of death in the United States, with heart failure patients expected to increase to >8 million by 2030. Mechanical circulatory support (MCS) devices are now better able to manage acute and chronic heart failure refractory to medical therapy, either as a bridge to transplant or as destination therapy. Despite significant advances in MCS device design and surgical implantation technique, it remains difficult to predict response to device therapy. Heart rate variability (HRV), which measures the variation in time interval between adjacent heartbeats, is an objective device diagnostic regularly recorded by various MCS devices that has been shown to have significant prognostic value for both sudden cardiac death and all-cause mortality in congestive heart failure (CHF) patients. Limited studies have examined HRV indices as promising risk factors and predictors of complication and recovery from left ventricular assist device therapy in end-stage CHF patients. If paired with new advances in machine learning utilization in medicine, HRV represents a potential dynamic biomarker for monitoring and predicting patient status as more patients enter the mechanotrope era of MCS devices for destination therapy.
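HRV is computed from the series of intervals between adjacent heartbeats (RR intervals). As a minimal sketch, two standard time-domain indices, SDNN (standard deviation of RR intervals) and RMSSD (root mean square of successive differences), can be computed as follows; the example RR values are illustrative, not taken from any study above:

```python
import math

def sdnn(rr_ms):
    """Standard deviation of RR intervals (ms), sample variance."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / (len(rr_ms) - 1))

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences (ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [812, 790, 805, 820, 798, 810]  # hypothetical RR intervals in ms
print(round(sdnn(rr), 2), round(rmssd(rr), 2))
```

Lower values of either index generally indicate reduced beat-to-beat variability, which is the prognostic signal the abstract refers to.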

    Deep Learning in Cardiology

    Full text link
    The medical field is generating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or at creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction, and intervention. Deep learning is a representation learning method that consists of layers that transform the data non-linearly, thereby revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal, and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology that also apply to medicine in general, while proposing certain directions as the most viable for clinical use. Comment: 27 pages, 2 figures, 10 tables
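The core idea of "layers that transform the data non-linearly" can be sketched in a few lines. Below is a minimal two-layer forward pass in NumPy; the layer sizes and the ECG interpretation of the input are assumptions for illustration, not from the review:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Elementwise non-linearity; without it, stacked layers
    # collapse into a single linear map.
    return np.maximum(0.0, x)

# Random weights stand in for learned parameters.
W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)
W2, b2 = rng.standard_normal((3, 8)), np.zeros(3)

def forward(x):
    h = relu(W1 @ x + b1)     # first-level features
    return relu(W2 @ h + b2)  # higher-level representation

x = rng.standard_normal(4)    # e.g. four raw signal-derived measurements
print(forward(x).shape)       # (3,)
```

Each layer is an affine map followed by a non-linearity; stacking them is what lets the model learn the hierarchical structure the abstract describes.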

    Deep Learning Strategies for Pool Boiling Heat Flux Prediction Using Image Sequences

    Get PDF
    The understanding of bubble dynamics during boiling is critical to the design of advanced heater surfaces to improve boiling heat transfer. The stochastic bubble nucleation, growth, and coalescence processes have made it challenging to obtain mechanistic models that can predict boiling heat flux based on the bubble dynamics. Traditional boiling image analysis relies on the extraction of the dominant physical quantities from the images and is thus limited to the existing knowledge of these quantities. Recently, machine-learning-aided analysis has shown success in boiling crisis detection, heat flux prediction, real-time image analysis, etc. However, most existing studies focus on static boiling images and thus fail to capture the dynamic behaviors of the bubbles. To address this issue, in the present work, a convolutional long short-term memory (ConvLSTM) model is developed to enable quantitative prediction of heat flux based on sequences of boiling images, where the convolutional layers are used to extract the features of the boiling images and the LSTM layers to identify the temporal features of the sequences. A convolutional neural network (CNN) model that is based on the classification of static images is also developed as a reference. Both models are trained with images of HFE-7100 boiling on silicon micropillar arrays at different steady-state heat fluxes. The results show that both the CNN and ConvLSTM models lead to accurate predictions of heat flux based on the boiling images. In particular, the ConvLSTM model is shown to yield higher accuracy for heat flux predictions on completely unseen data, indicating a higher level of generality. Another focus of the present work is the forecasting capability of data-driven models using boiling images under transient heat loads. A CNN regression model is coupled with a one-dimensional LSTM model to enable a quantitative forecast of heat flux during boiling. The model is trained using image sequences of water boiling on planar copper surfaces with power ramp-up and has demonstrated a reliable forecasting capability.
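A key preprocessing step for any sequence model of this kind is turning a stream of frames into overlapping windows paired with a target. The sketch below shows one plausible way to do this in NumPy; the array shapes, window length, and last-frame target alignment are assumptions, not details taken from the paper:

```python
import numpy as np

def make_sequences(frames, fluxes, seq_len):
    """Group consecutive frames into overlapping windows so a sequence
    model sees bubble dynamics rather than single snapshots.

    frames: (N, H, W) array of grayscale boiling images
    fluxes: (N,) heat-flux value recorded with each frame
    Returns X of shape (N - seq_len + 1, seq_len, H, W) and y, the
    heat flux aligned with the last frame of each window.
    """
    n = len(frames) - seq_len + 1
    X = np.stack([frames[i:i + seq_len] for i in range(n)])
    y = fluxes[seq_len - 1:]
    return X, y

frames = np.zeros((10, 64, 64))      # placeholder image stack
fluxes = np.arange(10.0)             # placeholder heat-flux readings
X, y = make_sequences(frames, fluxes, seq_len=4)
print(X.shape, y.shape)              # (7, 4, 64, 64) (7,)
```

The resulting `(batch, time, height, width)` tensors are the natural input layout for a ConvLSTM, which convolves over the spatial axes while recurring over the time axis.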

    Wearable Technologies and AI at the Far Edge for Chronic Heart Failure Prevention and Management: A Systematic Review and Prospects

    Get PDF
    Smart wearable devices enable personalized at-home healthcare by unobtrusively collecting patient health data and facilitating the development of intelligent platforms to support patient care and management. The accurate analysis of data obtained from wearable devices is crucial for interpreting and contextualizing health data and facilitating the reliable diagnosis and management of critical and chronic diseases. The combination of edge computing and artificial intelligence has provided real-time, time-critical, and privacy-preserving data analysis solutions. However, based on the envisioned service, evaluating the additive value of edge intelligence to the overall architecture is essential before implementation. This article comprehensively analyzes the current state of the art of smart health infrastructures implementing wearable and AI technologies at the far edge to support patients with chronic heart failure (CHF). In particular, we highlight the contribution of edge intelligence in supporting the integration of wearable devices into IoT-aware technology infrastructures that provide services for patient diagnosis and management. We also offer an in-depth analysis of open challenges and provide potential solutions to facilitate the integration of wearable devices with edge AI solutions to provide innovative technological infrastructures and interactive services for patients and doctors.

    Automated detection of atrial fibrillation using long short-term memory network with RR interval signals

    Get PDF
    Atrial Fibrillation (AF), either permanent or intermittent (paroxysmal AF), increases the risk of cardioembolic stroke. Accurate diagnosis of AF is obligatory for initiation of effective treatment to prevent stroke. Long-term cardiac monitoring improves the likelihood of diagnosing paroxysmal AF. We used a deep learning system to detect AF beats in Heart Rate (HR) signals. The data was partitioned with a sliding window of 100 beats. The resulting signal blocks were directly fed into a deep Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM). The system was validated and tested with data from the MIT-BIH Atrial Fibrillation Database. It achieved 98.51% accuracy with 10-fold cross-validation (20 subjects) and 99.77% with blindfold validation (3 subjects). The proposed system structure is straightforward, because there is no need for information reduction through feature extraction. All the complexity resides in the deep learning system, which gets the entire information from a signal block. This setup leads to robust performance on unknown data, as measured with the blindfold validation. The proposed Computer-Aided Diagnosis (CAD) system can be used for long-term monitoring of the human heart. To the best of our knowledge, the proposed system is the first to incorporate deep learning for AF beat detection.
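The 100-beat sliding-window partitioning described above can be sketched directly; the non-overlapping stride is an assumption here, since the abstract does not state the step size:

```python
def beat_windows(rr_intervals, size=100, step=100):
    """Partition an RR-interval series into fixed-length blocks that
    can be fed to a sequence model. The 100-beat window follows the
    description above; the stride is an illustrative assumption."""
    return [rr_intervals[i:i + size]
            for i in range(0, len(rr_intervals) - size + 1, step)]

rr = list(range(350))               # stand-in for 350 measured RR intervals
blocks = beat_windows(rr)
print(len(blocks), len(blocks[0]))  # 3 100
```

Each block is then passed to the LSTM whole, which is what lets the network consume the raw signal without a separate feature-extraction stage.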

    Utilizing Temporal Information in The EHR for Developing a Novel Continuous Prediction Model

    Get PDF
    Type 2 diabetes mellitus (T2DM) is a nationally prevalent chronic condition that imposes substantial direct and indirect healthcare costs. T2DM, however, is a preventable chronic condition according to previous clinical research. Many prediction models were based on the risk factors identified by clinical trials. One of the major tasks of T2DM prediction models is to estimate the risk that warrants further testing by HbA1c or fasting plasma glucose to determine whether the patient has T2DM, because nation-wide screening is not cost-effective. Those models had substantial limitations related to data quality, such as missing values. In this dissertation, I tested the conventional models, which were based on the most widely used risk factors, to predict the possibility of developing T2DM. The AUC was an average of 0.5, which implies the conventional model cannot be used to screen for T2DM risk. Based on this result, I further implemented three types of temporal representations, including non-temporal representation, interval-temporal representation, and continuous-temporal representation, for building the T2DM prediction model. According to the results, continuous-temporal representation had the best performance. Continuous-temporal representation was based on deep learning methods. The result implied that the deep learning method could overcome the data quality issue and achieve better performance. This dissertation also contributes a continuous risk output model based on the seq2seq model. This model can generate a monotonically increasing function for a given patient to predict the future probability of developing T2DM. The model is workable but still has many limitations to overcome. Finally, this dissertation demonstrates some risk factors that are underestimated and warrant further research to revise the current T2DM screening guideline. The results are still preliminary. I need to collaborate with epidemiologists and experts in other fields to verify the findings. In the future, the methods for building a T2DM prediction model can also be used for other prediction models of chronic conditions.