
    Prediction of ventricular fibrillation using support vector machine

    Sudden cardiac death (SCD) remains one of the leading causes of mortality. Early prediction of ventricular fibrillation (VF), and hence SCD, can improve a patient's chance of survival by enabling earlier treatment. Heart rate variability (HRV) analysis has been widely adopted by researchers for VF prediction. Different combinations of features from multiple domains have been explored, but spectral analysis was often performed without the required preprocessing, or on segments shorter than the standards of the European and North American Task Force on HRV. Our study therefore aimed to develop a robust prediction algorithm using only time-domain and nonlinear features while maintaining a prediction resolution of one minute. Nine time-domain features and seven nonlinear features were extracted and classified using support vector machines (SVMs) with different kernels. A high accuracy of 94.7% and a sensitivity of 100% were achieved using only two HRV features and a Gaussian-kernel SVM, without complicated preprocessing of the HRV signals. This algorithm, with its high accuracy and low computational burden, is well suited to embedded systems and real-time applications, which could alert individuals sooner and hence improve patients' chances of survival.
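The pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the abstract does not name the two winning features, so SDNN and RMSSD (two standard time-domain HRV features) are used here as stand-ins, and the RR-interval data are synthetic.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def time_domain_hrv(rr_ms):
    """Two standard time-domain HRV features from an RR-interval series (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)                       # SDNN: overall variability
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # RMSSD: beat-to-beat variability
    return np.array([sdnn, rmssd])

# Hypothetical training data: one feature vector per one-minute RR series,
# labeled 1 if the segment precedes a VF episode, else 0. Reduced HRV is
# simulated with a smaller noise amplitude.
rng = np.random.default_rng(0)
X = np.vstack([time_domain_hrv(800 + rng.normal(0, s, 60))
               for s in [50] * 20 + [15] * 20])
y = np.array([0] * 20 + [1] * 20)

# Gaussian (RBF) kernel SVM, as in the abstract; scaling is standard practice.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
```

Keeping the feature set this small is what makes the approach attractive for embedded, real-time use: each one-minute window costs only a standard deviation and a difference pass before the SVM evaluation.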

    Automated prediction of sudden cardiac death using statistically extracted features from electrocardiogram signals

    Sudden cardiac death (SCD) remains a severe problem despite significant advances in the use of information and communication technology (ICT) in the health industry. Predicting an unexpected SCD is highly important, as it may increase the survival rate. In this work, we developed an automated method for predicting SCD using statistical measures. We extracted the intrinsic attributes of electrocardiogram (ECG) signals using the Hilbert-Huang and wavelet transforms, then used machine learning (ML) classifiers on these features to automatically distinguish normal subjects from those at risk of SCD. Support vector machine (SVM), decision tree (DT), naive Bayes (NB), k-nearest neighbors (KNN), discriminant analysis (Disc.), and an ensemble of classifiers (Ens.) were used. The efficiency and practicality of the proposed methods were evaluated using a standard database and measured ECG data obtained from 18 ECG records of SCD cases and 18 ECG records of normal cases. The automated scheme's feature set can predict SCD very fast, half an hour before its occurrence, with average accuracies of 100.0% (KNN), 99.9% (SVM), 98.5% (NB), 99.4% (DT), 99.5% (Disc.), and 100.0% (Ens.).
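The general shape of this approach can be sketched as below. This is an illustrative simplification, not the paper's method: it uses only the Hilbert envelope (via `scipy.signal.hilbert`) rather than the full Hilbert-Huang and wavelet decompositions, the statistical measures (mean, standard deviation, skewness, kurtosis) are assumed, and the ECG segments are synthetic.

```python
import numpy as np
from scipy.signal import hilbert
from scipy.stats import skew, kurtosis
from sklearn.neighbors import KNeighborsClassifier

def envelope_stats(ecg):
    """Statistical measures of the Hilbert envelope of an ECG segment."""
    env = np.abs(hilbert(ecg))  # instantaneous amplitude of the analytic signal
    return np.array([env.mean(), env.std(), skew(env), kurtosis(env)])

# Hypothetical segments: "normal" low-noise vs "at-risk" high-noise signals.
rng = np.random.default_rng(0)
t = np.linspace(0, 2, 500)
normal = [np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.standard_normal(500)
          for _ in range(15)]
risk = [np.sin(2 * np.pi * 1.2 * t) + 0.50 * rng.standard_normal(500)
        for _ in range(15)]

X = np.vstack([envelope_stats(s) for s in normal + risk])
y = np.array([0] * 15 + [1] * 15)

# KNN, one of the six classifiers compared in the paper.
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
```

In practice, the same feature-extraction function would be applied per classifier, so that SVM, DT, NB, discriminant analysis, and the ensemble can be compared on identical inputs.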

    Three-dimensional Phase Space Characteristics of Electrocardiogram Segments in Online and Early Prediction of Sudden Cardiac Death

    Introduction: Predicting sudden cardiac death (SCD) using electrocardiogram (ECG) signals has attracted researchers' attention in recent years. One of the most common SCD identifiers is ventricular fibrillation (VF). The main objective of the present study was to provide an online SCD prediction system using innovative ECG measures 10 minutes before VF onset. Additionally, it aimed to comparatively evaluate the different segments of the ECG signal (which depend on ventricular function) to determine the most efficient component for predicting SCD. The ECG segments were QS, RT, QR, QT, and ST. Material and Methods: After defining the ECG characteristic points and segments, innovative measures were computed from the three-dimensional phase space of each ECG component. Tracking the signal dynamics while keeping the computational cost low makes the features suitable for both online and offline applications. Finally, prediction was performed using a support vector machine (SVM). Results: Using the QR measures, SCD was detected ten minutes before its occurrence with an accuracy, specificity, and sensitivity of 100%. Conclusion: The superiority of the proposed system over state-of-the-art SCD prediction schemes was demonstrated in terms of both classification performance and computational speed.
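A three-dimensional phase space for a 1-D signal is commonly built by time-delay embedding. The sketch below shows that construction plus two simple, assumed trajectory descriptors (bounding-box volume and mean step length); the abstract does not disclose the paper's actual measures, so these are placeholders for the idea of cheap geometric features.

```python
import numpy as np

def phase_space_3d(x, tau):
    """Three-dimensional time-delay embedding of a 1-D signal:
    points (x[i], x[i+tau], x[i+2*tau])."""
    x = np.asarray(x, dtype=float)
    n = len(x) - 2 * tau
    return np.column_stack([x[:n], x[tau:tau + n], x[2 * tau:2 * tau + n]])

def trajectory_measures(pts):
    """Illustrative low-cost descriptors of the embedded trajectory."""
    extent = pts.max(axis=0) - pts.min(axis=0)            # bounding box
    steps = np.linalg.norm(np.diff(pts, axis=0), axis=1)  # point-to-point moves
    return extent.prod(), steps.mean()

# Example: embedding a sine traces a closed loop in the 3-D phase space.
t = np.linspace(0, 4 * np.pi, 400)
pts = phase_space_3d(np.sin(t), tau=25)
volume, mean_step = trajectory_measures(pts)
```

Both descriptors are single passes over the trajectory, which is consistent with the paper's emphasis on low computational cost for online use; the resulting feature vectors would then be fed to the SVM.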

    Lossless and low-cost integer-based lifting wavelet transform

    Discrete wavelet transform (DWT) is a powerful tool for analyzing real-time signals, including aperiodic, irregular, noisy, and transient data, because of its capability to explore signals in both the frequency and time domains at different resolutions. For this reason, wavelets are used extensively in a wide range of image and signal processing applications. Despite this wide usage, implementations of the wavelet transform are usually lossy or computationally complex and require expensive hardware. However, in many applications, such as medical diagnosis, reversible data hiding, and critical satellite data, a lossless implementation of the wavelet transform is desirable. More hardware-friendly implementations are also important, given the transform's recent inclusion in signal processing modules in systems-on-chip (SoCs). To address this need, this research work provides a generalized implementation of the wavelet transform using an integer-based lifting method, producing a lossless and low-cost architecture while maintaining performance close to the original wavelets. To achieve a general implementation method for all orthogonal and biorthogonal wavelets, the Daubechies wavelet family was used first, since it is one of the most widely used families and is based on a systematic method for constructing compactly supported orthogonal wavelets. Though the first two phases of this work target Daubechies wavelets, they can be generalized to other wavelets as well. Subsequently, some techniques from the earlier works were adopted, and the critical issues for achieving a general lossless implementation were solved to propose a general lossless method. The research work presented here can be divided into several phases. In the first phase, low-cost architectures of the Daubechies-4 (D4) and Daubechies-6 (D6) wavelets were derived by applying integer-polynomial mapping.
A lifting architecture was used, which halves the cost compared to the conventional convolution-based approach. Applying integer-polynomial mapping (IPM) to the floating-point polynomial filter coefficients further decreases the complexity and reduces the loss in signal reconstruction. Resource sharing between lifting steps yields a further reduction in implementation cost and near-lossless data reconstruction. In the second phase, a completely lossless, error-free architecture was proposed for the Daubechies-8 (D8) wavelet. Several lifting variants were derived for the same wavelet, integer mapping was applied, and the best variant was determined in terms of performance using entropy and transform coding gain. A theory was then derived regarding the impact of the scaling steps on the transform coding gain (GT). The approach results in the lowest-cost lossless D8 architecture in the literature, to the best of our knowledge, and may be applied to other orthogonal wavelets, including biorthogonal ones, to achieve higher performance. In the final phase, a general algorithm was proposed to convert the original filter coefficients, expressed as a polyphase matrix, into a more efficient lifting structure. This is done using a modified factorization, so that the factorized polyphase matrix does not include the lossy scaling step of the conventional lifting method. This general technique was applied to several widely used orthogonal and biorthogonal wavelets, and its advantages were discussed. Since the discrete wavelet transform is used in a vast number of applications, the proposed algorithms can be utilized in those cases to achieve lossless, low-cost, and hardware-friendly architectures.
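The core idea of integer lifting, that rounding inside each lifting step can be undone exactly, is easiest to see on the simplest case. The sketch below is the classic integer Haar (S-transform), not any of the thesis's D4/D6/D8 architectures: a split into even/odd samples, an integer predict step, and an integer update step, with an exact inverse. The arithmetic right shift acts as floor division, so the round-trip is bit-exact.

```python
import numpy as np

def int_haar_forward(x):
    """Integer lifting Haar (S-transform) on an even-length integer signal:
    split -> predict -> update, entirely in integer arithmetic."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    d = odd - even        # predict: detail coefficients
    s = even + (d >> 1)   # update: approximation (>> 1 is floor division by 2)
    return s, d

def int_haar_inverse(s, d):
    """Exact inverse: undo the update, then the predict, then merge."""
    even = s - (d >> 1)
    odd = d + even
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x
```

Because each lifting step adds a rounded function of the *other* polyphase channel, the inverse simply subtracts the same rounded quantity, which is what makes lossless, low-cost hardware realizations possible; longer wavelets such as D4-D8 need more predict/update pairs but follow the same pattern.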

    Contributions of machine learning techniques to cardiology: prediction of restenosis after coronary stent implantation

    Background: Few current topics rival the possibility that today's technology could develop the same capabilities as human beings, even in medicine. This ability of machines or computer systems to simulate human intelligence processes is what we now call artificial intelligence. One of the fields of artificial intelligence with the widest application in medicine today is prediction, recommendation, and diagnosis, where machine learning techniques are applied. There is also growing interest in precision medicine, where machine learning techniques can offer individualized medical care to each patient. Percutaneous coronary intervention (PCI) with stenting has become routine practice in the revascularization of coronary vessels with significant obstructive atherosclerotic disease. PCI is also the gold-standard treatment for patients with acute myocardial infarction, reducing death and recurrent ischemia rates compared with medical therapy. The long-term success of the procedure is limited by in-stent restenosis, a pathological process that causes recurrent arterial narrowing at the PCI site. Identifying which patients will develop restenosis is an important clinical challenge, since it can present as a new acute myocardial infarction or force repeat revascularization of the affected vessel, and recurrent restenosis poses a therapeutic challenge. Objectives: After reviewing artificial intelligence techniques applied to medicine and, in greater depth, machine learning techniques applied to cardiology, the main objective of this doctoral thesis was to develop a machine learning model to predict the occurrence of restenosis in patients with acute myocardial infarction undergoing PCI with stent implantation. Secondary objectives were to compare the machine learning model with the classic restenosis risk scores used to date, and to develop software that easily brings this contribution into daily clinical practice. To develop an easily applicable model, we made our predictions using no variables beyond those obtained in routine practice.
Material: The dataset, obtained from the GRACIA-3 trial, consisted of 263 patients with demographic, clinical, and angiographic characteristics; 23 of them presented restenosis 12 months after stent implantation. All development was carried out in Python using cloud computing, specifically AWS (Amazon Web Services). Methods: A methodology suited to small, imbalanced datasets was used; key elements were the nested cross-validation scheme and the use of precision-recall (PR) curves, in addition to ROC curves, to interpret the models. The algorithms most common in the literature were trained in order to select the best-performing one. Results: The best-performing model was built with an extremely randomized trees classifier, which significantly outperformed (area under the ROC curve 0.77) the three classic clinical scores: PRESTO-1 (0.58), PRESTO-2 (0.58), and TLR (0.62). The precision-recall curves gave a more accurate picture of the performance of the extremely randomized trees model, showing an efficient algorithm (0.96) for non-restenosis, with high precision and high recall. At a threshold considered optimal, for every 1,000 patients undergoing stent implantation, our machine learning model would correctly predict 181 (18%) more cases than the best classic risk score (TLR).
The most important variables, ranked by their contribution to the predictions, were diabetes, coronary disease in two or more vessels, post-PCI TIMI flow, abnormal platelets, post-PCI thrombus, and abnormal cholesterol. Finally, a calculator was developed to bring the model to clinical practice. The calculator estimates each patient's individual risk and places them in a risk zone, helping the physician decide on appropriate follow-up. Conclusions: Applied immediately after stent implantation, a machine learning model distinguishes patients who will or will not develop restenosis better than the current classic discriminators.
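The evaluation loop described above (extremely randomized trees, cross-validation on a small imbalanced cohort, ROC and PR summaries) can be sketched as follows. This is a simplified stand-in, not the thesis's pipeline: the GRACIA-3 data are replaced by a synthetic 263-patient, ~9%-positive dataset, and plain (non-nested) cross-validation is used because no hyperparameter search is performed here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict

# Hypothetical stand-in for the cohort: 263 patients, ~9% restenosis.
X, y = make_classification(n_samples=263, n_features=20, weights=[0.91],
                           random_state=0)

clf = ExtraTreesClassifier(n_estimators=300, random_state=0)

# Stratified folds preserve the ~9% positive rate in each split,
# which matters with only ~23 positive cases in total.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
proba = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]

roc_auc = roc_auc_score(y, proba)
# PR-curve summary: more informative than ROC AUC for the rare positive class.
pr_auc = average_precision_score(y, proba)
```

In the full nested scheme, an inner cross-validation loop would tune hyperparameters on each outer training fold, so that the outer-fold scores remain unbiased estimates of generalization.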