
    Waveform prototype-based feature learning for automatic detection of the early repolarization pattern in ECG signals

    Objective: Our aim was to develop an automated method, for prescreening purposes, to detect the early repolarization (ER) pattern with slur/notch configuration in electrocardiogram (ECG) signals, using a waveform prototype-based feature vector for supervised classification. Approach: The feature vectors consist of fragments of the ECG signal where the ER pattern is located, rather than abstract descriptive variables of ECG waveforms. The tested classifiers included linear discriminant analysis, the k-nearest neighbor algorithm, and the support vector machine (SVM). Main results: SVM showed the best performance in Friedman tests on our test data of 5676 subjects representing 45408 leads. Accuracies of the different classifiers were well over 90%, indicating that the waveform prototype-based feature vector is an effective representation of the differences between ECG signals with and without the ER pattern. Accuracy was 92.74% for inferior ER and 92.21% for lateral ER; sensitivity was 91.80% and specificity 92.73%. Significance: The algorithm presented here performed well, indicating that it could be used as a prescreening tool for ER, and it additionally identifies critical cases, based on their distances to the classifier decision boundary, that lie close to the 0.1 mV threshold and are difficult to label.
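    The pipeline described here can be sketched with synthetic stand-in data; the fragment shapes, noise level, and borderline-margin threshold below are illustrative assumptions, not the study's actual signals or values:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, L = 400, 60  # hypothetical: 400 leads, 60-sample fragments around the QRS end

# Synthetic stand-ins: "ER" fragments carry a small notch-like deflection after the R wave
t = np.linspace(0, 1, L)
base = np.exp(-((t - 0.3) ** 2) / 0.005)            # simplified R-wave lobe
notch = 0.15 * np.exp(-((t - 0.5) ** 2) / 0.001)    # slur/notch deflection
X_neg = base + 0.05 * rng.standard_normal((n // 2, L))
X_pos = base + notch + 0.05 * rng.standard_normal((n // 2, L))
X = np.vstack([X_neg, X_pos])
y = np.array([0] * (n // 2) + [1] * (n // 2))

# The waveform fragment itself is the feature vector fed to the classifier
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")

# Distance to the decision boundary flags borderline ("critical") cases for manual review
margins = np.abs(clf.decision_function(X_te))
print("borderline cases:", int((margins < 0.2).sum()))
```

The key design choice is that raw signal fragments, not hand-crafted descriptors, form the feature vector, so the classifier learns waveform shape directly.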

    Automatic detection of early repolarization pattern in ECG signals with waveform prototype-based learning

    The early repolarization (ER) pattern was considered a benign finding until 2008, when it was associated with sudden cardiac arrest (SCA). Since then, the medical community's interest in the topic has grown, highlighting the need for methods to detect the pattern and analyze the risk of SCA. This thesis presents an automatic detection method for ER using supervised classification. The novelty of the method lies in the features used to construct the classification models: prototypes composed of fragments of the ECG signal where the ER pattern is located. Three classifier models were included and compared: linear discriminant analysis (LDA), the k-nearest neighbor (KNN) algorithm, and the support vector machine (SVM). The method was tested on a dataset of 5676 subjects, manually labeled by an experienced analyst following the medical guidelines. The detection algorithm comprises several stages. First, the ECG signals are processed to locate characteristic points and remove unwanted noise. Then, the features are extracted from the signals and the classifiers are trained. Finally, the results are fused and the detection of ER is evaluated. Accuracies of the different classifiers were over 90%, demonstrating the discriminative power of the features between ECG signals with and without the ER pattern. Additionally, dimensionality reduction of the features was implemented with Isomap and generalized regression neural networks (GRNN) without affecting the performance of the method. Moreover, critical cases that are difficult to label were analyzed based on their distances to the classifier decision boundary, improving the sensitivity of the detection. Hence, the method presented here could be used to discriminate between ECG signals with and without the ER pattern.
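    The Isomap dimensionality-reduction step mentioned above can be illustrated in miniature; the S-curve dataset stands in for the high-dimensional waveform features, and the neighbor count is an illustrative choice:

```python
import numpy as np
from sklearn.datasets import make_s_curve
from sklearn.manifold import Isomap

# Stand-in for high-dimensional waveform features lying on a low-dimensional manifold
X, _ = make_s_curve(n_samples=300, random_state=0)

# Isomap preserves geodesic (along-manifold) distances while reducing dimension
emb = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(emb.shape)
```

Because ECG fragments of a given morphology vary smoothly, a manifold method like Isomap can compress them with little loss, which is consistent with the thesis's finding that performance was unaffected.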

    Detection of Abnormalities based on Gamma Wave EEG Signal for Autism Spectrum Disorder

    This study proposes an objective method for detecting Autism Spectrum Disorder (ASD) from traits of abnormality in the gamma waveform of electroencephalography (EEG) signals. The gamma waveform plays an important role in learning, memory, and information processing; its activity is slower in people with ASD than in neurotypical people, causing difficulties in processing knowledge, communicating, and paying attention. This study applies the Probabilistic Neural Network (PNN) and the General Regression Neural Network (GRNN) to classify the data into normal and abnormal classes, with the PNN classification used as a benchmark for the outcomes. The results show that even though PNN and GRNN have similar architectures, their fundamental differences lead to different outcomes; in this case, PNN performs better than GRNN. To obtain the desired results, we used three and four statistical features (mean, minimum, maximum, and standard deviation) for both methods. The outcomes of PNN with four features are more accurate (99.5% for the normal class and 80.5% for the abnormal class) than with only three features. Likewise, the outcomes of GRNN with four features improve (95% for the normal class and 63.5% for the abnormal class) over only three features.
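    The four statistical features and a minimal PNN can be sketched as follows; the synthetic "EEG windows" and the kernel width are illustrative assumptions, not the study's data or settings:

```python
import numpy as np

def features(window):
    # The four statistical features used in the study: mean, minimum, maximum, std
    return np.array([window.mean(), window.min(), window.max(), window.std()])

def pnn_predict(X_train, y_train, x, sigma=0.5):
    # Probabilistic Neural Network: one Gaussian kernel per training pattern,
    # summed per class; predict the class with the largest kernel sum.
    scores = {}
    for c in np.unique(y_train):
        d = X_train[y_train == c] - x
        scores[c] = np.exp(-(d ** 2).sum(axis=1) / (2 * sigma ** 2)).sum()
    return max(scores, key=scores.get)

rng = np.random.default_rng(1)
# Synthetic stand-ins for gamma-band EEG windows; the "abnormal" class mimics
# the slower/weaker gamma activity described above via a smaller amplitude
normal = [rng.standard_normal(256) * 1.0 for _ in range(50)]
slow = [rng.standard_normal(256) * 0.4 for _ in range(50)]
X = np.array([features(w) for w in normal + slow])
y = np.array([0] * 50 + [1] * 50)

x_test = features(rng.standard_normal(256) * 0.4)
print("predicted class:", pnn_predict(X, y, x_test))
```

GRNN uses the same kernel machinery but produces a weighted-average regression output rather than a per-class probability sum, which is the "fundamental difference" behind the diverging results.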

    Application of computational intelligence methods for the automated identification of paper-ink samples based on LIBS

    Laser-induced breakdown spectroscopy (LIBS) is an important analysis technique with applications in many industrial branches and fields of scientific research. Nowadays, the advantages of LIBS are offset by its main drawback: the interpretation of obtained spectra and the identification of observed spectral lines. This procedure is highly time-consuming, since it is essentially based on comparing the lines present in the spectrum with a literature database. This paper proposes the use of various computational intelligence methods to develop a reliable and fast classification of quasi-destructively acquired LIBS spectra into a set of predefined classes. We focus on the specific problem of classifying paper-ink samples into 30 separate, predefined classes. For each of the 30 classes (10 pens of each of 5 ink types combined with 10 sheets of 5 paper types, plus empty pages), 100 LIBS spectra are collected. Four variants of preprocessing, seven classifiers (decision trees, random forest, k-nearest neighbor, support vector machine, probabilistic neural network, multi-layer perceptron, and generalized regression neural network), 5-fold stratified cross-validation, and a test on an independent set (for method evaluation) are employed. Our developed system yielded an accuracy of 99.08%, obtained with the random forest classifier. Our results clearly demonstrate that machine learning methods can reliably identify paper-ink samples from LIBS spectra at a faster rate.
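    The evaluation protocol (random forest plus 5-fold stratified cross-validation) can be sketched on synthetic spectra; the three classes, peak positions, and noise level below are illustrative stand-ins for the 30-class paper-ink data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
# Hypothetical stand-in for LIBS spectra: 3 classes, 100 spectra each,
# each class with a characteristic emission line at a different wavelength bin
n_bins = 200
X, y = [], []
for c, peak in enumerate([40, 90, 150]):
    for _ in range(100):
        s = 0.1 * rng.standard_normal(n_bins)
        s[peak] += 1.0  # class-specific spectral line
        X.append(s)
        y.append(c)
X, y = np.array(X), np.array(y)

# Stratified folds keep the per-class spectrum counts balanced in every split
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0), X, y, cv=cv)
print(f"mean CV accuracy: {scores.mean():.3f}")
```

The random forest effectively learns which wavelength bins carry discriminative lines, replacing the manual comparison against a literature database.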

    REVIEW ON USING BIOMETRIC SIGNALS IN RANDOM NUMBER GENERATORS.

    Random numbers play an important role in digital security and are used in encryption and public-key cryptography to ensure safe, unaltered transmission. Random number generators are required to produce these numbers, but true randomness is difficult to achieve: it requires a truly random source whose output cannot be predicted from knowledge of previous inputs. This paper discusses combining biometrics and cryptography for stronger security and for generating numbers with true randomness. Biometric systems are used in security to uniquely identify individuals, but rely on sophisticated procedures. Biometric signals are non-deterministic processes that are unpredictable and a good source of randomness. This paper reviews the feasibility of using biometric signals in a Random Number Generator (RNG) and discusses whether biometric signals such as heartbeats, vascular patterns, iris scans, and the human Galvanic Skin Response (GSR) can be used in the near future to generate reliable random numbers. It also reviews the work done towards generating random numbers from these biometric signals and their results, verified with statistical test suites such as NIST's.
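    A common scheme in this line of work can be sketched as follows; the LSB extraction from inter-beat intervals, the example interval values, and the debiasing step are illustrative assumptions, while the monobit formula follows the NIST SP 800-22 frequency test:

```python
import math

def bits_from_intervals(intervals_ms):
    # Hypothetical extraction: least-significant bit of each inter-beat interval (ms).
    return [iv & 1 for iv in intervals_ms]

def von_neumann(bits):
    # Von Neumann debiasing: 01 -> 0, 10 -> 1, discard 00 and 11 pairs
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

def monobit_p_value(bits):
    # NIST SP 800-22 frequency (monobit) test: p = erfc(|S| / sqrt(2n))
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * len(bits)))

intervals = [812, 797, 805, 821, 799, 810, 808, 795, 803, 817, 806, 798]  # example IBIs (ms)
raw = bits_from_intervals(intervals)
clean = von_neumann(raw)
print("raw bits:", raw)
print("debiased:", clean)
print(f"monobit p-value (raw): {monobit_p_value(raw):.2f}")
```

In practice the full NIST suite needs millions of bits, so the test here only illustrates the statistic; real biometric RNGs also condition the bitstream (e.g. hashing) before use.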

    On the Development of Machine Learning Based Real-Time Stress Monitoring : A Pilot Study

    During specific environmental changes, the human body regulates itself through emotional, physical, or mental responses. One such response is stress. The psychological and physical stability of an individual may be affected by recurrent occurrences of acute stress, which often lead to anxiety disorder and other psychological illnesses, hypertension, and other physiological disorders. Long-term stress also negatively affects an individual's work performance. Across various age groups, the global population is heavily affected by anxiety, depression, and psychological stress. The long-term adverse effects of stress can be mitigated by effectively monitoring and managing it through a cost-efficient and reliable stress detection system. This paper focuses on stress detection using a machine learning approach. Wearable-sensor data from electroencephalogram (EEG) and electrocardiogram (ECG) recordings are considered during exposure to stress, and the level of stress undergone by the participant is further analyzed. This approach helps in stress detection, analysis, and mitigation, which in turn improves people's quality of life. After artifact removal, the machine learning technique of k-means clustering is used to obtain case-specific clusters that segregate features pointing to non-stress and stress periods. The results of the proposed k-means clustering algorithm are compared to state-of-the-art techniques such as the Artificial Neural Network (ANN), Decision Tree (DT), Random Forest (RF), and Support Vector Machine (SVM). From the results, it was concluded that the proposed algorithm outperformed the others, with an accuracy of 96% in the overall analysis.
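    The k-means step can be sketched on synthetic physiological features; the two-feature windows (heart rate and a generic band-power value) and their distributions below are illustrative assumptions, not the study's data:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Hypothetical features per window: [heart rate (bpm), EEG band power (a.u.)]
calm = np.column_stack([rng.normal(65, 3, 100), rng.normal(1.0, 0.2, 100)])
stressed = np.column_stack([rng.normal(95, 5, 40), rng.normal(2.5, 0.3, 40)])
X = np.vstack([calm, stressed])

# Unsupervised: k-means finds the two case-specific clusters without labels
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Interpret the cluster with the higher mean heart rate as the "stress" cluster
stress_cluster = int(km.cluster_centers_[:, 0].argmax())
flagged = int((km.labels_ == stress_cluster).sum())
print("windows flagged as stress:", flagged)
```

Because clustering is per-participant ("case-specific"), the stress/non-stress boundary adapts to each individual's baseline rather than relying on population-wide thresholds.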

    Detecting Moments of Stress from Measurements of Wearable Physiological Sensors

    There is a rich repertoire of methods for stress detection using various physiological signals and algorithms. However, there is still a gap in research efforts moving from laboratory studies to real-world settings. Few studies have verified, in real-world settings, when a physiological response is a reaction to an extrinsic stimulus in the participant’s environment. Typically, physiological signals are correlated with the spatial characteristics of the physical environment, supported by video records or interviews. The present research aims to bridge the gap between laboratory settings and real-world field studies by introducing a new algorithm that leverages the capabilities of wearable physiological sensors to detect moments of stress (MOS). We propose a rule-based algorithm based on galvanic skin response and skin temperature, combining empirical findings with expert knowledge to ensure transferability between laboratory settings and real-world field studies. To verify the algorithm, we carried out a laboratory experiment to create a “gold standard” of physiological responses to stressors. We validated the algorithm in real-world field studies using a mixed-method approach, spatially correlating the participants’ perceived stress, geo-located questionnaires, and the corresponding real-world situation from video. Results show that the algorithm detects MOS with 84% accuracy, showing high correlations between measured (by wearable sensors), reported (by questionnaires and eDiary entries), and recorded (by video) stress events. The urban stressors identified in the real-world studies originate from traffic congestion, dangerous driving situations, and crowded areas such as tourist attractions. The presented research can enhance stress detection in real life and may thus foster a better understanding of the circumstances that bring about physiological stress in humans.
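    A rule-based MOS detector of the kind described can be sketched as follows; the specific rule (GSR rises while skin temperature falls within a window), the thresholds, and the synthetic signals are illustrative assumptions, not the paper's exact rules:

```python
import numpy as np

def detect_mos(gsr, skin_temp, fs=4, window_s=5):
    """Rule-based sketch (illustrative, not the paper's exact rules): flag a
    moment of stress when GSR rises while skin temperature falls over a window."""
    w = fs * window_s
    mos = np.zeros(len(gsr), dtype=bool)
    for i in range(w, len(gsr)):
        gsr_rise = gsr[i] - gsr[i - w]          # phasic GSR increase
        temp_drop = skin_temp[i - w] - skin_temp[i]  # peripheral temperature drop
        if gsr_rise > 0.05 and temp_drop > 0.1:      # thresholds are illustrative
            mos[i] = True
    return mos

t = np.arange(0, 60, 0.25)  # 60 s of data at 4 Hz
gsr = 2.0 + 0.3 / (1 + np.exp(-(t - 30)))    # simulated GSR rise around t = 30 s
temp = 33.0 - 0.4 / (1 + np.exp(-(t - 30)))  # concurrent skin-temperature drop
mos = detect_mos(gsr, temp)
print("MOS detected:", bool(mos.any()), "first at t =", t[mos.argmax()], "s")
```

Fixed physiological rules of this form, unlike trained classifiers, transfer between laboratory and field settings without retraining, which is the design goal stated above.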

    Recursive backpropagation algorithm applied to a globally recurrent neural network

    In general, recursive neural networks can yield a smaller structure than purely feedforward neural networks, in the same way that infinite impulse response (IIR) filters can replace longer finite impulse response (FIR) filters. This thesis presents a new adaptive algorithm that trains recursive neural networks. The algorithm is based on least mean square (LMS) algorithms designed for other adaptive architectures, and it overcomes several limitations of current recursive neural network algorithms, such as epoch training and the requirement for large amounts of memory storage. To demonstrate the new algorithm, adaptive architectures constructed with a recursive neural network and trained with it are applied to four adaptive systems, and the results are compared to adaptive systems constructed with other adaptive filters. In these examples, the new algorithm shows the ability to perform linear and nonlinear transformations and, in some cases, significantly outperforms the other adaptive filters. This thesis also discusses possible avenues for future exploration of adaptive systems constructed of recursive neural networks.
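    The LMS family of algorithms the thesis builds on can be illustrated with the classic FIR case (the linear building block; the thesis extends the idea to recurrent networks). The unknown system below is a hypothetical example:

```python
import numpy as np

def lms_fir(x, d, n_taps=4, mu=0.01):
    # Least-mean-square adaptation: w <- w + mu * e * x_vec, one update per
    # sample, so no epoch training or large batch storage is needed.
    w = np.zeros(n_taps)
    for i in range(n_taps - 1, len(x)):
        x_vec = x[i - n_taps + 1:i + 1][::-1]  # most recent sample first
        e = d[i] - w @ x_vec                   # instantaneous error
        w += mu * e * x_vec
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
true_w = np.array([0.5, -0.3, 0.2, 0.1])  # hypothetical unknown system to identify
d = np.convolve(x, true_w)[:len(x)]       # desired signal = system output
w = lms_fir(x, d)
print("learned taps:", np.round(w, 2))
```

Replacing the FIR tap line with a recurrent network's output (which feeds back past outputs, like an IIR filter) while keeping the same per-sample error-driven update is the essence of the approach described above.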

    Novel Computationally Intelligent Machine Learning Algorithms for Data Mining and Knowledge Discovery

    This thesis addresses three major issues in data mining: feature subset selection in large-dimensionality domains, plausible reconstruction of incomplete data in cross-sectional applications, and forecasting univariate time series. For the automated selection of an optimal subset of features in real time, we present an improved hybrid algorithm, SAGA. SAGA combines the ability of simulated annealing to avoid being trapped in local minima with the very high convergence rate of the crossover operator of genetic algorithms, the strong local search ability of greedy algorithms, and the high computational efficiency of generalized regression neural networks (GRNN). For imputing missing values and forecasting univariate time series, we propose a homogeneous neural network ensemble: a committee of GRNNs trained on different subsets of features generated by SAGA, whose predictions are combined by a fusion rule. This approach makes it possible to discover all important interrelations between the values of the target variable and the input features. The proposed ensemble scheme has two innovative features that make it stand out among ensemble learning algorithms: (1) the ensemble makeup is optimized automatically by SAGA; and (2) GRNN is used for both the base classifiers and the top-level combiner classifier. Because of GRNN, the proposed ensemble is a dynamic weighting scheme, in contrast to existing ensemble approaches, which rely on simple voting or static weighting. The basic idea of the dynamic weighting procedure is to give a higher reliability weight to training scenarios that are similar to the new one. The simulation results demonstrate the validity of the proposed ensemble model.
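    The dynamic-weighting property of GRNN can be made concrete: GRNN is equivalent to Nadaraya-Watson kernel regression, so each prediction weights training targets by similarity to the query. The 1-D sine dataset and bandwidth below are illustrative assumptions:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.05):
    # Generalized Regression Neural Network = Nadaraya-Watson kernel regression:
    # each prediction is a distance-weighted average of the training targets,
    # so training cases similar to the query receive higher reliability weights.
    preds = []
    for x in np.atleast_2d(X_query):
        d2 = ((X_train - x) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2 * sigma ** 2))
        preds.append((w @ y_train) / w.sum())
    return np.array(preds)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.05 * rng.standard_normal(200)

y_hat = grnn_predict(X, y, np.array([[0.25]]))
print("prediction at x=0.25:", y_hat)  # near sin(pi/2) = 1
```

Because the weights are recomputed for every query, the "ensemble vote" changes per input, which is exactly the contrast with static-weight ensembles drawn above.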