71 research outputs found

    Ensemble residual network-based gender and activity recognition method with signals

    Get PDF
    Nowadays, deep learning is one of the most popular research areas in computer science, and many deep networks have been proposed to solve artificial intelligence and machine learning problems. Residual networks (ResNets), for instance ResNet18, ResNet50 and ResNet101, are widely used deep networks in the literature. In this paper, a novel ResNet-based signal recognition method is presented. ResNet18, ResNet50 and ResNet101 are utilized as feature extractors, and each network extracts 1000 features. The extracted features are concatenated, yielding 3000 features. In the feature selection phase, the 1000 most discriminative features are selected using ReliefF, and these selected features are used as input to a third-degree polynomial (cubic) kernel support vector machine. The proposed method achieved classification accuracy rates of 99.96% and 99.61% for gender and activity recognition, respectively. These results clearly demonstrate that the proposed pre-trained ensemble ResNet-based method achieves a high success rate for sensor signals. © 2020, Springer Science+Business Media, LLC, part of Springer Nature
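    A minimal sketch of the described pipeline, assuming torchvision's pretrained ResNets and scikit-learn; the skrebate ReliefF implementation, the input shapes and the placeholder labels are assumptions, not the authors' exact setup.

```python
# Sketch: ensemble ResNet feature extraction + ReliefF selection + cubic-kernel SVM.
# Input tensors and labels are placeholders; dataset loading/signal-to-image mapping omitted.
import numpy as np
import torch
from torchvision import models
from skrebate import ReliefF          # third-party ReliefF implementation (assumed choice)
from sklearn.svm import SVC

def extract_features(x):
    """Concatenate the 1000-dim outputs of ResNet18/50/101 into 3000 features per sample."""
    nets = [models.resnet18(pretrained=True),
            models.resnet50(pretrained=True),
            models.resnet101(pretrained=True)]
    feats = []
    with torch.no_grad():
        for net in nets:
            net.eval()
            feats.append(net(x).numpy())   # each network yields 1000 features
    return np.concatenate(feats, axis=1)   # shape (N, 3000)

X = torch.randn(20, 3, 224, 224)           # placeholder inputs
y = np.tile([0, 1], 10)                    # placeholder binary labels (e.g. gender)

features = extract_features(X)
selector = ReliefF(n_features_to_select=1000, n_neighbors=10)
selected = selector.fit_transform(features, y)   # keep the 1000 most discriminative features
clf = SVC(kernel="poly", degree=3)               # cubic (third-degree polynomial) SVM
clf.fit(selected, y)
```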

    Improving sentiment classification using a RoBERTa-based hybrid model

    Get PDF
    Introduction: Several attempts have been made to enhance the performance of text-based sentiment analysis. Classifiers and word embedding models have been among the most prominent. This work aims to develop a hybrid deep learning approach that combines the advantages of transformer models and sequence models while eliminating the sequence models' shortcomings. Methods: In this paper, we present a hybrid model based on a transformer model and deep learning models to enhance the sentiment classification process. Robustly optimized BERT (RoBERTa) was selected to produce the representative vectors of the input sentences, and the Long Short-Term Memory (LSTM) model in conjunction with the Convolutional Neural Network (CNN) model was used to improve the suggested model's ability to comprehend the semantics and context of each input sentence. We tested the proposed model on two datasets with different topics. The first dataset is a Twitter review of US airlines and the second is the IMDb movie reviews dataset. We propose using word embeddings in conjunction with the SMOTE technique to overcome the challenge of the imbalanced classes in the Twitter dataset. Results: With an accuracy of 96.28% on the IMDb reviews dataset and 94.2% on the Twitter reviews dataset, the suggested hybrid model outperforms the standard methods. Discussion: These results make it clear that the proposed hybrid RoBERTa-(CNN+LSTM) method is an effective model for sentiment classification.
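    A minimal sketch of one way to wire RoBERTa token embeddings into a CNN + LSTM head, assuming the Hugging Face transformers and PyTorch libraries; the layer widths, number of classes, and example sentence are illustrative assumptions, not the authors' configuration.

```python
# Sketch: RoBERTa token embeddings fed to a CNN + LSTM sentiment head (PyTorch).
import torch
import torch.nn as nn
from transformers import RobertaModel, RobertaTokenizer

class RobertaCnnLstm(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.roberta = RobertaModel.from_pretrained("roberta-base")
        self.conv = nn.Conv1d(in_channels=768, out_channels=128, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(input_size=128, hidden_size=64,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * 64, n_classes)

    def forward(self, input_ids, attention_mask):
        hidden = self.roberta(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state  # (B, T, 768)
        x = self.conv(hidden.transpose(1, 2)).relu()   # (B, 128, T)
        out, _ = self.lstm(x.transpose(1, 2))          # (B, T, 128)
        return self.fc(out[:, -1, :])                  # class logits from the last time step

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
batch = tokenizer(["The flight was delayed but the crew was great."],
                  return_tensors="pt", padding=True, truncation=True)
model = RobertaCnnLstm()
logits = model(batch["input_ids"], batch["attention_mask"])
```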

    Application of computational intelligence methods for the automated identification of paper-ink samples based on LIBS

    Get PDF
    Laser-induced breakdown spectroscopy (LIBS) is an important analytical technique with applications in many industrial branches and fields of scientific research. Its advantages are currently impaired by its main drawback: the interpretation of the obtained spectra and the identification of the observed spectral lines. This procedure is highly time-consuming since it is essentially based on comparing the lines present in the spectrum with a literature database. This paper proposes the use of various computational intelligence methods to develop a reliable and fast classification of quasi-destructively acquired LIBS spectra into a set of predefined classes. We focus on the specific problem of classifying paper-ink samples into 30 separate, predefined classes. For each of the 30 classes (10 pens of each of 5 ink types combined with 10 sheets of 5 paper types plus empty pages), 100 LIBS spectra are collected. Four variants of preprocessing, seven classifiers (decision trees, random forest, k-nearest neighbor, support vector machine, probabilistic neural network, multi-layer perceptron, and generalized regression neural network), 5-fold stratified cross-validation, and a test on an independent set (for method evaluation) are employed. The developed system yielded an accuracy of 99.08%, obtained with the random forest classifier. These results clearly demonstrate that machine learning methods can reliably and quickly identify paper-ink samples based on LIBS.
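    A minimal scikit-learn sketch of the reported best-performing variant (random forest with 5-fold stratified cross-validation); the spectra array, the number of spectral channels, and the class labels are placeholders, not the actual LIBS data.

```python
# Sketch: random forest classification of LIBS spectra with 5-fold stratified CV.
# X holds one preprocessed spectrum per row (synthetic placeholder data below).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 1024))   # 30 classes x 100 spectra, 1024 channels (assumed)
y = np.repeat(np.arange(30), 100)   # class labels 0..29

model = make_pipeline(StandardScaler(),
                      RandomForestClassifier(n_estimators=100, random_state=0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
print(f"mean CV accuracy: {scores.mean():.4f}")
```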

    A genetic programming approach to development of clinical prediction models: A case study in symptomatic cardiovascular disease

    Get PDF
    BACKGROUND: Genetic programming (GP) is an evolutionary computing methodology capable of identifying complex, non-linear patterns in large data sets. Despite the potential advantages of GP over more typical frequentist statistical methods, its applications to survival analyses are rare, at best. The aim of this study was to determine the utility of GP for the automatic development of clinical prediction models. METHODS: We compared GP against the commonly used Cox regression technique in terms of the development and performance of a cardiovascular risk score, using data from the SMART study, a prospective cohort study of patients with symptomatic cardiovascular disease. The composite endpoint was cardiovascular death, non-fatal stroke, and myocardial infarction. A total of 3,873 patients aged 19-82 years were enrolled in the study between 1996 and 2006. The cohort was split 70:30 into derivation and validation sets. The derivation set was used for the development of both the GP and Cox regression models. These models were then used to predict the discrete hazards at t = 1, 3, and 5 years. The predictive ability of both models was evaluated on the validation set in terms of risk discrimination and calibration. RESULTS: The discrimination of both models was comparable. At time points t = 1, 3, and 5 years the C-index was 0.59, 0.69, 0.64 for the GP model and 0.66, 0.70, 0.70 for the Cox regression model, respectively. At the same time points, the calibration of both models, assessed using calibration plots and a generalization of the Hosmer-Lemeshow test statistic, was also comparable, with the Cox model being better calibrated to the validation data. CONCLUSION: Using empirical data, we demonstrated that a prediction model developed automatically by GP has predictive ability comparable to that of a manually tuned Cox regression. The GP model was more complex, but it was developed in a fully automated way and comprised fewer covariates. Furthermore, it did not require the expertise normally needed for model derivation, thereby alleviating the knowledge elicitation bottleneck. Overall, GP demonstrated considerable potential as a method for the automated development of clinical prediction models for diagnostic and prognostic purposes.
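    A minimal sketch of the Cox regression comparator with a 70:30 split and C-index evaluation, using the lifelines package; the synthetic data frame and its column names stand in for the SMART cohort and are assumptions, and the GP side of the comparison is not shown.

```python
# Sketch: Cox proportional hazards baseline with a 70:30 split and C-index on the validation set.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "age": rng.integers(19, 83, n),        # covariates are illustrative placeholders
    "sbp": rng.normal(140, 20, n),
    "time": rng.exponential(5.0, n),       # follow-up time in years
    "event": rng.integers(0, 2, n),        # 1 = composite cardiovascular endpoint
})

train, test = train_test_split(df, test_size=0.3, random_state=0)  # 70:30 derivation/validation
cph = CoxPHFitter()
cph.fit(train, duration_col="time", event_col="event")

risk = cph.predict_partial_hazard(test)
c_index = concordance_index(test["time"], -risk, test["event"])    # higher score = longer survival
print(f"validation C-index: {c_index:.3f}")
```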

    ECG signals (744 fragments)

    No full text
    For research purposes, the ECG signals were obtained from the PhysioNet service (http://www.physionet.org), from the MIT-BIH Arrhythmia database. The created database of ECG signals is described below. 1) The ECG signals came from 29 patients: 15 female (age: 23-89) and 14 male (age: 32-89). 2) The ECG signals covered 17 classes: normal sinus rhythm, pacemaker rhythm, and 15 types of cardiac dysfunction (for each of which at least 10 signal fragments were collected). 3) All ECG signals were recorded at a sampling frequency of 360 [Hz] with a gain of 200 [adu/mV]. 4) For the analysis, 744 non-overlapping 10-second fragments (3600 samples each) of the ECG signal were randomly selected. 5) Only signals derived from one lead, MLII, were used. 6) Data are in .mat format (Matlab).
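    A minimal sketch for loading one of the .mat fragments with SciPy; the file name and the variable key inside the file are hypothetical, since the dataset description does not list them, while the sampling frequency and fragment length follow the description above.

```python
# Sketch: loading a 10-second ECG fragment stored in MATLAB .mat format.
# "MLII_fragment_001.mat" is a hypothetical file name, not the dataset's actual naming.
import numpy as np
from scipy.io import loadmat

FS = 360  # sampling frequency [Hz], as stated in the dataset description

mat = loadmat("MLII_fragment_001.mat")
# take the first non-metadata variable in the file (the actual key name is unknown)
signal = np.ravel(next(v for k, v in mat.items() if not k.startswith("__")))

assert signal.size == 10 * FS            # each fragment is 10 s = 3600 samples
t = np.arange(signal.size) / FS          # time axis in seconds
```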

    Novel methodology of cardiac health recognition based on ECG signals and evolutionary-neural system

    No full text
    This article presents an innovative research methodology that enables the efficient classification of cardiac disorders (17 classes) based on ECG signal analysis and an evolutionary-neural system.

    From a social point of view, it is extremely important to prevent heart diseases, which are the most common cause of death worldwide. According to statistical data, 50 million people worldwide are at risk of cardiac disease. ECG signal analysis is a very popular subject; however, due to the great difficulty of the task and the high computational complexity of existing methods, substantial work remains to be done.

    This research collected 1000 fragments of ECG signals from the MIT-BIH Arrhythmia database for one lead, MLII, from 45 patients. An original methodology based on the analysis of longer (10-s) fragments of the ECG signal was used (on average, 13 times fewer classifications). To enhance the characteristic features of the ECG signal, the spectral power density was estimated (using Welch's method and a discrete Fourier transform). Genetic optimization of parameters and genetic selection of features were tested. Pre-processing, normalization, feature extraction and selection, cross-validation and machine learning algorithms (SVM, kNN, PNN, and RBFNN) were used.

    The best evolutionary-neural system, based on the SVM classifier, recognized the 17 myocardium dysfunctions with a sensitivity of 90.20% (98 errors per 1000 classifications, accuracy = 98.85%, specificity = 99.39%, time to classify one sample = 0.0023 [s]). Against the background of the current scientific literature, these are among the best results reported to date.
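    A minimal sketch of the feature-extraction step described above (Welch power spectral density of 10-second fragments) followed by an SVM classifier; the signal array, labels, and SVM settings are placeholders, and the genetic optimization of parameters and features is omitted.

```python
# Sketch: Welch PSD features from 10-s ECG fragments, classified with an SVM.
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 360                                   # sampling frequency [Hz]
rng = np.random.default_rng(0)
X_raw = rng.normal(size=(204, 10 * FS))    # 204 fragments x 3600 samples (placeholder)
y = np.repeat(np.arange(17), 12)           # 17 rhythm classes, 12 fragments each (placeholder)

# Estimate the spectral power density of each fragment with Welch's method.
_, psd = welch(X_raw, fs=FS, nperseg=512, axis=1)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
scores = cross_val_score(clf, psd, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")
```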

    Comparison of data analysis systems based on artificial intelligence methods applied to the processing of signals from e-noses

    No full text
    Development of data analysis systems based on artificial intelligence methods for processing signals from an e-nose. Twenty-three systems using computational intelligence were designed for the qualitative analysis (classification of 10 tea varieties; 12 systems) and quantitative analysis (approximation of 5 phenol concentration levels; 11 systems) of gas mixtures.
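    A minimal generic sketch of the two analysis tasks described (classification of tea varieties and approximation of phenol concentration) on e-nose sensor responses; the sensor matrix, label layout, concentration values, and network sizes are assumptions, not the 23 systems designed in the work.

```python
# Sketch: qualitative (tea classification) and quantitative (phenol level) analysis
# of e-nose sensor responses using simple neural networks; all data are synthetic placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier, MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))                    # steady-state responses of 8 gas sensors (assumed)
y_tea = np.repeat(np.arange(10), 30)             # 10 tea varieties (placeholder labels)
y_phenol = rng.choice([0.5, 1.0, 2.0, 4.0, 8.0], size=300)   # 5 phenol levels (assumed values)

clf = make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000))
reg = make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000))

print("tea classification CV accuracy:", cross_val_score(clf, X, y_tea, cv=5).mean())
print("phenol approximation CV R^2:   ", cross_val_score(reg, X, y_phenol, cv=5).mean())
```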

    An estimation of the state of consumption of a positive displacement pump based on dynamic pressure or vibrations using neural networks

    No full text
    This paper describes algorithms used to estimate the state of consumption (wear) of a pump based on dynamic pressure or vibrations. To create the algorithms, the author used computational intelligence methods in the form of neural networks. To perform the analysis, data analysis systems were designed based on three neural networks: the multilayer perceptron (MLP), the generalized regression neural network (GRNN) and the probabilistic neural network (PNN). Processing the input signal into the final result of the analysis consisted of several steps. First, the measurement data were preprocessed (removal of the constant component, normalization, standardization, reduction, fast Fourier transform (FFT), etc.), and training and test sets were prepared using matrices with the expected system answers. The last step was the analysis, consisting of designing data analysis systems based on artificial neural networks and training and testing them. On the basis of the obtained results, the effectiveness of the neural networks and of the signal pre-processing methods applied to approximate the state of consumption of the displacement pump was evaluated. The designed systems were evaluated using accuracy (generated error) and complexity (number of parameters and training time) criteria. The main contribution of the paper is the design and comparison of methods for pre-processing the signal, and the design and comparison of the effectiveness of the three neural networks in diagnosing the state of consumption of a positive displacement pump.
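    A minimal sketch of the described preprocessing chain (constant-component removal, FFT) followed by a neural network estimating the pump's state of consumption; the signal fragments, wear scale, and MLP configuration are synthetic placeholders rather than the paper's designed systems.

```python
# Sketch: FFT preprocessing of dynamic pressure/vibration signals and an MLP
# approximating the state of consumption (wear) of the pump.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
signals = rng.normal(size=(400, 4096))    # raw dynamic-pressure fragments (placeholder)
wear = rng.uniform(0.0, 1.0, size=400)    # state of consumption, 0 = new, 1 = worn (assumed scale)

# Preprocessing: remove the constant component, then take FFT magnitudes as features.
signals = signals - signals.mean(axis=1, keepdims=True)
features = np.abs(np.fft.rfft(signals, axis=1))

X_train, X_test, y_train, y_test = train_test_split(features, wear, test_size=0.3, random_state=0)
mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0))
mlp.fit(X_train, y_train)
print(f"test R^2: {mlp.score(X_test, y_test):.3f}")
```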