588 research outputs found

    EENED: End-to-End Neural Epilepsy Detection based on Convolutional Transformer

    Full text link
    Recently, Transformer- and convolutional neural network (CNN)-based models have shown promising results in EEG signal processing. Transformer models can capture global dependencies in EEG signals through a self-attention mechanism, while CNN models can capture local features such as sawtooth waves. In this work, we propose an end-to-end neural epilepsy detection model, EENED, that combines CNN and Transformer. Specifically, by introducing a convolution module into the Transformer encoder, EENED can learn the time-dependent relationships among the patient's EEG signal features and detect local EEG abnormalities closely related to epilepsy, such as the appearance of spikes and of sharp and slow waves. Our proposed framework combines the abilities of the Transformer and the CNN to capture features of EEG signals at different scales, and holds promise for improving the accuracy and reliability of epilepsy detection. Our source code will be released soon on GitHub. Comment: Accepted by IEEE CAI 202
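
    The central design, a convolution module placed inside a Transformer encoder block, can be illustrated with a minimal sketch. This is not the authors' (unreleased) code; the class name, dimensions, and kernel size below are assumptions chosen only to show how self-attention (global dependencies) and a depthwise convolution (local waveform abnormalities) can be stacked in one residual block.

```python
# Illustrative sketch of a convolution-augmented Transformer encoder block.
# All names and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn

class ConvTransformerBlock(nn.Module):
    def __init__(self, dim=128, heads=4, kernel_size=15):
        super().__init__()
        self.attn_norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.conv_norm = nn.LayerNorm(dim)
        # Depthwise convolution captures local waveform patterns (e.g. spikes).
        self.conv = nn.Conv1d(dim, dim, kernel_size,
                              padding=kernel_size // 2, groups=dim)
        self.ffn = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 4 * dim),
                                 nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):                 # x: (batch, time, dim) EEG feature sequence
        h = self.attn_norm(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]   # global dependencies
        h = self.conv_norm(x).transpose(1, 2)                # (batch, dim, time)
        x = x + self.conv(h).transpose(1, 2)                 # local abnormalities
        return x + self.ffn(x)
```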

    Investigating the Process of Developing a KDD Model for the Classification of Cases with Cardiovascular Disease Based on a Canadian Database

    Get PDF
    Medicine and health are information-intensive domains, as the volume of data they generate has been increasing constantly. To make full use of these data, Knowledge Discovery in Databases (KDD) has been developed as a comprehensive pathway for discovering valid and previously unsuspected patterns and trends that are both understandable and useful to data analysts. The present study aimed to investigate, for the first time, the entire KDD process of developing a classification model for cardiovascular disease (CVD) from a Canadian dataset. The data source was the Canadian Heart Health Database, which contains 265 easily collected variables and 23,129 instances from ten Canadian provinces. Many practical issues involved in the different steps of the integrated process were addressed, and possible solutions were suggested based on the experimental results. Five specific learning schemes representing five distinct KDD approaches were employed, as they had never been compared with one another. In addition, two improvement approaches, cost-sensitive learning and ensemble learning, were also examined. The performance of the developed models was measured in many respects. The dataset was prepared through data cleaning and missing-value imputation. Three pairs of experiments demonstrated that dataset balancing and outlier removal exerted a positive influence on the classifier, whereas variable normalization was not helpful. Three combinations of subset-generation method and evaluation function were tested in the variable subset selection phase; the combination of Best-First search and Correlation-based Feature Selection showed comparable goodness and was retained for its other benefits. Among the five learning schemes investigated, the C4.5 decision tree achieved the best performance on the classification of CVD, followed by the Multilayer Feed-forward Network, K-Nearest Neighbor, Logistic Regression, and Naïve Bayes. Cost-sensitive learning, exemplified by the MetaCost algorithm, failed to outperform the single C4.5 decision tree when the cost matrix was varied from 5:1 to 1:7. In contrast, the models developed through ensemble modeling, especially the AdaBoost M1 algorithm, outperformed the other models. Although the best-performing model might be suitable for CVD screening in the general Canadian population, it is not ready for use in practice. I propose some criteria to improve further evaluation of the model. Finally, I describe some limitations of the study and propose potential solutions to address them throughout the KDD process. Such possibilities should be explored in further research
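
    For readers who want a concrete picture of such a pipeline, the sketch below approximates the described workflow (imputation, feature selection, boosted decision trees, cross-validated evaluation) in scikit-learn. The thesis itself used Weka-style algorithms (C4.5, AdaBoost M1, MetaCost) on the Canadian Heart Health Database; the file name, label column, and the substitution of scikit-learn analogues here are assumptions made purely for illustration.

```python
# Hypothetical scikit-learn analogue of the KDD pipeline described above.
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical file and label names; assumes all variables are numerically coded.
df = pd.read_csv("canadian_heart_health.csv")
X, y = df.drop(columns=["cvd"]), df["cvd"]

pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="most_frequent")),    # missing-value imputation
    ("select", SelectKBest(mutual_info_classif, k=30)),     # rough stand-in for CFS
    ("model", AdaBoostClassifier(DecisionTreeClassifier(max_depth=5),
                                 n_estimators=100)),        # boosted trees ~ AdaBoost M1 over C4.5-like trees
])

scores = cross_val_score(pipeline, X, y, cv=10, scoring="balanced_accuracy")
print(f"10-fold balanced accuracy: {scores.mean():.3f}")
```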

    A Study of Feature Selection Methods for Classification

    Full text link
    This project will implement several feature selection techniques applied to improve automatic classification performance. Several classification problems from a publicly available database repository of biophysical data will be analyzed. Those problems could include the following medical subjects: arrhythmia, breast cancer, heart failure, and hepatitis C virus. The data consist of features extracted from electrocardiographic (ECG) signals, electroencephalographic (EEG) signals, medical images, anamnesis, etc. In some cases, a preprocessing step may be required to deal with normalization, artifact removal, and missing data. The objective of feature selection methods is to obtain a sorted list of the full set of features according to some defined criteria. From this feature ranking, a smaller set of features can be obtained that improves classification performance and avoids possible overfitting of the trained classification model. In this project, a comparison of feature selection (FS) methods is made, including Relief-based and sequential feature selection (SFS) methods. As classifiers, we will consider linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and the support vector machine (SVM), among others. The quality of the classification results will be evaluated using different ranges of the feature ranking list and several indices such as accuracy, balanced accuracy, and the confusion matrix. In addition, the computational cost of the different classification cases will be estimated. Liu, C. (2022). A Study of Feature Selection Methods for Classification. Universitat Politècnica de València. http://hdl.handle.net/10251/182468 (TFG)
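
    A minimal sketch of the kind of comparison described above is given below, using scikit-learn's built-in breast-cancer dataset as a stand-in for one of the mentioned problems. It is not the project's actual code: only forward sequential selection (SFS) is shown, since Relief-based ranking is not part of scikit-learn and would require an external package, and the number of selected features and classifier settings are arbitrary choices.

```python
# Sketch: sequential forward selection with three of the classifiers mentioned above.
from sklearn.datasets import load_breast_cancer
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)
from sklearn.svm import SVC
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("QDA", QuadraticDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="linear"))]:
    # Keep the 10 best features according to forward sequential selection.
    sfs = SequentialFeatureSelector(clf, n_features_to_select=10, direction="forward")
    X_sel = sfs.fit_transform(X, y)
    acc = cross_val_score(clf, X_sel, y, cv=5, scoring="balanced_accuracy").mean()
    print(f"{name}: balanced accuracy with 10 selected features = {acc:.3f}")
```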

    Koopa: Learning Non-stationary Time Series Dynamics with Koopman Predictors

    Full text link
    Real-world time series are characterized by intrinsic non-stationarity that poses a principal challenge for deep forecasting models. While previous models suffer from the complicated series variations induced by changing temporal distributions, we tackle non-stationary time series with modern Koopman theory, which fundamentally considers the underlying time-variant dynamics. Inspired by the Koopman-theoretic view of complex dynamical systems, we disentangle time-variant and time-invariant components from intricate non-stationary series with a Fourier filter and design Koopman Predictors to advance the respective dynamics forward. Technically, we propose Koopa, a novel Koopman forecaster composed of stackable blocks that learn hierarchical dynamics. Koopa seeks measurement functions for the Koopman embedding and utilizes Koopman operators as linear portraits of the implicit transition. To cope with time-variant dynamics that exhibit strong locality, Koopa calculates context-aware operators in the temporal neighborhood and can utilize incoming ground truth to scale up the forecast horizon. Besides, by integrating Koopman Predictors into a deep residual structure, we do away with the binding reconstruction loss of previous Koopman forecasters and achieve end-to-end optimization of the forecasting objective. Compared with the state-of-the-art model, Koopa achieves competitive performance while saving 77.3% training time and 76.0% memory
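
    Two of the abstract's ideas lend themselves to a short conceptual sketch: splitting a series into time-invariant and time-variant components with a Fourier filter, and fitting a linear Koopman-style operator on windows of one component. The toy code below is not the authors' implementation; the cutoff frequency, window length, and plain least-squares fit are illustrative assumptions only.

```python
# Conceptual sketch: Fourier-based disentanglement plus a linear transition operator.
import numpy as np

def fourier_split(x, keep=8):
    """Keep the `keep` lowest-frequency coefficients as the time-invariant part."""
    spec = np.fft.rfft(x)
    low = np.zeros_like(spec)
    low[:keep] = spec[:keep]
    invariant = np.fft.irfft(low, n=len(x))
    return invariant, x - invariant            # (time-invariant, time-variant)

def fit_linear_operator(x, window=16):
    """Least-squares operator K mapping one window of the series to the next."""
    segs = np.stack([x[i:i + window] for i in range(len(x) - 2 * window)])
    nxt = np.stack([x[i + window:i + 2 * window] for i in range(len(x) - 2 * window)])
    K, *_ = np.linalg.lstsq(segs, nxt, rcond=None)
    return K                                   # (window, window) linear portrait

t = np.linspace(0, 20, 512)
series = np.sin(t) + 0.3 * np.sin(5 * t * (1 + 0.05 * t))   # toy non-stationary series
inv, var = fourier_split(series)
K = fit_linear_operator(inv)
forecast = inv[-16:] @ K       # advance the last window of the invariant part one step
```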

    Interpretable and Robust AI in EEG Systems: A Survey

    Full text link
    The close coupling of artificial intelligence (AI) and electroencephalography (EEG) has substantially advanced human-computer interaction (HCI) technologies in the AI era. Unlike in traditional EEG systems, the interpretability and robustness of AI-based EEG systems are becoming particularly crucial. Interpretability clarifies the inner working mechanisms of AI models and can thus gain the trust of users. Robustness reflects the AI's reliability against attacks and perturbations, which is essential for sensitive and fragile EEG signals. The interpretability and robustness of AI in EEG systems have therefore attracted increasing attention, and research on them has made great progress recently. However, there is still no survey covering recent advances in this field. In this paper, we present the first comprehensive survey and summarize the interpretable and robust AI techniques for EEG systems. Specifically, we first propose a taxonomy of interpretability that characterizes it into three types: backpropagation, perturbation, and inherently interpretable methods. We then classify the robustness mechanisms into four classes: noise and artifacts, human variability, data acquisition instability, and adversarial attacks. Finally, we identify several critical and unresolved challenges for interpretable and robust AI in EEG systems and discuss their future directions
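
    As a tiny illustration of the "backpropagation" class in the taxonomy above, the sketch below computes an input-gradient saliency map for an arbitrary EEG classifier. The toy model and tensor shapes are placeholders, not a method from any specific paper surveyed.

```python
# Input-gradient saliency for a toy EEG classifier (placeholder model and shapes).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 256, 2))   # toy EEG classifier
eeg = torch.randn(1, 32, 256, requires_grad=True)             # (batch, channels, time)

logits = model(eeg)
logits[0, logits.argmax()].backward()     # backpropagate the predicted class score
saliency = eeg.grad.abs().squeeze(0)      # |d score / d input|, per channel and sample
print(saliency.shape)                     # torch.Size([32, 256])
```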

    Research on Application of Single Chip Microcomputer in Modern Communication System

    Get PDF
    The application of the single-chip microcomputer (microcontroller) in modern communication systems is studied in depth. First, the main types and characteristics of microcontrollers are described in detail, including microcontrollers classified by microprocessor architecture, memory type, and usage environment. Then, the main application fields of microcontrollers in wireless, wired, and optical communication are discussed, and their practical use in these fields is analyzed. On this basis, the main challenges and problems encountered in modern communication systems are examined, such as the complexity of design and production, power consumption, compatibility, and expandability. Finally, solutions to these challenges and problems are put forward, and the future development trend of the single-chip microcomputer in modern communication systems is discussed