2,126 research outputs found

    Automated Artifact Removal and Detection of Mild Cognitive Impairment from Single-Channel Electroencephalography Signals for Real-Time Implementations on Wearables

    Electroencephalography (EEG) is a technique for recording the asynchronous activation of neuronal firing inside the brain with non-invasive scalp electrodes. The EEG signal is well studied for evaluating cognitive state and for detecting brain diseases such as epilepsy, dementia, coma, and autism spectrum disorder (ASD). In this dissertation, the EEG signal is studied for the early detection of Mild Cognitive Impairment (MCI). MCI is the preliminary stage of dementia that may ultimately lead to Alzheimer's disease (AD) in elderly people. Our goal is to develop a minimalistic MCI detection system that could be integrated into wearable sensors. This contribution has three major aspects: 1) cleaning the EEG signal, 2) detecting MCI, and 3) predicting the severity of MCI using data obtained from a single-channel EEG electrode. Artifacts such as eye-blink activity can corrupt EEG signals. We investigate unsupervised and effective removal of ocular artifacts (OA) from single-channel streaming raw EEG data. Wavelet transform (WT) decomposition was systematically evaluated for its effectiveness at OA removal in a single-channel EEG system. The Discrete Wavelet Transform (DWT) and the Stationary Wavelet Transform (SWT) were studied with four WT basis functions: haar, coif3, sym3, and bior4.4. The performance of the artifact removal algorithm was evaluated by the correlation coefficient (CC), mutual information (MI), signal-to-artifact ratio (SAR), normalized mean square error (NMSE), and time-frequency analysis. It is demonstrated that the WT can be an effective tool for unsupervised OA removal from single-channel EEG data in real-time applications. For MCI detection from the cleaned EEG data, we collected scalp EEG while the subjects were stimulated with five auditory speech signals. We extracted 590 features from the Event-Related Potential (ERP) of the collected EEG signals, covering time- and spectral-domain characteristics of the response. The top 25 features, ranked by the random forest method, were used in classification models to identify subjects with MCI. The robustness of our model was tested using leave-one-out cross-validation while training the classifiers. The best results (leave-one-out cross-validation accuracy 87.9%, sensitivity 84.8%, specificity 95%, and F score 85%) were obtained using the support vector machine (SVM) method with a radial basis function (RBF) kernel (sigma = 10, cost = 102). Similar performance was also observed with logistic regression (LR), further validating the results. Our results suggest that single-channel EEG could provide a robust biomarker for the early detection of MCI. We also developed a single-channel EEG-based MCI severity monitoring algorithm that estimates Montreal Cognitive Assessment (MoCA) scores from the features extracted from EEG. We performed multi-trial and single-trial analyses for the development of the MCI severity monitoring algorithm. We studied Multivariate Regression (MR), Ensemble Regression (ER), Support Vector Regression (SVR), and Ridge Regression (RR) for the multi-trial analysis, and deep neural regression for the single-trial analysis. In the multi-trial case, the best result was obtained with ER. In the single-trial analysis, we constructed a time-frequency image from each trial and fed it to a convolutional neural network (CNN). The performance of the regression models was evaluated by RMSE and residual analysis. We obtained the best accuracy with the deep neural regression method.
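
    The sketch below illustrates the wavelet-based ocular-artifact removal idea summarized in this abstract. It is not the dissertation's implementation: the wavelet choice, decomposition level, and clipping rule are illustrative assumptions for a single-channel segment.

```python
# Minimal sketch: DWT-based suppression of large ocular components in one EEG channel.
# Assumptions: haar wavelet, 6-level decomposition, universal threshold with clipping.
import numpy as np
import pywt

def remove_ocular_artifacts(eeg, wavelet="haar", level=6):
    """Suppress large-amplitude ocular components in a single-channel EEG segment."""
    coeffs = pywt.wavedec(eeg, wavelet, level=level)
    # Robust noise estimate from the finest detail band, then a universal threshold.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(eeg)))
    # Ocular artifacts show up as unusually large coefficients; clip them back.
    cleaned = [np.clip(c, -thresh, thresh) for c in coeffs]
    return pywt.waverec(cleaned, wavelet)[: len(eeg)]

# Example: a 2-second segment at 256 Hz with an alpha rhythm plus a blink-like transient.
fs = 256
t = np.arange(0, 2, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 3 * np.exp(-((t - 1) ** 2) / 0.01)
clean = remove_ocular_artifacts(eeg)
```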

    A novel Auto-ML Framework for Sarcasm Detection

    Many domains present sarcasm or verbal irony in the text of reviews, tweets, comments, and dialog discussions. The purpose of this research is to classify sarcasm across multiple domains using a deep-learning-based AutoML framework. The proposed AutoML framework has five models in its model-search pipeline; these models are combinations of convolutional neural network (CNN), Long Short-Term Memory (LSTM), deep neural network (DNN), and Bidirectional Long Short-Term Memory (BiLSTM) layers, presented as CNN-LSTM-DNN, LSTM-DNN, BiLSTM-DNN, and CNN-BiLSTM-DNN. This work proposes algorithms that contrast polarities between terms and phrases, categorized into implicit and explicit incongruity. The incongruity and pragmatic features, such as punctuation and exclamation marks, are integrated into the AutoML DeepConcat framework models; this integration takes place when the DeepConcat AutoML framework initiates a model-search pipeline over the five models to achieve better performance. Conceptually, DeepConcat means that the model is integrated with generalized features. The pretrained BiLSTM model achieved a better performance of 0.98 F1 when compared with the other five models. Similarly, the AutoML-based BiLSTM-DNN model achieved the best performance of 0.98 F1, which is better than core approaches and the existing state of the art on the Twitter tweet dataset, Amazon reviews, and dialog discussion comments. The proposed AutoML framework compared the performance metrics F1 and AUC and found F1 to be the better metric. Integrating all feature categories achieved better performance than the individual categories of pragmatic and incongruity features. This research also evaluated the dropout-layer hyperparameter and found that tuning it with AutoML-based Bayesian optimization achieved better performance than a fixed percentage such as 10% dropout. The proposed AutoML framework DeepConcat evaluated the best pretrained models, BiLSTM-DNN and CNN-CNN-DNN, to transfer knowledge across domains such as Amazon reviews and dialog discussion comments (text) using the last-layer, full-layer, and our fade-out freezing strategies. In transfer learning, the fade-out strategy outperformed the existing state-of-the-art model BiLSTM-DNN, with performance of 0.98 F1 on tweets, 0.85 F1 on Amazon reviews, and 0.87 F1 on the dialog discussion SCV2-Gen dataset. Further, all strategies across the various domains can be compared for best-model selection.
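
    To make one candidate of the described model-search pipeline concrete, the sketch below builds a CNN-BiLSTM-DNN hybrid for binary sarcasm classification. Vocabulary size, layer widths, and the dropout rate are illustrative assumptions, not the framework's tuned values.

```python
# Minimal sketch of a CNN-BiLSTM-DNN candidate model for sarcasm detection.
import tensorflow as tf

def build_cnn_bilstm_dnn(vocab_size=20000, embed_dim=128):
    return tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, embed_dim),
        tf.keras.layers.Conv1D(64, 5, activation="relu"),      # local n-gram patterns
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),  # sequence context
        tf.keras.layers.Dense(64, activation="relu"),           # DNN head
        tf.keras.layers.Dropout(0.2),                            # rate would be tuned, not fixed
        tf.keras.layers.Dense(1, activation="sigmoid"),          # sarcastic vs. not sarcastic
    ])

model = build_cnn_bilstm_dnn()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```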

    Time-Resolved Method for Spectral Analysis based on Linear Predictive Coding, with Application to EEG Analysis

    The electroencephalogram (EEG) signal is a biological signal used in Brain-Computer Interface (BCI) systems to realise information exchange between the brain and the external environment. It is characterised by a poor signal-to-noise ratio, is time-varying and intermittent, and contains multiple frequency components. This research work has developed a new parameterised time-frequency method, called the Linear Predictive Coding Pole Processing (LPCPP) method, which can be used for identifying and tracking the dominant frequency components of an EEG signal. The LPCPP method further processes Linear Predictive Coding (LPC) poles to produce a series of reduced-order filter transfer functions that estimate the dominant frequencies. It is suited to processing high-noise multi-component signals and, unlike transform-based methods, can directly give the corresponding frequency estimates. Furthermore, a new EEG spectral analysis framework involving the LPCPP method is proposed to describe EEG spectral activity. The EEG signal has traditionally been divided into different frequency bands (i.e. Delta, Theta, Alpha, Beta and Gamma); however, there is no consensus on the definitions of these band boundaries. A series of EEG centre frequencies is proposed in this thesis instead of fixed frequency boundaries, as they are better suited to describing the dominant EEG spectral activity.
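
    The sketch below shows the basic link between LPC poles and dominant frequencies that the LPCPP method builds on: fit an autoregressive model, take the roots of the prediction-error filter, and convert pole angles to frequencies. The model order and the Yule-Walker fit are illustrative assumptions, not the thesis implementation of pole processing.

```python
# Minimal sketch: estimate dominant frequencies of a signal from LPC poles.
import numpy as np

def lpc_coefficients(x, order):
    """Fit an AR model by solving the autocorrelation (Yule-Walker) equations."""
    x = x - np.mean(x)
    r = np.correlate(x, x, mode="full")[len(x) - 1:][: order + 1]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1: order + 1])
    return np.concatenate(([1.0], -a))          # prediction-error filter A(z)

def dominant_frequencies(x, fs, order=10):
    a = lpc_coefficients(x, order)
    poles = np.roots(a)
    poles = poles[np.imag(poles) > 0]           # keep one pole of each conjugate pair
    order_by_radius = np.argsort(-np.abs(poles))  # poles near the unit circle dominate
    freqs = np.angle(poles) * fs / (2.0 * np.pi)  # pole angle -> frequency in Hz
    return freqs[order_by_radius]

# Example: a noisy 10 Hz alpha rhythm sampled at 128 Hz.
fs = 128
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(len(t))
print(dominant_frequencies(x, fs))
```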

    Hardware Trojan Detection Using Machine Learning

    A cyber-physical system's security depends on both its software and its underlying hardware. Securing hardware is difficult today because of the globalization of the integrated-circuit manufacturing process. The main attack is to insert a “backdoor” that maliciously alters the original circuit's behaviour. Such a malicious insertion is called a hardware trojan. In this thesis, a Random Forest model is proposed for hardware trojan detection, and this research focuses on improving the detection accuracy of the Random Forest model. The detection technique uses a random forest machine learning model trained on power traces of the circuit behaviour. The data required for training was obtained from an extensive database by simulating the circuit behaviour with various input vectors. The machine learning model was then compared with state-of-the-art models in terms of accuracy in detecting malicious hardware. Our results show that the Random Forest classifier achieves an accuracy of 99.80 percent with a false positive rate (FPR) of 0.009 and a false negative rate (FNR) of 0.038 when the model is built to detect hardware trojans. Furthermore, our research shows that the trained model requires less training time and can be applied to large and complex datasets.
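
    As a minimal illustration of the power-trace classification described above, the sketch below trains scikit-learn's RandomForestClassifier and reports FPR and FNR. The synthetic data and feature layout are illustrative assumptions; the thesis trains on simulated traces from a circuit database.

```python
# Minimal sketch: random forest over power traces, with FPR/FNR from the confusion matrix.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Hypothetical dataset: each row is one power trace, label 1 = trojan-infected circuit.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 256))
y = rng.integers(0, 2, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print("FPR:", fp / (fp + tn), "FNR:", fn / (fn + tp))
```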

    Artificial Intelligence for Multimedia Signal Processing

    Artificial intelligence technologies are being actively applied to broadcasting and multimedia processing. A great deal of research has been conducted in a wide variety of fields, such as content creation, transmission, and security, and over the past two to three years these efforts have aimed at improving image, video, speech, and other data-compression efficiency in areas related to MPEG media processing technology. Additionally, technologies for media creation, processing, editing, and scenario generation are very important areas of research in multimedia processing and engineering. This book contains a collection of topics spanning advanced computational intelligence algorithms and technologies for emerging multimedia signal processing, including computer vision, speech/sound/text processing, and content analysis/information mining.

    Optimal sparsity allows reliable system-aware restoration of fluorescence microscopy images

    Includes: article, supplementary material, videos and software. Fluorescence microscopy is one of the most indispensable and informative driving forces for biological research, but the extent of observable biological phenomena is essentially determined by the content and quality of the acquired images. To address the different noise sources that can degrade these images, we introduce an algorithm for multiscale image restoration through optimally sparse representation (MIRO). MIRO is a deterministic framework that models the acquisition process and uses pixelwise noise correction to improve image quality. Our study demonstrates that this approach yields a remarkable restoration of the fluorescence signal for a wide range of microscopy systems, regardless of the detector used (e.g., electron-multiplying charge-coupled device, scientific complementary metal-oxide semiconductor, or photomultiplier tube). MIRO improves current imaging capabilities, enabling fast, low-light optical microscopy, accurate image analysis, and robust machine intelligence when integrated with deep neural networks. This expands the range of biological knowledge that can be obtained from fluorescence microscopy. We acknowledge the support of the National Institutes of Health grants R35GM124846 (to S.J.) and R01AA028527 (to C.X.), the National Science Foundation grants BIO2145235 and EFMA1830941 (to S.J.), and the Marvin H. and Nita S. Floyd Research Fund (to S.J.). This research project was supported, in part, by the Emory University Integrated Cellular Imaging Microscopy Core and by PHS Grant UL1TR000454 from the Clinical and Translational Science Award Program, National Institutes of Health, and National Center for Advancing Translational Sciences.
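
    MIRO itself is released as software with the article; the sketch below is only a generic illustration of the sparsity principle the abstract invokes, namely soft-thresholding 2-D wavelet coefficients of a noisy image, and is not the authors' system-aware restoration pipeline. Wavelet, level, and threshold are illustrative assumptions.

```python
# Minimal sketch: generic sparsity-based denoising of a 2-D image via wavelet soft thresholding.
import numpy as np
import pywt

def sparse_denoise(image, wavelet="db4", level=3):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Noise level estimated from the diagonal detail band at the finest scale.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(image.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(band, thresh, mode="soft") for band in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)[: image.shape[0], : image.shape[1]]

# Example: restore a synthetic Poisson-noisy frame.
noisy = np.random.poisson(50, size=(256, 256)).astype(float)
restored = sparse_denoise(noisy)
```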

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as the Final Publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predicting and analysing natural and complex systems in science and engineering. As their level of abstraction rises to allow a better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. High Performance Computing, on the other hand, typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction between High Performance Computing and Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    Digitizing archetypal human experience through physiological signals

    The problem of capturing human experience is relevant in many application domains. In fact, the process of describing and sharing individual experience lies at the heart of human culture. Throughout the course of our lives we learn a great deal of information about the world from other people's experience. Besides the ability to share utilitarian experience, such as whether a particular plant is poisonous, humans have developed a sophisticated competency of social signaling that enables us to express and decode emotional experience. The natural way of sharing emotional experiences requires those who share to be co-present during this event. However, people have overcome the limitation of physical presence by creating a symbolic system of representations. This advancement came at the price of losing some of the multidimensional aspects of primary, bodily experience during its projection into the symbolic form. Recent research in the field of affective computing has addressed the question of digitization and transmission of emotional experience through monitoring and interpretation of physiological signals. Although the outcomes of this research represent a great step forward in developing technology that supports the sharing of emotional experiences, they do not seem to help in preserving the original phenomenological experience during the aforementioned projection. This circumstance is explained by the fact that affective computing has focused on emotional experiences which can be consciously evaluated and described by the individuals themselves. Therefore, generally speaking, applying an affective computing technique to capture the emotions of an individual is not a deeper or more precise way to project her experience into the symbolic form than asking this person to write down a description of her emotions on a piece of paper. One can say that so far research in affective computing has aimed at delivering technology that could automate the projection, but it has not considered the problem of improving the projection in order to preserve more of the multidimensional aspects of human experience. This dissertation examines whether human experience that individuals are not able to consciously transpose into the symbolic representation can still be captured using the techniques of affective computing. First, a theoretical framework was formulated for the description of human experience that is not accessible to conscious awareness. This framework was based on the work of Carl Jung, who introduced a model of the psyche that includes three levels: consciousness, the personal unconscious and the collective unconscious. Consciousness is the external layer of the psyche that consists of those thoughts and emotions which are available for one's conscious recollection. The personal unconscious is a repository for all of an individual's feelings, memories, knowledge and thoughts that are not conscious at a given moment of time. The collective unconscious is a repository of universal modes and behaviors that are similar in all individuals. According to Jung, the collective unconscious is populated with archetypes: prototypical categories of objects, people and situations that have existed across evolutionary time and in different cultures.
    Having defined our theoretical framework, we conducted an experiment in which subjects were shown visual and auditory stimuli from standardized databases for eliciting conscious emotions. In addition to the stimuli for conscious emotions, the subjects were exposed to stimuli representing the archetype of the self. The subjects' cardiovascular signals were recorded during the presentation of the stimuli. The experimental results indicated that the participants' heart-rate responses were unique for each category of stimuli, including the archetypal one. These findings gave impetus to another study in which a broader spectrum of archetypal experiences was examined. In our second study, we switched from visual and auditory stimuli to audiovisual stimuli, because videos were expected to be more efficient than still images or sounds in eliciting conscious emotions and archetypal experiences. The number of archetypes was increased, and overall the subjects were stimulated to feel eight different archetypal experiences. We also prepared stimuli for conscious emotions. In this experiment the physiological signals included cardiovascular, electrodermal and respiratory activity, as well as skin temperature. Statistical analysis suggested that archetypal experiences could be differentiated on the basis of physiological activations. In addition, several prediction models were built on the collected physiological data. These models demonstrated the ability to classify the archetypes with an accuracy considerably higher than chance level. As the results of the second study suggested a positive relationship between archetypal experiences and physiological activations, it seemed reasonable to conduct another study to confirm the generalizability of our findings. Before starting a new experiment, however, we decided to build a tool that could facilitate the collection of physiological data and the recognition of archetypal experiences as well as conscious emotions. Such a tool would help us and other researchers conduct experiments on human experience. Our tool runs on tablets and supports the collection and analysis of data from physiological sensors. The last study was conducted with a methodology similar to the second experiment, with several modifications aimed at obtaining more robust results. The effort of conducting this study was considerably reduced by using the developed tool. During this experiment we measured only the subjects' cardiovascular and electrodermal activity, because our previous experiments had shown that these two signals contributed significantly to the classification of conscious emotions and archetypal experiences. Statistical analysis indicated a significant relationship between the archetypes portrayed in the videos and the subjects' physiological responses. Furthermore, using data mining methods, we built prediction models that were able to recognize the archetypal experiences with an accuracy lower than in the second study, but still considerably above chance level.
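
    The sketch below shows the general shape of such a prediction model: classifying experience categories from a handful of cardiovascular and electrodermal features with a standard scikit-learn pipeline. The feature names, the eight-class setup, and the synthetic data are illustrative assumptions rather than the dissertation's models.

```python
# Minimal sketch: classify archetypal-experience categories from physiological features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical features per trial: mean heart rate, heart-rate variability,
# skin-conductance level, and number of skin-conductance responses.
rng = np.random.default_rng(1)
X = rng.normal(size=(160, 4))
y = rng.integers(0, 8, size=160)   # eight archetypal-experience categories

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("Cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```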
