    Implementation and evaluation of different time- and frequency-domain feature extraction methods for two-class motor imagery BCI applications: a performance comparison between GPU and CPU

    The OpenCL platform is widely used in high-performance computing on multicore CPUs, GPUs, and other accelerators [1]; it embodies the heterogeneous computing concept and thereby accelerates execution. Owing to these advantages, parallel computing has been applied to brain-computer interface (BCI) applications, especially to speed up signal processing pipelines such as feature selection [2]. In this study, we used OpenCL to implement several feature extraction methods on an IEEE open-access dataset [3] that provides two-class motor imagery EEG recordings. Different feature extraction methods, including template matching, statistical moments, selective bandpower, and fast Fourier transform power spectrum, were selected to evaluate their computational performance on both CPU and GPU using OpenCL. The dataset was used to compare the proposed feature extraction approaches in terms of accuracy and computation time. Processing followed a standard signal processing pipeline comprising pre-processing for artifact rejection, feature extraction, and classification. Preliminary results show that running the feature extraction methods on the GPU yields a speed-up of at least five times over the CPU. In addition, tuning parallel computing parameters such as the number of work-items or work-groups can reduce computing time further. The complexity of the proposed algorithm can be assessed through the heterogeneous computing concept, and fine-tuning the parallel computing parameters together with system optimization could increase performance further.
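As a minimal sketch of two of the feature extractors named above (statistical moments and FFT-based bandpower), the following NumPy code builds a small feature vector from one synthetic single-channel EEG epoch. The sampling rate, band limits, and test signal are illustrative assumptions; the study's OpenCL kernels, dataset, and template-matching method are not reproduced here.

```python
import numpy as np

def bandpower(signal, fs, band):
    """Mean power of `signal` within a frequency band (Hz), from the FFT power spectrum."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

def extract_features(epoch, fs):
    """Feature vector for one EEG epoch: statistical moments plus mu/beta bandpower."""
    return np.array([
        epoch.mean(),                    # first statistical moment
        epoch.var(),                     # second statistical moment
        bandpower(epoch, fs, (8, 12)),   # mu band, typical for motor imagery
        bandpower(epoch, fs, (13, 30)),  # beta band
    ])

fs = 250                             # assumed sampling rate (Hz)
t = np.arange(fs) / fs               # one second of samples
epoch = np.sin(2 * np.pi * 10 * t)   # synthetic 10 Hz "mu" oscillation
feats = extract_features(epoch, fs)
```

For the synthetic 10 Hz epoch, the mu-band feature dominates the beta-band one, which is the kind of contrast a two-class motor imagery classifier would exploit.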

    Energy-aware Load Balancing of Parallel Evolutionary Algorithms with Heavy Fitness Functions in Heterogeneous CPU-GPU Architectures

    Given the availability of mechanisms such as Dynamic Voltage and Frequency Scaling (DVFS) and heterogeneous architectures that include processors with different power consumption profiles, it is possible to devise scheduling algorithms aware of both runtime and energy consumption in parallel programs. In this paper, we propose and evaluate a multi-objective (more specifically, a bi-objective) approach to distribute the workload among the processing cores of a given heterogeneous parallel CPU-GPU architecture. The aim of this distribution may be either to save energy without increasing the running time or to reach a trade-off between time and energy consumption. The parallel programs considered here are master-worker evolutionary algorithms in which the evaluation of the fitness function for the individuals in the population accounts for most of the computing time. As many useful bioinformatics and data mining applications exhibit this kind of parallel profile, the proposed energy-aware approach to workload scheduling could be applied frequently. Spanish Ministerio de Economía y Competitividad under grant TIN2015-67020-P and ERDF funds.
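The bi-objective time/energy workload split described above can be sketched as follows: enumerate how many fitness evaluations to send to the GPU versus the CPU, score each split on makespan and energy, and keep the non-dominated ones. The per-evaluation times and power figures are invented placeholders, not measurements from the paper; a real scheduler would also account for DVFS states and profiled costs.

```python
# Hypothetical device model: the times and powers below are illustrative
# assumptions, not values from the paper.
POP = 64                     # individuals to evaluate per generation
T_CPU, T_GPU = 0.40, 0.05    # seconds per fitness evaluation on each device (assumed)
P_CPU, P_GPU = 65.0, 180.0   # watts drawn while busy (assumed)

def cost(n_gpu):
    """Runtime (makespan) and energy when n_gpu individuals go to the GPU."""
    n_cpu = POP - n_gpu
    t_cpu, t_gpu = n_cpu * T_CPU, n_gpu * T_GPU
    runtime = max(t_cpu, t_gpu)              # the two devices work concurrently
    energy = t_cpu * P_CPU + t_gpu * P_GPU   # busy-time energy only
    return runtime, energy

points = [(n, *cost(n)) for n in range(POP + 1)]
# Keep the non-dominated (Pareto-optimal) splits in the (runtime, energy) plane.
pareto = [p for p in points
          if not any(q[1] <= p[1] and q[2] <= p[2] and q[1:] != p[1:] for q in points)]
```

The resulting Pareto set exposes exactly the choice the paper discusses: some splits minimize runtime, others trade a slightly longer run for lower energy.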

    Deep learning for EEG-based Motor Imagery classification: Accuracy-cost trade-off

    Electroencephalography (EEG) datasets are often small and high dimensional, owing to cumbersome recording processes. In these conditions, powerful machine learning techniques are essential to deal with the large amount of information and overcome the curse of dimensionality. Artificial Neural Networks (ANNs) have achieved promising performance in EEG-based Brain-Computer Interface (BCI) applications, but they involve computationally intensive training algorithms and hyperparameter optimization methods. Thus, an awareness of the quality-cost trade-off, although usually overlooked, is highly beneficial. In this paper, we apply a hyperparameter optimization procedure based on Genetic Algorithms to Convolutional Neural Networks (CNNs), Feed-Forward Neural Networks (FFNNs), and Recurrent Neural Networks (RNNs), all of them purposely shallow. We compare their relative quality and energy-time cost, but we also analyze the variability in the structural complexity of networks of the same type with similar accuracies. The experimental results show that the optimization procedure improves accuracy in all models, and that CNN models with only one hidden convolutional layer can equal or slightly outperform a 6-layer Deep Belief Network. FFNN and RNN were not able to reach the same quality, although the cost was significantly lower. The results also highlight the fact that size within the same type of network is not necessarily correlated with accuracy, as smaller models can and do match, or even surpass, bigger ones in performance. In this regard, overfitting is likely a contributing factor since deep learning approaches struggle with limited training examples. Spanish Ministerio de Ciencia, Innovación y Universidades, grants PGC2018-098813-B-C31, PGC2018-098813-B-C32, PSI201565848-
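A minimal sketch of GA-based hyperparameter optimization in the spirit of the paper: genomes encode network hyperparameters, and selection, uniform crossover, and mutation evolve the population. The search space and the surrogate fitness function are hypothetical stand-ins; in the paper, fitness would be the validated accuracy of a trained network, which is far too costly for an illustration.

```python
import random

# Hypothetical search space for a shallow CNN (assumed, not from the paper).
SPACE = {
    "filters":  [8, 16, 32, 64],
    "kernel":   [3, 5, 7],
    "log10_lr": [-4, -3, -2],
}

def random_genome():
    return {k: random.choice(v) for k, v in SPACE.items()}

def fitness(g):
    # Surrogate objective peaking at filters=32, kernel=5, lr=1e-3 (purely illustrative).
    return -((g["filters"] - 32) ** 2 / 1024
             + (g["kernel"] - 5) ** 2
             + (g["log10_lr"] + 3) ** 2)

def evolve(generations=30, pop_size=20, seed=0):
    random.seed(seed)
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = {k: random.choice([a[k], b[k]]) for k in SPACE}  # uniform crossover
            if random.random() < 0.2:                 # point mutation
                k = random.choice(list(SPACE))
                child[k] = random.choice(SPACE[k])
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because parents survive into the next generation, the best fitness is monotonically non-decreasing, which keeps even this tiny GA well behaved.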

    Multi-objective Evolutionary Algorithm with Multilevel Parallelism for EEG Classification: Energy-Time Analysis on Heterogeneous Clusters

    Access via the ZENODO platform: https://zenodo.org/record/7181229/#.Y71LhHbMKUk
    Today's heterogeneous architectures interconnect nodes with multiple microprocessors and multicore accelerators, which allows different strategies to accelerate applications and optimize their power consumption. In this work, a multilevel parallel procedure is proposed that takes advantage of all the nodes of a heterogeneous CPU-GPU cluster. Three different versions have been implemented and analyzed in terms of execution time and energy consumption. Although the work considers an evolutionary master-worker algorithm for feature selection and EEG classification, the conclusions of the experimental analysis can be extrapolated to other applications in bioinformatics and data mining with the same computational profile as the problem considered here. The proposed parallel approach reduces the execution time by a factor of up to 83 while consuming only 4.9% of the energy of the sequential procedure. Research partially funded by the Spanish Ministerio de Ciencia, Innovación y Universidades (MICIU) together with the European Regional Development Fund (ERDF), project PGC2018-098813-B-C31.
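The master-worker pattern underlying the procedure above, in which costly fitness evaluations are farmed out to workers while the master runs the evolutionary loop, can be sketched on a single node as follows. The fitness function is a stand-in, and the paper's multilevel, cluster-wide parallelism is not reproduced; a thread pool is used here purely for simplicity, whereas a CPU-bound implementation would use processes or MPI ranks.

```python
from concurrent.futures import ThreadPoolExecutor
import math

def fitness(individual):
    """Stand-in for the heavy EEG-classification fitness used in the paper."""
    return sum(math.sin(x) ** 2 for x in individual)

def evaluate_population(population, workers=4):
    """Master-worker step: the master scatters individuals to a worker pool
    and gathers the fitness values back in population order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fitness, population))

population = [[i, i + 1, i + 2] for i in range(8)]
scores = evaluate_population(population)
```

Because `pool.map` preserves input order, the gathered scores align with the population, so selection in the master loop needs no extra bookkeeping.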

    Distributed Parallel Cooperative Coevolutionary Multi-Objective Large-Scale Immune Algorithm for Deployment of Wireless Sensor Networks

    Using immune algorithms is generally a time-intensive process, especially for problems with a large number of variables. In this paper, we propose a distributed parallel cooperative coevolutionary multi-objective large-scale immune algorithm that is implemented using the Message Passing Interface (MPI). The proposed algorithm is composed of three layers: objective, group, and individual layers. First, for each objective in the multi-objective problem to be addressed, a subpopulation is used for optimization, and an archive population is used to optimize all the objectives. Second, the large number of variables is divided into several groups. Finally, individual evaluations are allocated across many processing cores and performed in parallel, which greatly reduces the computation time. The proposed algorithm integrates the idea of immune algorithms, which tend to explore sparse areas in the objective space, and uses simulated binary crossover for mutation. The proposed algorithm is employed to optimize the 3D terrain deployment of a wireless sensor network, which is a self-organizing network. In experiments, compared with several state-of-the-art multi-objective evolutionary algorithms, namely Cooperative Coevolutionary Generalized Differential Evolution 3, Cooperative Multi-objective Differential Evolution, and the Nondominated Sorting Genetic Algorithm III, the proposed algorithm addresses the deployment optimization problem efficiently and effectively.
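The variable-grouping idea behind cooperative coevolution, in which each group of variables is optimized in turn against a shared context vector holding the current values of the others, can be sketched serially as follows. The toy objective, round-robin grouping, and greedy acceptance are illustrative assumptions; the paper's MPI distribution, immune operators, and archive population are omitted.

```python
import random

def sphere(x):
    """Toy separable objective to minimize; the paper's objective is
    wireless-sensor-network deployment quality, not reproduced here."""
    return sum(v * v for v in x)

def cc_optimize(dim=12, groups=3, iters=200, seed=1):
    """Cooperative coevolution sketch: optimize one variable group at a time
    while the context vector supplies values for all other variables."""
    random.seed(seed)
    context = [random.uniform(-5, 5) for _ in range(dim)]
    blocks = [list(range(g, dim, groups)) for g in range(groups)]  # round-robin grouping
    for _ in range(iters):
        for block in blocks:
            trial = context[:]
            for i in block:                       # perturb only this group's variables
                trial[i] += random.gauss(0, 0.5)
            if sphere(trial) < sphere(context):   # greedy acceptance
                context = trial
    return context

best = cc_optimize()
```

Even this greedy serial version shows why the decomposition parallelizes well: each group's inner optimization touches only its own variables and reads the rest from the shared context.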

    Computational approaches to Explainable Artificial Intelligence: Advances in theory, applications and trends

    Deep Learning (DL), a groundbreaking branch of Machine Learning (ML), has emerged as a driving force in both theoretical and applied Artificial Intelligence (AI). DL algorithms, rooted in complex and non-linear artificial neural systems, excel at extracting high-level features from data. DL has demonstrated human-level performance in real-world tasks, including clinical diagnostics, and has unlocked solutions to previously intractable problems in virtual agent design, robotics, genomics, neuroimaging, computer vision, and industrial automation. In this paper, the most relevant advances from the last few years in Artificial Intelligence (AI) and several applications to neuroscience, neuroimaging, computer vision, and robotics are presented, reviewed, and discussed. In this way, we summarize the state of the art in AI methods, models, and applications within a collection of works presented at the 9th International Conference on the Interplay between Natural and Artificial Computation (IWINAC). The works presented in this paper are excellent examples of new scientific discoveries made in laboratories that have successfully transitioned to real-life applications.
