
    Classification of people who suffer schizophrenia and healthy people by EEG signals using Deep Learning

    More than 21 million people worldwide suffer from schizophrenia. This serious mental disorder exposes those affected to stigmatisation, discrimination and violations of their human rights. Many studies on the classification and diagnosis of mental illnesses use electroencephalogram (EEG) signals, since these reflect how the brain functions and how such illnesses affect it. Given the information provided by EEG signals and the performance demonstrated by Deep Learning algorithms, the present work proposes a model for classifying people with schizophrenia and healthy people from EEG signals using Deep Learning methods. Taking into account the high-dimensional, multichannel nature of an EEG, we apply the Pearson correlation coefficient (PCC) to represent the relationships between channels; in this way, instead of using the large amount of raw data an EEG provides, we use a much smaller matrix as the input to a convolutional neural network (CNN). Finally, the results show that the proposed EEG-based classification model achieved an accuracy, specificity and sensitivity of 90%, 90% and 90%, respectively.
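
    As a rough illustration of the channel-correlation idea, the sketch below (ours, not the authors' code) builds a Pearson correlation matrix from a hypothetical EEG epoch and feeds it to a small CNN; the channel count, epoch length and network layout are illustrative assumptions.

        # Minimal sketch of the PCC-based input reduction described above.
        # Shapes and network layout are illustrative assumptions, not the
        # authors' exact architecture.
        import numpy as np
        import torch
        import torch.nn as nn

        n_channels, n_samples = 19, 2500          # hypothetical EEG epoch
        epoch = np.random.randn(n_channels, n_samples)

        # Pearson correlation between every pair of channels: a compact
        # (n_channels x n_channels) matrix replaces the raw multichannel signal.
        pcc = np.corrcoef(epoch)                  # values in [-1, 1]

        # A small CNN that takes the correlation matrix as a one-channel "image".
        cnn = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(8 * (n_channels // 2) ** 2, 2),  # schizophrenia vs healthy
        )

        x = torch.tensor(pcc, dtype=torch.float32).reshape(1, 1, n_channels, n_channels)
        logits = cnn(x)                           # unnormalised class scores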

    Deep learning analysis of mobile physiological, environmental and location sensor data for emotion detection

    The detection and monitoring of emotions are important in various applications, e.g. to enable naturalistic and personalised human-robot interaction. Emotion detection often requires modelling various data inputs from multiple modalities, including physiological signals (e.g. EEG and GSR), environmental data (e.g. audio and weather), videos (e.g. for capturing facial expressions and gestures) and, more recently, motion and location data. Many traditional machine learning algorithms have been used to capture the diversity of multimodal data at the sensor and feature levels for human emotion classification. While the feature engineering processes often embedded in these algorithms are beneficial for emotion modelling, they carry some critical limitations that may hinder the development of reliable and accurate models. In this work, we adopt a deep learning approach to emotion classification through an iterative process of adding and removing large numbers of sensor signals from different modalities. Our dataset was collected in a real-world study from smartphones and wearable devices. It merges the local interactions of three sensor modalities (on-body, environmental and location) into a global model that represents signal dynamics along with the temporal relationships within each modality. Our approach employs a series of learning algorithms, including a hybrid Convolutional Neural Network and Long Short-Term Memory Recurrent Neural Network (CNN-LSTM) applied to the raw sensor data, eliminating the need for manual feature extraction and engineering. The results show that deep learning approaches are effective for human emotion classification when a large number of sensor inputs is used (average accuracy 95% and F-measure 95%), and that the hybrid models outperform a traditional fully connected deep neural network (average accuracy 73% and F-measure 73%). Furthermore, the hybrid models outperform previously developed ensemble algorithms that rely on feature engineering to train the model (average accuracy 83% and F-measure 82%).
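
    As a rough illustration of the hybrid model, the following sketch shows one plausible CNN-LSTM over raw multichannel sensor windows; the sensor count, window length, layer sizes and class count are assumptions, not the authors' architecture.

        # Minimal sketch of a hybrid CNN-LSTM over raw multimodal sensor
        # windows, in the spirit of the abstract; all sizes are illustrative.
        import torch
        import torch.nn as nn

        n_sensors, window = 12, 128           # hypothetical: 12 streams, 128 steps

        class CNNLSTM(nn.Module):
            def __init__(self, n_classes=4):
                super().__init__()
                # 1-D convolutions learn local patterns across the raw signals,
                # replacing hand-crafted feature extraction.
                self.conv = nn.Sequential(
                    nn.Conv1d(n_sensors, 32, kernel_size=5, padding=2), nn.ReLU(),
                    nn.MaxPool1d(2),
                )
                # The LSTM models temporal relationships across the window.
                self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
                self.head = nn.Linear(64, n_classes)

            def forward(self, x):             # x: (batch, n_sensors, window)
                h = self.conv(x)              # (batch, 32, window // 2)
                h = h.transpose(1, 2)         # (batch, time, features) for the LSTM
                _, (hn, _) = self.lstm(h)
                return self.head(hn[-1])      # one emotion logit vector per window

        logits = CNNLSTM()(torch.randn(8, n_sensors, window))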

    EEG system design for VR viewers and emotions recognition

    BACKGROUND: Taking advantage of virtual reality is now within everyone's reach, which has led large commercial companies and research centres to re-evaluate their methodologies. In this context there is growing interest in using Brain Computer Interfaces (BCIs) to interpret the personal experience induced by virtual reality viewers. OBJECTIVE: The present work describes the design of an electroencephalographic (EEG) system that can easily be integrated with the virtual reality viewers currently on the market. Such a system has several possible applications, but our intention, inspired by Neuromarketing, is to analyse the possibility of recognising the mental states of like and dislike. METHODS: The design process involved two phases: the first concerned the development of the hardware system and the analysis of techniques for obtaining the cleanest possible signals; the second concerned the analysis of the acquired signals, using basic statistical techniques, to determine whether characteristics exist that distinguish the two mental states of like and dislike. RESULTS: Our analysis shows that differences between the like and dislike states of mind can be found by analysing the power in the frequency bands conventionally used to classify brain activity (Theta, Alpha, Beta and Gamma): in the like case the power is slightly higher than in the dislike case. Moreover, using logistic regression, we found that the EEG channels F7, F8 and Fp1 are the most discriminative components for detection, together with the frequencies in the high-Beta band (20-30 Hz).
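
    The band-power comparison could look roughly like the sketch below, which computes Welch power spectra per channel, averages them within the four bands, and fits a logistic regression to separate like from dislike; the sampling rate, channel set and data are illustrative assumptions.

        # Minimal sketch of band-power features plus logistic regression.
        import numpy as np
        from scipy.signal import welch
        from sklearn.linear_model import LogisticRegression

        fs = 250                               # hypothetical sampling rate (Hz)
        bands = {"theta": (4, 8), "alpha": (8, 13),
                 "beta": (13, 30), "gamma": (30, 45)}

        def band_powers(epoch):
            """epoch: (n_channels, n_samples) -> flat vector of band powers."""
            freqs, psd = welch(epoch, fs=fs, nperseg=fs)
            feats = []
            for lo, hi in bands.values():
                mask = (freqs >= lo) & (freqs < hi)
                feats.append(psd[:, mask].mean(axis=1))  # mean power per channel
            return np.concatenate(feats)

        # Toy data: 40 epochs of 3 channels (e.g. F7, F8, Fp1), labelled like/dislike.
        X = np.stack([band_powers(np.random.randn(3, 2 * fs)) for _ in range(40)])
        y = np.random.randint(0, 2, size=40)

        clf = LogisticRegression(max_iter=1000).fit(X, y)
        # The magnitude of clf.coef_ indicates which channel/band features
        # contribute most to the like-vs-dislike decision.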

    Features of mobile apps for people with autism in a post covid-19 scenario: current status and recommendations for apps using AI

    The new ‘normal’ defined during the COVID-19 pandemic has forced us to re-assess how people with special needs, such as those with Autism Spectrum Disorder (ASD), thrive in these unprecedented conditions. These changing and challenging conditions have prompted us to revisit the use of telehealth services to improve the quality of life of people with ASD. This study aims to identify mobile applications that suit the needs of such individuals. The work focuses on identifying the features of a number of highly rated mobile applications (apps) designed to assist people with ASD, specifically those features that use Artificial Intelligence (AI) technologies. In this study, 250 mobile apps were retrieved using keywords such as autism, autism AI, and autistic. Of these 250 apps, 46 remained after filtering out irrelevant apps according to defined elimination criteria covering the apps' intended users, such as people with ASD, medical staff, and non-medically trained people who interact with people with ASD. To review common functionalities, 25 apps were downloaded and deconstructed, analysing features such as eye tracking, facial expression analysis, the use of 3D cartoons, haptic feedback, engaging interfaces, text-to-speech, Applied Behaviour Analysis therapy, and Augmentative and Alternative Communication techniques, among others. As a result, software developers and healthcare professionals can consider the identified features when designing future support tools for autistic people. This study hypothesises that, by studying these current features, recommendations can be made for how existing applications for people with ASD could be enhanced using AI for (1) progress tracking, (2) personalised content delivery, (3) automated reasoning, (4) image recognition, and (5) Natural Language Processing (NLP). This paper follows the PRISMA methodology, a set of recommendations for reporting systematic reviews and meta-analyses.

    Computer Game Innovation

    Faculty of Technical Physics, Information Technology and Applied Mathematics, Institute of Information Technology. The "Computer Game Innovations" series is an international forum designed to enable the exchange of knowledge and expertise in the field of video game development. Encompassing both academic research and industrial needs, the series aims at advancing innovative industry-academia collaboration. The monograph provides a unique set of articles presenting original research conducted in leading academic centres that specialise in video games education. The goal of the publication is, among others, to enhance networking opportunities for industry and university representatives seeking to form R&D partnerships. This publication covers the key focus areas specified in the GAMEINN sectoral programme supported by the National Centre for Research and Development.

    Applications of realtime fMRI for non-invasive brain computer interface-decoding and neurofeedback

    Non-invasive brain-computer interfaces (BCIs) seek to enable or restore brain function by using neuroimaging, e.g. functional magnetic resonance imaging (fMRI), to engage brain activations without the need for explicit behavioural output or surgical implants. Brain activations are converted into output signals for use in communication interfaces or motor prosthetics, or to directly shape brain function via a feedback loop. The aim of this thesis was to develop cognitive BCIs using realtime fMRI (rt-fMRI), with the potential for use as a communication interface or for initiating the neural plasticity that facilitates neurorehabilitation. Rt-fMRI enables brain activation to be manipulated directly to produce changes in function, such as perception. Univariate and multivariate classification approaches were used to decode brain activations produced by the deployment of covert spatial attention to simple visual stimuli. Primary and higher-order visual areas were examined, as well as potential control regions. The classification platform was then extended to real-world visual stimuli, exploiting category-specific visual areas and demonstrating real-world applicability as a communications interface. Online univariate classification of spatial attention was successfully achieved, with individual classification accuracies for 4-quadrant spatial attention reaching 70%. Further, a novel implementation of m-sequences enabled the timing of stimulus presentation to be used to enhance signal characterisation. An established rt-fMRI analysis loop was then used for neurofeedback-led manipulation of category-specific visual brain regions, modulating their functioning and, as a result, biasing visual perception during binocular rivalry. These changes were linked with functional and effective connectivity changes in the trained regions, as well as in a putative top-down control region. The work presented provides proof-of-principle for non-invasive BCIs using rt-fMRI, with the potential for translation into the clinical environment. Decoding and neurofeedback applied to non-invasive and implantable BCIs form an evolving continuum of options for enabling and restoring brain function.
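
    A minimal stand-in for the multivariate decoding step might look like the following, assuming trial-wise voxel activation patterns and a linear SVM with cross-validation; the shapes, classifier choice and region of interest are assumptions, not the thesis's exact pipeline.

        # Minimal sketch of multivariate decoding of 4-quadrant covert spatial
        # attention from fMRI activation patterns; all data here is synthetic.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import LinearSVC

        n_trials, n_voxels = 120, 500              # hypothetical visual-ROI voxels
        X = np.random.randn(n_trials, n_voxels)    # one activation pattern per trial
        y = np.random.randint(0, 4, size=n_trials) # attended quadrant (0-3)

        decoder = make_pipeline(StandardScaler(), LinearSVC(max_iter=5000))
        scores = cross_val_score(decoder, X, y, cv=5)
        print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.25)")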

    7th International Conference on Higher Education Advances (HEAd'21)

    Information and communication technologies, together with new teaching paradigms, are reshaping the learning environment. The International Conference on Higher Education Advances (HEAd) aims to become a forum for researchers and practitioners to exchange ideas, experiences, opinions and research results relating to the preparation of students and the organisation of educational systems. Doménech I De Soria, J.; Merello Giménez, P.; Poza Plaza, EDL. (2021). 7th International Conference on Higher Education Advances (HEAd'21). Editorial Universitat Politècnica de València. https://doi.org/10.4995/HEAD21.2021.13621