
    Brain-Computer Interfaces for Artistic Expression

    Artists have been using BCIs for artistic expression since the 1960s. Their interest and creativity are now increasing because of the availability of affordable BCI devices and software that does not require them to invest extensive time in getting a BCI to work or in tuning it to their application. Designers of artistic BCIs are often ahead of more traditional BCI researchers in their ideas on using BCIs in multimodal and multiparty contexts, where multiple users are involved and where robustness and efficiency are not the main concerns. The aim of this workshop is to survey current research activities in BCIs for artistic expression and to identify research areas that are of interest to BCI and HCI researchers as well as to artists and designers of BCI applications.

    Mapping Brain Signals to Music via Executable Graphs

    (Abstract to follow)

    Affective brain–computer music interfacing

    We aim to develop and evaluate an affective brain–computer music interface (aBCMI) for modulating the affective states of its users. Approach. An aBCMI is constructed to detect a user's current affective state and attempt to modulate it in order to achieve specific objectives (for example, making the user calmer or happier) by playing music that is generated according to a specific affective target by an algorithmic music composition system and a case-based reasoning system. The system is trained and tested in a longitudinal study on a population of eight healthy participants, with each participant returning for multiple sessions. Main results. The final online aBCMI is able to detect its users' current affective states with classification accuracies of up to 65% (3 class, p < 0.01) and to modulate its users' affective states significantly above chance level (p < 0.05). Significance. Our system represents one of the first demonstrations of an online aBCMI that is able to accurately detect and respond to its users' affective states. Possible applications include music therapy and entertainment.
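    The closed loop described above — classify the user's current affective state, then have the composition system steer the music towards a target state — can be illustrated with a minimal sketch. All names, features and the case-based rule below are illustrative assumptions; the paper's actual classifier, EEG features and music generator are not specified here.

```python
# Minimal sketch of an aBCMI-style control loop (hypothetical names and features;
# the paper's actual pipeline, classifier and composer are not reproduced).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Stand-in for EEG-derived affect features (e.g. band-power values per trial).
X_train = rng.normal(size=(120, 8))
y_train = rng.integers(0, 3, size=120)          # 3 affective classes, as in the study

clf = LinearDiscriminantAnalysis().fit(X_train, y_train)

AFFECT_LABELS = {0: "calm", 1: "neutral", 2: "excited"}
TARGET = "calm"                                  # modulation objective, e.g. "make the user calmer"

def compose_towards(current: str, target: str) -> dict:
    """Case-based stand-in: pick composition parameters that nudge affect toward the target."""
    tempo = 60 if target == "calm" else 120
    mode = "minor" if current == "excited" and target == "calm" else "major"
    return {"tempo_bpm": tempo, "mode": mode}

# One pass of the online loop: classify the current state, then generate music parameters.
x_now = rng.normal(size=(1, 8))                  # latest EEG feature vector
state = AFFECT_LABELS[int(clf.predict(x_now)[0])]
print(state, compose_towards(state, TARGET))
```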

    Designing musical soundtracks for Brain Controlled Interface (BCI) systems

    This paper presents research based on the creation and development of two Brain Controlled Interface (BCI) based film experiences. The focus of this research is primarily on the audio in the films: the way the overall experiences were designed, the ways in which the soundtracks were developed specifically for the experiences, and the ways in which the audience perceived the use of the soundtrack in each film. Unlike traditional soundtracks, the adaptive nature of the audio means that there are multiple parts that can be interacted with and combined at specific moments. The design of such adaptive audio systems is not yet fully understood, and this paper goes some way toward presenting our initial findings. We think that this research will be of interest to, and will excite, the Audio-HCI community.
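    The adaptive-audio idea described above, where several pre-composed parts are combined at specific moments in response to the BCI, can be sketched as a simple layer crossfade. The stem names and the normalised control value below are illustrative assumptions rather than the systems built for the two films.

```python
# Minimal sketch of an adaptive soundtrack layer mix driven by a BCI control value.
# The stems, the 0-1 "engagement" signal and the crossfade rule are assumptions.
import numpy as np

SR = 44100
t = np.linspace(0, 2.0, int(SR * 2.0), endpoint=False)

# Two pre-composed stems that can be combined at specific moments.
calm_stem = 0.2 * np.sin(2 * np.pi * 220.0 * t)       # sparse, low-register layer
tense_stem = 0.2 * np.sin(2 * np.pi * 440.0 * t)      # denser, high-register layer

def mix(engagement: float) -> np.ndarray:
    """Crossfade between stems according to a normalised BCI-derived value in [0, 1]."""
    w = float(np.clip(engagement, 0.0, 1.0))
    return (1.0 - w) * calm_stem + w * tense_stem

# A rising control signal gradually brings the tense layer in.
for value in (0.0, 0.5, 1.0):
    print(value, np.abs(mix(value)).max())
```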

    Brain Music: A generative system for creating symbolic music from affective neural responses

    This master's thesis presents an innovative multimodal deep learning methodology that combines an emotion classification model with a music generator, aimed at creating music from electroencephalography (EEG) signals and thus delving into the interplay between emotions and music. The results achieve three specific objectives. First, since the performance of brain-computer interface systems varies significantly among subjects, an approach based on knowledge transfer between subjects is introduced to improve the performance of individuals who struggle with motor imagery-based brain-computer interfaces. This approach combines labeled EEG data with structured information, such as psychological questionnaires, through a "Kernel Matching CKA" method. We employ a deep neural network (Deep&Wide) for motor imagery classification. The results underscore its potential to enhance motor skills in brain-computer interfaces. Second, we propose a technique called "Labeled Correlation Alignment" (LCA) to sonify neural responses to stimuli represented in unstructured data, such as affective music. This generates musical features based on emotion-induced brain activity. LCA addresses between-subject and within-subject variability through correlation analysis, enabling the creation of acoustic envelopes and the distinction of different sound information, which makes LCA a promising tool for interpreting neural activity and its response to auditory stimuli. Finally, we develop an end-to-end deep learning methodology for generating MIDI music content (symbolic data) from EEG signals induced by affectively labeled music. This methodology encompasses data preprocessing, feature extraction model training, and a feature matching process using Deep Centered Kernel Alignment, enabling music generation from EEG signals. Together, these achievements represent significant advances in understanding the relationship between emotions and music and in applying artificial intelligence to music generation from brain signals, offering new perspectives and tools for musical creation and for research in affective neuroscience. To conduct our experiments, we used public databases such as GigaScience, Affective Music Listening and the DEAP dataset.
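    Centered Kernel Alignment (CKA), which the thesis relies on for both the "Kernel Matching CKA" transfer step and the Deep Centered Kernel Alignment feature matching, measures the similarity between two sets of representations over the same samples. The sketch below shows the standard linear form of CKA as an illustration; the thesis's exact kernels and matching objective are not reproduced, and the feature names are hypothetical.

```python
# Minimal sketch of linear Centered Kernel Alignment (CKA) between two representations.
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """CKA between two feature matrices with the same number of rows (samples)."""
    X = X - X.mean(axis=0, keepdims=True)        # centre each feature
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2   # cross-similarity term
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return float(hsic / (norm_x * norm_y))

# e.g. compare EEG-derived features against features of the music stimulus.
rng = np.random.default_rng(0)
eeg_feats = rng.normal(size=(200, 32))                 # 200 trials x 32 EEG features
music_feats = eeg_feats @ rng.normal(size=(32, 16))    # linearly related stand-in features
print(linear_cka(eeg_feats, music_feats))              # high, since the features are aligned
```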

    Towards the recognition of the emotions of people with visual disabilities through brain-computer interfaces

    A brain-computer interface is an alternative means of communication between people and computers, based on the acquisition and analysis of brain signals. Research in this field has focused on serving people with different types of motor, visual or auditory disabilities. Affective computing, in turn, studies and extracts information about the emotional state of a person in certain situations, an important aspect of the interaction between people and computers. In particular, this manuscript considers people with visual disabilities and their need for personalized systems that prioritize their disability and the degree to which it affects them. This article presents a review of the state of the art, discusses the importance of studying the emotions of people with visual disabilities, and examines the possibility of representing those emotions through a brain-computer interface and affective computing. Finally, the authors propose a framework to study and evaluate the possibility of representing and interpreting the emotions of people with visual disabilities in order to improve their experience with technology and their integration into today's society. This work was supported by the Consejo Nacional de Ciencia y Tecnología (CONACyT) through grant number 709656 and by the Research Program of the Ministry of Economy and Competitiveness, Government of Spain (DeepEMR project TIN2017-87548-C2-1-R).