Real-Time Electroencephalogram Sonification for Neurofeedback
Electroencephalography (EEG) is the measurement, via electrodes on the scalp, of the electrical activity of the brain. The established therapeutic intervention of neurofeedback involves presenting people with their own EEG in real time so that they can learn to modify it, with the aim of improving performance or health.
The aim of this research is to develop and validate real-time sonifications of EEG for use in neurofeedback and methods for assessing such sonifications. Neurofeedback generally uses a visual display. Where auditory feedback is used, it is mostly limited to pre-recorded sounds triggered by the EEG activity crossing a threshold. However, EEG generates time-series data with meaningful detail at fine temporal resolution and with complex temporal dynamics. Human hearing has a much higher temporal resolution than human vision, and auditory displays do not require people to focus on a screen with their eyes open for extended periods of time – e.g. if they are engaged in some other task. Sonification of EEG could allow more rapid, contingent, salient and temporally detailed feedback. This could improve the efficiency of neurofeedback training and reduce the number and duration of sessions for successful neurofeedback.
The same two deliberately simple sonification techniques were used in all three experiments of this research: Amplitude Modulation (AM) sonification, which maps fluctuations in EEG power to the volume of a pure tone; and Frequency Modulation (FM) sonification, which maps the same changes in EEG power to the frequency of the tone. Measures included a listening task; the NASA Task Load Index, a measure of subjective workload; pre- and post-session measures of mood; and EEG.
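The two mappings can be sketched as follows. This is a minimal illustration in Python, not the thesis's actual implementation; the carrier frequency, modulation depth, sample rate and segment duration are all assumed values:

```python
import math

SAMPLE_RATE = 44100  # audio sample rate in Hz (assumed)
BASE_FREQ = 440.0    # carrier tone frequency in Hz (assumed)

def am_sonify(power_envelope, duration_per_value=0.1):
    """AM sonification: normalised EEG band power controls the volume of a pure tone."""
    samples = []
    n = int(SAMPLE_RATE * duration_per_value)
    for power in power_envelope:
        amp = max(0.0, min(1.0, power))  # clamp normalised power to [0, 1]
        for _ in range(n):
            t = len(samples) / SAMPLE_RATE
            samples.append(amp * math.sin(2 * math.pi * BASE_FREQ * t))
    return samples

def fm_sonify(power_envelope, duration_per_value=0.1, depth=200.0):
    """FM sonification: normalised EEG band power shifts the frequency of the tone."""
    samples, phase = [], 0.0
    n = int(SAMPLE_RATE * duration_per_value)
    for power in power_envelope:
        freq = BASE_FREQ + depth * max(0.0, min(1.0, power))
        for _ in range(n):
            # accumulate phase so the frequency can change without clicks
            phase += 2 * math.pi * freq / SAMPLE_RATE
            samples.append(math.sin(phase))
    return samples
```

Accumulating phase in the FM version, rather than recomputing `sin(2*pi*f*t)` from scratch, keeps the waveform continuous when the frequency jumps between successive power values.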
The first experiment used pre-recorded single-channel EEG. Participants were asked to listen to the sonified EEG and to track the activity they could hear by moving a slider on a computer screen with a mouse. This provided a quantitative assessment of how well people could perceive the sonified fluctuations in EEG level. Tracking accuracy scores were higher for the FM sonification, but self-assessments of task load rated the AM sonification as easier to track.
The second experiment used the same two sonifications in a real neurofeedback task using participants' own live EEG. Unbeknownst to the participants, the neurofeedback task was designed to improve mood. A pre-post questionnaire showed that participants' self-rated mood changed in the intended direction with the EEG training, but there was no statistically significant change in EEG. Again the FM sonification gave better performance, while AM was rated as less effortful. The performance of the sonifications in the tracking task in experiment 1 was found to predict their relative efficacy at blind self-rated mood modification in experiment 2.
The third experiment used both the tracking task of experiment 1 and the neurofeedback task of experiment 2, but with modified versions of the AM and FM sonifications to allow two-channel EEG sonification. This experiment introduced a physical slider, as opposed to a mouse, for the tracking task. Tracking accuracy increased, but this time no significant difference was found between the two sonification techniques on the tracking task. In the training task, the blind self-rated mood once more improved in the intended direction with the EEG training, but, as there was again no significant change in EEG, this cannot necessarily be attributed to the neurofeedback. There was only a slight difference between the two sonification techniques in the effort measure.
In this way, a prototype method has been devised and validated for the quantitative assessment of real-time EEG sonifications. Conventional evaluations of neurofeedback techniques are expensive and time-consuming. By contrast, this method potentially provides a rapid, objective and efficient way of evaluating the suitability of candidate sonifications for EEG neurofeedback.
Music, computing and health: A roadmap for the current and future roles of music technology for health care and well-being.
Health and self-regulation
Multi-Listener Auditory Displays
This thesis investigates how team-working principles can be applied to Auditory Displays (AD). During this work it was established that the level of collaboration and team work within the AD community was low and that the community would benefit from an enhanced collaborative approach. Increased use of collaborative techniques will benefit the AD community by raising quality, improving knowledge transfer, creating synergy, and enhancing innovation.
The reader is introduced to a novel approach to collaborative AD entitled Multi-Listener Auditory Displays (MLAD). This work focused on two areas of MLAD: distributed AD teams and virtual AD teams. A distributed AD team is a team of participants who work on a common task at different times and in different locations. The distributed approach was found to work effectively when designing ADs for large-scale data sets such as those found in big data. A virtual AD team is a group of participants who work on a common task simultaneously but in separate locations, assisted by computer technology such as video conferencing and email. The virtual AD team approach was found to work well by enabling a geographically dispersed team to work together more effectively.
Two pilot studies are included: SonicSETI, an example of a distributed AD team, in which a remote group of listeners have background white noise playing and use passive listening to detect anomalous candidate signals; and a geographically diverse virtual AD team that collaborates through electronic technology on an auditory display which sonifies a database of red wine measurements. A workshop focusing on ensemble auditory displays was also organised at a conference with a group of co-located participants.
Line Harp: Importance-Driven Sonification for Dense Line Charts
Master's thesis in informatics (INF399, MAMN-PROG, MAMN-IN)
BRAIN-COMPUTER MUSIC INTERFACING: DESIGNING PRACTICAL SYSTEMS FOR CREATIVE APPLICATIONS
Brain-computer music interfacing (BCMI) presents a novel approach to music making, as it requires only the brainwaves of a user to control musical parameters. This presents immediate benefits for users with motor disabilities that may otherwise prevent them from engaging in traditional musical activities such as composition, performance or collaboration with other musicians. BCMI systems with active control, where a user can make cognitive choices that are detected within brain signals, provide a platform for developing new approaches towards accomplishing these activities. BCMI systems that use passive control present an interesting alternative to active control, where control over music is accomplished by harnessing brainwave patterns associated with subconscious mental states. Recent developments in brainwave measuring technologies, in particular electroencephalography (EEG), have made brainwave interaction with computer systems more affordable and accessible, and the time is ripe for research into the potential such technologies can offer for creative applications for users of all abilities.
This thesis presents an account of BCMI development that investigates methods of active, passive and hybrid (multiple control methods) control that include control over electronic music, acoustic instrumental music, multi-brain systems and combining methods of brainwave control.
In practice there are many obstacles associated with detecting useful brainwave signals, in particular when scaling systems otherwise designed for medical studies for use outside of laboratory settings. Two key areas are addressed throughout this thesis. Firstly, improving the accuracy of meaningful brain signal detection in BCMI, and secondly, exploring the creativity available in user control through ways in which brainwaves can be mapped to musical features.
Six BCMIs are presented in this thesis, each with the objective of exploring a unique aspect of user control. Four of these systems are designed for live BCMI concert performance, one evaluates a proof-of-concept through end-user testing and one is designed as a musical composition tool.
The thesis begins by exploring the field of brainwave detection and control, and identifies the steady-state visually evoked potential (SSVEP) method of eliciting brainwave control as a suitable technique for use in BCMI. In an attempt to improve signal accuracy of the SSVEP technique, a new modular hardware unit is presented that provides accurate SSVEP stimuli suitable for live music performance. Experimental data confirm the performance of the unit in tests across three different EEG hardware platforms. Results across 11 users indicate that a mean accuracy of 96% and an average response time of 3.88 seconds are attainable with the system. These results contribute to the development of the BCMI for Activating Memory, a multi-user system. Once a stable SSVEP platform is developed, control is extended through the integration of two more brainwave control techniques: affective (emotional) state detection and motor imagery response. To ascertain the suitability of the former, a pilot experiment confirms the accuracy of EEG in measuring affective states in response to music.
This thesis demonstrates how a range of brainwave detection methods can be used for creative control in musical applications. Video and audio excerpts of BCMI pieces are also included in the Appendices.
Investigation into Stand-alone Brain-computer Interfaces for Musical Applications
Brain-computer interfaces (BCIs) aim to establish a communication medium that is independent of muscle control. This project investigates how BCIs can be harnessed for musical applications. The impact of such systems is twofold: (i) they offer a novel mechanism of control for musicians during performance, and (ii) they are beneficial for patients suffering from motor disabilities. Several challenges are encountered when attempting to move these technologies from laboratories to real-world scenarios. Additionally, BCIs are significantly different from conventional computer interfaces and achieve only low communication rates. This project considers these challenges and uses a dry, wireless electroencephalogram (EEG) headset to detect neural activity. It adopts a paradigm called steady-state visually evoked potential (SSVEP) to provide the user with control. It aims to encapsulate all brain-computer music interface (BCMI) operations in a stand-alone application, which would improve the portability of BCMIs.
This project addresses various engineering problems faced while developing a stand-alone BCMI. Efficiently presenting the visual stimulus for SSVEP requires hardware-accelerated rendering. EEG data are received from the headset over Bluetooth, so a dedicated thread is designed to receive the signals. As this thesis does not use medical-grade equipment to detect EEG, signal processing techniques need to be examined to improve the signal-to-noise ratio (SNR) of the brain waves. This project adopts canonical correlation analysis (CCA), a multivariate statistical technique, and explores filtering algorithms to improve the communication rates of BCMIs.
Furthermore, this project delves into optimising biomedical engineering parameters, such as the placement of the EEG headset and the size of the visual stimulus. After implementing the optimisations, for time windows of 4 s and 2 s, the mean accuracies of the BCMI are 97.92±2.22% and 88.02±9.30% respectively. The obtained information transfer rate (ITR) is 36.56±9.17 bits min⁻¹, which surpasses the communication rates of earlier BCMIs. This thesis concludes by building a system encompassing a novel control flow, which allows the user to play a musical instrument by gazing at it.
The School of Humanities and Performing Arts, University of Plymouth
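ITR figures of this kind are conventionally computed with the Wolpaw formula, which combines the number of selectable targets N, the classification accuracy P and the time T per selection. A minimal sketch, assuming this standard formula (the abstract does not state the number of targets, so N is left as a parameter):

```python
import math

def wolpaw_itr(n_targets, accuracy, selection_time_s):
    """Wolpaw information transfer rate in bits per minute.

    bits per selection = log2(N) + P*log2(P) + (1 - P)*log2((1 - P)/(N - 1)),
    assuming 0 < P <= 1; the result is scaled to a per-minute rate.
    """
    n, p = n_targets, accuracy
    bits = math.log2(n)
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    # clamp below-chance accuracies to zero bits rather than a negative rate
    return max(bits, 0.0) * 60.0 / selection_time_s
```

At perfect accuracy with four targets and a 4 s window, each selection carries log2(4) = 2 bits, giving 30 bits per minute; higher target counts and shorter windows raise the rate.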
Bacteria Hunt: Evaluating multi-paradigm BCI interaction
The multimodal, multi-paradigm brain-computer interfacing (BCI) game Bacteria Hunt was used to evaluate two aspects of BCI interaction in a gaming context. One goal was to examine the effect of feedback on the user's ability to manipulate their mental state of relaxation. This was done by having one condition in which the subject played the game with real feedback, and another with sham feedback. The feedback did not seem to affect the game experience (such as sense of control and tension) or the objective indicators of relaxation, alpha activity and heart rate. The results are discussed with regard to clinical neurofeedback studies. The second goal was to look into possible interactions between the two BCI paradigms used in the game: steady-state visually evoked potentials (SSVEP) as an indicator of concentration, and alpha activity as a measure of relaxation. SSVEP stimulation activates the cortex and can thus block the alpha rhythm. Despite this effect, subjects were able to keep their alpha power up, in compliance with the instructed relaxation task. In addition to the main goals, a new SSVEP detection algorithm was developed and evaluated.
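The alpha activity used as the relaxation indicator is conventionally estimated as EEG power in the 8-12 Hz band. A minimal sketch of such a band-power estimate, using a direct DFT in plain Python (a real system would use an FFT library, windowing and artifact rejection; the band limits are the usual convention, not taken from the paper):

```python
import math

def band_power(signal, sample_rate, band=(8.0, 12.0)):
    """Estimate signal power in a frequency band (default: the 8-12 Hz
    alpha band) by summing squared DFT magnitudes of the in-band bins."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * sample_rate / n  # centre frequency of DFT bin k
        if band[0] <= f <= band[1]:
            re = sum(signal[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
            im = -sum(signal[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
            power += (re * re + im * im) / (n * n)
    return power
```

A rising value of this estimate over successive EEG windows is the kind of quantity a relaxation-feedback game can map onto its game state.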
Using Sound to Represent Uncertainty in Spatial Data
There is a limit to the amount of spatial data that can be shown visually in an effective manner, particularly when the data sets are extensive or complex. Using sound to represent some of these data (sonification) is a way of avoiding visual overload. This thesis creates a conceptual model showing how sonification can be used to represent spatial data and evaluates a number of elements within the conceptual model. These are examined in three different case studies to assess the effectiveness of the sonifications.
Current methods of using sonification to represent spatial data have been restricted by the technology available and have had very limited user testing. While existing research shows that sonification can be done, it does not show whether it is an effective and useful method of representing spatial data to the end user. A number of prototypes show how spatial data can be sonified, but only a small handful of these have performed any user testing beyond the authors' immediate colleagues (where n > 4). This thesis creates and evaluates sonification prototypes, which represent uncertainty, using three different case studies of spatial data. Each case study is evaluated by a significant user group (between 45 and 71 individuals) who completed a task-based evaluation with the sonification tool, as well as qualitatively reporting their views on the effectiveness and usefulness of the sonification method.
For all three case studies, using sound to reinforce information shown visually resulted in more effective performance for the majority of participants than traditional visual methods alone. Participants who were familiar with the dataset were much more effective at using the sonification than those who were not, and an interactive sonification requiring significant involvement from the user was much more effective than a static sonification, which did not provide significant user engagement. Using sounds with a clear and easily understood scale (such as piano notes) was important for achieving an effective sonification. These findings are used to improve the conceptual model developed earlier in the thesis and to highlight areas for future research.
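The finding about clear pitch scales suggests mappings like the following sketch, which quantises a normalised uncertainty value onto piano notes of a major scale. The note range and choice of scale are illustrative assumptions, not the mapping used in the thesis:

```python
def uncertainty_to_midi(uncertainty, low_note=60, scale=(0, 2, 4, 5, 7, 9, 11)):
    """Map a normalised uncertainty value in [0, 1] onto two octaves of an
    ascending major scale of MIDI note numbers (60 = middle C), so that
    higher pitch signals greater uncertainty."""
    u = max(0.0, min(1.0, uncertainty))      # clamp out-of-range input
    steps = len(scale) * 2                   # two octaves of scale degrees
    idx = min(int(u * steps), steps - 1)     # quantise to a scale degree
    octave, degree = divmod(idx, len(scale))
    return low_note + 12 * octave + scale[degree]
```

Quantising to scale degrees, rather than mapping uncertainty to a continuous pitch, gives listeners the "easily understood scale" the evaluations found important: each audible step corresponds to a discrete, nameable level of uncertainty.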