80 research outputs found
Investigating the generalizability of EEG-based Cognitive Load Estimation Across Visualizations
We examine whether EEG-based cognitive load (CL) estimation generalizes
across character, spatial-pattern, bar-graph, and pie-chart
visualizations for the n-back task. CL is estimated via two recent approaches:
(a) deep convolutional neural networks, and (b) proximal support vector
machines. Experiments reveal that CL estimation degrades across visualizations,
motivating the need for effective machine learning techniques to benchmark
visual interface usability for a given analytic task.
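The cross-visualization protocol the abstract implies can be sketched compactly: train a CL classifier on EEG features recorded with one visualization and test it on each of the others. A minimal illustration in Python, using scikit-learn's linear SVC as a stand-in for the paper's proximal SVM, with random arrays as placeholders for real EEG features; all names and shapes here are illustrative assumptions:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical data: per-visualization EEG feature matrices (trials x features)
# and binary cognitive-load labels (low vs. high n-back difficulty).
rng = np.random.default_rng(0)
data = {viz: (rng.normal(size=(200, 64)), rng.integers(0, 2, 200))
        for viz in ["character", "spatial_pattern", "bar_graph", "pie_chart"]}

# Train on one visualization, test on every other: if off-diagonal accuracy
# drops sharply, CL estimation is not generalizing across visual interfaces.
for train_viz, (X_tr, y_tr) in data.items():
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    clf.fit(X_tr, y_tr)
    for test_viz, (X_te, y_te) in data.items():
        if test_viz != train_viz:
            print(f"{train_viz} -> {test_viz}: {clf.score(X_te, y_te):.2f}")
```

Cross-visualization accuracies well below the within-visualization score would reproduce the degradation the abstract reports.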
Real-Time Electroencephalogram Sonification for Neurofeedback
Electroencephalography (EEG) is the measurement of the brain's electrical activity via electrodes on the scalp. The established therapeutic intervention of neurofeedback involves presenting people with their own EEG in real time to enable them to modify their EEG for purposes of improving performance or health.
The aim of this research is to develop and validate real-time sonifications of EEG for use in neurofeedback, and methods for assessing such sonifications. Neurofeedback generally uses a visual display. Where auditory feedback is used, it is mostly limited to pre-recorded sounds triggered by the EEG activity crossing a threshold. However, EEG generates time-series data with meaningful detail at fine temporal resolution and with complex temporal dynamics. Human hearing has a much higher temporal resolution than human vision, and auditory displays do not require people to focus on a screen with their eyes open for extended periods of time (e.g. if they are engaged in some other task). Sonification of EEG could allow more rapid, contingent, salient and temporally detailed feedback. This could improve the efficiency of neurofeedback training and reduce the number and duration of sessions for successful neurofeedback.
The same two deliberately simple sonification techniques were used in all three experiments of this research: Amplitude Modulation (AM) sonification, which maps fluctuations in the power of the EEG to the volume of a pure tone; and Frequency Modulation (FM) sonification, which uses changes in the EEG power to modify the tone's frequency. Measures included a listening task; the NASA Task Load Index, a measure of how much work the task took; pre- and post-session measures of mood; and EEG.
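The two mappings are simple enough to state in code. A minimal numpy sketch, assuming EEG band power has already been extracted into one value per 100 ms frame and normalized to [0, 1]; the carrier frequency, modulation depth, and frame duration are illustrative assumptions, not the values used in the thesis:

```python
import numpy as np

def am_sonification(power, fs=44100, carrier_hz=440.0, frame_dur=0.1):
    """AM sonification: EEG band power modulates the loudness of a pure tone."""
    frame_len = int(frame_dur * fs)            # samples per EEG power frame
    envelope = np.repeat(power, frame_len)     # hold each value for one frame
    t = np.arange(envelope.size) / fs
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

def fm_sonification(power, fs=44100, base_hz=440.0, depth_hz=220.0, frame_dur=0.1):
    """FM sonification: EEG band power shifts the pitch of a pure tone."""
    frame_len = int(frame_dur * fs)
    inst_hz = base_hz + depth_hz * np.repeat(power, frame_len)
    # Integrate instantaneous frequency to get phase (avoids clicks at frame edges).
    phase = 2 * np.pi * np.cumsum(inst_hz) / fs
    return np.sin(phase)
```

Integrating frequency into phase in the FM version keeps the waveform continuous across frames, which matters for a listening task where artefactual clicks could be mistaken for EEG events.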
The first experiment used pre-recorded single-channel EEG; participants were asked to listen to the sonified EEG and to track the activity they could hear by moving a slider on a computer screen using a mouse. This provided a quantitative assessment of how well people could perceive the sonified fluctuations in EEG level. Tracking accuracy scores were higher for the FM sonification, but self-assessments of task load rated the AM sonification as easier to track.
The second experiment used the same two sonifications in a real neurofeedback task using participants' own live EEG. Unbeknownst to the participants, the neurofeedback task was designed to improve mood. A pre-post questionnaire showed that participants changed their self-rated mood in the intended direction with the EEG training, but there was no statistically significant change in EEG. Again the FM sonification showed better performance, but AM was rated as less effortful. The performance of the sonifications in the tracking task in experiment 1 was found to predict their relative efficacy at blind self-rated mood modification in experiment 2.
The third experiment used both the tracking task from experiment 1 and the neurofeedback task from experiment 2, but with modified versions of the AM and FM sonifications to allow two-channel EEG sonification. This experiment introduced a physical slider, as opposed to a mouse, for the tracking task. Tracking accuracy increased, but this time no significant difference was found between the two sonification techniques on the tracking task. In the training task, blind self-rated mood once more improved in the intended direction with the EEG training, but as there was again no significant change in EEG, this cannot necessarily be attributed to the neurofeedback. There was only a slight difference between the two sonification techniques in the effort measure.
In this way, a prototype method has been devised and validated for the quantitative assessment of real-time EEG sonifications. Conventional evaluations of neurofeedback techniques are expensive and time-consuming. By contrast, this method potentially provides a rapid, objective and efficient way of evaluating the suitability of candidate sonifications for EEG neurofeedback.
Hear me Flying! Does Visual Impairment Improve Auditory Display Usability during a Simulated Flight?
Sonification refers to systems that convey information in the non-speech audio modality [1]. This technique has been widely applied to guidance systems for visually impaired individuals; by 2008, more than 140 systems of this type, used in various application areas, had been referenced [2]. In aeronautics, one such system, the Sound Flyer, is currently used by visually impaired pilots in real flight contexts to control aircraft attitude. However, it is unclear whether this system would be acceptable for sighted individuals. Indeed, early visual deprivation leads to compensatory mechanisms that often result in better auditory attentional skills [7,8]. In the present study we assessed this issue. Two groups of pilots (blind vs. sighted) took part in a flight simulator experiment. All were blindfolded to avoid potential visual information acquisition (some blind individuals had residual visual capacities). Participants had to perform successive aircraft maneuvers on the sole basis of the auditory information provided by the Sound Flyer. Maneuver difficulty varied with the number of parameters to apply: easy (none), medium (one: pitch or bank) or hard (two: pitch and bank). The Sound Flyer generated a pure tone (53 dB SPL) modulated as a function of pitch (tonal variation) and bank (inter-aural and rhythmic variations). We assessed flight performance along with subjective (NASA-TLX) and neurological (irrelevant auditory-probe technique; [9]) measures of cognitive workload. We hypothesized that the automatic cerebral reaction to deviant auditory stimuli (10% "ti" among 90% "ta"; 56 dB SPL) would be affected by task difficulty [10,11] and participants' auditory attention. Preliminary data analyses revealed that blind and sighted participants reached target attitudes with good accuracy (mean error of 2.04°). Globally, subjective cognitive workload and brain responses to the auditory probe were influenced by the difficulty of the maneuver but not by visual impairment. These initial results provide evidence that auditory displays are effective not only for maintaining straight and level flight [6], but also for attaining precise aircraft attitudes. Results also suggest that flight maneuvers should remain fairly simple to avoid excessive cognitive workload. In other words, attitude sonification can provide robust information and, along with Brungart and Simpson's [3] specifications, could contribute to the fight against spatial disorientation in the cockpit.
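The abstract specifies the mapping only loosely (pitch as tonal variation, bank as inter-aural and rhythmic variation). A minimal stereo sketch of one plausible reading in Python; the parameter names, scalings, and equal-power panning are illustrative assumptions, not the Sound Flyer's actual design, and the rhythmic component of the bank cue is omitted:

```python
import numpy as np

def attitude_sonification(pitch_deg, bank_deg, fs=44100, dur=0.5,
                          base_hz=440.0, hz_per_deg=8.0):
    """Sketch: pitch angle -> tone frequency, bank angle -> inter-aural balance."""
    t = np.arange(int(dur * fs)) / fs
    tone = np.sin(2 * np.pi * (base_hz + hz_per_deg * pitch_deg) * t)
    # Pan by bank angle: full left at -90 deg, centered at 0, full right at +90.
    pan = np.clip(bank_deg / 90.0, -1.0, 1.0)
    left = tone * np.sqrt((1 - pan) / 2)    # equal-power panning keeps
    right = tone * np.sqrt((1 + pan) / 2)   # loudness constant across pans
    return np.stack([left, right], axis=1)  # (samples, 2) stereo buffer
```

Under this reading, a nose-up, right-bank attitude sounds as a higher tone weighted toward the right ear, which a blindfolded pilot can null back to straight and level by ear.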
Reflecting on the Use of Sonification for Network Monitoring
In Security Operations Centres (SOCs), computer networks are generally monitored using a combination of anomaly detection techniques, Intrusion Detection Systems (IDS), and data presented in visual and text-based forms. In the last two decades, significant progress has been made in developing novel sonification systems to further support network monitoring tasks. A range of systems has been proposed in which sonified network data is presented for incorporation into the network monitoring process. Unfortunately, many of these have not been sufficiently validated and there has been little uptake in SOCs. In this paper, we describe and reflect critically on the shortcomings of traditional network-monitoring methods and identify the key role that sonification, if implemented correctly, could play in improving current monitoring capabilities. The core contribution of this position paper is the outline of a research agenda for sonification for network monitoring, based on a review of prior research. In particular, we identify requirements for an aesthetic approach suitable for continuous real-time network monitoring; formalisation of an approach to designing sonifications in this space; and refinement and validation through comprehensive user testing.
Busy and confused? High risk of missed alerts in the cockpit: An electrophysiological study
The ability to react to unexpected auditory stimuli is critical in complex settings such as aircraft cockpits or air traffic control towers, which are characterized by high mental load and highly complex auditory environments (i.e., many different auditory alerts). Evidence shows that both factors can negatively impact auditory attention and prevent appropriate reactions. In the present study, 60 participants performed a simulated aviation task varying in mental load (none, low, high) concurrently with a tone detection paradigm in which the complexity of the auditory environment (i.e., auditory load) was manipulated (1, 2 or 3 different tones). We measured both detection performance (misses, false alarms, d′) and brain activity (event-related potentials) associated with the target tone. Our results showed that both mental and auditory loads affected target tone detection performance. Importantly, their combined effects had a large impact on the percentage of missed target tones. While, in the no-mental-load condition, the miss rate was very low with 1 (0.53%) and 2 tones (1.11%), it increased drastically with 3 tones (24.44%), and this effect was accentuated as mental load increased, yielding the highest miss rate in the 3-tone paradigm under high mental load (68.64%). Increased mental and auditory loads and miss rates were associated with disrupted brain responses to the target tone, as shown by reduced P3b amplitude. In sum, our results highlight the importance of balancing mental and auditory loads to maintain efficient reactions to alarms in complex working environments.
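For reference, d′ combines hit and false-alarm rates into a single sensitivity index, z(hit rate) − z(false-alarm rate). A minimal sketch of the standard signal-detection computation in Python; the log-linear correction used here to avoid infinite z-scores at rates of 0 or 1 is one common convention, and the abstract does not state which correction the authors applied:

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate), from raw counts.

    The +0.5 / +1 terms are the log-linear correction, which keeps both
    rates strictly inside (0, 1) so norm.ppf stays finite.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)
```

Rising miss rates with constant false alarms drive d′ down, which is why the measure separates genuine loss of sensitivity under load from a mere shift in response bias.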
Internal representations of auditory frequency: behavioral studies of format and malleability by instructions
Research has suggested that representational and perceptual systems draw upon some of the same processing structures, and evidence has also accumulated to suggest that representational formats are malleable by instructions. Very little research, however, has considered how nonspeech sounds are internally represented, and the use of audio in systems often proceeds under the assumption that separation of information by modality is sufficient for eliminating information processing conflicts. Three studies examined the representation of nonspeech sounds in working memory. In Experiment 1, a mental scanning paradigm suggested that nonspeech sounds can be flexibly represented in working memory, but also that a universal per-item scanning cost persisted across encoding strategies. Experiment 2 modified the sentence-picture verification task to include nonspeech sounds (i.e., a sound-sentence-picture verification task) and found evidence generally supporting three distinct formats of representation, as well as a lingering effect of auditory stimuli on verification times across representational formats. Experiment 3 manipulated three formats of internal representation (verbal, visuospatial imagery, and auditory imagery) for a point estimation sonification task in the presence of three types of interference tasks (verbal, visuospatial, and auditory) in an effort to induce selective processing code (i.e., domain-specific working memory) interference. Results showed no selective interference but instead suggested a general performance decline (i.e., a general representational resource) for the sonification task in the presence of an interference task, regardless of the sonification encoding strategy or the qualitative interference task demands. Results suggested a distinct role of internal representations for nonspeech sounds with respect to cognitive theory. The predictions of the processing codes dimension of the multiple resources construct were not confirmed; possible explanations are explored. The practical implications for the use of nonspeech sounds in applications include a possible response time advantage when an external stimulus and the format of internal representation match.
The Berlin Brain-Computer Interface: Progress Beyond Communication and Control
The combined effect of fundamental results about neurocognitive processes and advancements in decoding mental states from ongoing brain signals has brought forth a whole range of potential neurotechnological applications. In this article, we review our developments in this area and put them into perspective. These examples cover a wide range of maturity levels with respect to their applicability. While we assume we are still a long way from integrating Brain-Computer Interface (BCI) technology into general interaction with computers, or from implementing neurotechnological measures in safety-critical workplaces, results have already been obtained using a BCI as a research tool. In this article, we discuss the reasons why, in some of the prospective application domains, considerable effort is still required to make the systems ready to deal with the full complexity of the real world.
Extended Abstracts
Presented at the 21st International Conference on Auditory Display (ICAD2015), July 6-10, 2015, Graz, Styria, Austria.

Mark Ballora. "Two examples of sonification for viewer engagement: Hurricanes and squirrel hibernation cycles" /
Stephen Barrass. "Diagnostic Singing Bowls" /
Natasha Barrett, Kristian Nymoen. "Investigations in coarticulated performance gestures using interactive parameter-mapping 3D sonification" /
Lapo Boschi, Arthur Paté, Benjamin Holtzman, Jean-Loïc le Carrou. "Can auditory display help us categorize seismic signals?" /
Cédric Camier, François-Xavier Féron, Julien Boissinot, Catherine Guastavino. "Tracking moving sounds: Perception of spatial figures" /
Coralie Diatkine, Stéphanie Bertet, Miguel Ortiz. "Towards the holistic spatialization of multiple sound sources in 3D, implementation using ambisonics to binaural technique" /
S. Maryam FakhrHosseini, Paul Kirby, Myounghoon Jeon. "Regulating Drivers' Aggressiveness by Sonifying Emotional Data" /
Wolfgang Hauer, Katharina Vogt. "Sonification of a streaming-server logfile" /
Thomas Hermann, Tobias Hildebrandt, Patrick Langeslag, Stefanie Rinderle-Ma. "Optimizing aesthetics and precision in sonification for peripheral process-monitoring" /
Minna Huotilainen, Matti Gröhn, Iikka Yli-Kyyny, Jussi Virkkala, Tiina Paunio. "Sleep Enhancement by Sound Stimulation" /
Steven Landry, Jayde Croschere, Myounghoon Jeon. "Subjective Assessment of In-Vehicle Auditory Warnings for Rail Grade Crossings" /
Rick McIlraith, Paul Walton, Jude Brereton. "The Spatialised Sonification of Drug-Enzyme Interactions" /
George Mihalas, Minodora Andor, Sorin Paralescu, Anca Tudor, Adrian Neagu, Lucian Popescu, Antoanela Naaji. "Adding Sound to Medical Data Representation" /
Rainer Mittmannsgruber, Katharina Vogt. "Auditory assistance for timing presentations" /
Joseph W. Newbold, Andy Hunt, Jude Brereton. "Chemical Spectral Analysis through Sonification" /
S. Camille Peres, Daniel Verona, Paul Ritchey. "The Effects of Various Parameter Combinations in Parameter-Mapping Sonifications: A Pilot Study" /
Eva Sjuve. "Metopia: Experiencing Complex Environmental Data Through Sound" /
Benjamin Stahl, Katharina Vogt. "The Effect of Audiovisual Congruency on Short-Term Memory of Serial Spatial Stimuli: A Pilot Test" /
David Worrall. "Realtime sonification and visualisation of network metadata (The NetSon Project)" /
Bernhard Zeller, Katharina Vogt. "Auditory graph evolution by the example of spurious correlations"

The compiled collection of extended abstracts included in the ICAD 2015 Proceedings. Extended abstracts include, but are not limited to, late-breaking results, works in early stages of progress, novel methodologies, unique or controversial theoretical positions, and discussions of unsuccessful research or null findings.
Safe and Sound: Proceedings of the 27th Annual International Conference on Auditory Display
Complete proceedings of the 27th International Conference on Auditory Display (ICAD2022), June 24-27, 2022. Online virtual conference.
- …