80 research outputs found

    Investigating the generalizability of EEG-based Cognitive Load Estimation Across Visualizations

    We examine whether EEG-based cognitive load (CL) estimation generalizes across character, spatial-pattern, bar-graph, and pie-chart visualizations for the n-back task. CL is estimated via two recent approaches: (a) deep convolutional neural networks and (b) proximal support vector machines. Experiments reveal that CL estimation suffers when transferred across visualizations, motivating the need for effective machine learning techniques to benchmark visual interface usability for a given analytic task.
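
    A minimal sketch of the kind of cross-visualization evaluation described above, assuming synthetic EEG feature matrices and binary low/high CL labels; a linear SVM from scikit-learn stands in for the paper's deep CNN and proximal SVM classifiers.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def cross_visualization_accuracy(features, labels, train_vis, test_vis):
        """Train a cognitive-load classifier on EEG features recorded with one
        visualization and test it on another, to probe generalizability."""
        clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
        clf.fit(features[train_vis], labels[train_vis])
        return clf.score(features[test_vis], labels[test_vis])

    # Synthetic stand-in data: 100 trials x 32 EEG features per visualization.
    rng = np.random.default_rng(0)
    visualizations = ("character", "spatial_pattern", "bar_graph", "pie_chart")
    features = {v: rng.normal(size=(100, 32)) for v in visualizations}
    labels = {v: rng.integers(0, 2, size=100) for v in visualizations}
    print(cross_visualization_accuracy(features, labels, "bar_graph", "pie_chart"))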

    Hear me Flying! Does Visual Impairment Improve Auditory Display Usability during a Simulated Flight?

    Sonification refers to systems that convey information through the non-speech audio modality [1]. This technique has largely been used to develop guidance systems for visually impaired individuals; in 2008, more than 140 systems of this type, used in various application areas, were referenced [2]. In aeronautics, one such system, namely the Sound Flyer, is currently used by visually impaired pilots in real flight contexts to control aircraft attitude. However, it is unclear whether this system would be acceptable to sighted individuals. Indeed, early visual deprivation leads to compensatory mechanisms which often result in better auditory attentional skills [7,8]. In the present study we addressed this issue. Two groups of pilots (blind vs. sighted) took part in a flight simulator experiment. All were blindfolded to avoid potential visual information acquisition (i.e., some blind individuals had residual visual capacities). Participants had to perform successive aircraft maneuvers on the sole basis of the auditory information provided by the Sound Flyer. Maneuver difficulty varied with the number of parameters to apply: easy (none), medium (one: pitch or bank), or hard (two: pitch and bank). The Sound Flyer generated a pure tone (53 dB SPL) modulated as a function of pitch (tonal variation) and bank (inter-aural and rhythmic variations). We assessed flight performance along with subjective (NASA-TLX) and neurophysiological (irrelevant auditory-probe technique [9]) measures of cognitive workload. We hypothesized that the automatic cerebral reaction to deviant auditory stimuli (10% “ti” among 90% “ta”; 56 dB SPL) would be affected by task difficulty [10,11] and by participants’ auditory attention. Preliminary data analyses revealed that blind and sighted participants reached target attitudes with good accuracy (mean error of 2.04°). Overall, subjective cognitive workload and brain responses to the auditory probe were influenced by maneuver difficulty but not by visual impairment. These initial results provide evidence that auditory displays are effective not only for maintaining straight and level flight [6] but also for attaining precise aircraft attitudes. The results also suggest that flight maneuvers should remain fairly simple to avoid excessive cognitive workload. In other words, attitude sonification can provide robust information and, together with Brungart and Simpson’s [3] specifications, could contribute to the fight against spatial disorientation in the cockpit.
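
    As a rough illustration of the attitude sonification described above (not the actual Sound Flyer implementation: the frequency scaling, bank range, and panning law below are assumptions, and the rhythmic bank cue is omitted), the following sketch maps pitch to a tonal variation of a pure tone and bank to an inter-aural level difference.

    import numpy as np

    def attitude_to_stereo_tone(pitch_deg, bank_deg, duration=0.5, sr=44100,
                                base_freq=440.0, semitones_per_deg=0.2):
        """Illustrative parameter mapping: pitch shifts the tone frequency,
        bank pans the tone toward the low wing. Returns a stereo buffer."""
        t = np.arange(int(duration * sr)) / sr
        freq = base_freq * 2.0 ** (pitch_deg * semitones_per_deg / 12.0)
        mono = np.sin(2.0 * np.pi * freq * t)
        pan = np.clip(bank_deg / 60.0, -1.0, 1.0)    # assumed +/-60 deg bank range
        left = mono * np.sqrt((1.0 - pan) / 2.0)     # equal-power panning
        right = mono * np.sqrt((1.0 + pan) / 2.0)
        return np.column_stack([left, right])

    # Example: nose up 5 degrees, banked 20 degrees to the right.
    stereo = attitude_to_stereo_tone(pitch_deg=5.0, bank_deg=20.0)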

    Reflecting on the Use of Sonification for Network Monitoring

    In Security Operations Centres (SOCs), computer networks are generally monitored using a combination of anomaly detection techniques, Intrusion Detection Systems (IDS) and data presented in visual and text-based forms. In the last two decades significant progress has been made in developing novel sonification systems to further support network monitoring tasks. A range of systems has been proposed in which sonified network data is presented for incorporation into the network monitoring process. Unfortunately, many of these have not been sufficiently validated and there is a lack of uptake in SOCs. In this paper, we describe and reflect critically on the shortcomings of traditional network-monitoring methods and identify the key role that sonification, if implemented correctly, could play in improving current monitoring capabilities. The core contribution of this position paper is in the outline of a research agenda for sonification for network monitoring, based on a review of prior research. In particular, we identify requirements for an aesthetic approach that is suitable for continuous real-time network monitoring; formalisation of an approach to designing sonifications in this space; and refinement and validation through comprehensive user testing
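
    To make the idea of sonified network data concrete, here is a minimal, hypothetical mapping from an IDS alert to sound parameters (it is not taken from any of the systems the paper reviews): severity controls pitch, and the source address is hashed to a stable stereo position.

    import hashlib

    def alert_to_sound_params(severity, source_ip, base_freq=220.0):
        """Map an IDS alert to simple sonification parameters: one octave per
        severity step, and a per-source pan so repeated offenders sound familiar."""
        frequency_hz = base_freq * (2 ** severity)
        digest = hashlib.sha256(source_ip.encode()).digest()
        pan = (digest[0] / 255.0) * 2.0 - 1.0    # -1 = hard left, +1 = hard right
        return {"frequency_hz": frequency_hz, "pan": pan}

    print(alert_to_sound_params(severity=2, source_ip="203.0.113.7"))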

    Busy and confused? High risk of missed alerts in the cockpit: An electrophysiological study

    The ability to react to unexpected auditory stimuli is critical in complex settings such as aircraft cockpits or air traffic control towers, which are characterized by high mental load and highly complex auditory environments (i.e., many different auditory alerts). Evidence shows that both factors can negatively impact auditory attention and prevent appropriate reactions. In the present study, 60 participants performed a simulated aviation task varying in terms of mental load (no, low, high) concurrently with a tone detection paradigm in which the complexity of the auditory environment (i.e., auditory load) was manipulated (1, 2, or 3 different tones). We measured both detection performance (misses, false alarms, d’) and brain activity (event-related potentials) associated with the target tone. Our results showed that both mental and auditory loads affected target tone detection performance. Importantly, their combined effects had a large impact on the percentage of missed target tones. While, in the no-mental-load condition, the miss rate was very low with 1 tone (0.53%) and 2 tones (1.11%), it increased drastically with 3 tones (24.44%); this effect was accentuated as mental load increased, yielding the highest miss rate in the 3-tone paradigm under high mental load (68.64%). Increased mental and auditory loads and miss rates were associated with disrupted brain responses to the target tone, as shown by reduced P3b amplitude. In sum, our results highlight the importance of balancing mental and auditory loads to maintain efficient reactions to alarms in complex working environments.
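
    For reference, the sensitivity index d’ reported above is conventionally computed as the difference between the z-transformed hit and false-alarm rates. A minimal sketch follows; the log-linear correction for extreme rates and the example counts are assumptions, not the study's values.

    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """d' = Z(hit rate) - Z(false-alarm rate), with a log-linear correction
        so that rates of exactly 0 or 1 do not yield infinite z-scores."""
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # Illustrative counts only (not the study's data).
    print(d_prime(hits=31, misses=69, false_alarms=5, correct_rejections=95))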

    Internal representations of auditory frequency: behavioral studies of format and malleability by instructions

    Research has suggested that representational and perceptual systems draw upon some of the same processing structures, and evidence also has accumulated to suggest that representational formats are malleable by instructions. Very little research, however, has considered how nonspeech sounds are internally represented, and the use of audio in systems will often proceed under the assumption that separation of information by modality is sufficient for eliminating information processing conflicts. Three studies examined the representation of nonspeech sounds in working memory. In Experiment 1, a mental scanning paradigm suggested that nonspeech sounds can be flexibly represented in working memory, but also that a universal per-item scanning cost persisted across encoding strategies. Experiment 2 modified the sentence-picture verification task to include nonspeech sounds (i.e., a sound-sentence-picture verification task) and found evidence generally supporting three distinct formats of representation as well as a lingering effect of auditory stimuli on verification times across representational formats. Experiment 3 manipulated three formats of internal representation (verbal, visuospatial imagery, and auditory imagery) for a point estimation sonification task in the presence of three types of interference tasks (verbal, visuospatial, and auditory) in an effort to induce selective processing code (i.e., domain-specific working memory) interference. Results showed no selective interference but instead suggested a general performance decline (i.e., a general representational resource) for the sonification task in the presence of an interference task, regardless of the sonification encoding strategy or the qualitative interference task demands. Results suggested a distinct role of internal representations for nonspeech sounds with respect to cognitive theory. The predictions of the processing codes dimension of the multiple resources construct were not confirmed; possible explanations are explored. The practical implications for the use of nonspeech sounds in applications include a possible response time advantage when an external stimulus and the format of internal representation match.
    Ph.D. dissertation. Committee Chair: Walker, Bruce; Committee Member: Bonebright, Terri; Committee Member: Catrambone, Richard; Committee Member: Corso, Gregory; Committee Member: Rogers, Wend

    The Berlin Brain-Computer Interface: Progress Beyond Communication and Control

    The combined effect of fundamental results about neurocognitive processes and advancements in decoding mental states from ongoing brain signals has brought forth a whole range of potential neurotechnological applications. In this article, we review our developments in this area and put them into perspective. These examples cover a wide range of maturity levels with respect to their applicability. While we assume we are still a long way from integrating Brain-Computer Interface (BCI) technology into general interaction with computers, or from implementing neurotechnological measures in safety-critical workplaces, results have already been obtained using a BCI as a research tool. We also discuss the reasons why, in some of the prospective application domains, considerable effort is still required to make the systems ready to deal with the full complexity of the real world.
    Funding: EC/FP7/611570/EU/Symbiotic Mind Computer Interaction for Information Seeking/MindSee; EC/FP7/625991/EU/Hyperscanning 2.0 Analyses of Multimodal Neuroimaging Data: Concept, Methods and Applications/HYPERSCANNING 2.0; DFG, 103586207, GRK 1589: Verarbeitung sensorischer Informationen in neuronalen Systeme

    Enabling the effective application of spatial auditory displays in modern flight decks

    Extended Abstracts

    Presented at the 21st International Conference on Auditory Display (ICAD2015), July 6-10, 2015, Graz, Styria, Austria. The compiled collection of extended abstracts included in the ICAD 2015 Proceedings. Extended abstracts include, but are not limited to, late-breaking results, works in early stages of progress, novel methodologies, unique or controversial theoretical positions, and discussions of unsuccessful research or null findings.
    Contents:
    Mark Ballora, “Two examples of sonification for viewer engagement: Hurricanes and squirrel hibernation cycles”
    Stephen Barrass, “Diagnostic Singing Bowls”
    Natasha Barrett, Kristian Nymoen, “Investigations in coarticulated performance gestures using interactive parameter-mapping 3D sonification”
    Lapo Boschi, Arthur Paté, Benjamin Holtzman, Jean-Loïc le Carrou, “Can auditory display help us categorize seismic signals?”
    Cédric Camier, François-Xavier Féron, Julien Boissinot, Catherine Guastavino, “Tracking moving sounds: Perception of spatial figures”
    Coralie Diatkine, Stéphanie Bertet, Miguel Ortiz, “Towards the holistic spatialization of multiple sound sources in 3D, implementation using ambisonics to binaural technique”
    S. Maryam FakhrHosseini, Paul Kirby, Myounghoon Jeon, “Regulating Drivers’ Aggressiveness by Sonifying Emotional Data”
    Wolfgang Hauer, Katharina Vogt, “Sonification of a streaming-server logfile”
    Thomas Hermann, Tobias Hildebrandt, Patrick Langeslag, Stefanie Rinderle-Ma, “Optimizing aesthetics and precision in sonification for peripheral process-monitoring”
    Minna Huotilainen, Matti Gröhn, Iikka Yli-Kyyny, Jussi Virkkala, Tiina Paunio, “Sleep Enhancement by Sound Stimulation”
    Steven Landry, Jayde Croschere, Myounghoon Jeon, “Subjective Assessment of In-Vehicle Auditory Warnings for Rail Grade Crossings”
    Rick McIlraith, Paul Walton, Jude Brereton, “The Spatialised Sonification of Drug-Enzyme Interactions”
    George Mihalas, Minodora Andor, Sorin Paralescu, Anca Tudor, Adrian Neagu, Lucian Popescu, Antoanela Naaji, “Adding Sound to Medical Data Representation”
    Rainer Mittmannsgruber, Katharina Vogt, “Auditory assistance for timing presentations”
    Joseph W. Newbold, Andy Hunt, Jude Brereton, “Chemical Spectral Analysis through Sonification”
    S. Camille Peres, Daniel Verona, Paul Ritchey, “The Effects of Various Parameter Combinations in Parameter-Mapping Sonifications: A Pilot Study”
    Eva Sjuve, “Metopia: Experiencing Complex Environmental Data Through Sound”
    Benjamin Stahl, Katharina Vogt, “The Effect of Audiovisual Congruency on Short-Term Memory of Serial Spatial Stimuli: A Pilot Test”
    David Worrall, “Realtime sonification and visualisation of network metadata (The NetSon Project)”
    Bernhard Zeller, Katharina Vogt, “Auditory graph evolution by the example of spurious correlations”

    Safe and Sound: Proceedings of the 27th Annual International Conference on Auditory Display

    Complete proceedings of the 27th International Conference on Auditory Display (ICAD2022), June 24-27, 2022. Online virtual conference.