
    Neural dynamics of change detection in crowded acoustic scenes

    Two key questions concerning change detection in crowded acoustic environments are the extent to which cortical processing is specialized for different forms of acoustic change and when, in the time course of cortical processing, neural activity becomes predictive of behavioral outcomes. Here, we address these issues by using magnetoencephalography (MEG) to probe the cortical dynamics of change detection in ongoing acoustic scenes containing as many as ten concurrent sources. Each source was formed of a sequence of tone pips with a unique carrier frequency and temporal modulation pattern, designed to mimic the spectrotemporal structure of natural sounds. Our results show that listeners are more accurate and quicker to detect the appearance than the disappearance of an auditory source in the ongoing scene. Underpinning this behavioral asymmetry are change-evoked responses differing not only in magnitude and latency but also in their spatial patterns. We find that even the earliest (~50 ms) cortical response to change is predictive of behavioral outcomes (detection times), consistent with the hypothesized role of local neural transients in supporting change detection.
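
    Although the abstract does not include stimulus code, the scene construction it describes is easy to illustrate. The Python sketch below builds a scene of concurrent tone-pip sources, each on its own carrier frequency with its own temporal pattern; the pip duration, carrier range, and inter-pip gaps are illustrative assumptions, not the authors' parameters.

    import numpy as np

    FS = 44100          # sample rate (Hz)
    PIP_DUR = 0.05      # single tone-pip duration (s) -- an assumed value

    def tone_pip(freq, fs=FS, dur=PIP_DUR):
        """One Hann-gated tone pip at the given carrier frequency."""
        t = np.arange(int(dur * fs)) / fs
        gate = 0.5 * (1 - np.cos(2 * np.pi * t / dur))
        return gate * np.sin(2 * np.pi * freq * t)

    def make_scene(carriers, scene_dur=4.0, rng=None):
        """Sum one tone-pip stream per carrier; each stream gets its own
        (here randomly drawn) sequence of inter-pip gaps."""
        rng = rng or np.random.default_rng()
        scene = np.zeros(int(scene_dur * FS))
        for f in carriers:
            pip = tone_pip(f)
            t = 0.0
            while int(t * FS) + pip.size <= scene.size:
                i = int(t * FS)
                scene[i:i + pip.size] += pip
                t += PIP_DUR + rng.uniform(0.02, 0.2)
            # distinct carrier + distinct timing = one "source"
        return scene / np.max(np.abs(scene))    # normalize to avoid clipping

    # e.g. a ten-source scene on log-spaced carriers, as in the study design
    scene = make_scene(np.geomspace(300, 4800, 10))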

    Detecting and representing predictable structure during auditory scene analysis

    We use psychophysics and MEG to test how sensitivity to input statistics facilitates auditory scene analysis (ASA). Human subjects listened to ‘scenes’ composed of concurrent tone-pip streams (sources). On occasional trials a new source appeared partway through the scene. Listeners were more accurate and quicker to detect source appearance in scenes composed of temporally regular (REG), rather than random (RAND), sources. MEG in passive listeners, and in those actively detecting appearance events, revealed increased sustained activity in auditory and parietal cortex in REG relative to RAND scenes, emerging ~400 ms after scene onset. Over and above this, appearance in REG scenes was associated with increased responses relative to RAND scenes. The effect of temporal structure on appearance-evoked responses was delayed when listeners were focused on the scenes relative to when listening passively, consistent with the notion that attention reduces ‘surprise’. Overall, the results implicate a mechanism that tracks the predictability of multiple concurrent sources to facilitate both active and passive ASA.
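
    The REG/RAND manipulation described above comes down to how inter-pip intervals are drawn. Here is a minimal sketch of that contrast; the function name and the interval range are ours, not the paper's.

    import numpy as np

    def onset_times(scene_dur, regular, rng, lo=0.06, hi=0.24):
        """Pip onsets for one source: a REG source repeats one fixed
        inter-onset interval; a RAND source draws each interval afresh."""
        period = rng.uniform(lo, hi)        # the REG source's fixed period
        onsets, t = [], 0.0
        while t < scene_dur:
            onsets.append(t)
            t += period if regular else rng.uniform(lo, hi)
        return np.array(onsets)

    rng = np.random.default_rng(0)
    reg = onset_times(4.0, regular=True, rng=rng)    # temporally regular
    rand = onset_times(4.0, regular=False, rng=rng)  # matched random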

    The effect of healthy aging on change detection and sensitivity to predictable structure in crowded acoustic scenes

    The auditory system plays a critical role in supporting our ability to detect abrupt changes in our surroundings. Here we study how this capacity is affected in the course of healthy aging. Artificial acoustic ‘scenes’, populated by multiple concurrent streams of pure tones (‘sources’), were used to capture the challenges of listening in complex acoustic environments. Two scene conditions were included: REG scenes consisted of sources characterized by a regular temporal structure; matched RAND scenes contained sources that were temporally random. Changes, manifested as the abrupt disappearance of one of the sources, were introduced on a subset of the trials, and participants (‘young’ group, N = 41, ages 20-38 years; ‘older’ group, N = 41, ages 60-82 years) were instructed to monitor the scenes for these events. Previous work demonstrated that young listeners exhibit better change detection performance in REG scenes, reflecting sensitivity to temporal structure. Here we sought to determine: (1) whether ‘baseline’ change detection ability (i.e. in RAND scenes) is affected by age; (2) whether aging affects listeners’ sensitivity to temporal regularity; (3) how change detection capacity relates to listeners’ hearing and cognitive profile (a battery of tests capturing hearing and cognitive abilities hypothesized to be affected by aging). The results demonstrated that healthy aging is associated with reduced sensitivity to abrupt scene changes in RAND scenes, but that performance does not correlate with age or with standard audiological measures such as pure-tone audiometry or speech-in-noise performance. Remarkably, older listeners’ change detection performance improved substantially (up to the level exhibited by young listeners) in REG relative to RAND scenes. This suggests that the ability to extract and track the regularity associated with scene sources, even in crowded acoustic environments, is relatively preserved in older listeners.

    Sensitivity to temporal structure facilitates perceptual analysis of complex auditory scenes

    The notion that sensitivity to the statistical structure of the environment is pivotal to perception has recently garnered considerable attention. Here we investigated this issue in the context of hearing. Building on previous work (Sohoglu and Chait, 2016a, eLife), stimuli were artificial 'soundscapes' populated by multiple (up to 14) simultaneous streams ('auditory objects') composed of tone-pip sequences, each with a distinct frequency and pattern of amplitude modulation. Sequences were either temporally regular or random. We show that listeners' ability to detect the abrupt appearance or disappearance of a stream is facilitated when scene streams are characterized by a temporally regular fluctuation pattern. Both the regularity of the changing stream and that of the background (non-changing) streams contribute independently to this effect. Remarkably, listeners benefit from regularity even when they are not consciously aware of it. These findings establish that perception of complex acoustic scenes relies on the availability of detailed representations of the regularities automatically extracted from multiple concurrent streams.

    The effect of distraction on change detection in crowded acoustic scenes

    In this series of behavioural experiments we investigated the effect of distraction on the maintenance of acoustic scene information in short-term memory. Stimuli were artificial acoustic ‘scenes’ composed of several (up to twelve) concurrent tone-pip streams (‘sources’). A gap (1000 ms) was inserted partway through the scene; changes, in the form of the appearance of a new source or the disappearance of an existing source, occurred after the gap on 50% of the trials. Listeners were instructed to monitor the unfolding ‘soundscapes’ for these events. Distraction was measured by presenting distractor stimuli during the gap. Experiments 1a and 1b used a dual-task design in which listeners were required to perform a task with varying attentional demands (‘High Demand’ vs. ‘Low Demand’) on brief auditory (Experiment 1a) or visual (Experiment 1b) signals presented during the gap. Experiments 2 and 3 required participants to ignore distractor sounds and focus on the change detection task. Our results demonstrate that the maintenance of scene information in short-term memory is influenced by the availability of attentional and/or processing resources during the gap, and that this dependence appears to be modality specific. We also show that these processes are susceptible to bottom-up driven distraction even in situations where the distractors are not novel but occur on each trial. Change detection performance is systematically linked with the independently determined perceptual salience of the distractor sound. The findings also demonstrate that the present task may be a useful objective means of determining relative perceptual salience.
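
    The trial structure lends itself to a compact illustration. Below is a hedged sketch of the trial logic only (no audio rendering; see the scene-generation sketch above for waveforms); the function name, source count, and carrier range are all hypothetical.

    import numpy as np

    def change_trial_plan(n_sources=12, rng=None):
        """Pre-gap sources, a 1000 ms gap, then a post-gap scene with a
        source appearing or disappearing on 50% of trials."""
        rng = rng or np.random.default_rng()
        pre = list(np.geomspace(300, 4800, n_sources))  # carrier frequencies
        post, kind = list(pre), None
        if rng.random() < 0.5:                          # change on half of trials
            if rng.random() < 0.5:
                kind = "disappearance"
                post.pop(rng.integers(len(post)))       # remove an existing source
            else:
                kind = "appearance"
                post.append(float(rng.uniform(300, 4800)))  # add a new source
        return {"pre": pre, "gap_ms": 1000, "post": post, "change": kind}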

    On the Emergence and Awareness of Auditory Objects

    How do humans successfully navigate the sounds of music and the voice of a friend in the midst of a noisy cocktail party? Two recent articles in PLoS Biology provide psychoacoustic and neuronal clues about where to search for the answers.

    Attention, Awareness, and the Perception of Auditory Scenes

    Auditory perception and cognition entail both low-level and high-level processes, which are likely to interact with each other to create our rich conscious experience of soundscapes. Recent research that we review has revealed numerous influences of high-level factors, such as attention, intention, and prior experience, on conscious auditory perception. Studies have also shown that auditory scene analysis tasks can exhibit multistability in a manner very similar to ambiguous visual stimuli, presenting a unique opportunity to study the neural correlates of auditory awareness and the extent to which mechanisms of perception are shared across sensory modalities. Research has also led to a growing number of techniques through which auditory perception can be manipulated and even completely suppressed. Such findings have important consequences for our understanding of the mechanisms of perception and should allow scientists to precisely distinguish the contributions of different high-level factors.

    Is predictability salient? A study of attentional capture by auditory patterns

    In this series of behavioural and electroencephalography (EEG) experiments, we investigate the extent to which repeating patterns of sounds capture attention. Work in the visual domain has revealed attentional capture by statistically predictable stimuli, consistent with predictive coding accounts which suggest that attention is drawn to sensory regularities. Here, stimuli comprised rapid sequences of tone pips, arranged in regular (REG) or random (RAND) patterns. EEG data demonstrate that the brain rapidly recognizes predictable patterns, manifested as a swift increase in responses to REG relative to RAND sequences. This increase is reminiscent of the gain increase in neural responses to attended stimuli often seen in the neuroimaging literature, and is thus consistent with the hypothesis that predictable sequences draw attention. To study potential attentional capture by auditory regularities, we used REG and RAND sequences in two different behavioural tasks designed to reveal effects of attentional capture by regularity. Overall, the pattern of results suggests that regularity does not capture attention. This article is part of the themed issue 'Auditory and visual scene analysis'.

    Audio-visual multi-modality driven hybrid feature learning model for crowd analysis and classification

    The rapid emergence of advanced software systems, low-cost hardware, and decentralized cloud computing technologies has broadened the horizon for vision-based surveillance, monitoring, and control. However, complex and inferior feature learning over visual artefacts or video streams, especially under extreme conditions, limits the majority of existing vision-based crowd analysis and classification systems. Retrieving event-sensitive or crowd-type-sensitive spatio-temporal features for different crowd types under extreme conditions is a highly complex task. Consequently, it results in lower accuracy, and hence low reliability, which confines existing methods for real-time crowd analysis. Despite numerous efforts in vision-based approaches, the lack of acoustic cues often creates ambiguity in crowd classification. On the other hand, the strategic amalgamation of audio-visual features can enable accurate and reliable crowd analysis and classification. Motivated by this, in this research a novel audio-visual multi-modality driven hybrid feature learning model is developed for crowd analysis and classification. In this work, a hybrid feature extraction model was applied to extract deep spatio-temporal features using the Gray-Level Co-occurrence Matrix (GLCM) and an AlexNet transfer learning model. After extracting the different GLCM features and AlexNet deep features, horizontal concatenation was performed to fuse the different feature sets. Similarly, for acoustic feature extraction, the audio samples (from the input video) were processed with static (fixed-size) sampling, pre-emphasis, block framing, and Hann windowing, followed by extraction of acoustic features including GTCC, GTCC-Delta, GTCC-Delta-Delta, MFCC, spectral entropy, spectral flux, spectral slope, and harmonics-to-noise ratio (HNR). Finally, the extracted audio-visual features were fused to yield a composite multi-modal feature set, which was processed for classification using a random forest ensemble classifier. The multi-class classification yields a crowd-classification accuracy of 98.26%, precision of 98.89%, sensitivity of 94.82%, specificity of 95.57%, and an F-measure of 98.84%. The robustness of the proposed multi-modality-based crowd analysis model confirms its suitability for real-world crowd detection and classification tasks.
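
    As a rough illustration of the fusion idea, the sketch below concatenates GLCM texture statistics with acoustic features and hands the result to a random forest. It is not the authors' pipeline: librosa has no GTCC implementation, so MFCCs stand in for the gammatone features, and the AlexNet embedding step is only marked where it would be concatenated.

    import numpy as np
    import librosa
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.ensemble import RandomForestClassifier

    def glcm_features(frame_gray_u8):
        """Texture statistics from one grayscale video frame (uint8)."""
        glcm = graycomatrix(frame_gray_u8, distances=[1],
                            angles=[0, np.pi / 2], levels=256,
                            symmetric=True, normed=True)
        props = ["contrast", "homogeneity", "energy", "correlation"]
        return np.hstack([graycoprops(glcm, p).ravel() for p in props])

    def audio_features(y, sr):
        """Mean MFCCs (standing in for GTCC) plus two spectral summaries."""
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
        centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
        flatness = librosa.feature.spectral_flatness(y=y).mean()
        return np.hstack([mfcc, centroid, flatness])

    def fused_vector(frame_gray_u8, y, sr):
        """Horizontal concatenation of the visual and acoustic feature sets.
        A pretrained AlexNet embedding would be hstacked here as well."""
        return np.hstack([glcm_features(frame_gray_u8), audio_features(y, sr)])

    # Usage sketch: X stacks one fused vector per clip, labels holds crowd types.
    # clf = RandomForestClassifier(n_estimators=300).fit(X, labels)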