    Similarities in face and voice cerebral processing

    In this short paper I illustrate, through a few selected examples, several compelling similarities in the functional organization of face and voice cerebral processing: (1) the presence of cortical areas selective for face or voice stimuli, also observed in non-human primates and causally related to perception; (2) the coding of face or voice identity using a “norm-based” scheme; and (3) personality inferences from faces and voices organized in the same Trustworthiness–Dominance “social space”.
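
    The “norm-based” coding in (2) can be made concrete with a toy sketch: identity is represented by the deviation of a stimulus’s feature vector from the population average. All features, dimensions, and numbers below are hypothetical illustrations, not quantities from the paper.

```python
import numpy as np

# Toy illustration of norm-based identity coding (a sketch, not the
# author's model): each face or voice is a feature vector, and identity
# is encoded by its deviation from the population average (the "norm").

rng = np.random.default_rng(0)
population = rng.normal(size=(200, 50))  # 200 hypothetical stimuli, 50 features
norm = population.mean(axis=0)           # the prototype / norm

def identity_code(stimulus):
    """Direction and distance from the norm jointly define identity."""
    deviation = stimulus - norm
    distance = np.linalg.norm(deviation)        # distinctiveness
    direction = deviation / (distance + 1e-12)  # identity axis
    return direction, distance

# One signature of norm-based coding: moving a stimulus further out
# along its own identity axis (a "caricature") should preserve, and
# even exaggerate, its identity.
direction, distance = identity_code(population[0])
caricature = norm + 1.5 * distance * direction
```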

    Idealized computational models for auditory receptive fields

    This paper presents a theory by which idealized models of auditory receptive fields can be derived in a principled, axiomatic manner from a set of structural properties that enable invariance of receptive field responses under natural sound transformations and ensure internal consistency between spectro-temporal receptive fields at different temporal and spectral scales. For defining a time-frequency transformation of a purely temporal sound signal, the framework is shown to allow a new way of deriving the Gabor and Gammatone filters, as well as a novel family of generalized Gammatone filters with additional degrees of freedom for trading off the spectral selectivity against the temporal delay of time-causal temporal window functions. When applied to the definition of a second layer of receptive fields over a spectrogram, the framework leads to two canonical families of spectro-temporal receptive fields, expressed as spectro-temporal derivatives of either spectro-temporal Gaussian kernels (for non-causal time) or the combination of a time-causal generalized Gammatone filter over the temporal domain with a Gaussian filter over the log-spectral domain. For each filter family, the spectro-temporal receptive fields can either be separable over the time-frequency domain or adapted to local glissando transformations that represent variations in logarithmic frequency over time. Within each domain (non-causal or time-causal time), these receptive field families are uniquely determined by the assumptions. The framework is shown to support the computation of basic auditory features for audio processing, and it yields predicted auditory receptive fields with good qualitative similarity to biological receptive fields measured in the inferior colliculus (ICC) and primary auditory cortex (A1) of mammals.
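
    As a concrete point of reference for the filter families named above, the sketch below implements a temporal Gabor filter and a Gammatone filter of order n. The sampling rate, centre frequency, and bandwidth are illustrative assumptions; the paper’s generalized Gammatone family and the spectro-temporal constructions are not reproduced here.

```python
import numpy as np

# Minimal sketches of the two classical temporal filter families the
# framework recovers (illustrative parameters, not the paper's derivation).

fs = 16000                      # sampling rate in Hz (assumed)
t = np.arange(0, 0.05, 1 / fs)  # 50 ms of filter support

def gabor(t, f, sigma):
    """Non-causal Gabor filter: Gaussian window times a carrier at f Hz."""
    return np.exp(-0.5 * (t / sigma) ** 2) * np.cos(2 * np.pi * f * t)

def gammatone(t, f, b, n=4):
    """Time-causal Gammatone filter: t^(n-1) onset, exponential decay,
    carrier at f Hz. The order n governs the trade-off between spectral
    selectivity and temporal delay mentioned in the abstract."""
    return t ** (n - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * f * t)

g_gabor = gabor(t - t.mean(), f=1000, sigma=0.004)  # window centred in support
g_gamma = gammatone(t, f=1000, b=125, n=4)
g_gamma /= np.abs(g_gamma).max()                    # normalize peak amplitude
```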

    Cortical transformation of spatial processing for solving the cocktail party problem: a computational model

    In multisource, "cocktail party" sound environments, human and animal auditory systems can use spatial cues to effectively separate and follow one sound source over competing sources. While mechanisms for extracting spatial cues such as interaural time differences (ITDs) are well understood in precortical areas, how such information is used and transformed in higher cortical regions to represent segregated sound sources is not clear. We present a computational model describing a hypothesized neural network that spans spatial cue detection areas and the cortex. The network is based on recent physiological findings that cortical neurons selectively encode target stimuli in the presence of competing maskers, depending on source locations (Maddox et al., 2012). We demonstrate that key features of cortical responses can be generated by the model network, which exploits spatial interactions between inputs via lateral inhibition, enabling the spatial separation of target and interfering sources while allowing monitoring of a broader acoustic space when there is no competition. We present the model network, along with testable experimental paradigms, as a starting point for understanding the transformation and organization of spatial information from midbrain to cortex. The network is then extended to suggest engineering solutions that may be useful for hearing-assistive devices addressing the cocktail party problem.
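
    The lateral-inhibition mechanism can be sketched with a toy spatial-channel network. The tuning widths, kernels, and weights below are assumptions chosen for illustration, not the parameters of the published model.

```python
import numpy as np

# Toy sketch of cortical lateral inhibition over spatially tuned inputs:
# each unit receives narrow excitation and broader inhibition from its
# neighbours, so a stronger source suppresses a weaker, spatially
# separated competitor.

azimuths = np.linspace(-90, 90, 37)        # spatial channels (degrees)

def tuning(center, width=20.0):
    """Gaussian spatial tuning curve over the channel array."""
    return np.exp(-0.5 * ((azimuths - center) / width) ** 2)

drive = tuning(-45) + 0.8 * tuning(30)     # target plus a weaker masker

# Difference-of-Gaussians lateral interaction; each kernel is normalized
# so excitation and inhibition act on comparable scales.
dist = np.abs(azimuths[:, None] - azimuths[None, :])
exc = np.exp(-0.5 * (dist / 10) ** 2)
inh = np.exp(-0.5 * (dist / 40) ** 2)
W = exc / exc.sum(1, keepdims=True) - 2.0 * inh / inh.sum(1, keepdims=True)

response = np.maximum(0.0, drive + W @ drive)  # one pass of lateral interaction
# Re-running with the masker term removed from `drive` leaves a broader
# swath of space responsive -- the no-competition case in the abstract.
```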

    Infants segment words from songs - an EEG study

    Children’s songs are omnipresent and highly attractive stimuli in infants’ input. Previous work suggests that infants process linguistic–phonetic information from simplified sung melodies. The present study investigated whether infants learn words from ecologically valid children’s songs. Testing 40 Dutch-learning 10-month-olds in a familiarization-then-test electroencephalography (EEG) paradigm, this study asked whether infants can segment repeated target words embedded in songs during familiarization and subsequently recognize those words in continuous speech in the test phase. To replicate previous speech work and to compare segmentation across modalities, infants participated in both song and speech sessions. Results showed a positive event-related potential (ERP) familiarity effect for the final compared with the first target occurrences during both song and speech familiarization. No evidence was found for word recognition in the test phase following either song or speech familiarization. Comparisons between the stimuli of the present study and those of a comparable previous study suggested that acoustic prominence and speech rate may have contributed to the polarity of the ERP familiarity effect and to its absence in the test phase. Overall, the present study provides evidence that 10-month-old infants can segment words embedded in songs, and it raises questions about the acoustic and other factors that enable or hinder infant word segmentation from songs and speech.
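
    For readers unfamiliar with the familiarization-then-test logic, the ERP familiarity effect amounts to comparing trial-averaged responses to the first versus the final occurrences of a target word within a post-stimulus window. The sketch below runs on synthetic data; the sampling rate, analysis window, and injected effect are arbitrary assumptions, not the study’s values.

```python
import numpy as np

# Schematic ERP familiarity-effect computation on synthetic epochs.

fs = 500                                # sampling rate in Hz (assumed)
times = np.arange(-0.2, 0.8, 1 / fs)    # epoch from -200 to 800 ms

rng = np.random.default_rng(1)
first = rng.normal(0, 5, size=(30, times.size))    # 30 synthetic trials
final = rng.normal(0, 5, size=(30, times.size))
final += 2.0 * np.exp(-0.5 * ((times - 0.45) / 0.08) ** 2)  # injected positivity

erp_first = first.mean(axis=0)          # averaging across trials gives the ERP
erp_final = final.mean(axis=0)

win = (times >= 0.35) & (times <= 0.55)             # analysis window (assumed)
effect = erp_final[win].mean() - erp_first[win].mean()
print(f"familiarity effect: {effect:+.2f} µV (positive = final > first)")
```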

    Adaptive Resonance Theory: Self-Organizing Networks for Stable Learning, Recognition, and Prediction

    Adaptive Resonance Theory (ART) is a neural theory of human and primate information processing and of adaptive pattern recognition and prediction for technology. Biological applications to attentive learning of visual recognition categories by the inferotemporal cortex and hippocampal system, medial temporal amnesia, corticogeniculate synchronization, auditory streaming, speech recognition, and eye movement control are noted. ARTMAP systems for technology integrate neural networks, fuzzy logic, and expert production systems to carry out both unsupervised and supervised learning. Both fast and slow learning remain stable in response to large nonstationary databases. Match tracking search conjointly maximizes learned compression while minimizing predictive error. Spatial and temporal evidence accumulation improves accuracy in 3-D object recognition. Other applications are noted.
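
    A minimal ART1-style sketch (binary inputs only, far short of the full ARTMAP systems described above) shows the core loop: categories compete through a choice function, a vigilance test gates resonance versus reset, and resonant categories learn by intersecting their template with the input, which is what keeps fast learning stable. The vigilance value and patterns below are illustrative.

```python
import numpy as np

# Minimal ART1 sketch: choice, vigilance-gated search, and learning by
# template intersection (illustrative; not the full ART/ARTMAP theory).

def art1(inputs, rho=0.7, beta=1.0):
    templates, labels = [], []              # one binary template per category
    for x in inputs:
        x = np.asarray(x, dtype=bool)
        # Search categories in descending order of the choice function.
        order = sorted(range(len(templates)),
                       key=lambda j: -np.sum(x & templates[j])
                                     / (beta + templates[j].sum()))
        for j in order:
            match = np.sum(x & templates[j]) / x.sum()
            if match >= rho:                      # vigilance passed: resonance
                templates[j] = x & templates[j]   # learn by intersection
                labels.append(j)
                break
        else:                                # every category reset: recruit new
            templates.append(x.copy())
            labels.append(len(templates) - 1)
    return labels, templates

patterns = [[1, 1, 1, 0, 0], [1, 1, 0, 0, 0], [0, 0, 1, 1, 1], [0, 0, 0, 1, 1]]
labels, templates = art1(patterns, rho=0.6)
print(labels)  # similar patterns share a category: [0, 0, 1, 1]
```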

    Interactions between visual and semantic processing during object recognition revealed by modulatory effects of age of acquisition

    The age of acquisition (AoA) of objects and their names is a powerful determinant of processing speed in adulthood, with early-acquired objects being recognized and named faster than late-acquired objects. Previous research using fMRI (Ellis et al., 2006. Traces of vocabulary acquisition in the brain: evidence from covert object naming. NeuroImage 33, 958–968) found that AoA modulated the strength of BOLD responses in both occipital and left anterior temporal cortex during object naming. We used magnetoencephalography (MEG) to explore in more detail the nature of the influence of AoA on activity in those two regions. Covert object naming recruited a left-hemisphere network familiar from previous research, including visual, left occipito-temporal, anterior temporal, and inferior frontal regions. Region-of-interest (ROI) analyses found that occipital cortex generated a rapid evoked response (~75–200 ms at 0–40 Hz) that peaked at 95 ms but was not modulated by AoA. That response was followed by a complex of later occipital responses extending from ~300 to 850 ms, which were stronger for early- than late-acquired items from ~325 to 675 ms at 10–20 Hz in the induced rather than the evoked component. Left anterior temporal cortex showed an evoked response that occurred significantly later than the first occipital response (~100–400 ms at 0–10 Hz, peaking at 191 ms) and was stronger for early- than late-acquired items from ~100 to 300 ms at 2–12 Hz. A later anterior temporal response from ~550 to 1050 ms at 5–20 Hz was not modulated by AoA. The results indicate that the initial analysis of object forms in visual cortex is not influenced by AoA. A fast feedforward sweep of activation from occipital to left anterior temporal cortex then results in stronger activation of semantic representations for early- than late-acquired objects. Top-down re-activation of occipital cortex by those semantic representations is then greater for early- than late-acquired objects, producing the delayed AoA modulation of the visual response.
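
    The induced-versus-evoked distinction that carries the later occipital result can be sketched directly: evoked power transforms the trial average, so only phase-locked activity survives, while induced power transforms each trial before averaging and therefore retains activity whose phase varies across trials. The data, frequency, and wavelet parameters below are synthetic assumptions, not the study’s pipeline.

```python
import numpy as np

# Evoked vs. induced power on synthetic trials containing a 15 Hz burst
# whose phase jitters from trial to trial.

fs = 250
times = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(2)

trials = np.array([
    np.sin(2 * np.pi * 15 * times + rng.uniform(0, 2 * np.pi))
    * np.exp(-0.5 * ((times - 0.5) / 0.1) ** 2)
    + rng.normal(0, 0.5, times.size)
    for _ in range(30)])

def morlet_power(signal, f, fs, n_cycles=7):
    """Power at frequency f via convolution with a complex Morlet wavelet."""
    sigma_t = n_cycles / (2 * np.pi * f)
    t = np.arange(-5 * sigma_t, 5 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * f * t) * np.exp(-0.5 * (t / sigma_t) ** 2)
    return np.abs(np.convolve(signal, wavelet, mode="same")) ** 2

evoked = morlet_power(trials.mean(axis=0), 15, fs)           # phase-locked only
induced = np.mean([morlet_power(tr, 15, fs) for tr in trials], axis=0)

# The phase-jittered burst survives in `induced` but largely cancels out
# of `evoked` -- the pattern reported for the later occipital response.
```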