
    A Neural Model of How the Brain Represents and Compares Multi-Digit Numbers: Spatial and Categorical Processes

    Both animals and humans are capable of representing and comparing numerical quantities, but only humans seem to have evolved multi-digit place-value number systems. This article develops a neural model, called the Spatial Number Network, or SpaN model, which predicts how these shared numerical capabilities are computed using a spatial representation of number quantities in the Where cortical processing stream, notably the Inferior Parietal Cortex. Multi-digit numerical representations that obey a place-value principle are proposed to arise through learned interactions between categorical language representations in the What cortical processing stream and the Where spatial representation. It is proposed that learned semantic categories that symbolize separate digits, as well as place markers like "tens," "hundreds," "thousands," etc., are associated through learning with the corresponding spatial locations of the Where representation, leading to a place-value number system as an emergent property of What-Where information fusion. The model quantitatively simulates error rates in quantification and numerical comparison tasks, and reaction times for number priming and numerical assessment and comparison tasks. In the Where cortical processing stream, it is proposed that transient responses to inputs are integrated before they activate an ordered spatial map that selectively responds to the number of events in a sequence. Neural mechanisms are defined which give rise to an ordered spatial numerical map and Weber law characteristics as emergent properties. The dynamics of numerical comparison are encoded in activity pattern changes within this spatial map. Such changes cause a "directional comparison wave" whose properties mimic data about numerical comparison. These model mechanisms are variants of neural mechanisms that have elsewhere been used to explain data about motion perception, attention shifts, and target tracking. Thus, the present model suggests how numerical representations may have emerged as specializations of more primitive mechanisms in the cortical Where processing stream. The model's What-Where interactions can explain human psychophysical data, such as error rates and reaction times, about multi-digit (base 10) numerical stimuli, and describe how such a competence can develop through learning. The SpaN model and its explanatory range are compared with other models of numerical representation.
    Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI-97-20333)
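The two Where-stream stages the abstract describes, transient inputs integrated over time and an ordered spatial map with Weber-law tuning, can be illustrated with a minimal numpy sketch. This is not the SpaN model's actual equations; the leaky integrator, the Gaussian tuning, and the width constant are all illustrative assumptions.

```python
import numpy as np

def count_events(events, leak=0.05):
    """Leaky integration of transient inputs: a stand-in for the stage where
    'transient responses to inputs are integrated' before reaching the map."""
    a = 0.0
    for e in events:
        a = (1.0 - leak) * a + e
    return a

def spatial_map(activity, n_nodes=20):
    """Project the integrated activity onto an ordered spatial map.  Each node
    prefers one number, and tuning width grows with the preferred number, so
    responses to large adjacent numbers overlap more than to small ones --
    a toy version of the Weber-law property described as emergent."""
    prefs = np.arange(1, n_nodes + 1)
    widths = 0.3 * prefs  # assumed: width proportional to magnitude
    return np.exp(-((activity - prefs) ** 2) / (2.0 * widths ** 2))

def compare(n1, n2):
    """Compare two quantities by which peak sits further along the map."""
    return int(np.argmax(spatial_map(n1))) - int(np.argmax(spatial_map(n2)))
```

Under this sketch, more events yield higher integrated activity, the map peaks at the node preferring the presented quantity, and discriminating 8 from 9 is harder than 2 from 3 because the large-number tuning curves overlap more.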

    Region-Referenced Spectral Power Dynamics of EEG Signals: A Hierarchical Modeling Approach

    Functional brain imaging through electroencephalography (EEG) relies upon the analysis and interpretation of high-dimensional, spatially organized time series. We propose to represent time-localized frequency domain characterizations of EEG data as region-referenced functional data. This representation is coupled with a hierarchical modeling approach to multivariate functional observations. Within this familiar setting, we discuss how several prior models relate to structural assumptions about multivariate covariance operators. An overarching modeling framework, based on infinite factorial decompositions, is finally proposed to balance flexibility and efficiency in estimation. The motivating application stems from a study of implicit auditory learning, in which typically developing (TD) children and children with autism spectrum disorder (ASD) were exposed to a continuous speech stream. Using the proposed model, we examine differential band power dynamics as brain function is interrogated throughout the duration of a computer-controlled experiment. Our work offers a novel look at previous findings in psychiatry, and provides further insights into the understanding of ASD. Our approach to inference is fully Bayesian and implemented in a highly optimized Rcpp package.
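The data representation this abstract builds on, a time-localized band-power curve per channel, averaged within scalp regions to give region-referenced functional observations, can be sketched in a few lines of numpy. The sliding-window periodogram, the window and band choices, and the region names are illustrative assumptions, not the paper's pipeline or its Bayesian model.

```python
import numpy as np

def band_power(x, fs, band, win=256, step=128):
    """Time-localized power of a 1-D signal in a frequency band, via a
    sliding Hann-windowed periodogram: one simple way to form the
    'time-localized frequency domain characterization' of a channel."""
    lo, hi = band
    out = []
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    mask = (freqs >= lo) & (freqs < hi)
    for start in range(0, len(x) - win + 1, step):
        seg = x[start:start + win] * np.hanning(win)
        psd = np.abs(np.fft.rfft(seg)) ** 2 / win
        out.append(psd[mask].sum())
    return np.asarray(out)

def region_referenced(signals, regions, fs, band):
    """Average channel-level band-power curves within each scalp region,
    yielding one functional observation per region per subject."""
    return {r: np.mean([band_power(signals[ch], fs, band) for ch in chs], axis=0)
            for r, chs in regions.items()}
```

For example, a 10 Hz oscillation sampled at 250 Hz produces much higher alpha-band (8-13 Hz) than beta-band (13-30 Hz) power in every window, and grouping occipital channels gives a single occipital curve the hierarchical model could then treat as one region-referenced function.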

    FaceFilter: Audio-visual speech separation using still images

    The objective of this paper is to separate a target speaker's speech from a mixture of two speakers using a deep audio-visual speech separation network. Unlike previous works that used lip movement on video clips or pre-enrolled speaker information as an auxiliary conditional feature, we use a single face image of the target speaker. In this task, the conditional feature is obtained from facial appearance in a cross-modal biometric task, where audio and visual identity representations are shared in a latent space. Identities learnt from face images force the network to isolate the matched speaker and extract that speaker's voice from the mixed speech. This resolves the permutation problem caused by swapped channel outputs, which frequently occurs in speech separation tasks. The proposed method is far more practical than video-based speech separation, since user profile images are readily available on many platforms. Also, unlike speaker-aware separation methods, it is applicable to separation with unseen speakers who have never been enrolled before. We show strong qualitative and quantitative results on challenging real-world examples.
    Comment: Under submission as a conference paper. Video examples: https://youtu.be/ku9xoLh62
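The conditioning step the abstract describes, a fixed face-identity embedding steering the separation of a mixture, can be sketched as tiling the embedding over time, concatenating it with each spectrogram frame, and predicting a per-frame soft mask. The numpy sketch below is not the paper's architecture: `W` and `b` are hypothetical placeholders for the learned separation network, and the single linear layer stands in for it.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conditional_mask(mix_spec, face_emb, W, b):
    """Tile the face-identity embedding over time, concatenate it with each
    magnitude-spectrogram frame, and map the result to a soft mask in [0, 1].
    W (shape (F + D, F)) and b (shape (F,)) are placeholder parameters."""
    T, F = mix_spec.shape
    tiled = np.tile(face_emb, (T, 1))             # (T, D): same identity every frame
    feats = np.concatenate([mix_spec, tiled], 1)  # (T, F + D)
    return sigmoid(feats @ W + b)                 # (T, F) soft mask

def separate(mix_spec, face_emb, W, b):
    """Element-wise masking of the mixture magnitude spectrogram."""
    return conditional_mask(mix_spec, face_emb, W, b) * mix_spec
```

Because the identity embedding, not a channel index, selects the target, the output is tied to a specific speaker rather than an arbitrary output slot, which is how this style of conditioning sidesteps the permutation problem.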