What can developmental disorders tell us about the neurocomputational constraints that shape development? The case of Williams syndrome
The uneven cognitive phenotype in the adult outcome of Williams syndrome has led some researchers to make strong claims about the modularity of the brain and the purported genetically determined, innate specification of cognitive modules. Such arguments have particularly been marshaled with respect to language. We challenge this direct generalization from adult phenotypic outcomes to genetic specification and consider instead how genetic disorders provide clues to the constraints on plasticity that shape the outcome of development. We specifically examine behavioral studies, brain imaging, and computational modeling of language in Williams syndrome but contend that our theoretical arguments apply equally to other cognitive domains and other developmental disorders. While acknowledging that selective deficits in normal adult patients might justify claims about cognitive modularity, we question whether similar, seemingly selective deficits found in genetic disorders can be used to argue that such cognitive modules are prespecified in infant brains. Cognitive modules are, in our view, the outcome of development, not its starting point. We note that most work on genetic disorders ignores one vital factor, the actual process of ontogenetic development, and argue that it is essential to view genetic disorders as proceeding under different neurocomputational constraints, not as demonstrations of static modularity.
The role of HG in the analysis of temporal iteration and interaural correlation
Dissociating visuo-spatial and verbal working memory: It’s all in the features
Echoing many of the themes of the seminal work of Atkinson and Shiffrin (1968), this paper uses the Feature Model (Nairne, 1988, 1990; Neath & Nairne, 1995) to account for performance in working memory tasks. The Brooks verbal and visuo-spatial matrix tasks were performed alone, with articulatory suppression, or with a spatial suppression task; the results produced the expected dissociation. We used Approximate Bayesian Computation techniques to fit the Feature Model to the data and showed that the similarity-based interference process implemented in the model accounted for the data patterns well. We then fit the model to data from Guérard and Tremblay (2008); the latter study produced a double dissociation while calling upon more typical order reconstruction tasks. Again, the model performed well. The findings show that a double dissociation can be modelled without appealing to separate systems for verbal and visuo-spatial processing. The latter findings are significant as the Feature Model had not been used to model this type of dissociation before; importantly, this is also the first time the model has been quantitatively fit to data. For the demonstration provided here, modularity was unnecessary if two assumptions were made: (1) the main difference between spatial and verbal working memory tasks is the features that are encoded; (2) secondary tasks selectively interfere with primary tasks to the extent that both tasks involve similar features. It is argued that a feature-based view is more parsimonious (see Morey, 2018) and offers flexibility in accounting for multiple benchmark effects in the field.
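The Approximate Bayesian Computation approach mentioned above can be sketched as rejection sampling: draw candidate parameters from a prior, simulate data under them, and keep only draws whose simulated summary statistic lands close to the observed one. The sketch below is a generic illustration of that logic with a toy one-parameter model; it is not the Feature Model, and the priors, summary statistic, and tolerance are placeholders.

```python
import random

def abc_rejection(observed_stat, simulate, prior_sample, tolerance, n_draws=10000):
    """Basic ABC rejection sampler: keep parameter draws whose simulated
    summary statistic lies within `tolerance` of the observed statistic."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()        # draw candidate parameters from the prior
        stat = simulate(theta)        # simulate data and compute its summary
        if abs(stat - observed_stat) <= tolerance:
            accepted.append(theta)    # approximate posterior sample
    return accepted

# Toy stand-in: recover the success rate of a 20-trial task whose
# observed accuracy is 0.75 (invented numbers, purely illustrative).
random.seed(1)
posterior = abc_rejection(
    observed_stat=0.75,
    simulate=lambda p: sum(random.random() < p for _ in range(20)) / 20,
    prior_sample=lambda: random.random(),   # uniform prior on [0, 1]
    tolerance=0.05,
)
print(len(posterior), sum(posterior) / len(posterior))
```

The accepted draws form an approximate posterior that concentrates near the observed accuracy; fitting a richer model like the Feature Model follows the same pattern with a multi-dimensional parameter vector and a distance over several summary statistics.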
Presentation modality influences behavioral measures of alerting, orienting, and executive control
The Attention Network Test (ANT) uses visual stimuli to separately assess the attentional skills of alerting (improved performance following a warning cue), spatial orienting (an additional benefit when the warning cue also cues target location), and executive control (impaired performance when a target stimulus contains conflicting information). This study contrasted performance on auditory and visual versions of the ANT to determine whether the measures it obtains are influenced by presentation modality. Forty healthy volunteers completed both auditory and visual tests. Reaction-time measures of executive control were of a similar magnitude and significantly correlated, suggesting that executive control might be a supramodal resource. Measures of alerting were also comparable across tasks. In contrast, spatial-orienting benefits were obtained only in the visual task. Auditory spatial cues did not improve response times to auditory targets presented at the cued location. The different spatial-orienting measures could reflect either separate orienting resources for each perceptual modality, or an interaction between a supramodal orienting resource and modality-specific perceptual processing.
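The three network scores described above are standardly computed as subtractions between mean reaction times in the ANT's cue and flanker conditions. A minimal sketch of that arithmetic follows; the condition names and RT values are invented for illustration and are not data from this study.

```python
def ant_scores(mean_rt):
    """Compute the three ANT network scores from mean reaction times (ms)
    per condition, using the standard subtraction logic:
      alerting  = no-cue RT - double-cue RT      (benefit of a warning cue)
      orienting = center-cue RT - spatial-cue RT (benefit of a location cue)
      executive = incongruent RT - congruent RT  (cost of conflicting flankers)
    """
    return {
        "alerting": mean_rt["no_cue"] - mean_rt["double_cue"],
        "orienting": mean_rt["center_cue"] - mean_rt["spatial_cue"],
        "executive": mean_rt["incongruent"] - mean_rt["congruent"],
    }

# Invented example values in milliseconds:
rts = {"no_cue": 580, "double_cue": 540, "center_cue": 560,
       "spatial_cue": 520, "incongruent": 620, "congruent": 530}
print(ant_scores(rts))  # → {'alerting': 40, 'orienting': 40, 'executive': 90}
```

Under this logic, the study's auditory finding corresponds to an orienting score near zero (no spatial-cue benefit), while alerting and executive scores remain comparable across modalities.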
ARSTREAM: A Neural Network Model of Auditory Scene Analysis and Source Segregation
Multiple sound sources often contain harmonics that overlap and may be degraded by environmental noise. The auditory system is capable of teasing apart these sources into distinct mental objects, or streams. Such an "auditory scene analysis" enables the brain to solve the cocktail party problem. A neural network model of auditory scene analysis, called the ARSTREAM model, is presented to propose how the brain accomplishes this feat. The model clarifies how the frequency components that correspond to a given acoustic source may be coherently grouped together into distinct streams based on pitch and spatial cues. The model also clarifies how multiple streams may be distinguished and separated by the brain. Streams are formed as spectral-pitch resonances that emerge through feedback interactions between the frequency-specific spectral representation of a sound source and its pitch. First, the model transforms a sound into a spatial pattern of frequency-specific activation across a spectral stream layer. The sound has multiple parallel representations at this layer. A sound's spectral representation activates a bottom-up filter that is sensitive to harmonics of the sound's pitch. The filter activates a pitch category which, in turn, activates a top-down expectation that allows one voice or instrument to be tracked through a noisy multiple-source environment. Spectral components are suppressed if they do not match harmonics of the top-down expectation that is read out by the selected pitch, thereby allowing another stream to capture these components, as in the "old-plus-new" heuristic of Bregman. Multiple simultaneously occurring spectral-pitch resonances can hereby emerge. These resonance and matching mechanisms are specialized versions of Adaptive Resonance Theory, or ART, which clarifies how pitch representations can self-organize during learning of harmonic bottom-up filters and top-down expectations.
The model also clarifies how spatial location cues can help to disambiguate two sources with similar spectral cues. Data are simulated from psychophysical grouping experiments, such as how a tone sweeping upwards in frequency creates a bounce percept by grouping with a downward-sweeping tone due to proximity in frequency, even if noise replaces the tones at their intersection point. Illusory auditory percepts are also simulated, such as the auditory continuity illusion of a tone continuing through a noise burst even if the tone is not present during the noise, and the scale illusion of Deutsch, whereby downward and upward scales presented alternately to the two ears are regrouped based on frequency proximity, leading to a bounce percept. Since related sorts of resonances have been used to quantitatively simulate psychophysical data about speech perception, the model strengthens the hypothesis that ART-like mechanisms are used at multiple levels of the auditory system. Proposals for developing the model to explain more complex streaming data are also provided.
Air Force Office of Scientific Research (F49620-01-1-0397, F49620-92-J-0225); Office of Naval Research (N00014-01-1-0624); Advanced Research Projects Agency (N00014-92-J-4015); British Petroleum (89A-1204); National Science Foundation (IRI-90-00530); American Society of Engineering Education
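The suppression step described above, in which spectral components that do not match harmonics of the top-down pitch expectation are released so that another stream can capture them, can be caricatured in a few lines. This is a toy illustration of the matching rule only, not the ARSTREAM network; the relative tolerance and the representation of components as bare frequencies are assumptions made for the sketch.

```python
def split_by_pitch(components, pitch_hz, tol=0.03):
    """Toy version of the top-down matching rule: components whose frequency
    lies within a relative tolerance `tol` of an integer harmonic of `pitch_hz`
    stay in the selected stream; mismatched components are released for capture
    by another stream (the "old-plus-new" heuristic of Bregman)."""
    matched, released = [], []
    for freq in components:
        harmonic = round(freq / pitch_hz)   # nearest harmonic number
        if harmonic >= 1 and abs(freq - harmonic * pitch_hz) <= tol * freq:
            matched.append(freq)
        else:
            released.append(freq)
    return matched, released

# A mixture of a 200 Hz voice (harmonics at 200, 400, 600 Hz)
# and two intruding tones at 330 and 510 Hz (invented values).
matched, released = split_by_pitch([200, 400, 330, 600, 510], pitch_hz=200)
print(matched, released)  # → [200, 400, 600] [330, 510]
```

In the full model this split is dynamic rather than a one-shot filter: the released components can then drive a second pitch category and resonance of their own, which is how multiple simultaneous streams emerge.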