Near-infrared spectroscopy for functional studies of brain activity in human infants: promise, prospects, and challenges
A recent workshop brought together a mix of researchers with expertise in optical physics, cerebral hemodynamics, cognitive neuroscience, and developmental psychology to review the potential utility of near-IR spectroscopy (NIRS) for studies of brain activity underlying cognitive processing in human infants. We summarize the key findings that emerged from this workshop and outline the pros and cons of NIRS for studying the brain correlates of perceptual, cognitive, and language development in human infants
The Goldilocks Effect: Human Infants Allocate Attention to Visual Sequences That Are Neither Too Simple Nor Too Complex
Human infants, like immature members of any species, must be highly selective in sampling information from their environment to learn efficiently. Failure to be selective would waste precious computational resources on material that is already known (too simple) or unknowable (too complex). In two experiments with 7- and 8-month-olds, we measure infants' visual attention to sequences of events varying in complexity, as determined by an ideal learner model. Infants' probability of looking away was greatest on stimulus items whose complexity (negative log probability) according to the model was either very low or very high. These results suggest a principle of infant attention that may have broad applicability: infants implicitly seek to maintain intermediate rates of information absorption and avoid wasting cognitive resources on overly simple or overly complex events
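The complexity measure the abstract refers to, negative log probability under an ideal learner model, can be sketched in a few lines. The simple frequency-tracking learner and its smoothing parameters below are illustrative assumptions for the sketch, not the paper's actual model:

```python
import math
from collections import Counter

def surprisal(sequence, next_item, alpha=1.0, n_items=3):
    """Complexity of the next item = negative log probability (in bits)
    under a toy ideal learner that tracks item frequencies with
    add-alpha (Dirichlet) smoothing over n_items possible items."""
    counts = Counter(sequence)
    p = (counts[next_item] + alpha) / (len(sequence) + alpha * n_items)
    return -math.log2(p)

# After observing A A A, another A has low complexity (predictable),
# while a never-seen C has high complexity (surprising).
low = surprisal("AAA", "A")   # -log2(4/6) ~ 0.58 bits
high = surprisal("AAA", "C")  # -log2(1/6) ~ 2.58 bits
```

On this measure, the "Goldilocks" finding is that look-aways peak at both extremes of the surprisal scale, with sustained attention to intermediate values.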
Statistical Cues in Language Acquisition: Word Segmentation by Infants
A critical component of language acquisition is the ability to learn from the information present in the language input. In particular, young language learners would benefit from learning mechanisms capable of utilizing the myriad statistical cues to linguistic structure available in the input. The present study examines eight-month-old infants' use of statistical cues in discovering word boundaries. Computational models suggest that one of the most useful cues in segmenting words out of continuous speech is distributional information: the detection of consistent orderings of sounds. In this paper, we present results suggesting that eight-month-old infants can in fact make use of the order in which sounds occur to discover word-like sequences. The implications of this early ability to detect statistical information in the language input are discussed with regard to theoretical issues in the field of language acquisition
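The distributional cue described above is commonly quantified as a transitional probability between adjacent syllables, which dips at word boundaries. A minimal sketch follows; the syllable stream and word forms are hypothetical, not the study's stimuli:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """TP(x -> y) = frequency of the pair xy / frequency of x.
    Within words TPs are high; low TPs mark likely word boundaries."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(x, y): c / first_counts[x] for (x, y), c in pair_counts.items()}

# Hypothetical continuous stream of three "words":
# bi-da-ku, pa-do-ti, go-la-bu
stream = ["bi", "da", "ku", "pa", "do", "ti",
          "bi", "da", "ku", "go", "la", "bu"]
tps = transitional_probabilities(stream)
# Within-word transitions (e.g. bi -> da) have TP 1.0, whereas the
# across-word transition ku -> pa has TP 0.5: a candidate boundary.
```

A learner tracking these statistics can posit word boundaries wherever the transitional probability drops, which is the segmentation cue the computational models above point to.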
Infants' goal anticipation during failed and successful reaching actions
The ability to interpret and predict the actions of others is crucial to social interaction and to social, cognitive, and linguistic development. The current study provided a strong test of this predictive ability by assessing (1) whether infants are capable of prospectively processing actions that fail to achieve their intended outcome, and (2) how infants respond to events in which their initial predictions are not confirmed. Using eye tracking, 8-month-olds, 10-month-olds, and adults watched an actor repeatedly reach over a barrier to either successfully or unsuccessfully retrieve a ball. Ten-month-olds and adults produced anticipatory looks to the ball, even when the action was unsuccessful and the actor never achieved his goal. Moreover, they revised their initial predictions in response to accumulating evidence of the actor's failure. Eight-month-olds showed anticipatory looking only after seeing the actor successfully grasp and retrieve the ball. Results support a flexible, prospective social information processing ability that emerges during the first year of life
No Evidence That Abstract Structure Learning Disrupts Novel-Event Learning in 8- to 11-Month-Olds
Although infants acquire specific information (e.g., motion of a specific toy) and abstract information (e.g., likelihood of events repeating), it is unclear whether extraction of abstract information interferes with specific learning. In the present study, 8- to 11-month-old infants were shown four audio-visual movies, either with a mixed or uniform presentation structure. Learning of abstract information was operationally defined as the looking time to changes in presentation structure of the movies (mixed vs. uniform blocks), and learning of specific information was defined as the looking time to changes in content in the four movies (object properties and identities). We found evidence of both specific and abstract learning, but did not find evidence that extraction of the presentation structure (i.e., abstract learning) impacts specific learning of the events. We discuss the implications of the costs and benefits of the interaction between abstract and specific learning for infants
Time-resolved multivariate pattern analysis of infant EEG data: A practical tutorial
Time-resolved multivariate pattern analysis (MVPA), a popular technique for analyzing magneto- and electro-encephalography (M/EEG) neuroimaging data, quantifies the extent and time course by which neural representations support the discrimination of relevant stimulus dimensions. As EEG is widely used for infant neuroimaging, time-resolved MVPA of infant EEG data is a particularly promising tool for infant cognitive neuroscience. MVPA has recently been applied to common infant imaging methods such as EEG and fNIRS. In this tutorial, we provide and describe code to implement time-resolved, within-subject MVPA with infant EEG data. An example implementation of time-resolved MVPA based on linear SVM classification is described, with accompanying code in Matlab and Python. Results from a test dataset indicated that in both infants and adults this method reliably produced above-chance accuracy for classifying stimulus images. Extensions of the classification analysis are presented, including both geometric- and accuracy-based representational similarity analysis, implemented in Python. Common implementation choices are presented and discussed. As the amount of artifact-free EEG data contributed by each participant is lower in studies of infants than in studies of children and adults, we also explore and discuss the impact of varying participant-level inclusion thresholds on the resulting MVPA findings in these datasets
Effects of Probabilistic Contingencies on Word Processing in an Artificial Lexicon
Artificial lexicons have been used with eye-tracking to study the integration of contextual (top-down) and bottom-up information in lexical processing. The present study utilized these techniques to study the role of probabilistic information in lexical processing. Participants were trained to associate novel nouns and modifiers, with certain combinations occurring more frequently than others. Participants heard a modifier-noun phrase and were asked to select the words in a display. We predicted that participants would make anticipatory eye movements to nouns based on the probabilities they previously learned. While no anticipatory effects were found, delayed effects consistent with our predictions were found
The Lateral Occipital Cortex Is Selective for Object Shape, Not Texture/Color, at Six Months
Understanding how the human visual system develops is crucial to understanding the nature and organization of our complex and varied visual representations. However, previous investigations of the development of the visual system using fMRI are primarily confined to a subset of the visual system (high-level vision: faces, scenes) and relatively late in visual development (starting at 4-5 years of age). The current study extends our understanding of human visual development by presenting the first systematic investigation of a mid-level visual region [the lateral occipital cortex (LOC)] in a population much younger than has been investigated in the past: 6-month-olds. We use functional near-infrared spectroscopy (fNIRS), an emerging optical method for recording cortical hemodynamics, to perform neuroimaging with this very young population. Whereas previous fNIRS studies have suffered from imprecise neuroanatomical localization, we rely on the most rigorous MR coregistration of fNIRS data to date to image the infant LOC. We find surprising evidence that at 6 months the LOC has functional specialization that is highly similar to that of adults. Following Cant and Goodale (2007), we investigate whether the LOC tracks shape information and not other cues to object identity (e.g., texture/material). This finding extends evidence of LOC specialization from early childhood into infancy, earlier than the developmental trajectories of high-level visual regions
Cue Integration in Categorical Tasks: Insights from Audio-Visual Speech Perception
Previous cue integration studies have examined continuous perceptual dimensions (e.g., size) and have shown that human cue integration is well described by a normative model in which cues are weighted in proportion to their sensory reliability, as estimated from single-cue performance. However, this normative model may not be applicable to categorical perceptual dimensions (e.g., phonemes). In tasks defined over categorical perceptual dimensions, optimal cue weights should depend not only on the sensory variance affecting the perception of each cue but also on the environmental variance inherent in each task-relevant category. Here, we present a computational and experimental investigation of cue integration in a categorical audio-visual (articulatory) speech perception task. Our results show that human performance during audio-visual phonemic labeling is qualitatively consistent with the behavior of a Bayes-optimal observer. Specifically, we show that the participants in our task are sensitive, on a trial-by-trial basis, to the sensory uncertainty associated with the auditory and visual cues, during phonemic categorization. In addition, we show that while sensory uncertainty is a significant factor in determining cue weights, it is not the only one and participants' performance is consistent with an optimal model in which environmental, within category variability also plays a role in determining cue weights. Furthermore, we show that in our task, the sensory variability affecting the visual modality during cue-combination is not well estimated from single-cue performance, but can be estimated from multi-cue performance. The findings and computational principles described here represent a principled first step towards characterizing the mechanisms underlying human cue integration in categorical tasks
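The normative weighting rule summarized above, with its categorical extension, might be formalized roughly as follows. Treating environmental (within-category) variance as additive with each cue's sensory variance is an illustrative simplification for this sketch, not the paper's exact model:

```python
def cue_weights(var_a, var_v, var_cat=0.0):
    """Normative weights for combining auditory and visual cues.
    Each cue's reliability is the inverse of its total variance; in a
    categorical task, the environmental variance of the task-relevant
    category (var_cat) adds to the sensory variance of each cue."""
    rel_a = 1.0 / (var_a + var_cat)
    rel_v = 1.0 / (var_v + var_cat)
    w_a = rel_a / (rel_a + rel_v)
    return w_a, 1.0 - w_a

def combine(x_a, x_v, var_a, var_v, var_cat=0.0):
    """Reliability-weighted combined estimate of the two cue values."""
    w_a, w_v = cue_weights(var_a, var_v, var_cat)
    return w_a * x_a + w_v * x_v

# The noisier cue gets the smaller weight; adding category variance
# to both cues pulls the weights back toward equality, which is the
# qualitative signature of the categorical-task model described above.
w_a, w_v = cue_weights(1.0, 3.0)        # auditory more reliable
w_a_cat, _ = cue_weights(1.0, 3.0, 10.0)  # large within-category variance
```

In the continuous-dimension case (var_cat = 0) this reduces to the standard inverse-variance weighting estimated from single-cue performance.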