236 research outputs found

    Near-infrared spectroscopy for functional studies of brain activity in human infants: promise, prospects, and challenges

    A recent workshop brought together a mix of researchers with expertise in optical physics, cerebral hemodynamics, cognitive neuroscience, and developmental psychology to review the potential utility of near-infrared spectroscopy (NIRS) for studies of brain activity underlying cognitive processing in human infants. We summarize the key findings that emerged from this workshop and outline the pros and cons of NIRS for studying the brain correlates of perceptual, cognitive, and language development in human infants.

    The Goldilocks Effect: Human Infants Allocate Attention to Visual Sequences That Are Neither Too Simple Nor Too Complex

    Human infants, like immature members of any species, must be highly selective in sampling information from their environment to learn efficiently. Failure to be selective would waste precious computational resources on material that is already known (too simple) or unknowable (too complex). In two experiments with 7- and 8-month-olds, we measured infants’ visual attention to sequences of events varying in complexity, as determined by an ideal learner model. Infants’ probability of looking away was greatest on stimulus items whose complexity (negative log probability) according to the model was either very low or very high. These results suggest a principle of infant attention that may have broad applicability: infants implicitly seek to maintain intermediate rates of information absorption and avoid wasting cognitive resources on overly simple or overly complex events.
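The complexity measure in this abstract, negative log probability under an ideal learner model, can be illustrated with a short sketch. This is not the authors' model; the probabilities and the look-away thresholds below are hypothetical, and the hard-threshold policy is a deliberately simplified stand-in for the U-shaped relationship the study reports.

```python
import math

def surprisal(p):
    """Complexity of an event: negative log probability under the learner model."""
    return -math.log(p)

def p_look_away(s, low=0.5, high=3.0):
    """Toy U-shaped attention policy (thresholds are illustrative):
    disengage when an item is too predictable or too surprising."""
    return 1.0 if (s < low or s > high) else 0.0

# Hypothetical predictive probabilities for four sequence items,
# from highly expected (0.9) to highly unexpected (0.02)
probs = [0.9, 0.5, 0.1, 0.02]
look_away = [p_look_away(surprisal(p)) for p in probs]
# Only the very predictable and very surprising items trigger look-aways
```

Under this toy policy, the intermediate-complexity items (p = 0.5 and p = 0.1) hold attention while the extremes do not, mirroring the Goldilocks pattern described above.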

    Infants' goal anticipation during failed and successful reaching actions

    The ability to interpret and predict the actions of others is crucial to social interaction and to social, cognitive, and linguistic development. The current study provided a strong test of this predictive ability by assessing (1) whether infants are capable of prospectively processing actions that fail to achieve their intended outcome, and (2) how infants respond to events in which their initial predictions are not confirmed. Using eye tracking, 8-month-olds, 10-month-olds, and adults watched an actor repeatedly reach over a barrier to either successfully or unsuccessfully retrieve a ball. Ten-month-olds and adults produced anticipatory looks to the ball, even when the action was unsuccessful and the actor never achieved his goal. Moreover, they revised their initial predictions in response to accumulating evidence of the actor's failure. Eight-month-olds showed anticipatory looking only after seeing the actor successfully grasp and retrieve the ball. Results support a flexible, prospective social information-processing ability that emerges during the first year of life.

    No Evidence That Abstract Structure Learning Disrupts Novel-Event Learning in 8- to 11-Month-Olds

    Although infants acquire specific information (e.g., motion of a specific toy) and abstract information (e.g., likelihood of events repeating), it is unclear whether extraction of abstract information interferes with specific learning. In the present study, 8- to 11-month-old infants were shown four audio-visual movies, with either a mixed or a uniform presentation structure. Learning of abstract information was operationally defined as the looking time to changes in the presentation structure of the movies (mixed vs. uniform blocks), and learning of specific information was defined as the looking time to changes in the content of the four movies (object properties and identities). We found evidence of both specific and abstract learning, but did not find evidence that extraction of the presentation structure (i.e., abstract learning) impacts specific learning of the events. We discuss the implications of the costs and benefits of the interaction between abstract and specific learning for infants.

    Time-resolved multivariate pattern analysis of infant EEG data: A practical tutorial

    Time-resolved multivariate pattern analysis (MVPA), a popular technique for analyzing magneto- and electro-encephalography (M/EEG) neuroimaging data, quantifies the extent to which, and the time course over which, neural representations support the discrimination of relevant stimulus dimensions. As EEG is widely used for infant neuroimaging, time-resolved MVPA of infant EEG data is a particularly promising tool for infant cognitive neuroscience. MVPA has recently been applied to common infant imaging methods such as EEG and fNIRS. In this tutorial, we provide and describe code to implement time-resolved, within-subject MVPA with infant EEG data. An example implementation of time-resolved MVPA based on linear SVM classification is described, with accompanying code in Matlab and Python. Results from a test dataset indicated that in both infants and adults this method reliably produced above-chance accuracy for classifying stimulus images. Extensions of the classification analysis are presented, including both geometric- and accuracy-based representational similarity analysis, implemented in Python. Common implementation choices are presented and discussed. As the amount of artifact-free EEG data contributed by each participant is lower in studies of infants than in studies of children and adults, we also explore and discuss the impact of varying participant-level inclusion thresholds on the resulting MVPA findings in these datasets.
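The core of time-resolved MVPA with a linear SVM can be sketched in a few lines. This is not the tutorial's own code; it uses simulated data (real pipelines would load preprocessed, epoched EEG), and the array sizes and injected effect are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Simulated EEG: trials x channels x timepoints, two stimulus classes
n_trials, n_channels, n_times = 40, 32, 50
X = rng.standard_normal((n_trials, n_channels, n_times))
y = np.repeat([0, 1], n_trials // 2)
# Inject a class difference in a late time window so decoding rises above chance
X[y == 1, :, 30:] += 1.0

# Time-resolved decoding: cross-validate a linear SVM at each timepoint,
# using the channel vector at that timepoint as the feature vector
accuracy = np.empty(n_times)
for t in range(n_times):
    accuracy[t] = cross_val_score(SVC(kernel="linear"), X[:, :, t], y, cv=5).mean()
```

Plotting `accuracy` against time yields the familiar decoding time course: near chance (0.5) early on, rising above chance once the injected class difference appears.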

    Effects of Probabilistic Contingencies on Word Processing in an Artificial Lexicon

    Artificial lexicons have been used with eye-tracking to study the integration of contextual (top-down) and bottom-up information in lexical processing. The present study used these techniques to study the role of probabilistic information in lexical processing. Participants were trained to associate novel nouns and modifiers, with certain combinations occurring more frequently than others. Participants heard a modifier-noun phrase and were asked to select the words in a display. We predicted that participants would make anticipatory eye movements to nouns based on the probabilities they had previously learned. Although no anticipatory effects were found, delayed effects consistent with our predictions emerged.

    Cue Integration in Categorical Tasks: Insights from Audio-Visual Speech Perception

    Previous cue integration studies have examined continuous perceptual dimensions (e.g., size) and have shown that human cue integration is well described by a normative model in which cues are weighted in proportion to their sensory reliability, as estimated from single-cue performance. However, this normative model may not be applicable to categorical perceptual dimensions (e.g., phonemes). In tasks defined over categorical perceptual dimensions, optimal cue weights should depend not only on the sensory variance affecting the perception of each cue but also on the environmental variance inherent in each task-relevant category. Here, we present a computational and experimental investigation of cue integration in a categorical audio-visual (articulatory) speech perception task. Our results show that human performance during audio-visual phonemic labeling is qualitatively consistent with the behavior of a Bayes-optimal observer. Specifically, we show that the participants in our task are sensitive, on a trial-by-trial basis, to the sensory uncertainty associated with the auditory and visual cues during phonemic categorization. In addition, we show that while sensory uncertainty is a significant factor in determining cue weights, it is not the only one: participants' performance is consistent with an optimal model in which environmental, within-category variability also plays a role in determining cue weights. Furthermore, we show that in our task, the sensory variability affecting the visual modality during cue combination is not well estimated from single-cue performance, but can be estimated from multi-cue performance. The findings and computational principles described here represent a principled first step towards characterizing the mechanisms underlying human cue integration in categorical tasks.
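The normative model referenced here weights each cue by its inverse variance; the abstract's key point is that in categorical tasks the within-category (environmental) variance adds to each cue's sensory variance before the weights are computed. The sketch below is a generic illustration of that principle, not the authors' model, and the variance values are hypothetical.

```python
def cue_weights(sensory_vars, category_vars=None):
    """Normative cue weights: inverse of each cue's total variance, normalized.
    In categorical tasks, a cue's effective variance is its sensory variance
    plus the within-category (environmental) variance for that cue."""
    if category_vars is None:
        category_vars = [0.0] * len(sensory_vars)
    reliability = [1.0 / (s + c) for s, c in zip(sensory_vars, category_vars)]
    total = sum(reliability)
    return [r / total for r in reliability]

def fuse(estimates, weights):
    """Reliability-weighted average of single-cue estimates."""
    return sum(w * x for w, x in zip(weights, estimates))

# Continuous case: a cue with variance 1.0 outweighs one with variance 4.0
w_continuous = cue_weights([1.0, 4.0])            # -> [0.8, 0.2]

# Categorical case: adding within-category variance 3.0 to the first cue
# equalizes the effective variances, and hence the weights
w_categorical = cue_weights([1.0, 4.0], [3.0, 0.0])  # -> [0.5, 0.5]
```

The second call shows why single-cue sensory reliability alone can mispredict weights in categorical tasks: environmental variability shifts the optimal weighting even when sensory noise is unchanged.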