23 research outputs found

    Touch-and-feel features in “first words” picture books hinder infants’ word learning

    Get PDF
    Little is known about the role of book features in infant word learning from picture books. We conducted a preregistered study to assess the role of touch-and-feel features in infants’ ability to learn new words from picture books. A total of 48 infants (mean age = 16.75 months, SD = 1.85) were assigned to a touch-and-feel picture-book condition or a standard picture-book condition (no touch-and-feel features) and were taught a novel label for an unfamiliar animal by the researcher during a book-reading session. Infants were then tested on their ability to recognize the label (i.e., choose the target from a choice of two pictures on hearing it named) and to generalize this knowledge to other types of pictures and real-world objects (scale model animals and stuffed animals). Infants in the no touch-and-feel condition performed above chance when choosing the target picture, whereas infants in the touch-and-feel condition did not. Infants in both conditions failed to generalize this knowledge to other pictures and objects. This study extends our knowledge about the role of tactile features in infant word learning from picture books. Although manipulative features like touch-and-feel patches might be engaging for infants, they may detract from learning. Depending on the purpose of the activity, parents and practitioners might find it useful to consider such book features when selecting books to read with their infants.

    A computational account of threat-related attentional bias

    Get PDF
    Visual selective attention acts as a filter on perceptual information, facilitating learning and inference about important events in an agent’s environment. A role for visual attention in reward-based decisions has previously been demonstrated, but it remains unclear how visual attention is recruited during aversive learning, particularly when learning about multiple stimuli concurrently. This question is of particular importance in psychopathology, where enhanced attention to threat is a putative feature of pathological anxiety. Using an aversive reversal learning task that required subjects to learn, and exploit, predictions about multiple stimuli, we show that the allocation of visual attention is influenced significantly by aversive value but not by uncertainty. Moreover, this relationship is bidirectional, in that attention biases value updates for attended stimuli, resulting in heightened value estimates. Our findings have implications for understanding biased attention in psychopathology and support a role for learning in the expression of threat-related attentional biases in anxiety.
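The bidirectional attention-value loop the abstract describes can be made concrete with a minimal sketch: value-proportional attention gates the learning rate of a Rescorla-Wagner-style update, so attended (more threatening) stimuli receive larger value updates. This is an illustrative toy, not the authors' fitted model; the function names, the softmax attention rule, and all parameter values are assumptions.

```python
import numpy as np

def attention_from_value(values, beta=2.0):
    """Softmax over aversive value: more threatening stimuli draw more attention.

    beta is a hypothetical inverse-temperature parameter.
    """
    v = np.asarray(values, dtype=float)
    e = np.exp(beta * (v - v.max()))
    return e / e.sum()

def update_values(values, attention, outcomes, base_lr=0.3):
    """One trial of attention-weighted Rescorla-Wagner learning.

    Attention scales the effective learning rate, so attended stimuli
    get larger updates -- the bidirectional loop in the abstract.
    """
    values = np.asarray(values, dtype=float)
    pe = outcomes - values          # prediction errors, one per stimulus
    lr = base_lr * attention        # attention gates the learning rate
    return values + lr * pe

# Two concurrently learned stimuli; stimulus 0 is followed by shock (1.0).
values = np.array([0.5, 0.1])
att = attention_from_value(values)
values = update_values(values, att, outcomes=np.array([1.0, 0.0]))
```

Because attention here depends on value, the higher-value stimulus both draws more attention and, in turn, has its value updated more strongly, which is the feedback loop the abstract argues underlies threat-related attentional bias.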

    Response selection difficulty modulates the behavioral impact of rapidly learnt action effects

    Get PDF
    It is well established that we can pick up action effect associations when acting in a free-choice intentional mode. However, it is less clear whether and when action effect associations are learnt and actually affect behavior if we are acting in a forced-choice mode, applying a specific stimulus-response (S-R) rule. In the present study, we investigated whether response selection difficulty imposed by S-R rules influences the initial rapid learning and the behavioral expression of previously learnt but weakly practiced action effect associations when those are re-activated by effect exposure. Experiment 1 showed that the rapid acquisition of action effect associations is not directly influenced by response selection difficulty. By contrast, the behavioral expression of re-activated action effect associations is prevented when actions are directly activated by highly over-learnt response cues and thus response selection difficulty is low. However, all three experiments showed that if response selection difficulty is sufficiently high during re-activation, the same action effect associations do influence behavior. Experiments 2 and 3 revealed that the effect of response selection difficulty cannot be fully reduced to giving action effects more time to prime an action, but seems to reflect competition during response selection. Finally, the present data suggest that when multiple novel rules are rapidly learnt in succession, which requires considerable flexibility, action effect associations continue to influence behavior only if response selection difficulty is sufficiently high. Thus, response selection difficulty might modulate the impact of experiencing multiple learning episodes on action effect expression and learning, possibly by inducing different strategies.

    Plasticity of muscle synergies through fractionation and merging during development and training of human runners

    Get PDF
    Complex motor commands for human locomotion are generated through the combination of motor modules representable as muscle synergies. Recent data have argued that muscle synergies are inborn or determined early in life, but development of the neuromusculoskeletal system and acquisition of new skills may demand fine-tuning or reshaping of the early synergies. We seek to understand how locomotor synergies change during development and training by studying the synergies for running in preschoolers and diverse adults from sedentary subjects to elite marathoners, totaling 63 subjects assessed over 100 sessions. During development, synergies are fractionated into units with fewer muscles. As adults train to run, specific synergies coalesce to become merged synergies. The presence of specific synergy-merging patterns correlates with enhanced or reduced running efficiency. Fractionation and merging of muscle synergies may be a mechanism for modifying early motor modules (Nature) to accommodate the changing limb biomechanics and influences from sensorimotor training (Nurture).
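Muscle synergies of the kind described here are typically identified by non-negative matrix factorization (NMF) of multi-muscle EMG recordings: the EMG matrix is decomposed into a set of muscle weightings (synergies) and their time-varying activations. The sketch below uses standard multiplicative-update NMF on toy data; it is a hedged illustration of the general technique, not the authors' exact extraction pipeline, and all variable names and sizes are assumptions.

```python
import numpy as np

def extract_synergies(emg, n_synergies, n_iter=500, seed=0):
    """Factorize an EMG matrix (muscles x time samples) as emg ~ W @ H.

    W (muscles x n_synergies) holds the muscle weightings of each synergy;
    H (n_synergies x time) holds their activation profiles. Uses classic
    multiplicative updates for non-negative matrix factorization.
    """
    rng = np.random.default_rng(seed)
    m, t = emg.shape
    W = rng.random((m, n_synergies)) + 1e-6
    H = rng.random((n_synergies, t)) + 1e-6
    for _ in range(n_iter):
        # Multiplicative updates keep W and H non-negative throughout.
        H *= (W.T @ emg) / (W.T @ W @ H + 1e-12)
        W *= (emg @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Toy "EMG": four muscles driven by two ground-truth synergies.
rng = np.random.default_rng(1)
true_W = np.array([[1.0, 0.0], [0.8, 0.1], [0.0, 1.0], [0.1, 0.9]])
true_H = rng.random((2, 200))
emg = true_W @ true_H
W, H = extract_synergies(emg, n_synergies=2)
rel_err = np.linalg.norm(emg - W @ H) / np.linalg.norm(emg)
```

Fractionation and merging across development or training could then, in principle, be assessed by comparing the recovered W matrices between sessions, e.g., checking whether one earlier synergy splits into two later ones.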

    Robot Navigation

    Get PDF

    Ecological active vision: four bio-inspired principles to integrate bottom-up and adaptive top-down attention tested with a simple camera-arm robot

    Get PDF
    Vision gives primates a wealth of information useful to manipulate the environment, but at the same time it can easily overwhelm their computational resources. Active vision is a key solution found by nature to solve this problem: a limited fovea actively displaced in space to collect only relevant information. Here we highlight that in ecological conditions this solution encounters four problems: 1) the agent needs to learn where to look based on its goals; 2) manipulation causes learning feedback in areas of space possibly outside the attention focus; 3) good visual actions are needed to guide manipulation actions, but only these can generate learning feedback; and 4) a limited fovea causes aliasing problems. We then propose a computational architecture ("BITPIC") to overcome the four problems, integrating four bioinspired key ingredients: 1) reinforcement-learning fovea-based top-down attention; 2) a strong vision-manipulation coupling; 3) bottom-up periphery-based attention; and 4) a novel action-oriented memory. The system is tested with a simple simulated camera-arm robot solving a class of search-and-reach tasks involving color-blob "objects." The results show that the architecture solves the problems, and hence the tasks, very efficiently, and highlight how the architecture principles can contribute to a full exploitation of the advantages of active vision in ecological conditions.
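Ingredient 1, reinforcement-learning top-down attention, can be illustrated in miniature: treat candidate fovea positions as actions and reward the agent when the fovea lands on the target. The bandit-style loop below is a hypothetical toy showing that principle only; it is not the BITPIC architecture, and the grid size, parameters, and function names are all assumptions.

```python
import random

def train_gaze_policy(target, n_cells=9, episodes=2000,
                      alpha=0.2, eps=0.1, seed=0):
    """Learn where to look with epsilon-greedy value learning.

    Each episode, the agent foveates one of n_cells candidate positions
    and receives reward 1.0 only if that position contains the target.
    q[i] estimates the value of foveating cell i.
    """
    rng = random.Random(seed)
    q = [0.0] * n_cells
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.randrange(n_cells)                   # explore a random fixation
        else:
            a = max(range(n_cells), key=q.__getitem__)   # exploit the best-known one
        r = 1.0 if a == target else 0.0
        q[a] += alpha * (r - q[a])                       # incremental value update
    return q

q = train_gaze_policy(target=4)
best = max(range(9), key=q.__getitem__)   # learned preferred fixation
```

After training, the highest-valued cell is the target location, i.e., the agent has learned "where to look" for this goal; in the full ecological setting the same idea is coupled with peripheral bottom-up attention and manipulation feedback.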

    Building concepts one episode at a time: The hippocampus and concept formation

    Get PDF
    Concepts organize our experiences and allow for meaningful inferences in novel situations. Acquiring new concepts requires extracting regularities across multiple learning experiences, a process formalized in mathematical models of learning. These models posit a computational framework that has increasingly aligned with the expanding repertoire of functions associated with the hippocampus. Here, we propose the Episodes-to-Concepts (EpCon) theoretical model of hippocampal function in concept learning and review evidence for the hippocampal computations that support concept formation, including memory integration, attentional biasing, and memory-based prediction error. We focus on recent studies that have directly assessed the hippocampal role in concept learning with an innovative approach that combines computational modeling and sophisticated neuroimaging measures. Collectively, this work suggests that the hippocampus does much more than encode individual episodes; rather, it adaptively transforms initially encoded episodic memories into organized conceptual knowledge that drives novel behavior.
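Two of the computations named here, memory integration and prediction error, can be sketched together in a minimal clustering-style learner: each new episode is integrated into the nearest existing concept prototype unless its prediction error (distance) is too large, in which case a new concept is recruited. This is an illustrative toy under assumed parameters, not the EpCon model itself, which is specified in the paper.

```python
import numpy as np

def build_concepts(episodes, threshold=1.0, lr=0.3):
    """Form concept prototypes from a stream of episode feature vectors.

    A surprising episode (prediction error above threshold) recruits a
    new concept; a familiar one is integrated into its nearest prototype.
    threshold and lr are hypothetical parameters.
    """
    prototypes = []
    for x in episodes:
        x = np.asarray(x, dtype=float)
        if not prototypes:
            prototypes.append(x.copy())
            continue
        dists = [np.linalg.norm(x - p) for p in prototypes]
        i = int(np.argmin(dists))
        if dists[i] > threshold:
            prototypes.append(x.copy())            # high prediction error: new concept
        else:
            prototypes[i] += lr * (x - prototypes[i])  # integrate episode into concept
    return prototypes

# Episodes drawn from two well-separated situations yield two concepts.
episodes = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 4.9], [0.05, 0.1]]
concepts = build_concepts(episodes)
```

The point of the sketch is the transformation the abstract emphasizes: individual episodes are not stored verbatim but are folded into evolving prototypes that can generalize to new instances.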

    Learning to Select State Machines using Expert Advice on an Autonomous Robot

    Full text link