
    Body perception in newborns

    Body ownership and awareness have recently become an active topic of research in adults, using paradigms such as the “rubber hand illusion” and “enfacement” [1, 2, 3, 4, 5, 6, 7, 8, 9, 10 and 11]. These studies show that visual, tactile, postural, and anatomical information all contribute to the sense of body ownership in adults [12]. While some hypothesize that body perception is present from birth [13], others have speculated on the importance of postnatal experience [14 and 15]. By studying body perception in newborns, we can directly investigate the factors involved before significant postnatal experience has accrued. To address this issue, we measured the looking behavior of newborns presented with visual-tactile synchronous and asynchronous cues, under conditions in which the visual information was either an upright (body-related stimulus; experiment 1) or inverted (non-body-related stimulus; experiment 2) infant face. We found that newborns preferred to look at the synchronous condition over the asynchronous condition, but only when the visual stimulus was body related. These results are in line with findings from adults and demonstrate that human newborns detect intersensory synchrony when it relates to their own bodies, consistent with the basic processes underlying body perception being present at birth.

    Designing Engaging Learning Experiences in Programming

    In this paper we describe work investigating the creation of engaging programming learning experiences. Background research informed the design of four fieldwork studies that explored how programming tasks could be framed to motivate learners. Our empirical findings from these four field studies are summarized here, with a particular focus upon one – Whack a Mole – which compared the use of a physical interface with a screen-based equivalent to gain insight into what makes a learning experience engaging. Emotions reported by two sets of participating undergraduate students were analyzed, identifying links between the emotions experienced during programming and their origins. Evidence was collected of the very positive emotions experienced by learners programming with a physical interface (Arduino) in comparison with a similar program developed using a screen-based equivalent interface. A follow-up study provided further evidence of the motivational value of personalized design when programming tangible physical artefacts. Collating all the evidence led to the design of a set of ‘Learning Dimensions’ which may provide educators with insights to support key design decisions in the creation of engaging programming learning experiences.

    Audio-visual speech perception in infants and toddlers with Down syndrome, fragile X syndrome, and Williams syndrome

    Typically developing (TD) infants can construct unified cross-modal percepts, such as a speaking face, by integrating auditory-visual (AV) information. This skill is a key building block upon which higher-level skills, such as word learning, are built. Because word learning is seriously delayed in most children with neurodevelopmental disorders, we assessed the hypothesis that this delay partly results from a deficit in integrating AV speech cues. AV speech integration has rarely been investigated in neurodevelopmental disorders, and never previously in infants. We probed for the McGurk effect, which occurs when the auditory component of one sound (/ba/) is paired with the visual component of another sound (/ga/), leading to the perception of an illusory third sound (/da/ or /tha/). We measured AV integration in 95 infants/toddlers with Down, fragile X, or Williams syndrome, whom we matched on chronological and mental age to 25 TD infants. We also assessed a more basic AV perceptual ability: sensitivity to matching vs. mismatching AV speech stimuli. Infants with Williams syndrome failed to demonstrate a McGurk effect, indicating poor AV speech integration. Moreover, while the TD children discriminated between matching and mismatching AV stimuli, none of the other groups did, hinting at a basic deficit or delay in AV speech processing, which is likely to constrain subsequent language development.

    Motor skill learning in the middle-aged: limited development of motor chunks and explicit sequence knowledge

    The present study examined whether middle-aged participants, like young adults, learn movement patterns by preparing and executing integrated sequence representations (i.e., motor chunks) that eliminate the need for external guidance of individual movements. Twenty-four middle-aged participants (aged 55–62) practiced two fixed key-press sequences, one comprising three and one comprising six key presses, in the discrete sequence production task. Their performance was compared with that of 24 young adults (aged 18–28). In the middle-aged participants, motor chunks as well as explicit sequence knowledge appeared to be less well developed than in the young adults. This held especially for the unstructured 6-key sequences, in which most middle-aged participants did not become independent of the key-specific stimuli and learning appears to have been based on associative learning. These results are in line with the notion that sequence learning involves several mechanisms and that aging affects the relative contribution of these mechanisms.

    Second Language Processing Shows Increased Native-Like Neural Responses after Months of No Exposure

    Although learning a second language (L2) as an adult is notoriously difficult, research has shown that adults can indeed attain native-like brain processing and high proficiency levels. However, it is important to then retain what has been attained, even in the absence of continued exposure to the L2, particularly since periods of minimal or no L2 exposure are common. This event-related potential (ERP) study of an artificial language tested performance and neural processing following a substantial period of no exposure. Adults learned to speak and comprehend the artificial language to high proficiency with either explicit, classroom-like, or implicit, immersion-like training, and then underwent several months of no exposure to the language. Surprisingly, proficiency did not decrease during this delay. Instead, it remained unchanged, and there was an increase in native-like neural processing of syntax, as evidenced by several ERP changes, including earlier, more reliable, and more left-lateralized anterior negativities, and more robust P600s, in response to word-order violations. Moreover, both the explicitly and implicitly trained groups showed increasingly native-like ERP patterns over the delay, indicating that such changes can occur independently of L2 training type. The results demonstrate that substantial periods with no L2 exposure are not necessarily detrimental; rather, benefits may ensue even in the absence of further exposure. Interestingly, both before and after the delay, the implicitly trained group showed more native-like processing than the explicitly trained group, indicating that the type of training also affects the attainment of native-like processing in the brain. Overall, the findings may be largely explained by a combination of forgetting and consolidation in declarative and procedural memory, on which L2 grammar learning appears to depend. The study has a range of implications and suggests a research program with potentially important consequences for second language acquisition and related fields.

    The Neural Basis of Cognitive Efficiency in Motor Skill Performance from Early Learning to Automatic Stages


    Predicting episodic memory formation for movie events

    Episodic memories are long lasting and rich in detail, yet imperfect and malleable. We quantitatively evaluated recollection of short audiovisual segments from movies, as a proxy for real-life memory formation, in 161 subjects from 15 minutes up to a year after encoding. Memories were reproducible within and across individuals, showed the typical decay with time elapsed between encoding and testing, were fallible yet accurate, and were insensitive to low-level stimulus manipulations but sensitive to high-level stimulus properties. Remarkably, memorability was also high for single movie frames, even one year post-encoding. To evaluate what determines the efficacy of long-term memory formation, we developed an extensive set of content annotations that included actions, emotional valence, visual cues, and auditory cues. These annotations enabled us to document the content properties that correlated most strongly with recognition memory and to build a machine-learning computational model that accounted for episodic memory formation in single events, for group averages and individual subjects, with an accuracy of up to 80%. These results provide initial steps toward a quantitative computational theory capable of explaining the subjective filtering steps that determine how humans learn and consolidate memories.
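    The abstract above does not specify the features or the model used; the following is a minimal sketch of the general approach it describes, assuming binary content annotations as features and a simple logistic-regression classifier predicting whether a movie event is later recognized. The data, feature count, and classifier choice are illustrative assumptions, not the study's actual method.

    # Minimal sketch (assumption): predict recognition of movie events from
    # binary content annotations (e.g., action present, emotional valence,
    # visual/auditory cues). All data below are synthetic placeholders.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Hypothetical annotation matrix: one row per movie event,
    # one column per binary content annotation.
    n_events, n_annotations = 200, 12
    X = rng.integers(0, 2, size=(n_events, n_annotations))

    # Hypothetical labels: 1 if the event was later recognized, 0 otherwise.
    y = rng.integers(0, 2, size=n_events)

    # Fit a simple linear classifier and estimate prediction accuracy
    # with 5-fold cross-validation.
    model = LogisticRegression(max_iter=1000)
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"Mean cross-validated accuracy: {scores.mean():.2f}")

    With real annotations and memory-test outcomes in place of the synthetic placeholders, the cross-validated accuracy would be the analogue of the up-to-80% figure reported above.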