141 research outputs found

    On the use of visual information in infant reaching actions

    Savelsbergh, G.J.P. [Promotor]; Kamp, G.J. van der [Copromotor]

    Attention to the model's face when learning from video modeling examples in adolescents with and without autism spectrum disorder

    We investigated the effects of seeing the instructor's (i.e., the model's) face in video modeling examples on students' attention and their learning outcomes. Research with university students suggested that the model's face attracts students' attention away from what the model is doing, but this did not hamper learning. We aimed to investigate whether we would replicate this finding in adolescents (prevocational education) and to establish how adolescents with autism spectrum disorder, who have been found to look less at faces generally, would process video examples in which the model's face is visible. Results showed that typically developing adolescents who did see the model's face paid significantly less attention to the task area than typically developing adolescents who did not see the model's face. Adolescents with autism spectrum disorder paid less attention to the model's face and more to the task demonstration area than typically developing adolescents who saw the model's face. These differences in viewing behavior, however, did not affect learning outcomes. This study provides further evidence that seeing the model's face in video examples affects students' attention but not their learning outcomes.

    Do social cues in instructional videos affect attention allocation, perceived cognitive load, and learning outcomes under different visual complexity conditions?

    Background: There are only a few guidelines on how instructional videos should be designed to optimize learning. Recently, the effects of social cues on attention allocation and learning in instructional videos have been investigated. Due to inconsistent results, it has been suggested that the visual complexity of a video influences the effect of social cues on learning. Objectives: Therefore, this study compared the effects of social cues (i.e., gaze & gesture) in low and high visual complexity videos on attention, perceived cognitive load, and learning outcomes. Methods: Participants (N = 71) were allocated to a social cue or no social cue condition and watched both a low and a high visual complexity video. After each video, participants completed a knowledge test. Results and Conclusions: Results showed that participants looked faster at referenced information and had higher learning outcomes in the low visual complexity condition. Social cues did not affect any of the dependent variables, except when prior knowledge was included in the analysis: in this exploratory analysis, the inclusion of gaze and gesture cues in the videos did lead to better learning outcomes. Takeaways: Our results show that the visual complexity of instructional videos and prior knowledge are important to take into account in future research on attention and learning from instructional videos.

    Anticipatory reaching of seven- to eleven-month-old infants in occlusion situations

    The present study examined 7- to 11-month-old infants' anticipatory and reactive reaching for temporarily occluded objects. Infants were presented with laterally approaching objects that moved at different velocities (10, 20, and 40 cm/s) in different occlusion situations (no-, 20-cm, and 40-cm occlusion), resulting in occlusion durations ranging between 0 and 4 s. Results show that, in addition to object velocity and occlusion distance, occlusion duration was a critical constraint on infants' reaching behaviors. We found that the older infants reached more often, but that an increase in occlusion duration resulted in a decline in reaching frequency that was similar across age groups. Anticipatory reaching declined with increasing occlusion duration, but the adverse effects of longer occlusion durations diminished with age. It is concluded that with increasing age infants are able to retain and use information to guide reaching movements over longer periods of non-visibility, providing support for the graded representation hypothesis (Jonsson & von Hofsten, 2003) and the two-visual-systems model (Milner & Goodale, 1995).

    Task Experience as a Boundary Condition for the Negative Effects of Irrelevant Information on Learning

    Research on multimedia learning has shown that learning is hampered when a multimedia message includes extraneous information.

    On the relation between action selection and movement control in 5- to 9-month-old infants

    Although 5-month-old infants select action modes that are adaptive to the size of the object (i.e., one- or two-handed reaching), it has largely remained unclear whether infants of this age control the ensuing movement to the size of the object (i.e., scaling of the aperture between hands). We examined 5-, 7-, and 9-month-olds' reaching behaviors to gain more insight into the developmental changes occurring in the visual guidance of action mode selection and movement control, and the relationship between these processes. Infants were presented with a small set of objects (i.e., 2, 3, 7, and 8 cm) and a large set of objects (i.e., 6, 9, 12, and 15 cm). For the first set of objects, it was found that the infants more often performed two-handed reaches for the larger objects based on visual information alone (i.e., before making contact with the object), thus showing adaptive action mode selection relative to object size. Kinematic analyses of the two-handed reaches for the second set of objects revealed that inter-trial variance in the aperture between the hands decreased as the hands approached the object, indicating that infants' reaching is constrained by the object. Subsequent analysis showed that between-hands aperture scaled to object size, indicating that visual control of the movement is adjusted to object size in infants as young as 5 months. Individual analyses indicated that the two processes were independent and followed distinct developmental trajectories. That is, adaptive selection of an action mode was not a prerequisite for appropriate aperture scaling, and vice versa. These findings are consistent with the idea of two separate and independent visual systems (Milner and Goodale in Neuropsychologia 46:774–785, 2008) during early infancy.