
    Gaze tracking variables as indicators of learning

    The process of learning contains multiple aspects whose intricacies have yet to be fully understood. The current experiment used gaze tracking technology to observe whether variables in subjects' gaze (e.g., total time spent on a given image, percentage of time spent on cognitively salient features of the image) were predictive of the subject learning the material. Subjects were students in a Medical Gross Anatomy course who were tested twice: once at baseline, before instruction, and once after they had learned a certain amount of the material. Following the baseline testing, subjects were divided into three groups, A, B, and C, each representing a later visit in the course. Results indicated that groups B and C tended to spend more time on cognitively salient areas of interest, but this was moderated by familiarity. Moreover, groups B and C tended to spend more time on images overall (end time) than group A or the baseline group. Overall, the results obtained were ambiguous and warrant further study to arrive at a clear conclusion. Future studies may want to consider other gaze tracking variables, such as the time at which a subject first looks at a cognitively salient area of interest.
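
    The gaze variables this abstract names are straightforward to derive from a fixation log. Below is a minimal sketch, assuming a simple log schema (fixation onset, duration, and position, with salient features given as rectangular areas of interest); the data structures and field names are illustrative, not the study's actual pipeline.

    from dataclasses import dataclass

    @dataclass
    class Fixation:
        onset_ms: float      # fixation start, relative to image onset
        duration_ms: float   # fixation duration
        x: float             # gaze position in image coordinates
        y: float

    def in_aoi(fix, aoi):
        """aoi is an (x_min, y_min, x_max, y_max) rectangle around a salient feature."""
        x0, y0, x1, y1 = aoi
        return x0 <= fix.x <= x1 and y0 <= fix.y <= y1

    def gaze_variables(fixations, aois):
        """Compute the three variables named in the abstract: total viewing
        time, percentage of time on salient AOIs, and time of first fixation
        on any salient AOI (None if no AOI was ever fixated)."""
        total = sum(f.duration_ms for f in fixations)
        on_aoi = [f for f in fixations if any(in_aoi(f, a) for a in aois)]
        return {
            "total_time_ms": total,
            "pct_time_on_aoi": 100.0 * sum(f.duration_ms for f in on_aoi) / total if total else 0.0,
            "time_to_first_aoi_ms": min((f.onset_ms for f in on_aoi), default=None),
        }

    # Example: two fixations, the second landing inside the single salient AOI.
    fixes = [Fixation(0, 300, 50, 50), Fixation(350, 500, 210, 120)]
    print(gaze_variables(fixes, aois=[(200, 100, 260, 160)]))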

    Jointly structuring triadic spaces of meaning and action: book sharing from 3 months on

    This study explores the emergence of triadic interactions through the example of book sharing. As part of a naturalistic study, 10 infants were visited in their homes from 3 to 12 months. We report that (1) book sharing, as a form of infant-caregiver-object interaction, occurred from as early as 3 months. Using qualitative video analysis at a micro-level, adapting methodologies from conversation and interaction analysis, we demonstrate that caregivers and infants practiced book sharing in a highly coordinated way, with caregivers carving out interaction units and shaping actions into action arcs, and infants actively participating and coordinating their attention between mother and object from the beginning. We also (2) sketch a developmental trajectory of book sharing over the first year and show that the quality and dynamics of book sharing interactions underwent considerable change as the ecological situation was transformed in parallel with the infants' development of attention and motor skills. Social book sharing interactions reached an early peak at 6 months, with the infants becoming more active in the coordination of attention between caregiver and book. From 7 to 9 months, the infants shifted their interest largely to solitary object exploration, in parallel with newly emerging postural and object manipulation skills, disrupting the social coordination and the cultural frame of book sharing. In the period from 9 to 12 months, social book interactions resurfaced as infants began to effectively integrate object actions within the socially shared activity. In conclusion, to fully understand the development and qualities of triadic cultural activities such as book sharing, we need to look especially at the hitherto overlooked early period from 4 to 6 months and investigate how shared spaces of meaning and action are structured together in and through interaction, creating the substrate for continuing cooperation and cultural learning.

    Ostensive signals support learning from novel attention cues during infancy

    Social attention cues (e.g., head turning, gaze direction) highlight which events young infants should attend to in a busy environment and have recently been shown to shape infants' likelihood of learning about objects and events. Although studies have documented which social cues guide attention and learning during early infancy, few have investigated how infants learn to learn from attention cues. Ostensive signals, such as a face addressing the infant, often precede social attention cues. Therefore, it is possible that infants can use ostensive signals to learn from other, novel attention cues. In this training study, 8-month-olds were cued to the location of an event by a novel non-social attention cue (i.e., a flashing square) that was preceded by an ostensive signal (i.e., a face addressing the infant). At test, infants predicted the appearance of specific multimodal events cued by the flashing squares, cues previously shown to guide attention toward, but not to inform specific predictions about, such events (Wu and Kirkham, 2010). Importantly, during the generalization phase, the attention cue continued to guide learning of these events in the absence of the ostensive signal. Subsequent experiments showed that learning was less successful when the ostensive signal was absent, even if an interesting but non-ostensive social stimulus preceded the same cued events.

    AVEID: Automatic Video System for Measuring Engagement In Dementia

    Engagement in dementia is typically measured using behavior observational scales (BOS) that are tedious and labor-intensive to annotate and are therefore not easily scalable. We propose AVEID, a low-cost, easy-to-use video-based engagement measurement tool that determines the engagement level of a person with dementia (PwD) during digital interaction. We show that the objective behavioral measures computed via AVEID correlate well with subjective expert impressions for the popular MPES and OME BOS, confirming its viability and effectiveness. Moreover, AVEID measures can be obtained for a variety of engagement designs, thereby facilitating large-scale studies with PwD populations.
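
    As an aside on the validation step described above: agreement between an automated score and an ordinal observer rating is typically summarized with a rank correlation. The sketch below illustrates that kind of check with SciPy; the arrays, per-segment scoring, and choice of Spearman correlation are assumptions for illustration, not AVEID's published analysis.

    from scipy.stats import spearmanr

    # Hypothetical per-segment scores: one automated measure from the video
    # tool and the corresponding expert rating on an ordinal BOS scale.
    automated_scores = [0.12, 0.35, 0.41, 0.58, 0.66, 0.72, 0.80]
    expert_ratings = [1, 1, 2, 2, 3, 3, 4]

    rho, p = spearmanr(automated_scores, expert_ratings)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")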

    Learning Generative Models with Visual Attention

    Attention has long been proposed by psychologists as important for effectively dealing with the enormous sensory stimulus available in the neocortex. Inspired by visual attention models in computational neuroscience and by the need for object-centric data in generative models, we describe a generative learning framework that uses attentional mechanisms. Attentional mechanisms can propagate signals from a region of interest in a scene to an aligned canonical representation, where generative modeling takes place. By ignoring background clutter, generative models can concentrate their resources on the object of interest. Our model is a proper graphical model in which the 2D similarity transformation is part of the top-down process. A ConvNet is employed to provide good initializations during posterior inference, which is based on Hamiltonian Monte Carlo. Upon learning images of faces, our model can robustly attend to the face region of novel test subjects. More importantly, our model can learn generative models of new faces from a novel dataset of large images in which the face locations are not known. Comment: In the proceedings of Neural Information Processing Systems, 2014.
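
    To make the attention step above concrete, the sketch below applies a 2D similarity transform (scale, rotation, translation) to sample an aligned canonical window from an image with bilinear interpolation, the kind of warp a top-down attention module could use to hand a clean, object-centered patch to the generative model. Parameter names and the plain-NumPy implementation are assumptions for illustration, not the paper's code.

    import numpy as np

    def attend(image, s, theta, tx, ty, out_hw=(24, 24)):
        """Sample an out_hw canonical window from `image` under a
        similarity transform: scale s, rotation theta, center (tx, ty)."""
        H, W = out_hw
        c, r = np.cos(theta), np.sin(theta)
        ys, xs = np.mgrid[0:H, 0:W]
        # Center the canonical grid, then scale-rotate-translate into image coords.
        xc = xs - (W - 1) / 2.0
        yc = ys - (H - 1) / 2.0
        u = s * (c * xc - r * yc) + tx   # image x-coordinates
        v = s * (r * xc + c * yc) + ty   # image y-coordinates
        # Clamp to the image and bilinearly interpolate at (v, u).
        u = np.clip(u, 0, image.shape[1] - 1.001)
        v = np.clip(v, 0, image.shape[0] - 1.001)
        u0 = np.floor(u).astype(int)
        v0 = np.floor(v).astype(int)
        du, dv = u - u0, v - v0
        return ((1 - dv) * (1 - du) * image[v0, u0]
                + (1 - dv) * du * image[v0, u0 + 1]
                + dv * (1 - du) * image[v0 + 1, u0]
                + dv * du * image[v0 + 1, u0 + 1])

    # Example: a 24x24 window centered at (60, 40), slightly rotated and zoomed out.
    img = np.random.rand(100, 100)
    patch = attend(img, s=1.5, theta=0.1, tx=60.0, ty=40.0)
    print(patch.shape)  # (24, 24)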

    The eye gaze direction of an observed person can bias perception, memory, and attention in adolescents with and without autism spectrum disorder

    The reported experiments aimed to investigate whether a person, and his or her gaze direction, presented in the context of a naturalistic scene causes perception, memory, and attention to be biased in typically developing adolescents and in high-functioning adolescents with autism spectrum disorder (ASD). A novel computerized image manipulation program presented a series of photographic scenes, each containing a person. The program enabled participants to laterally maneuver the scenes behind a static window, the borders of which partially occluded the scenes. The gaze direction of the person in the scenes spontaneously cued the attention of both groups in the direction of gaze, affecting judgments of preference (Experiment 1a) and causing memory biases (Experiment 1b). Experiment 2 showed that the gaze direction of a person accurately cues visual search to the exact location of gaze in both groups. These findings suggest that biases in preference, memory, and attention are caused by another person's gaze direction when viewed in a complex scene in adolescents with and without ASD.