    Attention to the model's face when learning from video modeling examples in adolescents with and without autism spectrum disorder

    We investigated the effects of seeing the instructor's (i.e., the model's) face in video modeling examples on students' attention and their learning outcomes. Research with university students suggested that the model's face attracts students' attention away from what the model is doing, but this did not hamper learning. We aimed to investigate whether we would replicate this finding in adolescents (prevocational education) and to establish how adolescents with autism spectrum disorder, who have been found to look less at faces generally, would process video examples in which the model's face is visible. Results showed that typically developing adolescents who did see the model's face paid significantly less attention to the task area than typically developing adolescents who did not see the model's face. Adolescents with autism spectrum disorder paid less attention to the model's face and more to the task demonstration area than typically developing adolescents who saw the model's face. These differences in viewing behavior, however, did not affect learning outcomes. This study provides further evidence that seeing the model's face in video examples affects students' attention but not their learning outcomes

    Rethinking Pedagogical Use of Eye Trackers for Visual Problems with Eye Gaze Interpretation Tasks

    Eye tracking technology enables the visualisation of a problem solver's eye movement while working on a problem. The eye movement of experts has been used to draw attention to expert problem solving processes in a bid to teach procedural skills to learners. Such affordances appear as eye movement modelling examples (EMME) in the literature. This work intends to further this line of work by suggesting how eye gaze data can not only guide attention but also scaffold learning through constructive engagement with the problem solving process of another human. Inferring the model's problem solving process, be it that of an expert or a novice, from their eye gaze display would require a learner to make interpretations that are rooted in the knowledge elements relevant to such problem solving. Such tasks, if designed properly, are expected to probe or foster a deeper understanding of a topic, as their solutions would require not only following the expert gaze to learn a particular skill, but also interpreting the solution process as evident from the gaze pattern of an expert or even of a novice. This position paper presents a case for such tasks, which we call eye gaze interpretation (EGI) tasks. We start with the theoretical background of these tasks, followed by a conceptual example and representation to elucidate the concept of EGI tasks. Thereafter, we discuss design considerations and pedagogical affordances, using a domain-specific (chemistry) spectral graph problem. Finally, we explore the possibilities and constraints of EGI tasks in various fields that require visual representations for problem solving.
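
EGI tasks presuppose that raw gaze recordings can be segmented into interpretable events such as fixations. As a rough illustration of the kind of preprocessing involved, here is a minimal dispersion-threshold (I-DT style) fixation detector; the thresholds, sample format, and function names are illustrative assumptions, not taken from the paper.

```python
def _dispersion(window):
    """Bounding-box dispersion of a window of (x, y) gaze samples."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_disp=1.0, min_len=5):
    """Dispersion-threshold (I-DT style) fixation detection sketch.

    `samples` is a sequence of (x, y) gaze points at a fixed sampling
    rate. Runs of at least `min_len` samples whose dispersion stays
    below `max_disp` are grouped into fixations, reported as
    (centroid_x, centroid_y, n_samples). Thresholds are illustrative.
    """
    fixations = []
    i, n = 0, len(samples)
    while i + min_len <= n:
        j = i + min_len
        if _dispersion(samples[i:j]) <= max_disp:
            # grow the window while it stays compact
            while j < n and _dispersion(samples[i:j + 1]) <= max_disp:
                j += 1
            cx = sum(p[0] for p in samples[i:j]) / (j - i)
            cy = sum(p[1] for p in samples[i:j]) / (j - i)
            fixations.append((cx, cy, j - i))
            i = j
        else:
            i += 1
    return fixations
```

Applied to a recording with two dwell periods separated by a few transition samples, the detector returns two fixation centroids; a production pipeline would additionally merge nearby fixations and handle blinks.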

    From vision to reasoning


    A BIASED COMPETITION COMPUTATIONAL MODEL OF SPATIAL AND OBJECT-BASED ATTENTION MEDIATING ACTIVE VISUAL SEARCH

    A computational cognitive neuroscience approach was used to examine processes of visual attention in the human and monkey brain. The aim of the work was to produce a biologically plausible neurodynamical model of both spatial and object-based attention that accounted for observations in monkey visual areas V4, inferior temporal cortex (IT) and the lateral intraparietal area (LIP), and was able to produce search scan path behaviour similar to that observed in humans and monkeys. Of particular interest currently in the visual attention literature is the biased competition hypothesis (Desimone & Duncan, 1995). The model presented here is the first active vision implementation of biased competition, where attentional shifts are overt. Therefore, retinal inputs change during the scan path, and this approach raised issues, such as memory for searched locations across saccades, not addressed by previous models with static retinas. This is the first model to examine the different time courses associated with spatial and object-based effects at the cellular level. Single cell recordings in areas V4 (Luck et al., 1997; Chelazzi et al., 2001) and IT (Chelazzi et al., 1993, 1998) were replicated such that attentional effects occurred at the appropriate time after onset of the stimulus. Object-based effects at the cellular level of the model led to systems level behaviour that replicated that observed during active visual search for orientation and colour feature conjunction targets in psychophysical investigations. This provides a valuable insight into the link between cellular and system level behaviour in natural systems. At the systems level, the simulated search process showed selectivity in its scan path that was similar to that observed in humans (Scialfa & Joffe, 1998; Williams & Reingold, 2001) and monkeys (Motter & Belky, 1998b), being guided to target coloured locations in preference to locations containing the target orientation or blank areas.
A connection between the ventral and dorsal visual processing streams (Ungerleider & Mishkin, 1982) is suggested to contribute to this selectivity and priority in the featural guidance of search. Such selectivity and avoidance of blank areas has potential applications in computer vision. Simulation of lesions within the model and comparison with patient data provided further verification of the model. Simulation of visual neglect due to parietal cortical lesion suggests that the model has the capability to provide insights into the neural correlates of the conscious perception of stimuli. The biased competition approach described here provides an extendable framework within which further "bottom-up" stimulus and "top-down" mnemonic and cognitive biases can be added, in order to further examine exogenous versus endogenous factors in the capture of attention.
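
The biased competition dynamic the model builds on can be caricatured in a few lines: two rate-coded units receive equal bottom-up drive, inhibit each other, and a small top-down bias decides the winner. The sketch below is illustrative only; its parameters are not fitted to V4, IT, or LIP data.

```python
import numpy as np

def biased_competition(bias, steps=2000, dt=0.01):
    """Two rate units competing through mutual inhibition.

    A small top-down `bias` added to one unit's input lets it win the
    competition and suppress the other -- the core dynamic of the
    biased competition hypothesis. All parameters are illustrative.
    """
    r = np.array([0.1, 0.1])              # firing rates of the two units
    inputs = np.array([1.0 + bias, 1.0])  # equal bottom-up drive plus bias
    w_inh = 1.5                           # strength of mutual inhibition
    for _ in range(steps):
        drive = inputs - w_inh * r[::-1]  # each unit inhibited by the other
        r += dt * (-r + np.maximum(drive, 0.0))  # leaky rectified dynamics
    return r

rates = biased_competition(bias=0.2)
```

With `bias=0.0` the perfectly symmetric network leaves the competition unresolved; any nonzero bias tips the winner-take-all dynamics, which is the mechanism the full model scales up with realistic anatomy and time courses.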

    Modelling visual search for surface defects

    Much work has been done on developing algorithms for automated surface defect detection. However, comparisons between these models and human perception are rarely carried out. This thesis aims to investigate how well human observers can find defects in textured surfaces, over a wide range of task difficulties. Stimuli for experiments will be generated using texture synthesis methods and human search strategies will be captured by use of an eye tracker. Two different modelling approaches will be explored. A computational LNL-based model will be developed and compared to human performance in terms of the number of fixations required to find the target. Secondly, a stochastic simulation, based on empirical distributions of saccades, will be compared to human search strategies.
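
The proposed stochastic simulation can be caricatured as follows: saccade amplitudes are drawn from an assumed distribution, directions at random, and fixations accumulate until one lands within a foveal radius of the target. The gamma distribution and all parameters here are placeholders for the thesis's empirical distributions.

```python
import math
import random

def fixations_to_target(target, amp_mean=2.0, fovea=1.0,
                        max_fixations=200, seed=0):
    """Stochastic scan-path sketch.

    Starting at the display centre, saccade amplitudes are drawn from
    an assumed gamma distribution and directions uniformly; returns the
    number of fixations until one lands within `fovea` degrees of
    `target`, capped at `max_fixations`. All distributions are
    placeholders for empirically measured ones.
    """
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    for n in range(1, max_fixations + 1):
        if math.hypot(x - target[0], y - target[1]) <= fovea:
            return n
        amplitude = rng.gammavariate(2.0, amp_mean / 2.0)
        direction = rng.uniform(0.0, 2.0 * math.pi)
        x += amplitude * math.cos(direction)
        y += amplitude * math.sin(direction)
    return max_fixations
```

Comparing the distribution of such fixation counts across many seeds with human counts is the kind of model-to-human comparison the thesis proposes; an unguided random walk like this one is the natural baseline against which featural guidance of search would be measured.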

    Through the Eyes of a Programmer: A Research Project on how to Foster Programming Education with Eye-Tracking Technology

    Nowadays, there is a high demand for programming expertise on the labor market. New technologies such as eye tracking could help to improve programming education and thereby help to fulfill this demand. For instance, Eye Movement Modeling Examples (EMMEs) are learning videos that visualize a person's (the model's) eye movements while s/he demonstrates how to perform a (programming) task. The eye movements can, for instance, be visualized as moving dots on a screen recording. By observing where an expert programmer looks, programming beginners might better understand what s/he is doing and referring to. Recent studies showed promising first results about the beneficial effects of using EMMEs in programming education. In this manuscript, we present a research project that aims to provide evidence-based guidelines for educational practitioners on how to use eye-tracking technology for programming training. We first introduce the basic concept of EMMEs and exemplary gaps in the literature. We then present our first empirical study on how different instructions affect expert programmers' eye movements when modeling a debugging task (and hence EMME displays). With this manuscript, we hope to inspire more programmers to use eye-tracking technology for programming education.

    Army-NASA aircrew/aircraft integration program. Phase 5: A3I Man-Machine Integration Design and Analysis System (MIDAS) software concept document

    This is the Software Concept Document for the Man-machine Integration Design and Analysis System (MIDAS) being developed as part of Phase V of the Army-NASA Aircrew/Aircraft Integration (A3I) Program. The approach taken in this program since its inception in 1984 is that of incremental development with clearly defined phases. Phase 1 began in 1984 and subsequent phases have progressed at approximately 10-16 month intervals. Each phase of development consists of planning, setting requirements, preliminary design, detailed design, implementation, testing, demonstration and documentation. Phase 5 began with an off-site planning meeting in November, 1990. It is expected that Phase 5 development will be complete and ready for demonstration to invited visitors from industry, government and academia in May, 1992. This document, produced during the preliminary design period of Phase 5, is intended to record the top level design concept for MIDAS as it is currently conceived. This document has two main objectives: (1) to inform interested readers of the goals of the MIDAS Phase 5 development period, and (2) to serve as the initial version of the MIDAS design document which will be continuously updated as the design evolves. Since this document is written fairly early in the design period, many design issues still remain unresolved. Some of the unresolved issues are mentioned later in this document in the sections on specific components. Readers are cautioned that this is not a final design document and that, as the design of MIDAS matures, some of the design ideas recorded in this document will change. The final design will be documented in a detailed design document published after the demonstrations.

    Example-based learning: Integrating cognitive and social-cognitive research perspectives

    Example-based learning has been studied from different perspectives. Cognitive research has mainly focused on worked examples, which typically provide students with a written worked-out didactical solution to a problem to study. Social-cognitive research has mostly focused on modeling examples, which provide students the opportunity to observe an adult or a peer model performing the task. The model can behave didactically or naturally, and the observation can take place face to face, on video, as a screen recording of the model's computer screen, or as an animation. This article reviews the contributions of the research on both types of example-based learning on questions such as why example-based learning is effective, for what kinds of tasks and learners it is effective, and how examples should be designed and delivered to students to optimize learning. This will show both the commonalities and the differences in research on example-based learning conducted from both perspectives and might inspire the identification of new research questions

    Adaptive Neural Networks for Control of Movement Trajectories Invariant under Speed and Force Rescaling

    This article describes two neural network modules that form part of an emerging theory of how adaptive control of goal-directed sensory-motor skills is achieved by humans and other animals. The Vector-Integration-To-Endpoint (VITE) model suggests how synchronous multi-joint trajectories are generated and performed at variable speeds. The Factorization-of-LEngth-and-TEnsion (FLETE) model suggests how outflow movement commands from a VITE model may be performed at variable force levels without a loss of positional accuracy. The invariance of positional control under speed and force rescaling sheds new light upon a familiar strategy of motor skill development: Skill learning begins with performance at low speed and low limb compliance and proceeds to higher speeds and compliances. The VITE model helps to explain many neural and behavioral data about trajectory formation, including data about neural coding within the posterior parietal cortex, motor cortex, and globus pallidus, and behavioral properties such as Woodworth's Law, Fitts' Law, peak acceleration as a function of movement amplitude and duration, isotonic arm movement properties before and after arm-deafferentation, central error correction properties of isometric contractions, motor priming without overt action, velocity amplification during target switching, velocity profile invariance across different movement distances, changes in velocity profile asymmetry across different movement durations, staggered onset times for controlling linear trajectories with synchronous offset times, changes in the ratio of maximum to average velocity during discrete versus serial movements, and shared properties of arm and speech articulator movements. The FLETE model provides new insights into how spino-muscular circuits process variable forces without a loss of positional control.
These results explicate the size principle of motor neuron recruitment, descending co-contractive compliance signals, Renshaw cells, Ia interneurons, fast automatic reactive control by ascending feedback from muscle spindles, slow adaptive predictive control via cerebellar learning using muscle spindle error signals to train adaptive movement gains, fractured somatotopy in the opponent organization of cerebellar learning, adaptive compensation for variable moment-arms, and force feedback from Golgi tendon organs. More generally, the models provide a computational rationale for the use of nonspecific control signals in volitional control, or "acts of will", and of efference copies and opponent processing in both reactive and adaptive motor control tasks. National Science Foundation (IRI-87-16960); Air Force Office of Scientific Research (90-0128, 90-0175)

    A Neural Circuit Model for Prospective Control of Interceptive Reaching

    Two prospective controllers of hand movements in catching -- both based on required velocity control -- were simulated. Under certain conditions, this required velocity control led to overshoots of the future interception point. These overshoots were absent in pertinent experiments. To remedy this shortcoming, the required velocity model was reformulated in terms of a neural network, the Vector Integration To Endpoint model, to create a Required Velocity Integration To Endpoint model. Addition of a parallel relative velocity channel, resulting in the Relative and Required Velocity Integration To Endpoint model, provided a better account for the experimentally observed kinematics than the existing, purely behavioral models. Simulations of reaching to intercept decelerating and accelerating objects in the presence of background motion were performed to make distinct predictions for future experiments. Vrije Universiteit (Gerrit-Jan van Ingen-Schenau stipend of the Faculty of Human Movement Sciences); Royal Netherlands Academy of Arts and Sciences; Defense Advanced Research Projects Agency and Office of Naval Research (N00014-95-1-0409)
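
A required velocity controller of the kind simulated here can be sketched in a few lines: at each moment the commanded hand velocity is the velocity needed to reach the predicted interception point exactly at contact time. The gain, the constant-velocity ball, and the first-order relaxation toward the command are illustrative assumptions, not the paper's models.

```python
def required_velocity_catch(ball_x0, ball_v, t_contact=1.0,
                            hand_x0=0.0, dt=0.001, k=20.0):
    """Required-velocity control sketch for 1-D interceptive reaching.

    At each step the commanded velocity is the velocity the hand would
    need in order to reach the (here constant-velocity) ball's
    interception point at contact time; the actual hand velocity
    relaxes toward that command with gain `k`.
    """
    intercept = ball_x0 + ball_v * t_contact  # where the ball will be
    hand_x, hand_v, t = hand_x0, 0.0, 0.0
    while t < t_contact - dt:
        v_required = (intercept - hand_x) / (t_contact - t)
        hand_v += dt * k * (v_required - hand_v)
        hand_x += dt * hand_v
        t += dt
    return hand_x
```

In this minimal form the hand homes in on a stationary interception estimate; the overshoots discussed in the abstract arise under richer target kinematics, which this sketch does not attempt to reproduce.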