179 research outputs found

    Multimodality with Eye tracking and Haptics: A New Horizon for Serious Games?

    The goal of this review is to illustrate the emerging use of multimodal virtual reality that can benefit learning-based games. The review begins with an introduction to multimodal virtual reality in serious games and a brief discussion of why the cognitive processes involved in learning and training are enhanced in immersive virtual environments. We first outline studies that have used eye tracking and haptic feedback independently in serious games, and then review some innovative applications that have combined eye tracking and haptic devices to provide applicable multimodal frameworks for learning-based games. Finally, some general conclusions are identified and clarified in order to advance current understanding of multimodal serious game production and to explore possible areas for new applications.

    iBall: Augmenting Basketball Videos with Gaze-moderated Embedded Visualizations

    We present iBall, a basketball video-watching system that leverages gaze-moderated embedded visualizations to facilitate game understanding and engagement of casual fans. Video broadcasting and online video platforms make watching basketball games increasingly accessible. Yet, for new or casual fans, watching basketball videos is often confusing due to their limited basketball knowledge and the lack of accessible, on-demand information to resolve their confusion. To assist casual fans in watching basketball videos, we compared the game-watching behaviors of casual and die-hard fans in a formative study and developed iBall based on the findings. iBall embeds visualizations into basketball videos using a computer vision pipeline, and automatically adapts the visualizations based on the game context and users' gaze, helping casual fans appreciate basketball games without being overwhelmed. We confirmed the usefulness, usability, and engagement of iBall in a study with 16 casual fans, and further collected feedback from 8 die-hard fans. Comment: ACM CHI2
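    As a rough illustration of the gaze-moderation idea, the sketch below shows one way a system could reveal an embedded overlay only after the viewer's gaze dwells on a target region. The dwell rule, threshold, and function names here are assumptions for illustration, not iBall's actual pipeline.

    ```python
    # Hypothetical sketch: reveal a player's stat overlay only after the
    # viewer's gaze dwells on that player's bounding box long enough.
    def overlay_visible(gaze_samples, target_box, dwell_frames=12):
        """gaze_samples: list of (x, y) per frame; target_box: (x0, y0, x1, y1)."""
        x0, y0, x1, y1 = target_box
        run = 0
        for x, y in gaze_samples:
            if x0 <= x <= x1 and y0 <= y <= y1:
                run += 1
                if run >= dwell_frames:
                    return True   # sustained attention: show the embedded stats
            else:
                run = 0           # gaze left the target: reset the dwell counter
        return False
    ```

    A real system would combine such a dwell test with game context (e.g., suppressing overlays during live play), but the counter-with-reset pattern captures the basic adaptation logic.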

    Objective and automated assessment of surgical technical skills with IoT systems: A systematic literature review

    The assessment of surgical technical skills to be acquired by novice surgeons has traditionally been done by an expert surgeon and is therefore of a subjective nature. Nevertheless, recent advances in IoT, the possibility of incorporating sensors into objects and environments in order to collect large amounts of data, and progress in machine learning are facilitating a more objective and automated assessment of surgical technical skills. This paper presents a systematic literature review of papers published after 2013 discussing the objective and automated assessment of surgical technical skills. 101 out of an initial list of 537 papers were analyzed to identify: 1) the sensors used; 2) the data collected by these sensors and the relationship between these data, surgical technical skills and surgeons' levels of expertise; 3) the statistical methods and algorithms used to process these data; and 4) the feedback provided based on the outputs of these statistical methods and algorithms.
In particular, 1) mechanical and electromagnetic sensors are widely used for tool tracking, while inertial measurement units are widely used for body tracking; 2) path length, number of sub-movements, smoothness, fixation, saccade and total time are the main indicators obtained from raw data and serve to assess surgical technical skills such as economy, efficiency, hand tremor, or mind control, and to distinguish between two or three levels of expertise (novice/intermediate/advanced surgeons); 3) SVM (Support Vector Machines) and Neural Networks are the preferred statistical methods and algorithms for processing the data collected, while new opportunities are opening up to combine various algorithms and use deep learning; and 4) feedback is provided by matching performance indicators to a lexicon of words and visualizations, although there is considerable room for research in the context of feedback and visualizations, taking, for example, ideas from learning analytics.
This work was supported in part by the FEDER/Ministerio de Ciencia, Innovación y Universidades; Agencia Estatal de Investigación, through the Smartlet Project under Grant TIN2017-85179-C3-1-R, and in part by the Madrid Regional Government through the e-Madrid-CM Project under Grant S2018/TCS-4307, a project which is co-funded by the European Structural Funds (FSE and FEDER). Partial support has also been received from the European Commission through Erasmus+ Capacity Building in the Field of Higher Education projects, more specifically through projects LALA (586120-EPP-1-2017-1-ES-EPPKA2-CBHE-JP), InnovaT (598758-EPP-1-2018-1-AT-EPPKA2-CBHE-JP), and PROF-XXI (609767-EPP-1-2019-1-ES-EPPKA2-CBHE-JP).
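Since the review names SVMs as the most common classifier for these indicators, a minimal sketch of that setup might look as follows. The feature values are synthetic and the feature set is a hypothetical subset of the indicators listed above (path length, number of sub-movements, smoothness, total time), not data from any reviewed study.

```python
# Illustrative sketch: classifying surgeon expertise from motion
# indicators with an SVM. All data here is synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Each row: [path_length (m), n_submovements, smoothness, total_time (s)]
novices = rng.normal([5.0, 40, 0.3, 120], [0.5, 5, 0.05, 10], size=(20, 4))
experts = rng.normal([3.0, 20, 0.7, 60],  [0.5, 5, 0.05, 10], size=(20, 4))

X = np.vstack([novices, experts])
y = np.array([0] * 20 + [1] * 20)   # 0 = novice, 1 = expert

# Standardize features before the RBF SVM, since the indicators
# live on very different scales
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)

# A short, economical trial should be classified as expert-like
label = clf.predict([[3.1, 22, 0.65, 65]])[0]
```

Real pipelines would add cross-validation and, per the review, possibly a third (intermediate) class; this only shows the indicator-to-label mapping.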

    Task switching in the prefrontal cortex

    The overall goal of this dissertation is to elucidate the cellular and circuit mechanisms underlying flexible behavior in the prefrontal cortex. We are often faced with situations in which the appropriate behavior in one context is inappropriate in others. If these situations are familiar, we can perform the appropriate behavior without relearning how the context relates to the behavior — an important hallmark of intelligence. Neuroimaging and lesion studies have shown that this dynamic, flexible process of remapping context to behavior (task switching) is dependent on prefrontal cortex, but the precise contributions and interactions of prefrontal subdivisions are still unknown. This dissertation investigates two prefrontal areas that are thought to play distinct but complementary executive roles in task switching — the dorsolateral prefrontal cortex (dlPFC) and the anterior cingulate cortex (ACC). Using electrophysiological recordings from macaque monkeys, I show that synchronous network oscillations in the dlPFC provide a mechanism to flexibly coordinate context representations (rules) between groups of neurons during task switching. Then, I show that, whereas ACC neurons can represent rules at the cellular level, they do not play a significant role in switching between contexts — rather, they seem to be more related to errors and motivational drive. Finally, I develop a set of web-enabled interactive visualization tools designed to provide a multi-dimensional integrated view of electrophysiological datasets. Taken together, these results contribute to our understanding of task switching by investigating new mechanisms for coordination of neurons in prefrontal cortex, clarifying the roles of prefrontal subdivisions during task switching, and providing visualization tools that enhance exploration and understanding of large, complex and multi-scale electrophysiological data.

    Deep into the Eyes: Applying Machine Learning to improve Eye-Tracking

    Eye-tracking has been an active research area with applications in personal and behavioral studies, medical diagnosis, virtual reality, and mixed reality applications. Improving the robustness, generalizability, accuracy, and precision of eye-trackers while maintaining privacy is crucial. Unfortunately, many existing low-cost portable commercial eye trackers suffer from signal artifacts and a low signal-to-noise ratio. These trackers are highly dependent on low-level features such as pupil edges or diffused bright spots in order to precisely localize the pupil and corneal reflection. As a result, they are not reliable for studying eye movements that require high precision, such as microsaccades, smooth pursuit, and vergence. Additionally, these methods suffer from reflective artifacts, occlusion of the pupil boundary by the eyelid and often require a manual update of person-dependent parameters to identify the pupil region. In this dissertation, I demonstrate (I) a new method to improve precision while maintaining the accuracy of head-fixed eye trackers by combining velocity information from iris textures across frames with position information, (II) a generalized semantic segmentation framework for identifying eye regions with a further extension to identify ellipse fits on the pupil and iris, (III) a data-driven rendering pipeline to generate a temporally contiguous synthetic dataset for use in many eye-tracking applications, and (IV) a novel strategy to preserve privacy in eye videos captured as part of the eye-tracking process. My work also provides the foundation for future research by addressing critical questions like the suitability of using synthetic datasets to improve eye-tracking performance in real-world applications, and ways to improve the precision of future commercial eye trackers with improved camera specifications.
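    The general idea behind contribution (I) — fusing noisy per-frame position estimates with frame-to-frame velocity information — can be sketched as a simple complementary filter. This toy blend and its `alpha` parameter are illustrative assumptions, not the dissertation's actual algorithm.

    ```python
    # Toy sketch: smooth a noisy 1-D gaze trace by dead-reckoning from
    # velocity and blending the noisy position measurement back in.
    def fuse(positions, velocities, alpha=0.8):
        """positions[i]: noisy gaze x at frame i; velocities[i]: dx from frame i-1 to i."""
        fused = [positions[0]]
        for pos, vel in zip(positions[1:], velocities[1:]):
            predicted = fused[-1] + vel                          # velocity prediction
            fused.append(alpha * predicted + (1 - alpha) * pos)  # blend in measurement
        return fused

    # Velocity says the eye drifts +1.0 per frame; positions are noisy
    trace = fuse([0.0, 1.3, 1.8, 3.4], [0.0, 1.0, 1.0, 1.0])
    ```

    The fused trace stays closer to the constant-velocity motion implied by the iris-texture velocities than the raw positions do, which is the precision gain the dissertation targets.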

    Using Eye Movement Data Visualization to Enhance Training of Air Traffic Controllers: A Dynamic Network Approach

    The Federal Aviation Administration (FAA) has forecasted a substantial increase in US air traffic volume, creating high demand for Air Traffic Control Specialists (ATCSs). Training times and passing rates for ATCSs might be improved if expert ATCSs' eye movement (EM) characteristics can be utilized to support effective training. However, effective EM visualization is difficult for a dynamic task (e.g., aircraft conflict detection and mitigation) that involves interrogating multi-element targets that are dynamically moving, appearing, disappearing, and overlapping within a display. To address these issues, a dynamic network-based approach is introduced that integrates adapted visualizations (i.e., time-frame networks and normalized dot/bar plots) with measures used in network science (i.e., indegree, closeness, and betweenness) to provide in-depth EM analysis. The proposed approach was applied in an aircraft conflict task using a high-fidelity simulator, with veteran ATCSs and pseudo pilots. Results show that ATCSs' visual attention to multi-element dynamic targets can be effectively interpreted and supported through multiple pieces of evidence obtained from the various visualizations and associated measures. In addition, we discovered that fewer eye fixations or shorter eye fixation durations on a target may not necessarily indicate the target is less important when analyzing the flow of visual attention within a network. The results show promise in cohesively analyzing and visualizing various eye movement characteristics to better support training.
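    The core transformation in such a network approach — turning a fixation sequence into a directed transition graph and computing centrality measures on it — can be sketched as follows. This is an illustrative reconstruction, not the study's implementation; the target names are hypothetical.

    ```python
    # Illustrative sketch: build a directed fixation-transition network from
    # a gaze sequence over display targets, then compute the measures the
    # study names: indegree, closeness, and betweenness centrality.
    import networkx as nx

    # Sequence of fixated targets within one time frame (hypothetical labels)
    fixations = ["AC101", "AC202", "AC101", "ALT_BOX", "AC202", "AC101"]

    G = nx.DiGraph()
    for src, dst in zip(fixations, fixations[1:]):
        # Edge weight counts how often attention moved from src to dst
        if G.has_edge(src, dst):
            G[src][dst]["weight"] += 1
        else:
            G.add_edge(src, dst, weight=1)

    indegree = dict(G.in_degree())
    closeness = nx.closeness_centrality(G)
    betweenness = nx.betweenness_centrality(G)
    ```

    This view makes the paper's observation concrete: a target can have few or short fixations yet still be central to the attention flow (high betweenness), because many transition paths route through it.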

    Learning from Teacher's Eye Movement: Expertise, Subject Matter and Video Modeling

    How teachers' eye movements can be used to understand and improve education is the central focus of the present paper. Three empirical studies were carried out to understand the nature of teachers' eye movements in natural settings and how they might be used to promote learning. The studies explored 1) the relationship between teacher expertise and eye movement in the course of teaching, 2) how individual differences and the demands of different subjects affect teachers' eye movements during literacy and mathematics instruction, and 3) whether including an expert's eye movement and hand information in instructional videos can promote learning. Each study looked at the nature and use of teacher eye movements from a different angle but collectively they converge on contributions to answering the question: what can we learn from teachers' eye movements? The paper also contains an independent methodology chapter dedicated to reviewing and comparing methods of representing eye movements in order to determine a suitable statistical procedure for representing the richness of current and similar eye tracking data. Results show that there are considerable differences between expert and novice teachers' eye movements in a real teaching situation, replicating similar patterns revealed by past studies on expertise and gaze behavior in athletics and other fields. This paper also identified the mix of person-specific and subject-specific eye movement patterns that occur when the same teacher teaches different topics to the same children. The final study reports evidence that eye movement can be useful in teaching: learners showed increased learning when they saw an expert model's eye movements in a video modeling example. The implications of these studies regarding teacher education and instruction are discussed.
    PhD, Education & Psychology, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/145853/1/yizhenh_1.pd