
    Looking at instructional animations through the frame of virtual camera

    This thesis investigates the virtual camera and the function of camera movements in expository motion graphics for the purpose of instruction. Motion graphic design is a popular video production technique often employed to create instructional animations that present educational content through the persuasive presentation styles of the entertainment media industry. Adopting animation as a learning tool has distinct concerns and challenges compared to its use in entertainment, and combining cognitive learning with emotive design requires additional considerations for each design element. The thesis addresses how camera movement functions in supporting the narrative and aesthetic of instructional animations. It does this by investigating the virtual camera at the technical, semiotic and psychological levels, culminating in a systematic categorization of functional camera movements based on a conceptual framework that describes the hybrid integration of physical, cognitive and affective design aspects, and in a creative work, a case study in the form of a comprehensive instructional animation that demonstrates the practiced camera movements. Because the conceptual framework underlying this supplementary work correlates with the techniques of effective instructional video production and conventional entertainment filmmaking, the thesis also touches on the relationship between live action and animation in terms of directing and staging, concluding that the virtual camera as a design factor can be useful for supporting a narrative, evoking emotion and directing the audience's focus while revealing, tracking and emphasizing information.
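
    As a hedged illustration of what one such "functional" camera movement might look like in code, the sketch below implements a simple ease-in/ease-out dolly between two positions. The function names and the smoothstep easing are assumptions chosen for illustration, not taken from the thesis.

```python
def smoothstep(t: float) -> float:
    # Ease-in/ease-out curve: the camera accelerates, then decelerates,
    # which reads as a deliberate, "practiced" camera move.
    return t * t * (3.0 - 2.0 * t)

def camera_position(start, end, t):
    # Interpolate a virtual camera along a reveal/dolly from `start`
    # to `end` at normalized time t in [0, 1].
    s = smoothstep(max(0.0, min(1.0, t)))
    return tuple(a + (b - a) * s for a, b in zip(start, end))

# Example: a slow reveal that ends centered on the key information.
path = [camera_position((0.0, 0.0, 10.0), (4.0, 1.5, 6.0), t / 10) for t in range(11)]
```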

    Cueing animations: Dynamic signaling aids information extraction and comprehension

    The effectiveness of animations containing two novel forms of animation cueing that target relations between event units rather than individual entities was compared with that of animations containing conventional entity-based cueing or no cues. These relational event unit cues (progressive path and local coordinated cues) were specifically designed to support key learning processes posited by the Animation Processing Model (Lowe & Boucheix, 2008). Four groups of undergraduates (N = 84) studied a user-controllable animation of a piano mechanism and then were assessed for mental model quality (via a written comprehension test) and knowledge of the mechanism's dynamics (via a novel non-verbal manipulation test). Time-locked eye tracking was used to characterize participants' obedience to cues (initial engagement versus ongoing loyalty) across the learning period. On both outcome measures, participants in the two relational event unit cueing conditions were superior to those in the entity-based and uncued conditions. Time-locked eye tracking analysis of cue obedience revealed that initial cue engagement did not guarantee ongoing cue loyalty. The findings suggest that the Animation Processing Model provides a principled basis for designing more effective animation support.
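
    The distinction between initial cue engagement and ongoing cue loyalty can be made concrete as a windowed gaze-proportion measure. The following Python sketch is a hypothetical reconstruction, not the study's analysis code; the fixation record layout, window length, and region format are all assumptions.

```python
def cue_obedience(fixations, cue_region, t_onset, window=2.0, n_windows=5):
    # fixations : list of (t_start, duration, x, y) tuples
    # cue_region: (x_min, y_min, x_max, y_max) bounding box of the cue
    # Returns the proportion of fixation time spent on the cued region in
    # successive time windows after cue onset; high early values followed
    # by falling later values suggest engagement without loyalty.
    x0, y0, x1, y1 = cue_region
    proportions = []
    for w in range(n_windows):
        lo, hi = t_onset + w * window, t_onset + (w + 1) * window
        in_win = [f for f in fixations if lo <= f[0] < hi]
        total = sum(f[1] for f in in_win)
        on_cue = sum(f[1] for f in in_win
                     if x0 <= f[2] <= x1 and y0 <= f[3] <= y1)
        proportions.append(on_cue / total if total else float("nan"))
    return proportions
```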

    Human factors in instructional augmented reality for intravehicular spaceflight activities and how gravity influences the setup of interfaces operated by direct object selection

    In human spaceflight, advanced user interfaces are becoming an interesting means of facilitating human-machine interaction and of helping to guarantee correct sequences of intravehicular space operations. Efforts to ease such operations have shown strong interest in novel human-computer interaction techniques such as Augmented Reality (AR). The work presented in this thesis is directed towards a user-driven design for AR-assisted space operations, iteratively solving issues arising from the problem space, including consideration of the effect of altered gravity on handling such interfaces.

    Understanding Effects of Presentation on Concept Learning in Technology Supported Learning

    The world of technology has had a significant impact on the learning and instructional domain. Today, a large number of devices and software tools are specifically designed to afford faster, more effective learning and instruction. They have not only erased the physical boundaries to educational resources but have also helped create new interactions and engagements for learners and instructors. In this changed scenario, the content or instructional material also needs attention to become usable and compatible with the changed learning styles and preferences of today's learners. Not only does the content need to integrate seamlessly with the delivery methodology and technology, but it must also exploit the capabilities they offer to enhance the learning experience. For higher-order learning content such as concepts and principles, which involve deeper cognitive processes, there is a need to understand how instructional material can be made more effective in technology-supported environments. The goal of this experimental study was to investigate whether conceptual learning in an electronically delivered, self-paced format can be made more usable and effective with the right amount of content and presentation. It presented stimulus (concept attributes) in five different variations of information presentation and comparatively assessed performance using post-stimulus questions as a measure of a learner's ability to generalize a concept. The eye-tracking methodology used in this study provided an opportunity to understand the learner's perceptual processing while learning a concept. The results indicated that too much information does not help concept learning. At the same time, giving the learner some control over the display of information and providing information in smaller units support the cognitive processes involved in learning a concept. Though not statistically significant, the trend showed a reduction in workload and better performance with learner-controlled progressive display. Qualitative analysis also supports learner satisfaction with, and preference for, progressive presentation with learner control.

    An Eye Tracking Study to Investigate the Influence of Language and Text Direction on Multimedia

    This study investigated how native language orientation influences spatial bias, first visual fixation on screen, first visual fixation on pictures, learning outcomes, and mental effort of learners. Previous studies supported the effect of native language writing or reading direction on spatial bias, examining written text and images created by the participants (Barrett et al., 2002; Boroditsky, 2001; Chatterjee, Southwood & Basiko, 1999; Spalek & Hammad, 2005). However, no study had investigated writing direction in multimedia presentations using eye tracking. This study addresses this gap. A total of 84 participants completed the study, forming four groups. The first group (NativeLeft_InstrEng) consisted of individuals whose native language is written from left to right and who had never experienced a right-to-left language; they received the material in English. The second group (NativeRight_InstrAra), whose native language is written from right to left, received the material in Arabic. The third group (NativeLeft_LrnRight_InstrEng) consisted of individuals whose native language is written from left to right and who are learning or have learned a language written from right to left; they received the material in English. The fourth group (NativeRight_InstrEng), whose native language is written from right to left, received the material in English. Participants completed a survey consisting of eight sections: demographic questions, a self-estimated prior knowledge test, the instructional unit, a mental effort rating, sentence-forming questions, recall questions, a sequence question and, finally, post-test questions. Eye tracking was used to detect first fixation on screen and pictures, and the results were compared with participants' written responses. Eye movements can be considered the blueprint for how students process visual information (Underwood & Radach, 1998). Significant results for learning and spatial bias confirmed that spatial bias is associated with native language orientation: left-oriented learners were more likely to demonstrate left bias on the screen, while right-oriented participants demonstrated right bias. However, exposure to other languages, cultures, or beliefs, or living for some time in a country that uses a language with a different orientation, can influence a learner's spatial bias, as seen with group NativeRight_InstrEng. Finally, differences in visual fixations on screen and pictures were not significant, perhaps due to the simplicity of the pictures used in this study.
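
    As a minimal sketch of how first-fixation data can be turned into a spatial-bias measure (the classification rule, screen geometry, and all values below are illustrative assumptions, not the study's published procedure):

```python
def spatial_bias(first_fixation_x, screen_width):
    # Classify a first fixation as left- or right-biased relative to
    # the vertical midline of the screen.
    midline = screen_width / 2.0
    if first_fixation_x < midline:
        return "left"
    if first_fixation_x > midline:
        return "right"
    return "center"

# Hypothetical aggregation across a group, for comparison with the
# group's native reading direction:
first_fixations = [412, 388, 905, 301]   # made-up x coordinates in pixels
biases = [spatial_bias(x, 1280) for x in first_fixations]
left_bias_rate = biases.count("left") / len(biases)
```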

    Eye tracking in Educational Science: Theoretical frameworks and research agendas

    Eye tracking is increasingly being used in Educational Science, and the eye tracking community's interest in this topic has grown accordingly. In this paper we briefly introduce the discipline of Educational Science and why it might be interesting to couple it with eye tracking research. We then introduce three major research areas in Educational Science that have already used eye tracking successfully. First, eye tracking has been used to improve the instructional design of computer-based learning and testing environments, often using hyper- or multimedia. Second, eye tracking has shed light on expertise and its development in visual domains, such as chess or medicine. Third, eye tracking has recently also been used to promote visual expertise by means of eye movement modeling examples. We outline the main educational theories for these research areas and indicate where further eye tracking research is needed to expand them.

    Gaze transitions when learning with multimedia

    Eye tracking methodology is used to examine the influence of interactive multimedia on the allocation of visual attention and its dynamics during learning. We hypothesized that an interactive simulation promotes more organized switching of attention between different elements of multimedia learning material, e.g., textual description and pictorial visualization. Participants studied a description of an algorithm accompanied either by an interactive simulation, a self-paced animation, or a static illustration. Using a novel framework for entropy-based comparison of gaze transition matrices, results showed that the interactive simulation elicited more careful visual investigation of the learning material as well as reading of the problem description through to its completion.
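
    A minimal sketch of an entropy measure over first-order gaze transitions, assuming fixations have already been mapped to areas of interest (AOIs); the paper's actual framework and normalization may differ.

```python
import math
from collections import Counter

def transition_entropy(aoi_sequence):
    # Shannon entropy (bits) of the observed transition distribution.
    # Lower entropy indicates more organized, predictable switching
    # between elements such as text and picture.
    pairs = list(zip(aoi_sequence, aoi_sequence[1:]))
    counts = Counter(pairs)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Example: alternating text/picture inspection vs. erratic scanning.
organized = ["text", "picture", "text", "picture", "text", "picture"]
erratic = ["text", "picture", "picture", "text", "text", "picture", "text"]
print(transition_entropy(organized), transition_entropy(erratic))
```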

    Effects of Character Guide in Immersive Virtual Reality Stories

    Bringing cinematic experiences from traditional film screens into Virtual Reality (VR) has become an increasingly popular form of entertainment in recent years. VR provides viewers an unprecedented film experience that allows them to freely explore the environment and even interact with virtual props and characters. For the audience, this kind of experience raises their sense of presence in a different world and may even stimulate full immersion in story scenarios. However, unlike traditional filmmaking, where the audience passively follows the director's storytelling decisions, the greater freedom of VR can cause viewers to get lost halfway through watching the series of events that builds up a story. Striking a balance between user interaction and narrative progression is therefore a major challenge for filmmakers. To organize the research space, we present a media review and a resulting framework that characterizes the primary differences among variations of film, media, games, and VR storytelling. This evaluation provided knowledge closely associated with story-progression strategies and gaze redirection methods for interactive content in the commercial domain. Following the existing VR storytelling framework, we then approached the problem of guiding the audience through the major events of a story by introducing a virtual character as a travel companion who helps direct the viewer's focus to the target scenes. The presented research explored a new technique that overlays a separate virtual character on an existing 360-degree video so that the added character reacts to head-tracking data, helping indicate to the viewer the core focal content of the story. The motivation behind this research is to help directors use a virtual guiding character to increase the effectiveness of VR storytelling, ensuring that viewers fully understand the story by completing a sequence of events and possibly realizing a rich literary experience. To assess the effectiveness of this technique, we performed a controlled experiment applying the method in three immersive narrative experiences, each with a control condition that was free from guidance. The experiment compared three variations of the character guide: 1) no guide; 2) a guide with an art style similar to the style of the video design; and 3) a character guide with a dissimilar style. All participants viewed the narrative experiences to test whether a similar art style led to gaze behavior with a higher likelihood of falling on the intended focus regions of the 360-degree Virtual Environment (VE). We concluded that adding a virtual character independent of the narrative had limited effects on users' gaze performance when watching an interactive story in VR. Furthermore, the implemented character's art style made little difference to users' gaze performance or their viewing satisfaction, most likely due to limitations of the implementation design. In addition, the guiding body language designed for an animal character confused numerous participants viewing the stories. In the end, the character guide approach still provides insights for future directors and designers into how to draw viewers' attention to a target point within a narrative VE, including what can work well and what should be avoided.
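
    A hedged sketch of the kind of head-tracking logic such a guide might use; the angle convention, threshold, and function names are assumptions, since the thesis does not publish its implementation.

```python
def angular_offset(view_yaw_deg, target_yaw_deg):
    # Smallest signed angle (degrees) between the viewer's current
    # heading and the direction of the target event in the 360 video.
    return (target_yaw_deg - view_yaw_deg + 180.0) % 360.0 - 180.0

def guide_should_gesture(view_yaw_deg, target_yaw_deg, fov_deg=90.0):
    # Trigger the companion character's pointing gesture whenever the
    # target falls outside the viewer's assumed field of view.
    return abs(angular_offset(view_yaw_deg, target_yaw_deg)) > fov_deg / 2.0

# Example: viewer faces 10 degrees, the next story event is at
# 150 degrees, so the guide turns and points toward it.
print(guide_should_gesture(10.0, 150.0))   # True
```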

    Gaze Guidance, Task-Based Eye Movement Prediction, and Real-World Task Inference using Eye Tracking

    The ability to predict and guide viewer attention has important applications in computer graphics, image understanding, object detection, visual search and training. Human eye movements provide insight into the cognitive processes involved in task performance, and there has been extensive research on what factors guide viewer attention in a scene. It has been shown, for example, that saliency in the image, scene context, and the task at hand play significant roles in guiding attention. This dissertation presents and discusses research on visual attention, with specific focus on the use of subtle visual cues to guide viewer gaze and the development of algorithms to predict the distribution of gaze about a scene. Specific contributions of this work include: a framework for gaze guidance to enable problem solving and spatial learning, a novel algorithm for task-based eye movement prediction, and a system for real-world task inference using eye tracking. A gaze guidance approach is presented that combines eye tracking with subtle image-space modulations to guide viewer gaze about a scene. Several experiments were conducted using this approach to examine its impact on short-term spatial information recall, task sequencing, training, and password recollection. A model of human visual attention prediction that uses saliency maps, scene feature maps and task-based eye movements to predict regions of interest was also developed. This model was used to automatically select target regions for active gaze guidance to improve search task performance. Finally, we develop a framework for inferring real-world tasks using image features and eye movement data. Overall, this dissertation leads naturally to an overarching framework that combines all three contributions to provide a continuous feedback system for improving performance on repeated visual search tasks. This research has important applications in data visualization, problem solving, training, and online education.
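
    A minimal sketch of how saliency, scene-feature, and task maps might be fused into a single gaze-prediction map; the weights, normalization, and function names are illustrative assumptions, not the dissertation's actual model.

```python
import numpy as np

def normalize(m):
    # Rescale a 2-D map to [0, 1]; leave constant maps unchanged.
    m = m - m.min()
    peak = m.max()
    return m / peak if peak > 0 else m

def predicted_attention(saliency, features, task_prior, w=(0.4, 0.3, 0.3)):
    # Weighted combination of the three normalized maps, renormalized
    # into a probability distribution over scene locations. All inputs
    # are 2-D arrays of identical shape.
    combined = (w[0] * normalize(saliency)
                + w[1] * normalize(features)
                + w[2] * normalize(task_prior))
    return combined / combined.sum()

# Example with random stand-in maps of identical shape:
rng = np.random.default_rng(0)
maps = [rng.random((48, 64)) for _ in range(3)]
attention = predicted_attention(*maps)
```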

    Augmented Reality Interfaces for Procedural Tasks

    Procedural tasks involve people performing established sequences of activities while interacting with objects in the physical environment to accomplish particular goals. These tasks span almost all aspects of human life and vary greatly in their complexity. For some simple tasks, little cognitive assistance is required beyond an initial learning session in which a person follows one-time compact directions, or even intuition, to master a sequence of activities. In the case of complex tasks, procedural assistance may be continually required, even for the most experienced users. Approaches for rendering this assistance employ a wide range of written, audible, and computer-based technologies. This dissertation explores an approach in which procedural task assistance is rendered using augmented reality. Augmented reality integrates virtual content with a user's natural view of the environment, combining real and virtual objects interactively and aligning them with each other. Our thesis is that an augmented reality interface can allow individuals to perform procedural tasks more quickly while exerting less effort and making fewer errors than with other forms of assistance. This thesis is supported by several significant contributions yielded during the exploration of the following research themes. What aspects of AR are applicable and beneficial to the procedural task problem? In answering this question, we developed two prototype AR interfaces that improve procedural task accomplishment. The first prototype was designed to assist mechanics carrying out maintenance procedures under field conditions. An evaluation involving professional mechanics showed our prototype reduced the time required to locate procedural tasks and resulted in fewer head movements while transitioning between tasks. Following up on this work, we constructed another prototype that focuses on providing assistance in the underexplored psychomotor phases of procedural tasks. This prototype presents dynamic and prescriptive forms of instruction and was evaluated using a demanding and realistic alignment task. The evaluation revealed that the AR prototype allowed participants to complete the alignment more quickly and accurately than when using an enhanced version of currently employed documentation systems. How does the user interact with an AR application assisting with procedural tasks? The application of AR to the procedural task problem poses unique user interaction challenges. To meet these challenges, we present and evaluate a novel class of user interfaces that leverage naturally occurring and otherwise unused affordances in the native environment to provide a tangible user interface for augmented reality applications. This class of techniques, which we call Opportunistic Controls, combines hand gestures, overlaid virtual widgets, and passive haptics to form an interface that proved effective and intuitive during quantitative evaluation. Our evaluation of these techniques includes a qualitative exploration of various preferences and heuristics for Opportunistic Control-based designs.