
    Visualization Support for Cognitive Sciences

    The science of computer graphics and visualization is intertwined in many ways with the cognitive sciences. On the one hand, computer graphics can create virtual environments in which a person is exposed to a virtual scenario. Typically, 3D-capable display technology is combined with tracking systems that identify where the person is located, to achieve maximal immersion: the person's point of view is recreated in the virtual scenario. The result is an impressive experience in which the person navigates the virtual scenario as if it were real. On the other hand, visualization techniques can be utilized to present the results of a cognitive science experiment to the user in a way that provides easier access to the data. This could range from simple plots to more sophisticated approaches, such as parallel coordinates. In addition, results from the cognitive sciences can feed back into visualization to make it more user-friendly. For example, more intuitive input devices, such as cyber gloves that track the position of a user's fingers, could be used to make selections or view modifications intuitively. The Appenzeller Visualization Laboratory is in a perfect position to enable research in all of the areas mentioned above. Sophisticated display systems that provide full immersion are available, ranging from single screens and head-mounted displays to full-size CAVE-type displays. This presentation will illustrate some examples of visualizations of data from the cognitive science realm and showcase display systems and some of their use cases.
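
    As a concrete illustration of the parallel coordinates mentioned above, the sketch below plots hypothetical experiment data with pandas; all column names and values are made-up placeholders, not data from the presentation.

        # A minimal parallel-coordinates sketch using pandas/matplotlib.
        # The dataset is a hypothetical per-participant summary of a
        # cognitive-science experiment; nothing here comes from the talk.
        import pandas as pd
        import matplotlib.pyplot as plt
        from pandas.plotting import parallel_coordinates

        df = pd.DataFrame({
            "condition":       ["HMD", "HMD", "CAVE", "CAVE"],
            "reaction_ms":     [412, 398, 455, 440],
            "accuracy_pct":    [92, 95, 88, 90],
            "immersion_score": [6.5, 7.0, 8.2, 8.4],
        })

        # Each participant becomes one polyline across the parallel axes,
        # colored by experimental condition.
        parallel_coordinates(df, class_column="condition", colormap="viridis")
        plt.title("Hypothetical experiment results")
        plt.show()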

    Attention Guiding Techniques using Peripheral Vision and Eye Tracking for Feedback in Augmented-Reality-based Assistance Systems

    Renner P, Pfeiffer T. Attention Guiding Techniques using Peripheral Vision and Eye Tracking for Feedback in Augmented-Reality-based Assistance Systems. In: 2017 IEEE Symposium on 3D User Interfaces (3DUI). Los Angeles, CA: IEEE; 2017: 186-194.

    A limiting factor of current smart-glasses-based augmented reality (AR) systems is their small field of view. AR assistance systems designed for tasks such as order picking or manual assembly are supposed to guide the visual attention of the user towards the item that is relevant next. This is a challenging task, as the user may initially be in an arbitrary position and orientation relative to the target. As a result of the small field of view, in most cases the target will initially not be covered by the AR display, even if it is visible to the user. This raises the question of how to design attention guiding for such "off-screen gaze" conditions. The central idea put forward in this paper is to display cues for attention guidance in a way that they can still be followed using peripheral vision: while the eyes' focus point is beyond the AR display, certain visual cues presented on the display are still detectable by the human. In addition, guidance methods that adapt to the eye movements of the user are introduced and evaluated. In the frame of a research project on smart-glasses-based assistance systems for a manual assembly station, several attention guiding techniques with and without eye tracking have been designed, implemented, and tested. As the evaluation method, simulated AR in a virtual reality HMD setup was used, which supports a repeatable and highly controlled experimental design.
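
    One generic way to realize such a peripheral cue (a minimal sketch under assumptions, not the authors' implementation) is to clamp a directional indicator to the border of the narrow AR display whenever the target lies off-screen:

        import math

        # Normalized display coordinates: (0, 0) is the display center,
        # +/-1 marks the edges; targets beyond that are off-screen.
        def border_cue(target_x: float, target_y: float, margin: float = 0.9):
            """Return (cue_x, cue_y, angle_deg) for a border arrow, or None.

            If the target is already on the display, no cue is needed.
            Otherwise the cue sits where the center-to-target ray crosses
            a rectangle slightly inside the display edge, so it stays
            detectable in peripheral vision.
            """
            if abs(target_x) <= 1.0 and abs(target_y) <= 1.0:
                return None  # target already covered by the AR display
            scale = margin / max(abs(target_x), abs(target_y))
            angle = math.degrees(math.atan2(target_y, target_x))
            return target_x * scale, target_y * scale, angle

        # Target far to the upper left: cue clamps to the left border.
        print(border_cue(-3.0, 1.5))  # -> (-0.9, 0.45, ~153.4)

    Adaptive variants like those evaluated in the paper would additionally modulate such a cue, for example its visibility or salience, based on the tracked gaze direction.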

    Adaptive User Perspective Rendering for Handheld Augmented Reality

    Handheld Augmented Reality commonly implements some variant of magic lens rendering, which turns only a fraction of the user's real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion based on estimations of the user's head position, rendering the scene from the user's perspective. To this end, approaches usually apply face-tracking algorithms on the front camera of the mobile device. However, this demands high computational resources and therefore commonly affects the performance of the application beyond the already high computational load of AR applications. In this paper, we present a method to reduce the computational demands for user perspective rendering by applying lightweight optical flow tracking and an estimation of the user's motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and compare it to device perspective rendering, to head-tracked user perspective rendering, and to fixed-point-of-view user perspective rendering.
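
    The lightweight motion estimate the abstract refers to could, for instance, be built on sparse Lucas-Kanade optical flow. The sketch below uses OpenCV on front-camera frames and is an assumption-laden illustration of the general technique, not the paper's implementation:

        import cv2
        import numpy as np

        cap = cv2.VideoCapture(0)  # front camera; device index is assumed
        ok, frame = cap.read()
        prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Tracking a handful of corners is far cheaper than running a
        # face detector on every frame.
        prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                           qualityLevel=0.01, minDistance=10)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
                prev_gray, gray, prev_pts, None)
            good_new = next_pts[status.flatten() == 1].reshape(-1, 2)
            good_old = prev_pts[status.flatten() == 1].reshape(-1, 2)
            # Median displacement as a cheap proxy for apparent user motion,
            # usable before (or instead of) full head tracking.
            motion = np.median(good_new - good_old, axis=0)
            print("estimated motion (px):", motion)
            prev_gray = gray
            prev_pts = good_new.reshape(-1, 1, 2)
            if len(prev_pts) < 10:  # re-detect features when tracks are lost
                prev_pts = cv2.goodFeaturesToTrack(gray, maxCorners=50,
                                                   qualityLevel=0.01,
                                                   minDistance=10)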

    Augmented Reality-based Feedback for Technician-in-the-loop C-arm Repositioning

    Interventional C-arm imaging is crucial to percutaneous orthopedic procedures, as it enables the surgeon to monitor the progress of surgery at the anatomy level. Minimally invasive interventions require repeated acquisition of X-ray images from different anatomical views to verify tool placement. Achieving and reproducing these views often comes at the cost of increased surgical time and radiation dose to both patient and staff. This work proposes a marker-free "technician-in-the-loop" Augmented Reality (AR) solution for C-arm repositioning. The X-ray technician operating the C-arm interventionally is equipped with a head-mounted display capable of recording desired C-arm poses in 3D via an integrated infrared sensor. For C-arm repositioning to a particular target view, the recorded C-arm pose is restored as a virtual object and visualized in an AR environment, serving as a perceptual reference for the technician. We conduct experiments in a setting simulating orthopedic trauma surgery. Our proof-of-principle findings indicate that the proposed system can reduce the average of 2.76 X-ray images required per desired view down to zero, suggesting substantial reductions of radiation dose during C-arm repositioning. The proposed AR solution is a first step towards facilitating communication between the surgeon and the surgical staff, improving the quality of surgical image acquisition, and enabling context-aware guidance for the surgery rooms of the future. The concept of technician-in-the-loop design will become relevant to various interventions, considering the expected advancements in sensing and wearable computing in the near future.
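
    At the heart of such feedback is a comparison between the recorded target pose and the current C-arm pose. A minimal sketch with NumPy follows; the frame conventions and tolerances are assumptions for illustration, not taken from the paper:

        import numpy as np

        def pose_error(current: np.ndarray, target: np.ndarray):
            """Translation error (m) and rotation error (deg) between
            two 4x4 homogeneous poses expressed in a common frame."""
            delta = np.linalg.inv(current) @ target   # relative transform
            t_err = np.linalg.norm(delta[:3, 3])      # residual translation
            # Rotation angle from the trace of the relative rotation matrix.
            cos_a = (np.trace(delta[:3, :3]) - 1.0) / 2.0
            r_err = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
            return t_err, r_err

        # Example: current pose is 5 cm off along x; orientation matches.
        target_pose = np.eye(4)
        current_pose = np.eye(4)
        current_pose[0, 3] = 0.05
        t_err, r_err = pose_error(current_pose, target_pose)
        # The AR overlay could, e.g., signal alignment once both errors
        # fall below hypothetical tolerances such as 1 cm and 1 degree.
        print(t_err < 0.01 and r_err < 1.0)  # -> False (still 5 cm off)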

    An Immersive Telepresence System using RGB-D Sensors and Head Mounted Display

    We present a tele-immersive system that enables people to interact with each other in a virtual world using body gestures in addition to verbal communication. Beyond the obvious applications, including general online conversations and gaming, we hypothesize that our proposed system would be particularly beneficial to education by offering rich visual content and interactivity. One distinct feature is the integration of egocentric pose recognition that allows participants to use their gestures to demonstrate and manipulate virtual objects simultaneously. This functionality enables the instructor to effectively and efficiently explain and illustrate complex concepts or sophisticated problems in an intuitive manner. The highly interactive and flexible environment can capture and sustain more student attention than the traditional classroom setting and thus delivers a compelling experience to the students. Our main focus here is to investigate possible solutions for the system design and implementation and to devise strategies for fast, efficient computation suitable for visual data processing and network transmission. We describe the technique and experiments in detail and provide quantitative performance results, demonstrating that our system runs comfortably and reliably in different application scenarios. Our preliminary results are promising and demonstrate the potential for more compelling directions in cyberlearning.

    Comment: IEEE International Symposium on Multimedia 201
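
    One plausible strategy for the fast computation suitable for network transmission mentioned above is to downsample and compress each depth frame before sending it. The sketch below illustrates the idea only; the frame size, stride, and use of zlib are all assumptions, not details from the paper:

        import zlib
        import numpy as np

        def encode_depth(depth_mm: np.ndarray, stride: int = 2) -> bytes:
            """Downsample a 16-bit depth frame and losslessly compress it."""
            small = depth_mm[::stride, ::stride]  # spatial downsampling
            return zlib.compress(small.tobytes(), level=1)  # fast level

        def decode_depth(payload: bytes, shape: tuple) -> np.ndarray:
            raw = zlib.decompress(payload)
            return np.frombuffer(raw, dtype=np.uint16).reshape(shape)

        # Synthetic 640x480 depth frame in millimetres (uint16).
        frame = (np.random.rand(480, 640) * 4000).astype(np.uint16)
        payload = encode_depth(frame, stride=2)
        restored = decode_depth(payload, (240, 320))
        print(len(payload), "bytes on the wire vs", frame.nbytes, "raw")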

    Empirical Comparisons of Virtual Environment Displays

    There are many different visual display devices used in virtual environment (VE) systems. These displays vary along many dimensions, such as resolution, field of view, level of immersion, quality of stereo, and so on. In general, no guidelines exist for choosing an appropriate display for a particular VE application. Our goal in this work is to develop such guidelines on the basis of empirical results. We present two initial experiments comparing head-mounted displays with a workbench display and a four-sided spatially immersive display. The results indicate that the physical characteristics of the displays, users' prior experiences, and even the order in which the displays are presented can have significant effects on performance.