9 research outputs found

    Parametrizable cameras for 3D computational steering

    We present a method for defining multiple views in 3D interfaces for computational steering. The method uses the concept of a point-based parametrizable camera object, which enables a user to create and configure multiple views on a custom 3D interface in an intuitive, graphical manner. Each view can be coupled to objects present in the interface, parametrized to (simulation) data, or adjusted through direct manipulation or user-defined camera controls. Although our focus is on 3D interfaces for computational steering, we believe the concept is valuable for many other 3D graphics applications as well.
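
    The abstract describes the concept only, but a camera whose eye point and look-at target are functions of interface or simulation parameters can be sketched roughly as below. This is a minimal illustration in Python/NumPy; the class name, the parameter dictionary, and the coupling to a hypothetical tracked particle are assumptions, not the authors' implementation.

    import numpy as np

    class ParametrizableCamera:
        """Illustrative camera whose eye point and look-at target are functions of parameters."""
        def __init__(self, position_fn, target_fn, up=(0.0, 0.0, 1.0)):
            self.position_fn = position_fn   # maps a parameter dict to a 3D eye point
            self.target_fn = target_fn       # maps a parameter dict to a 3D look-at point
            self.up = np.asarray(up, dtype=float)

        def view_matrix(self, params):
            eye = np.asarray(self.position_fn(params), dtype=float)
            target = np.asarray(self.target_fn(params), dtype=float)
            f = target - eye
            f /= np.linalg.norm(f)                              # forward axis
            s = np.cross(f, self.up); s /= np.linalg.norm(s)    # right axis
            u = np.cross(s, f)                                  # true up axis
            m = np.identity(4)
            m[0, :3], m[1, :3], m[2, :3] = s, u, -f
            m[:3, 3] = -m[:3, :3] @ eye                         # look-at view matrix
            return m

    # Example: a view coupled to simulation data, here orbiting a hypothetical tracked particle.
    cam = ParametrizableCamera(
        position_fn=lambda p: p["particle_pos"] + np.array([0.0, -p["distance"], p["height"]]),
        target_fn=lambda p: p["particle_pos"])
    view = cam.view_matrix({"particle_pos": np.array([1.0, 2.0, 0.0]), "distance": 5.0, "height": 2.0})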

    Implementación de una lente mágica en un entorno virtual tridimensional con una tablet [Implementation of a magic lens in a three-dimensional virtual environment with a tablet]

    When dealing with large data sets in Virtual Environments, it is important to be able to manage the viewing of the data so as to obtain different levels of detail and to explore it from different perspectives. For this, appropriate interaction techniques have to be developed. This work investigates different approaches to implementing a flat Magic Lens application that interacts with a 3D Virtual Environment. The Magic Lens is an interaction metaphor for the 2D selection and manipulation of 3D graphical information. By changing the position and rotation of a handheld device, the user controls the motion of the Magic Lens in the Virtual Environment. Two alternative views of the scene are thus offered: one in which the user can appreciate an overview of the data set, and another that shows the focus view. Once a region of the VE has been selected with the Magic Lens, the corresponding view frustum must be computed so that a snapshot of the desired frustum can be shown on the handheld device and manipulated there. This thesis focuses on the different interaction techniques that can be used to explore the virtual world with the Magic Lens.
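
    The frustum computation mentioned here can be illustrated with the standard off-axis ("generalized") perspective projection through the rectangle covered by the lens. The Python/NumPy sketch below is an assumed formulation rather than the thesis's code; the function name and argument layout are made up for illustration.

    import numpy as np

    def lens_frustum(eye, lower_left, lower_right, upper_left, near, far):
        """Off-axis projection matrix for viewing through a rectangular 'lens' quad.
        All points are 3D positions in world space (eye = viewer, the others = lens corners)."""
        eye, pa, pb, pc = (np.asarray(p, dtype=float) for p in (eye, lower_left, lower_right, upper_left))
        vr = pb - pa; vr /= np.linalg.norm(vr)            # lens right axis
        vu = pc - pa; vu /= np.linalg.norm(vu)            # lens up axis
        vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)   # lens normal, pointing towards the eye

        va, vb, vc = pa - eye, pb - eye, pc - eye
        d = -np.dot(va, vn)                               # distance from the eye to the lens plane
        l = np.dot(vr, va) * near / d                     # frustum extents on the near plane
        r = np.dot(vr, vb) * near / d
        b = np.dot(vu, va) * near / d
        t = np.dot(vu, vc) * near / d

        proj = np.zeros((4, 4))                           # standard OpenGL-style off-axis frustum
        proj[0, 0] = 2 * near / (r - l); proj[0, 2] = (r + l) / (r - l)
        proj[1, 1] = 2 * near / (t - b); proj[1, 2] = (t + b) / (t - b)
        proj[2, 2] = -(far + near) / (far - near); proj[2, 3] = -2 * far * near / (far - near)
        proj[3, 2] = -1.0
        # A matching view transform (rotate the world into the lens basis, translate by -eye)
        # still has to be applied before this projection.
        return proj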

    A Virtual Testbed for Fish-Tank Virtual Reality: Improving Calibration with a Virtual-in-Virtual Display

    With the development of novel calibration techniques for multimedia projectors and curved projection surfaces, volumetric 3D displays are becoming easier and more affordable to build. The basic requirements include a display shape that defines the volume (e.g. a sphere, cylinder, or cuboid) and a tracking system that provides each user's location for perspective-corrected rendering. When coupled with modern graphics cards, these displays are capable of high-resolution, low-latency, high-frame-rate, and even stereoscopic rendering; however, as many previous studies have shown, every component must be precisely calibrated for a compelling 3D effect. While human perceptual requirements have been extensively studied for head-tracked displays, most studies featured seated users in front of a flat display, and it remains unclear whether results from these flat-display studies apply to newer, walk-around displays with enclosed or curved shapes. To investigate these issues, we developed a virtual testbed for volumetric head-tracked displays that can measure the calibration accuracy of the entire system in real time. We used this testbed to investigate visual distortions of prototype curved displays, improve existing calibration techniques, study the importance of stereo to performance and perception, and validate perceptual calibration with novice users. Our experiments show that stereo is important for task performance but requires more accurate calibration, and that novice users can make effective use of perceptual calibration tools. We also propose a novel, real-time calibration method that can be used to fine-tune an existing calibration using perceptual feedback. The findings from this work can be used to build better head-tracked volumetric displays with an unprecedented amount of 3D realism and intuitive calibration tools for novice users.
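
    The abstract does not give an algorithm, but the core step of perspective-corrected rendering on a curved display, locating the point on the display surface that lies on the line from the tracked eye to a virtual point, can be sketched as below. The spherical display shape, the function name, and the example dimensions are assumptions made for illustration; a real system would also need the projector calibration that maps this surface point to projector pixels.

    import numpy as np

    def point_on_display(eye, virtual_point, sphere_center, sphere_radius):
        """Where on a spherical display surface a virtual 3D point must be drawn so that,
        seen from the tracked eye position, it appears at its intended location."""
        eye, vp, c = (np.asarray(p, dtype=float) for p in (eye, virtual_point, sphere_center))
        d = vp - eye
        d /= np.linalg.norm(d)                  # viewing ray direction
        oc = eye - c
        b = np.dot(oc, d)                       # ray-sphere intersection coefficients
        disc = b * b - (np.dot(oc, oc) - sphere_radius ** 2)
        if disc < 0.0:
            return None                         # the viewing ray misses the display surface
        t_near = -b - np.sqrt(disc)
        t = t_near if t_near > 0.0 else -b + np.sqrt(disc)   # nearest intersection in front of the eye
        return eye + t * d

    # Example: a 30 cm radius spherical display centred at the origin, viewer half a metre away.
    surface_pt = point_on_display(eye=[0.5, 0.0, 0.0], virtual_point=[0.0, 0.05, 0.0],
                                  sphere_center=[0.0, 0.0, 0.0], sphere_radius=0.3)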

    The effects of changing projection geometry on perception of 3D objects on and around tabletops

    Funding: Natural Sciences and Engineering Research Council of Canada; Networks of Centres of Excellence of Canada. Displaying 3D objects on horizontal displays can cause problems in the way that the virtual scene is presented on the 2D surface; inappropriate choices in how 3D is represented can lead to distorted images and incorrect object interpretations. We present four experiments that test 3D perception. We varied projection geometry in three ways: type of projection (perspective/parallel), separation between the observer's point of view and the projection's center (discrepancy), and the presence of motion parallax (with/without parallax). Projection geometry had strong effects that differed across tasks. Reducing discrepancy is desirable for orientation judgments, but not for object recognition or internal-angle judgments. Using a fixed center of projection above the table reduces error and improves accuracy in most tasks. The results have far-reaching implications for the design of 3D views on tables, in particular for multi-user applications, where projections that appear correct for one person will not be perceived correctly by another.
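
    As a rough illustration of the discrepancy factor varied in these experiments (the separation between the observer's point of view and the projection's center), the sketch below computes one possible angular measure of that separation as seen from the table centre. The function name, the angular definition, and the example positions are assumptions, not the paper's exact operationalisation.

    import numpy as np

    def discrepancy_angle(observer_eye, center_of_projection, table_center):
        """Angular separation, seen from the table centre, between the observer's viewpoint
        and the centre of projection used to render the scene (illustrative measure only)."""
        e, p, c = (np.asarray(x, dtype=float) for x in (observer_eye, center_of_projection, table_center))
        a = e - c
        b = p - c
        cos_t = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

    # Example: a fixed centre of projection directly above the table versus a viewer to one side.
    table = np.array([0.0, 0.0, 0.0])
    fixed_cop = np.array([0.0, 0.0, 1.5])       # 1.5 m above the table centre
    viewer = np.array([0.8, 0.0, 1.2])          # viewer's tracked eye position
    print(discrepancy_angle(viewer, fixed_cop, table))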

    A new taxonomy for locomotion in virtual environments

    The concept of virtual reality, although evolving due to technological advances, has always been fundamentally defined as a revolutionary way for humans to interact with computers. The revolution comes from the concept of immersion, which is the essence of virtual reality. Users are no longer passive observers of information but active participants who have leaped through the computer screen and are now part of the information. This has tremendous implications for how users interact with computer information in the virtual world. Perhaps the most common form of interaction in a virtual environment is locomotion. The term locomotion refers to a user's control of movement through the virtual environment. There are many ways for a user to change his or her viewpoint in the virtual world. Because virtual reality is a relatively young field, no standard interfaces exist for interaction, particularly locomotion, in a virtual world. There have been few attempts to formally classify the ways in which virtual locomotion can occur, and these classification schemes do not take into account the various interaction devices, such as joysticks and vehicle mock-ups, that are used to perform the locomotion, nor do they account for differences in display devices such as head-mounted displays, monitors, or projected walls. This work creates a new classification system for virtual locomotion methods. The classification provides designers of new VR applications with guidelines on which types of locomotion are best suited to their requirements. Unlike previous taxonomies, this work incorporates display devices, interaction devices, and travel tasks, along with identifying two major components of travel: translation and rotation. The classification also identifies important sub-components of these two. In addition, we have experimentally validated the importance of the display device and the rotation method in this new classification system through a large-scale user experiment in which users performed an architectural walkthrough of a virtual building. Both objective and subjective measures indicate that the choice of display device is extremely important to the task of locomotion and that, for each display device, the choice of rotation method is also important.
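
    Since the taxonomy combines display device, interaction device, travel task, and the translation/rotation components of travel, one can imagine encoding a locomotion technique as a simple record over those axes. The Python sketch below is a hypothetical encoding; the specific category names are illustrative and not necessarily the ones defined in the thesis.

    from dataclasses import dataclass
    from enum import Enum, auto

    # Hypothetical encoding of the taxonomy's axes; the real category names may differ.
    class DisplayDevice(Enum):
        HEAD_MOUNTED = auto()
        MONITOR = auto()
        PROJECTED_WALLS = auto()

    class InteractionDevice(Enum):
        JOYSTICK = auto()
        VEHICLE_MOCKUP = auto()
        WAND = auto()

    class TravelTask(Enum):
        EXPLORATION = auto()
        SEARCH = auto()
        MANEUVERING = auto()

    class TranslationControl(Enum):
        PHYSICAL = auto()      # e.g. real walking
        VIRTUAL = auto()       # e.g. joystick- or gesture-driven motion

    class RotationControl(Enum):
        PHYSICAL = auto()      # e.g. turning the head or body
        VIRTUAL = auto()       # e.g. device-driven turning

    @dataclass
    class LocomotionTechnique:
        display: DisplayDevice
        device: InteractionDevice
        task: TravelTask
        translation: TranslationControl
        rotation: RotationControl

    # Example: an architectural walkthrough on a head-mounted display with joystick translation
    # and physical (head/body) rotation.
    walkthrough = LocomotionTechnique(DisplayDevice.HEAD_MOUNTED, InteractionDevice.JOYSTICK,
                                      TravelTask.EXPLORATION, TranslationControl.VIRTUAL,
                                      RotationControl.PHYSICAL)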

    Measuring user experience for virtual reality

    In recent years, Virtual Reality (VR) and 3D User Interfaces (3DUI) have seen a drastic increase in popularity, especially in terms of consumer-ready hardware and software. These technologies have the potential to create new experiences that combine the advantages of reality and virtuality. While the technology for both input and output devices is market-ready, only a few solutions exist for everyday VR, such as online shopping, games, or movies, and empirical knowledge about performance and user preferences is lacking. All this makes the development and design of human-centered user interfaces for VR a great challenge. This thesis investigates the evaluation and design of interactive VR experiences. We introduce the Virtual Reality User Experience (VRUX) model, based on VR-specific external factors and evaluation metrics such as task performance and user preference. Based on this novel UX evaluation approach, we contribute by exploring the following directions: shopping in virtual environments, as well as text entry and menu control in the context of everyday VR. Along with this, we summarize our findings as design spaces and guidelines for choosing optimal interfaces and controls in VR.

    Freeform 3D interactions in everyday environments

    PhD thesis. Personal computing is continuously moving away from traditional input using mouse and keyboard as new input technologies emerge. Recently, natural user interfaces (NUI) have led to interactive systems that are inspired by our physical interactions in the real world and focus on enabling dexterous freehand input in 2D or 3D. Another recent trend is Augmented Reality (AR), which follows a similar goal of further reducing the gap between the real and the virtual, but predominantly focuses on output by overlaying virtual information onto a tracked real-world 3D scene. Whilst AR and NUI technologies have been developed for both immersive 3D output and seamless 3D input, these have mostly been looked at separately. NUI focuses on sensing the user and enabling new forms of input; AR traditionally focuses on capturing the environment around us and enabling new forms of output that are registered to the real world. The output of NUI systems is mainly presented on a 2D display, while the input technologies for AR experiences, such as data gloves and body-worn motion trackers, are often uncomfortable and restrictive when interacting in the real world. NUI and AR can therefore be seen as highly complementary, and bringing these two fields together can lead to new user experiences that radically change the way we interact with our everyday environments. The aim of this thesis is to enable real-time, low-latency, dexterous input and immersive output without heavily instrumenting the user. The main challenge is to retain and to meaningfully combine the positive qualities that are attributed to both NUI and AR systems. I review work in the intersecting research fields of AR and NUI, and explore freehand 3D interactions with varying degrees of expressiveness, directness, and mobility in various physical settings. A number of technical challenges arise when designing a mixed NUI/AR system, which I address in this work: What can we capture, and how? How do we represent the real in the virtual? And how do we physically couple input and output? This is achieved by designing new systems, algorithms, and user experiences that explore the combination of AR and NUI.