8 research outputs found

    08231 Abstracts Collection -- Virtual Realities

    From 1st to 6th June 2008, the Dagstuhl Seminar 08231 "Virtual Realities" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. Virtual Reality (VR) is a multidisciplinary area of research aimed at interactive human-computer mediated simulations of artificial environments. Typical applications include simulation, training, scientific visualization, and entertainment. An important aspect of VR-based systems is the stimulation of the human senses -- typically sight, sound, and touch -- such that a user feels a sense of presence (or immersion) in the virtual environment. Different applications require different levels of presence, with corresponding levels of realism, sensory immersion, and spatiotemporal interactive fidelity. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. Links to extended abstracts or full papers are provided, if available.

    Evaluating the Benefits of 3D Stereo in Modern Video Games

    We present a study that investigates user performance benefits of 3D stereo in modern video games. Based on an analysis of several video games that are best suited for use with commercial 3D stereo drivers and vision systems, we chose five modern titles focusing on the racing, first-person shooter, third-person shooter, and sports game genres. For each game, quantitative and qualitative measures were taken to determine if users performed better and learned faster in the experimental group (3D stereo display) than in the control group (2D display). A game experience pre-questionnaire was used to classify participants into beginner, intermediate, and advanced gameplay categories to ensure prior game experience did not bias the experiment. Our results indicate that even though participants preferred playing in 3D stereo, for the games we tested it does not provide any significant advantage in overall user performance. In addition, users' learning rates were comparable in the 3D stereo display and 2D display cases.
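    The abstract does not state the statistical procedure used. Purely as an illustration, a between-groups comparison of per-participant performance scores might be sketched with Welch's t-test; the score arrays below are placeholders, not data from the study:

```python
# Hypothetical between-groups comparison: 3D stereo vs. 2D display.
# The scores below are placeholders, not data from the study.
from scipy import stats

stereo_scores = [72.1, 65.4, 80.2, 70.9, 68.3]  # experimental group (3D stereo)
mono_scores = [70.5, 67.8, 79.0, 71.2, 66.9]    # control group (2D display)

# Welch's t-test does not assume equal variances between the two groups.
t_stat, p_value = stats.ttest_ind(stereo_scores, mono_scores, equal_var=False)

print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
if p_value >= 0.05:
    print("No significant performance difference between display conditions.")
```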

    Evaluating 3D pointing techniques

    This dissertation investigates various issues related to the empirical evaluation of 3D pointing interfaces. In this context, the term "3D pointing" is appropriated from the analogous 2D pointing literature to refer to 3D point selection tasks, i.e., specifying a target in three-dimensional space. Such pointing interfaces are required for interaction with virtual 3D environments, e.g., in computer games and virtual reality. Researchers have developed and empirically evaluated many such techniques. Yet, several technical issues and human factors complicate evaluation. Moreover, results tend not to be directly comparable between experiments, as these experiments usually use different methodologies and measures. Based on well-established methods for comparing 2D pointing interfaces, this dissertation investigates different aspects of 3D pointing. The main objective of this work is to establish methods for direct and fair comparisons between 2D and 3D pointing interfaces. The dissertation proposes and then validates an experimental paradigm for evaluating 3D interaction techniques that rely on pointing. It also investigates technical considerations such as latency and device noise. Results show that the mouse outperforms the other 3D input techniques by between 10% and 60% in all tested conditions. Moreover, a monoscopic cursor tends to perform better than a stereo cursor when using a stereo display, by as much as 30% for deep targets. Results suggest that common 3D pointing techniques are best modelled by first projecting target parameters (i.e., distance and size) to the screen plane.

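    The final claim above suggests a concrete modelling step: compute the Fitts'-law index of difficulty from the on-screen projection of the target rather than from its 3D extent. A minimal sketch under an assumed pinhole-projection geometry (the camera setup and function names are illustrative, not taken from the dissertation):

```python
import math

def project_to_screen(p, focal_length=1.0):
    """Perspective-project a 3D point (x, y, z), z > 0, onto the screen plane."""
    x, y, z = p
    return (focal_length * x / z, focal_length * y / z)

def projected_index_of_difficulty(cursor, target, target_diameter, focal_length=1.0):
    """Fitts' law ID (Shannon form) using screen-projected distance and size."""
    cx, cy = project_to_screen(cursor, focal_length)
    tx, ty = project_to_screen(target, focal_length)
    distance = math.hypot(tx - cx, ty - cy)             # projected movement amplitude
    width = focal_length * target_diameter / target[2]  # projected target size
    return math.log2(distance / width + 1.0)

# Example: a target 2 m deep appears smaller on screen, so its projected ID is higher.
print(projected_index_of_difficulty(cursor=(0.0, 0.0, 1.0),
                                    target=(0.3, 0.1, 2.0),
                                    target_diameter=0.05))
```
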
    Multi-touch interaction with stereoscopically rendered 3D objects

    Initially considered mainly in a 2D context, multi-touch interfaces are becoming increasingly important for three-dimensional environments and, in recent years, also in connection with stereoscopic visualizations. However, touch-based interaction with stereoscopically displayed objects poses problems, because the objects appear to float near the display surface while touch points can only be detected robustly on direct contact with the display. This thesis examines the problems of touch interaction in stereoscopic environments in detail and develops interaction concepts for this context. In particular, the applicability of different perceptual illusions to 3D touch interaction with stereoscopically displayed objects is investigated in a series of psychological experiments. Based on the experimental data, several practical interaction techniques are developed and evaluated for their applicability.
    While touch technology has proven its usability for 2D interaction and has already become a standard input modality for many devices, the challenges of exploiting its applicability with stereoscopically rendered content have barely been studied. In this thesis we exploit different hardware- and perception-based techniques to allow users to touch stereoscopically displayed objects when the input is constrained to a 2D surface. To this end, we analyze the relation between the 3D positions of stereoscopically displayed objects and the on-surface touch points where users touch the interactive surface, and we have conducted a series of experiments to investigate the user's ability to discriminate small induced shifts while performing a touch gesture. The results were then used to design practical interaction techniques suitable for numerous application scenarios.
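    One way to formalize the relation between a stereoscopically displayed object and its expected on-surface touch point is to intersect each eye's line of sight through the object with the display surface and take the midpoint of the two projections. The sketch below illustrates this geometry under assumed coordinates (display surface at z = 0); the midpoint heuristic is an assumption for illustration, not necessarily the mapping adopted in the thesis:

```python
import numpy as np

def surface_projection(eye, obj):
    """Intersect the ray from `eye` through `obj` with the display surface z = 0."""
    eye, obj = np.asarray(eye, float), np.asarray(obj, float)
    t = eye[2] / (eye[2] - obj[2])        # parameter where the ray crosses z = 0
    return (eye + t * (obj - eye))[:2]

def expected_touch_point(left_eye, right_eye, obj):
    """Midpoint of the left- and right-eye surface projections of a stereoscopic object."""
    return 0.5 * (surface_projection(left_eye, obj) + surface_projection(right_eye, obj))

# Example: eyes 60 cm in front of the surface, object floating 10 cm in front of it.
left_eye, right_eye = (-0.03, 0.0, 0.6), (0.03, 0.0, 0.6)
print(expected_touch_point(left_eye, right_eye, obj=(0.10, 0.05, 0.10)))
```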

    Entwicklung eines Patternkatalogs für Augmented Reality Interfaces in der Industrie

    There are currently no specifications or guidelines for the design of a user interface for an augmented reality system in an industrial context. In this domain, special requirements apply to the perception and recognition of content, arising from the conditions of the industrial environment, the human-technology interaction, and the work task. This dissertation addresses this research gap. Based on a requirements analysis, a design concept was developed with the help of a modified version of Mayhew's Usability Engineering Lifecycle. The concept focuses on the perception of information in augmented reality systems. An evaluation with experts examined the saturation of gray and color values with respect to perception and recognizability at the minimum and maximum permissible luminance of the industrial environment. The results showed that gray values are suitable for permanent text display, while colors are advantageous for highlighting and for graphics. A further focus of the dissertation was the layout and the interaction possibilities of the augmented reality system's user interface. Basic interactions and de facto standards were examined specifically with respect to the generic tasks. This led to a prototype, which was tested in a detailed study. The four generic tasks "select from the main menu", "navigate in documents", "deepen object information", and "select from the function bar" were evaluated for their usefulness and usability. It became clear that, when designing the layout elements, users should be given the opportunity to adapt the presentation according to their own experience and to adopt existing solutions. The result is a pattern catalog with eleven layout and eighteen interaction variants, which can support the development of an industrial augmented reality system with regard to a user-oriented presentation of the interface.
    Currently, there are no guidelines or specifications for the design of a user interface for an augmented reality system in an industrial context. Special requirements, such as those set by DIN norms, influence the perception and recognition of information in the industrial environment; these conditions in turn impact the human-machine interaction as well as the work task. The present dissertation addresses this research gap. Based on a Usability Engineering Lifecycle model according to Mayhew, a design concept was developed that deals with perception in augmented reality systems. An evaluation with experts examined the saturation of gray and color values in terms of perception and recognizability at the minimum and maximum permissible luminance of the industrial environment. The results indicate that gray values are suitable for permanent text display and that colors are more suitable for highlighting or graphics. Another focus of the dissertation concerns the layout of and interaction with the user interface of the augmented reality system. Basic interactions and de facto standards were examined for the generic tasks of the industrial context. This resulted in a prototype, which was tested in a detailed evaluation. The four generic tasks "select from the main menu", "navigate in documents", "deepen object information", and "select from the function bar" were evaluated for their usefulness and usability. As a result, it became clear that when designing the layout elements, users must be given the opportunity to adapt the presentation according to their own experience, and existing solutions should be adopted. The result of the dissertation is a pattern catalog with eleven layout and eighteen interaction alternatives. These design solutions can support the development of user-oriented interfaces for industrial augmented reality systems.
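    The dissertation's exact legibility criterion is not given in the abstract. Purely as an illustration of how a gray text value might be checked against a bright industrial background, a standard WCAG-style contrast-ratio computation (not a method from the dissertation) looks like this:

```python
def relative_luminance(rgb):
    """WCAG relative luminance of an sRGB color with channels in [0, 1]."""
    def linearize(c):
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between a foreground and a background color."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Example: a mid-gray text value against a bright background.
gray_text = (0.75, 0.75, 0.75)
background = (0.95, 0.95, 0.95)
print(f"contrast ratio: {contrast_ratio(gray_text, background):.2f}")  # below 4.5 -> hard to read
```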

    Exploring 3D User Interface Technologies for Improving the Gaming Experience

    3D user interface technologies have the potential to make games more immersive and engaging and thus potentially provide a better user experience to gamers. Although 3D user interface technologies are available for games, it is still unclear how their usage affects game play and whether there are any user performance benefits. A systematic study of these technologies in game environments is required to understand how game play is affected and how we can optimize their usage in order to achieve a better game play experience. This dissertation seeks to improve the gaming experience by exploring several 3DUI technologies. In this work, we focused on stereoscopic 3D viewing (to improve the viewing experience) coupled with motion-based control, head tracking (to make games more engaging), and faster gesture-based menu selection (to reduce the cognitive burden associated with menu interaction while playing). We first studied each of these technologies in isolation to understand their benefits for games. We present the results of our experiments to evaluate the benefits of stereoscopic 3D (when coupled with motion-based control) and head tracking in games. We discuss the reasons behind these findings and provide recommendations for game designers who want to make use of these technologies to enhance gaming experiences. We also present the results of our experiments with finger-based menu selection techniques with the aim of finding the fastest technique. Based on these findings, we custom designed an air-combat game prototype which simultaneously uses stereoscopic 3D, head tracking, and finger-count shortcuts to show that these technologies can be useful for games if the game is designed with these technologies in mind. Additionally, to enhance depth discrimination and minimize visual discomfort, the game dynamically optimizes stereoscopic 3D parameters (convergence and separation) based on the user's look direction. We conducted a within-subjects experiment in which we examined performance data and self-reported data on users' perception of the game. Our results indicate that participants performed significantly better when all the 3DUI technologies (stereoscopic 3D, head tracking, and finger-count gestures) were available simultaneously, with head tracking as the dominant factor. We explore the individual contribution of each of these technologies to the overall gaming experience and discuss the reasons behind our findings. Our experiments indicate that 3D user interface technologies can make the gaming experience better if used effectively. Games must be designed to make use of the available 3D user interface technologies in order to provide a better gaming experience to the user. We explored a few technologies as part of this work and obtained some design guidelines for future game designers. We hope that our work will serve as a framework for future explorations of making games better using 3D user interface technologies.
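    The abstract mentions dynamically optimizing convergence and separation from the user's look direction but gives no formula. A minimal, hypothetical sketch of such an adjustment, with an invented clamping and scaling rule chosen only to illustrate the idea:

```python
# Hypothetical adjustment of stereo parameters from gaze depth; the parameter names
# and the scaling rule are illustrative, not taken from the dissertation.
def adjust_stereo_parameters(gaze_depth, max_separation=0.065, near=0.5, far=50.0):
    """Return (convergence_distance, eye_separation) for the current gaze depth."""
    depth = min(max(gaze_depth, near), far)   # clamp to a usable depth range
    convergence = depth                       # converge on what the user is looking at
    # Reduce separation for near objects to limit disparity and visual discomfort.
    separation = max_separation * min(1.0, depth / (0.25 * far))
    return convergence, separation

print(adjust_stereo_parameters(gaze_depth=2.0))
```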

    Towards achieving convincing live interaction in a mixed reality environment for television studios

    The virtual studio is a form of Mixed Reality environment for creating television programmes, where the (real) actor appears to exist within an entirely virtual set. The work presented in this thesis evaluates the routes required towards developing a virtual studio that extends current architectures by allowing realistic interactions between the actor and the virtual set in real time. The methodologies and framework presented in this thesis are intended to support future work in this domain. Heuristic investigation is offered as a framework to analyse and provide the requirements for developing interaction within a virtual studio. In this framework a group of experts participate in case study scenarios to generate a list of requirements that guide future development of the technology. It is also concluded that this method could be used in a cyclical manner to further refine systems post-development. This leads to development in three key areas. Firstly, a feedback system is presented which tracks actor head motion within the studio and provides dynamic visual feedback relative to their current gaze location. Secondly, a real-time actor/virtual set occlusion system is developed that uses skeletal tracking data and depth information to change the relative location of virtual set elements dynamically. Finally, an interaction system is presented that facilitates real-time interaction between an actor and the virtual set objects, providing both single-handed and bimanual interactions. Evaluation of this system highlights some common errors in mixed reality interaction, notably those arising from inaccurate hand placement when actors perform bimanual interactions. A novel two-stage framework is presented that measures the magnitude of the errors in actor hand placement and also the perceived fidelity of the interaction from a third-person viewer. The first stage of this framework quantifies the actor motion errors while completing a series of interaction tasks under varying controls. The second stage uses examples of these errors to measure the perceptual tolerance of a third person when viewing interaction errors in the end broadcast. The results from this two-stage evaluation lead to the development of three methods for mitigating the actor errors, each evaluated against its ability to aid the visual fidelity of the interaction. It was discovered that adapting the size of the virtual object was effective in improving the quality of the interaction, whereas adapting the colour of any exposed background did not have any apparent effect. Finally, a set of guidelines based on these findings is provided to recommend appropriate solutions for allowing interaction within live virtual studio environments; these can easily be adapted for other mixed reality systems.
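    The occlusion system described above combines skeletal tracking and depth data; as a simplified illustration of the underlying idea only (not the thesis's actual pipeline), a per-pixel depth comparison can decide whether the actor or the virtual set element is shown:

```python
import numpy as np

def composite_with_occlusion(actor_rgb, actor_depth, set_rgb, set_depth):
    """Per-pixel composite: show the actor only where they are closer than the virtual set."""
    actor_in_front = actor_depth < set_depth                    # boolean mask, H x W
    return np.where(actor_in_front[..., None], actor_rgb, set_rgb)

# Example with tiny 2x2 frames (depth in metres, colour as float RGB).
actor_rgb = np.ones((2, 2, 3)) * [1.0, 0.8, 0.7]
set_rgb = np.ones((2, 2, 3)) * [0.1, 0.2, 0.9]
actor_depth = np.array([[1.5, 3.0], [1.5, 3.0]])
set_depth = np.array([[2.0, 2.0], [2.0, 2.0]])
print(composite_with_occlusion(actor_rgb, actor_depth, set_rgb, set_depth))
```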

    Guidelines for 3D positioning techniques

    In this paper, we present a set of guidelines for designing 3D positioning techniques. These guidelines are intended for developers of object interaction schemes in 3D games, modeling packages, computer aided design systems, and virtual environments. The guidelines promote intuitive object movement techniques in these types of environments. We then present a study comparing 3D positioning techniques based on these guidelines with 2D and 3D/6D devices across VR display technologies. Display technologies such as stereoscopic graphics and head-coupled perspective provide additional depth cues and could affect how a user perceives and thus interacts with a 3D scene, regardless of the input device/technique used; thus they are examined as well. The results suggest that 2D devices using "smart" movement algorithms can outperform 3D devices.
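    One common "smart" movement strategy for 2D input, offered here only as an illustrative assumption rather than the specific algorithm evaluated in the paper, is to cast a ray from the cursor into the scene and slide the object onto the nearest surface it hits:

```python
# Illustrative "smart" 2D positioning: drop the dragged object onto the closest
# scene surface under the cursor ray. The plane-based scene is an assumption.
import numpy as np

def intersect_plane(origin, direction, plane_point, plane_normal):
    """Return the ray parameter t of the intersection with a plane, or None."""
    denom = np.dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        return None                                   # ray parallel to the plane
    t = np.dot(np.asarray(plane_point, float) - origin, plane_normal) / denom
    return t if t > 0 else None

def position_on_surfaces(cursor_ray_origin, cursor_ray_dir, planes):
    """Place the dragged object at the closest plane hit under the cursor ray."""
    origin = np.asarray(cursor_ray_origin, float)
    direction = np.asarray(cursor_ray_dir, float)
    direction /= np.linalg.norm(direction)
    hits = [t for p, n in planes
            if (t := intersect_plane(origin, direction, p, np.asarray(n, float))) is not None]
    return origin + min(hits) * direction if hits else None

# Example: a floor at y = 0 and a wall at z = -5; the cursor ray points down and forward.
planes = [((0, 0, 0), (0, 1, 0)), ((0, 0, -5), (0, 0, 1))]
print(position_on_surfaces((0, 2, 0), (0, -1, -1), planes))
```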