403 research outputs found

    Multi-touch interaction with stereoscopically rendered 3D objects

    Initially considered mainly in a 2D context, multi-touch interfaces are becoming increasingly important for three-dimensional environments and, in recent years, for stereoscopic visualizations as well. However, touch-based interaction with stereoscopically displayed objects is problematic, because the objects appear to float near the display surface while touch points can only be robustly detected on direct contact with the display. This thesis examines the problems of touch interaction in stereoscopic environments and develops interaction concepts for this context. In particular, the applicability of different perceptual illusions to 3D touch interaction with stereoscopically displayed objects is investigated in a series of psychological experiments. Based on the experimental data, several practical interaction techniques are developed and evaluated.

    While touch technology has proven its usability for 2D interaction and has already become a standard input modality for many devices, the challenges of exploiting its applicability to stereoscopically rendered content have barely been studied. In this thesis we exploit different hardware- and perception-based techniques to allow users to touch stereoscopically displayed objects when the input is constrained to a 2D surface. To this end, we analyze the relation between the 3D positions of stereoscopically displayed objects and the on-surface touch points where users touch the interactive surface, and we have conducted a series of experiments to investigate the user's ability to discriminate small induced shifts while performing a touch gesture. The results were then used to design practical interaction techniques which are suitable for numerous application scenarios.
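    The geometric relation at the heart of this problem, between a point rendered stereoscopically off the screen plane and the spot where a finger meets the surface, can be illustrated with a small sketch. The Python fragment below is illustrative only: the eye positions, viewing distance, and the midpoint heuristic for the touch target are assumptions, not the thesis's measured model. It projects a floating point through each eye onto the display plane and takes the midpoint of the two monocular projections as a candidate on-surface touch point.

        import numpy as np

        def screen_projection(eye, point):
            # Intersect the ray from the eye through the rendered 3D point
            # with the display plane z = 0; return the 2D on-surface hit.
            t = eye[2] / (eye[2] - point[2])
            return eye[:2] + t * (point[:2] - eye[:2])

        # Assumed geometry (metres): display plane at z = 0, viewer 0.6 m
        # in front of it, object rendered to float 0.05 m above the surface.
        ipd = 0.065                                   # interpupillary distance
        left_eye  = np.array([-ipd / 2, 0.0, 0.6])
        right_eye = np.array([+ipd / 2, 0.0, 0.6])
        obj = np.array([0.10, 0.05, 0.05])            # stereoscopic 3D target

        p_left  = screen_projection(left_eye, obj)
        p_right = screen_projection(right_eye, obj)
        touch_estimate = (p_left + p_right) / 2       # common midpoint heuristic
        print(p_left, p_right, touch_estimate)

    The gap between p_left and p_right is exactly why on-surface touch is ambiguous for objects floating off the screen: each eye sees the target at a different surface location.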

    Erg-O: ergonomic optimization of immersive virtual environments

    Interaction in VR involves large body movements, easily inducing fatigue and discomfort. We propose Erg-O, a manipulation technique that leverages visual dominance to maintain the visual location of the elements in VR while making them accessible from more comfortable locations. Our solution works in an open-ended fashion (no prior knowledge of the object the user wants to touch), can be used with multiple objects, and still allows interaction with any other point within the user's reach. We use optimization approaches to compute the best physical location at which to interact with each visual element, and space-partitioning techniques to distort the visual and physical spaces based on those mappings and allow multi-object retargeting. In this paper we describe the Erg-O technique, propose two retargeting strategies, and report the results of a user study on 3D selection under different conditions, elaborating on their potential and application to specific usage scenarios.
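    The retargeting idea, decoupling where an object is seen from where it is touched, can be sketched compactly. The snippet below is a toy stand-in, not Erg-O's actual optimization or space-partitioning scheme: it pulls each visual target toward an assumed comfort point and then offsets the rendered hand by an inverse-distance-weighted blend of the proxy-to-visual displacements, so the virtual hand reaches a visual object exactly when the real hand reaches its physical proxy.

        import numpy as np

        def ergonomic_targets(visual_pts, comfort_center, alpha=0.5):
            # Toy stand-in for the optimization step: pull each visual target
            # toward a comfortable region (alpha = 0 keeps the visual layout,
            # alpha = 1 collapses everything onto comfort_center).
            return (1 - alpha) * visual_pts + alpha * comfort_center

        def retarget_hand(real_hand, physical_pts, visual_pts):
            # Offset the rendered hand by an inverse-distance-weighted blend
            # of the proxy-to-visual displacements, so the virtual hand meets
            # a visual object exactly when the real hand meets its proxy.
            d = np.linalg.norm(physical_pts - real_hand, axis=1)
            w = 1.0 / (d + 1e-6)
            w /= w.sum()
            offset = (w[:, None] * (visual_pts - physical_pts)).sum(axis=0)
            return real_hand + offset

        visual  = np.array([[0.6, 1.6, 0.5], [-0.5, 1.8, 0.6]])  # where objects appear
        comfort = np.array([0.0, 1.2, 0.35])                     # assumed resting zone
        physical = ergonomic_targets(visual, comfort)
        print(retarget_hand(np.array([0.1, 1.2, 0.3]), physical, visual))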

    VR-CHEM Developing a virtual reality interface for molecular modelling

    VR-CHEM is a prototype virtual reality molecular modelling program with a modern 3D user interface. In this thesis, the author discusses the research behind the development of the prototype, provides a detailed description of the program and its features, and reports on the user tests. The research includes a review of earlier programs of a similar kind described in the literature, some related to chemistry and molecular modelling and others focused on 3D input techniques. The prototype thus contributes by exploring the design of the user interface and how it can affect productivity in this category of programs. It was subjected to a pilot user test to determine what further development is required. Based on this, the thesis proposes that 3D interfaces, while capable of several unique tasks, have yet to overcome significant drawbacks such as limited accuracy and precision. It also suggests that virtual reality can aid spatial understanding, but that virtual hands and controllers are far inferior to real hands for even basic tasks due to the lack of tactile feedback.

    Perceptual Manipulations for Hiding Image Transformations in Virtual Reality

    Users of virtual reality make frequent gaze shifts and head movements to explore their surrounding environment. Saccades are rapid, ballistic, conjugate eye movements that reposition our gaze, and in doing so create large-field motion on our retina. Because of this high-speed retinal motion, the brain suppresses visual signals from the eye during saccades, a perceptual phenomenon known as saccadic suppression. These moments of visual blindness can help hide graphical display updates in virtual reality. In this dissertation, I investigated how the visibility of various image transformations differed during combinations of saccade and head-rotation conditions. Additionally, I studied how hand and gaze interaction affected image-change discrimination in an inattentional blindness task. I conducted four psychophysical experiments in desktop or head-mounted VR. In the eye-tracking studies, users viewed 3D scenes and were cued to make a vertical or horizontal saccade, during which an instantaneous translation or rotation was applied to the virtual camera used to render the scene. Participants were required to indicate the direction of these transformations after each trial. The results showed that the type and size of the image transformation affected change detectability. During horizontal or vertical saccades, rotations about the roll axis were the most detectable, while horizontal and vertical translations were least noticed. In a second, similar study, I added a constant camera motion to simulate a head rotation, and in a third study, I compared active head rotation with a simulated rotation or a static head. I found less sensitivity to transsaccadic horizontal than to vertical camera shifts during simulated or real head pan; conversely, during simulated or real head tilt, observers were less sensitive to transsaccadic vertical than horizontal camera shifts. In addition, in my multi-interactive inattentional blindness experiment, I compared sensitivity to sudden image transformations when a participant used their hand and gaze to move and watch an object versus when they only watched it move. The results confirmed that when a primary task demands focus and attention across two interaction modalities (gaze and hand), a visual stimulus can be hidden more effectively than when only one sense (vision) is involved. Understanding the effect of continuous head movement and attention on the visibility of a sudden transsaccadic change can help optimize the visual performance of gaze-contingent displays and improve user experience. Perceptually suppressed rotations or translations can be used to introduce imperceptible changes in virtual camera pose in applications such as networked gaming, collaborative virtual reality, and redirected walking. This dissertation suggests that such transformations can be more effective and more substantial during active or passive head motion. Moreover, inattentional blindness during an attention-demanding task provides additional opportunities for imperceptible updates to a visual display.
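    A gaze-contingent update of the kind studied here hinges on detecting a saccade quickly and committing the camera change while suppression lasts. The sketch below is a minimal illustration: the 180 deg/s onset threshold and the yaw-only update are assumptions, not the dissertation's experimental protocol. It applies a pending yaw offset only on frames where angular gaze velocity exceeds the saccade threshold.

        import numpy as np

        SACCADE_ONSET_DEG_S = 180.0    # assumed velocity threshold; tune per tracker

        def gaze_velocity_deg_s(prev_dir, cur_dir, dt):
            # Angular speed between two unit gaze-direction vectors.
            cos_a = np.clip(np.dot(prev_dir, cur_dir), -1.0, 1.0)
            return np.degrees(np.arccos(cos_a)) / dt

        def apply_transsaccadic_yaw(camera_yaw, prev_dir, cur_dir, dt, pending_deg):
            # Commit the pending camera rotation only while a saccade is in
            # flight, when saccadic suppression should mask the jump.
            if pending_deg != 0.0 and gaze_velocity_deg_s(prev_dir, cur_dir, dt) > SACCADE_ONSET_DEG_S:
                camera_yaw += pending_deg
                pending_deg = 0.0
            return camera_yaw, pending_deg

        # One 11 ms frame with a ~4 degree gaze jump (~360 deg/s): the pending
        # 2 degree scene rotation is injected during the saccade.
        g0 = np.array([0.0, 0.0, 1.0])
        g1 = np.array([np.sin(np.radians(4)), 0.0, np.cos(np.radians(4))])
        print(apply_transsaccadic_yaw(0.0, g0, g1, 0.011, pending_deg=2.0))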

    08231 Abstracts Collection -- Virtual Realities

    From 1st to 6th June 2008, the Dagstuhl Seminar 08231 "Virtual Realities" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. Virtual Reality (VR) is a multidisciplinary area of research aimed at interactive human-computer mediated simulations of artificial environments. Typical applications include simulation, training, scientific visualization, and entertainment. An important aspect of VR-based systems is the stimulation of the human senses -- typically sight, sound, and touch -- such that a user feels a sense of presence (or immersion) in the virtual environment. Different applications require different levels of presence, with corresponding levels of realism, sensory immersion, and spatiotemporal interactive fidelity. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. Links to extended abstracts or full papers are provided, if available.

    Screen Space Reconfigured

    Screen Space Reconfigured is the first edited volume that critically and theoretically examines the many novel renderings of space brought to us by 21st century screens. Exploring key cases such as post-perspectival space, 3D, vertical framing, haptics, and layering, this volume takes stock of emerging forms of screen space and spatialities as they move from the margins to the centre of contemporary media practice.

    Recent years have seen a marked scholarly interest in spatial dimensions and conceptions of moving image culture, with some theorists claiming that a 'spatial turn' has taken place in media studies and screen practices alike. Yet this is the first book-length study dedicated to on-screen spatiality as such.

    Spanning mainstream cinema, experimental film, video art, mobile screens, and stadium entertainment, the volume includes contributions from such acclaimed authors as Giuliana Bruno and Tom Gunning as well as a younger generation of scholars.

    The Universal Media Book

    We explore the integration of projected imagery with a physical book that acts as a tangible interface to multimedia data. Using a camera-projector pair, a tracking framework is presented wherein the 3D positions of planar pages are monitored as they are turned back and forth by a user, and data is correctly warped and projected onto each page at interactive rates to provide the user with an intuitive mixed-reality experience. The book pages are blank, so traditional camera-based approaches to tracking physical features on the display surface do not apply. Instead, in each frame, feature points are independently extracted from the camera and projector images and matched to recover the geometry of the pages in motion. The book can be loaded with multimedia content, including images and videos. In addition, volumetric datasets can be explored by removing a page from the book and using it as a tool to navigate through a virtual 3D volume.
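    The final step of such a projector-camera pipeline, once page geometry has been recovered from matched features, is a perspective pre-warp of the content. A minimal OpenCV sketch is given below; the calibrated camera-to-projector homography, the projector resolution, and the corner ordering are assumptions for illustration, not details of the paper's system.

        import cv2
        import numpy as np

        def prewarp_for_page(content, page_corners_cam, H_cam_to_proj, proj_size=(1280, 800)):
            # Map the page corners observed by the camera into projector
            # coordinates, then pre-warp the content so it lands on the page.
            h, w = content.shape[:2]
            src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])    # content corners
            cam = np.float32(page_corners_cam).reshape(-1, 1, 2)
            dst = cv2.perspectiveTransform(cam, H_cam_to_proj).reshape(-1, 2)
            H = cv2.getPerspectiveTransform(src, np.float32(dst))
            return cv2.warpPerspective(content, H, proj_size)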