
    EyePACT: eye-based parallax correction on touch-enabled interactive displays

    The parallax effect describes the displacement between the perceived and detected touch locations on a touch-enabled surface. Parallax is a key usability challenge for interactive displays, particularly for those that require thick layers of glass between the screen and the touch surface to protect them from vandalism. To address this challenge, we present EyePACT, a method that compensates for input error caused by parallax on public displays. Our method uses a display-mounted depth camera to detect the user's 3D eye position in front of the display and combines it with the detected touch location to predict the perceived touch location on the surface. We evaluate our method in two user studies in terms of parallax correction performance as well as multi-user support. Our evaluations demonstrate that EyePACT (1) significantly improves accuracy even with varying gap distances between the touch surface and the display, (2) adapts to different levels of parallax by producing significantly larger corrections with larger gap distances, and (3) maintains a significantly large distance between two users' fingers when they interact with the same object. These findings are promising for the development of future parallax-free interactive displays.
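The correction described here is, at heart, a line-of-sight projection: the touch detected on the protective glass is projected along the ray from the user's eye onto the display plane behind it. Below is a minimal geometric sketch of that idea in Python; the coordinate frame, the example gap value, and the omission of refraction in the glass are assumptions for illustration, not the authors' exact model.

```python
def correct_parallax(eye_xyz, touch_xy, gap):
    """Project the touch detected on the glass (z = gap) along the eye's line
    of sight onto the display plane (z = 0).

    eye_xyz  -- (ex, ey, ez): eye position from the depth camera, with ez > gap
    touch_xy -- (tx, ty): detected touch location on the protective glass
    gap      -- distance between the glass touch surface and the display
    """
    ex, ey, ez = eye_xyz
    tx, ty = touch_xy
    # Parameter of the eye->touch ray at the point where it crosses z = 0.
    s = ez / (ez - gap)
    x = ex + s * (tx - ex)
    y = ey + s * (ty - ey)
    return x, y

# Example: eye 60 cm in front of and 20 cm above the display centre, 2 cm gap.
# The corrected point shifts slightly away from the eye's projection.
print(correct_parallax((0.0, 0.20, 0.60), (0.10, 0.00), 0.02))
```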

    Multi-touch 3D Exploratory Analysis of Ocean Flow Models

    Modern ocean flow simulations are generating increasingly complex, multi-layer 3D ocean flow models. However, most researchers are still using traditional 2D visualizations to visualize these models one slice at a time. Properly designed 3D visualization tools can be highly effective for revealing the complex, dynamic flow patterns and structures present in these models. However, the transition from visualizing ocean flow patterns in 2D to 3D presents many challenges, including occlusion and depth ambiguity. Further complications arise from the interaction methods required to navigate, explore, and interact with these 3D datasets. We present a system that employs a combination of stereoscopic rendering, to best reveal and illustrate 3D structures and patterns, and multi-touch interaction, to allow for natural and efficient navigation and manipulation within the 3D environment. Exploratory visual analysis is facilitated through a highly interactive toolset which leverages a smart particle system. Multi-touch gestures allow users to quickly position dye-emitting tools within the 3D model. Finally, we illustrate the potential applications of our system through examples of real-world significance.
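As a rough illustration of what a dye-emitting particle tool does, the sketch below advects seeded particles through a velocity field with a midpoint integration step. The function names, the toy rotational flow, and the integration scheme are assumptions; the paper's smart particle system is not described at this level of detail.

```python
import numpy as np

def advect_particles(positions, velocity_at, dt, steps):
    """Advect seeded 'dye' particles through a steady 3D velocity field using
    a midpoint (RK2) step. `velocity_at` is any callable mapping an (N, 3)
    array of positions to an (N, 3) array of velocities, e.g. a trilinear
    interpolator over an ocean-model grid."""
    p = np.asarray(positions, dtype=float)
    for _ in range(steps):
        k1 = velocity_at(p)
        k2 = velocity_at(p + 0.5 * dt * k1)
        p = p + dt * k2
    return p

# Toy example: a solid-body rotation around the z axis stands in for the
# ocean-model flow; the real system would sample the simulation output.
def toy_flow(p):
    x, y, z = p[:, 0], p[:, 1], p[:, 2]
    return np.stack([-y, x, np.zeros_like(z)], axis=1)

seeds = np.array([[1.0, 0.0, 0.0], [0.5, 0.5, -0.1]])
print(advect_particles(seeds, toy_flow, dt=0.05, steps=20))
```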

    Interacting with the biomolecular solvent accessible surface via a haptic feedback device

    Background: Since the 1950s, computer-based renderings of molecules have been produced to aid researchers in their understanding of biomolecular structure and function. A major consideration for any molecular graphics software is the ability to visualise the three-dimensional structure of the molecule. Traditionally, this was accomplished via stereoscopic pairs of images and later realised with three-dimensional display technologies. Using a haptic feedback device in combination with molecular graphics has the potential to enhance three-dimensional visualisation. Although haptic feedback devices have been used to feel the interaction forces during molecular docking, they have not been used explicitly as an aid to visualisation. Results: A haptic rendering application for biomolecular visualisation has been developed that allows the user to gain three-dimensional awareness of the shape of a biomolecule. By using a water molecule as the probe, modelled as an oxygen atom having hard-sphere interactions with the biomolecule, the process of exploration has the further benefit of being able to determine regions on the molecular surface that are accessible to the solvent. This gives insight into how difficult it is for a water molecule to gain access to or escape from channels and cavities, indicating possible entropic bottlenecks. In the case of liver alcohol dehydrogenase bound to the inhibitor SAD, it was found that there is a channel just wide enough for a single water molecule to pass through. Placing the probe coincident with crystallographic water molecules suggests that they are sometimes located within small pockets that provide a sterically stable environment irrespective of hydrogen bonding considerations. Conclusion: Using the software, named HaptiMol ISAS (available from http://www.haptimol.co.uk), one can explore the accessible surface of biomolecules with a three-dimensional input device, gaining insights into the shape and water accessibility of the biomolecular surface that cannot be so easily attained using conventional molecular graphics software.
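The hard-sphere probe test mentioned above reduces to a distance check: a water-sized probe can occupy a position only if its centre stays at least one probe radius plus one van der Waals radius away from every atom centre. The sketch below illustrates this under assumed atom coordinates, radii, and the commonly used 1.4 Å probe radius; it is not the HaptiMol ISAS implementation.

```python
import numpy as np

# Hypothetical atom data: three atom centres and van der Waals radii, in angstroms.
ATOM_CENTRES = np.array([[0.0, 0.0, 0.0],
                         [3.0, 0.0, 0.0],
                         [1.5, 2.6, 0.0]])
ATOM_RADII = np.array([1.70, 1.70, 1.52])   # e.g. C, C, O
PROBE_RADIUS = 1.4                          # a common water-probe radius

def probe_clashes(probe_centre,
                  centres=ATOM_CENTRES,
                  radii=ATOM_RADII,
                  probe_radius=PROBE_RADIUS):
    """Hard-sphere test: the probe overlaps the molecule if its centre comes
    closer to any atom centre than the sum of the two radii."""
    d = np.linalg.norm(centres - probe_centre, axis=1)
    return bool(np.any(d < radii + probe_radius))

# A probe squeezed between the three atoms clashes; one far above does not.
print(probe_clashes(np.array([1.5, 0.9, 0.0])))   # True  -> sterically blocked
print(probe_clashes(np.array([1.5, 0.9, 6.0])))   # False -> solvent accessible
```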

    Particle tracking stereomicroscopy in optical tweezers: control of trap shape

    We present an optical system capable of generating stereoscopic images to track trapped particles in three dimensions. Two-dimensional particle tracking on each image yields three-dimensional position information. Our approach allows the use of a high numerical aperture (NA = 1.3) objective and a large separation angle, such that particles can be tracked axially with a resolution of 3 nm at 340 Hz. Spatial Light Modulators (SLMs), the diffractive elements used to steer and split laser beams in Holographic Optical Tweezers, are also capable of more general operations. We use one here to vary the ratio of lateral to axial trap stiffness by changing the shape of the beam at the back aperture of the microscope objective. Beams which concentrate their optical power at the extremes of the back aperture give rise to much more efficient axial trapping. The flexibility of using an SLM allows us to create multiple traps with different shapes.
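For symmetric stereo views separated by a known angle, the axial coordinate can be recovered from the disparity between a particle's tracked lateral positions in the two images, while the lateral coordinate is their mean. The sketch below shows that simple reconstruction; the 30 degree half-angle, the units, and the formula itself are illustrative assumptions rather than the paper's exact calibration procedure.

```python
import numpy as np

def stereo_to_3d(xl, xr, y, half_angle_deg):
    """Recover a particle's 3D position from its tracked x coordinates in the
    left and right stereo views (symmetric views separated by 2 * half_angle).
    The lateral position is the mean of the two views; the axial position
    follows from their disparity divided by 2 * tan(half_angle)."""
    t = np.tan(np.radians(half_angle_deg))
    x = 0.5 * (xl + xr)          # lateral position (same units as the input)
    z = (xl - xr) / (2.0 * t)    # axial position from the stereo disparity
    return x, y, z

# Toy example: with a 30 degree half-angle, a 50-unit disparity corresponds to
# roughly 43 units of axial displacement (in whatever calibrated units the
# 2D tracking produces).
print(stereo_to_3d(xl=1025.0, xr=975.0, y=500.0, half_angle_deg=30.0))
```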

    Mechanism for Displaying Three-Dimensional Objects Alongside Video in Virtual Reality

    The systems and methods described herein provide for displaying 3D objects alongside video or other content within a virtual reality environment. The user can see the video but can also see an object next to, behind, above, below, or positioned elsewhere in relation to the video. The user may be able to interact with that object. The object may or may not be related to the video. The object could be placed anywhere in the scene in relation to the video.

    Piloting mobile mixed reality simulation in paramedic distance education

    New pedagogical methods delivered through mobile mixed reality (via a user-supplied mobile phone incorporating 3D printing and augmented reality) are becoming possible in distance education, shifting pedagogy from 2D images, words and videos to interactive simulations and immersive mobile skill training environments. This paper presents insights from the implementation and testing of a mobile mixed reality intervention in an Australian distance paramedic science classroom. The context of this mobile simulation study is skills acquisition in airways management, focusing on direct laryngoscopy with foreign body removal. The intervention aims to assist distance education learners in practicing skills prior to attending mandatory residential schools and helps build a baseline equality between students who study face-to-face and those who study at a distance. Outcomes from the pilot study showed improvements in several key performance indicators among the distance learners, but also demonstrated problems to overcome in the pedagogical method.

    Modeling On and Above a Stereoscopic Multitouch Display

    We present a semi-immersive environment for conceptual design in which virtual mockups are obtained from gestures. We aim to get closer to the way people conceive, create and manipulate three-dimensional shapes. We developed on-and-above-the-surface interaction techniques based on asymmetric bimanual interaction for creating and editing 3D models in a stereoscopic environment. Our approach combines hand and finger tracking in the space on and above a multitouch surface. This combination brings forth an alternative design environment where users can seamlessly switch between interacting on the surface or in the space above it to leverage the benefits of both interaction spaces.
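One simple way to realise the seamless switch between the two interaction spaces is to classify each tracked fingertip by its height above the multitouch surface. The sketch below does exactly that; the thresholds, data structure, and mode names are hypothetical and only illustrate the idea, not the authors' implementation.

```python
from dataclasses import dataclass

TOUCH_HEIGHT = 0.005   # metres above the surface still counted as contact
HOVER_HEIGHT = 0.25    # upper bound of the tracked interaction volume

@dataclass
class Fingertip:
    x: float
    y: float
    z: float            # height above the multitouch surface, in metres

def interaction_mode(tip: Fingertip) -> str:
    """Classify a tracked fingertip into one of the two interaction spaces."""
    if tip.z <= TOUCH_HEIGHT:
        return "on-surface"          # route to 2D multitouch handling
    if tip.z <= HOVER_HEIGHT:
        return "above-surface"       # route to mid-air 3D modelling gestures
    return "idle"                    # outside the tracked volume

print(interaction_mode(Fingertip(0.1, 0.2, 0.002)))   # on-surface
print(interaction_mode(Fingertip(0.1, 0.2, 0.12)))    # above-surface
```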