
    Relative and Absolute Mappings for Rotating Remote 3D Objects on Multi-Touch Tabletops

    The use of human fingers as an object selection and manipulation tool has raised significant challenges when interacting with direct-touch tabletop displays. This is particularly an issue when manipulating remote objects in 3D environments, as finger presses can obscure distant objects that are rendered very small. Techniques to support remote manipulation either provide absolute mappings between finger presses and object transformation or rely on tools that support relative mappings to selected objects. This paper explores techniques to manipulate remote 3D objects on direct-touch tabletops using absolute and relative mapping modes. A user study was conducted to compare absolute and relative mappings in support of a rotation task. Overall results did not show a statistically significant difference between the two mapping modes in either task completion time or number of touches. However, the absolute mapping mode was found to be less efficient than the relative mapping mode when rotating a small object, and participants preferred relative mapping for small objects. Four mapping techniques were then compared for perceived ease of use and learnability. The touchpad, voodoo doll and telescope techniques were found to be comparable for manipulating remote objects in a 3D scene. A flying camera technique was considered too complex and required increased effort from participants. Participants preferred an absolute mapping technique augmented to support small object manipulation, e.g. the voodoo doll technique.
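
    The distinction between the two mapping modes can be pictured with a short sketch (Python; this is our own illustration rather than the paper's implementation, and all names are hypothetical). Absolute mapping binds the object's orientation to the angle of the finger around the object's projected centre, while relative mapping accumulates scaled drag deltas regardless of where the touch began:

        import math

        def absolute_rotation(touch_x, touch_y, centre_x, centre_y):
            # Absolute mapping: the object's angle snaps to the angle of
            # the finger position around the object's projected centre.
            return math.atan2(touch_y - centre_y, touch_x - centre_x)

        def relative_rotation(current_angle, drag_dx, gain=0.01):
            # Relative mapping: the horizontal drag delta is scaled by a
            # gain and accumulated, independent of where the touch began.
            return current_angle + gain * drag_dx

    A gain below 1 lets a long drag produce a fine rotation, which suggests why relative mapping can be more efficient on very small objects.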

    AirMouse: Finger Gesture for 2D and 3D Interaction

    This paper presents AirMouse, a new interaction technique based on finger gestures above the laptop's keyboard. At a reasonably low cost, the technique can replace the traditional methods for pointing in two or three dimensions. Moreover, device-switching time is reduced and no surface beyond the laptop itself is needed. In a 2D pointing evaluation, a vision-based implementation of the technique is compared with commonly used devices. The same implementation is also compared with the two most commonly used 3D pointing devices. The two user experiments show the benefits of this versatile technique: it is easy to learn, intuitive and efficient. In particular, our experiments show that performance with AirMouse is promising in comparison with a touchpad and with dedicated 3D pointing devices, and that AirMouse outperforms FlowMouse, a previous solution using fingers above the keyboard.
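
    As a rough illustration of the pointing stage only (the vision pipeline is out of scope here, and the function below is our assumption, not the authors' code), the tracked fingertip position above the keyboard can be linearly remapped to screen coordinates:

        def fingertip_to_cursor(finger_xy, workspace_wh, screen_wh):
            # Linearly remap a fingertip (x, y), measured within the
            # tracked region above the keyboard, to screen pixels.
            fx, fy = finger_xy
            ww, wh = workspace_wh
            sw, sh = screen_wh
            return (fx / ww * sw, fy / wh * sh)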

    An Evaluation of Input Controls for In-Car Interactions

    The way drivers operate in-car systems is rapidly changing as traditional physical controls, such as buttons and dials, are replaced by touchscreens and touch-sensing surfaces. This has the potential to increase driver distraction and error, as controls may be harder to find and use. This paper presents an in-car, on-the-road driving study which examined three key types of input control to investigate their effects: a physical dial, pressure-based input on a touch surface, and touch input on a touchscreen. The physical dial and pressure-based input were also evaluated with and without haptic feedback. The study was conducted with users performing a list-based targeting task using the different controls while driving on public roads. Eye-gaze was recorded to measure distraction from the primary task of driving. The results showed that target accuracy was high across all input methods (greater than 94%). Pressure-based targeting was the slowest, while directly tapping on the targets was the fastest selection method. Pressure-based input also caused the largest number of glances towards the touchscreen, but the duration of each glance was shorter than when directly touching the screen. Our study will enable designers to make more appropriate design choices for future in-car interactions.
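
    The paper does not spell out its pressure mapping, but a minimal sketch of one plausible scheme suggests why pressure-based list targeting can be slow yet need only short glances: the whole list is addressable from a single contact point. All names and constants below are assumptions:

        def pressure_to_index(pressure, max_pressure, n_items):
            # Divide the usable pressure range into one band per list item
            # and clamp to the last item so over-pressing cannot overshoot.
            band = max_pressure / n_items
            return min(int(pressure / band), n_items - 1)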

    Evaluation of Physical Finger Input Properties for Precise Target Selection

    The multitouch tabletop display provides a collaborative workspace for multiple users around a table. Users can perform direct and natural multitouch interaction to select target elements with their bare fingers. However, the physical size of the fingertip varies from one person to another, which introduces the well-known fat finger problem and results in imprecise selection of small target elements during direct multitouch input. In this respect, an attempt is made to evaluate the physical finger input properties, i.e. contact area and shape, in the context of imprecise selection.
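
    A minimal sketch of the two properties under evaluation, assuming the touch controller exposes a binary blob of activated sensor cells for one contact (NumPy and all names here are our illustration, not the study's apparatus):

        import numpy as np

        def contact_properties(blob):
            # `blob` is a 2D 0/1 array of activated cells for one finger
            # contact; assumes at least one cell is activated.
            ys, xs = np.nonzero(blob)
            area = xs.size                        # contact area in cells
            centroid = (xs.mean(), ys.mean())     # candidate selection point
            # Bounding-box aspect ratio as a crude shape descriptor.
            shape = (np.ptp(xs) + 1) / (np.ptp(ys) + 1)
            return area, centroid, shape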

    TapGazer: Text Entry with finger tapping and gaze-directed word selection

    While using VR, efficient text entry is a challenge: users cannot easily locate standard physical keyboards, and keys are often out of reach, e.g. when standing. We present TapGazer, a text entry system where users type by tapping their fingers in place. Users can tap anywhere as long as the identity of each tapping finger can be detected with sensors. Ambiguity between different possible input words is resolved by selecting target words with gaze. If gaze tracking is unavailable, ambiguity is resolved by selecting target words with additional taps. We evaluated TapGazer for seated and standing VR: seated novice users using touchpads as tap surfaces reached 44.81 words per minute (WPM), 79.17% of their QWERTY typing speed. Standing novice users tapped on their thighs with touch-sensitive gloves, reaching 45.26 WPM (71.91%). We analyze TapGazer with a theoretical performance model and discuss its potential for text input in future AR scenarios.
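
    The ambiguity being resolved is essentially T9-style: every word collapses to the sequence of fingers that would type it, and gaze picks among the words sharing that sequence. A small sketch of that idea follows; the finger-to-letter grouping is a made-up stand-in for the paper's actual assignment:

        # Map each letter to one of eight typing fingers (hypothetical grouping).
        FINGER_OF = {c: i for i, group in enumerate(
            ["qaz", "wsx", "edc", "rfvtgb", "yhnujm", "ik", "ol", "p"])
            for c in group}

        def finger_code(word):
            return tuple(FINGER_OF[c] for c in word)

        def candidates(taps, lexicon):
            # All words whose finger code matches the tap sequence; the
            # real system then disambiguates with gaze (or extra taps).
            return [w for w in lexicon if finger_code(w) == tuple(taps)]

        # "for", "rot" and "got" all collapse to the tap sequence (3, 6, 3).
        print(candidates([3, 6, 3], ["for", "rot", "got", "the"]))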

    SpaceTop: integrating 2D and spatial 3D interactions in a see-through desktop environment

    SpaceTop is a concept that fuses 2D and spatial 3D interactions in a single workspace. It extends the traditional desktop interface with interaction technology and visualization techniques that enable seamless transitions between 2D and 3D manipulation. SpaceTop allows users to type, click, and draw in 2D, and to directly manipulate interface elements that float in the 3D space above the keyboard. It makes it possible to easily switch from one modality to another, or to use two modalities simultaneously with different hands. We introduce hardware and software configurations for co-locating these interaction modalities in a unified workspace using depth cameras and a transparent display. We describe new interaction and visualization techniques that allow users to interact with 2D elements floating in 3D space, and present results from a preliminary user study that indicate the benefits of such hybrid workspaces.
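
    One way to picture the seamless switching, under the assumption that the depth camera reports each hand's height above the keyboard (our sketch, not the system's actual logic):

        def interaction_mode(hand_height_mm, threshold_mm=30):
            # Hands resting on or near the keyboard stay in 2D (typing,
            # clicking); hands lifted into the volume above it switch to
            # 3D manipulation of the floating elements.
            return "2D" if hand_height_mm < threshold_mm else "3D"

    Because each hand would be classified independently, one hand could keep typing in 2D while the other manipulates a floating element in 3D.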

    A method for viewing and interacting with medical volumes in virtual reality

    The medical field has long benefited from advancements in diagnostic imaging technology. Medical images created through methods such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) are used by medical professionals to non-intrusively peer into the body to make decisions about surgeries. Over time, the viewing medium of medical images has evolved from X-ray film negatives to stereoscopic 3D displays, with each new development enhancing the viewer's ability to discern detail or decreasing the time needed to produce and render a body scan. Though doctors and surgeons are trained to view medical images in 2D, some are choosing to view body scans in 3D through volume rendering. While traditional 2D displays can be used to display 3D data, a viewing method that incorporates depth conveys more information to the viewer. One device that has shown promise in medical image viewing applications is the Virtual Reality Head-Mounted Display (VR HMD). VR HMDs have recently increased in popularity, with several commodity devices released within the last few years. The Oculus Rift, HTC Vive, and Windows Mixed Reality HMDs like the Samsung Odyssey offer higher-resolution screens, more accurate motion tracking, and lower prices than earlier HMDs. They also include motion-tracked handheld controllers meant for navigation and interaction in video games. Because of their popularity and low cost, medical volume viewing software that is compatible with these headsets would be accessible to a wide audience.

    However, the introduction of VR to medical volume rendering presents difficulties in implementing consistent user interactions and ensuring performance. Though all three headsets require unique driver software, they are compatible with OpenVR, a middleware that standardizes communication between the HMD, the HMD's controllers, and VR software. However, the controllers included with the HMDs each have a slightly different control layout. Furthermore, buttons, triggers, touchpads, and joysticks that share the same hand position between devices do not report values to OpenVR in the same way. Implementing volume rendering functions like clipping and tissue density windowing on VR controllers could improve the user's experience over mouse-and-keyboard schemes through the use of tracked hand and finger movements. To create a control scheme compatible with multiple HMDs, a way of mapping controls differently depending on the device was developed.

    Additionally, volume rendering is a computationally intensive process, even more so when rendering for an HMD. By using techniques like GPU raytracing with modern GPUs, real-time framerates are achievable on desktop computers with traditional displays. However, achieving high framerates is even more important when viewing with a VR HMD due to its higher level of immersion: because the 3D scene occupies most of the user's field of view, low or choppy framerates contribute to feelings of motion sickness. This was mitigated by decreasing volume rendering quality in situations where the framerate drops below acceptable levels.

    The volume rendering and VR interaction methods described in this thesis were demonstrated in an application developed for immersive viewing of medical volumes. This application places the user and a medical volume in a 3D VR environment, allowing the user to manually place clipping planes, adjust the tissue density window, and move the volume to achieve different viewing angles with handheld motion-tracked controllers. The result shows that GPU-raytraced medical volumes can be viewed and interacted with in VR using commodity hardware, and that a control scheme can be mapped to allow the same functions on different HMD controllers despite differences in layout.
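
    The framerate-driven quality reduction described above can be sketched as a simple feedback loop on the ray-marching step size; the function name, constants, and adjustment factors below are illustrative assumptions, not taken from the thesis:

        def adjust_step_size(step, fps, target_fps=90.0, lo=0.5, hi=4.0):
            # Coarser steps mean fewer samples per ray, trading image
            # quality for the framerate needed to avoid motion sickness.
            if fps < 0.9 * target_fps:
                step = min(step * 1.25, hi)   # falling behind: render coarser
            elif fps > target_fps:
                step = max(step / 1.25, lo)   # headroom: refine the image
            return step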

    Improving expressivity in desktop interactions with a pressure-augmented mouse

    Desktop-based Windows, Icons, Menus and Pointers (WIMP) interfaces have changed very little in the last 30 years, and are still limited by a lack of powerful and expressive input devices and interactions. In order to make desktop interactions more expressive and controllable, expressive input mechanisms like pressure input must be made available to desktop users. One way to provide pressure input to these users is through a pressure-augmented computer mouse; however, before pressure-augmented mice can be developed, design information must be provided to mouse developers. The problem we address in this thesis is the lack of ergonomics and performance information for the design of pressure-augmented mice. Our solution was to provide empirical performance and ergonomics information for pressure-augmented mice through five experiments. With the results of these experiments we identify optimal design parameters for pressure-augmented mice and provide a set of recommendations for future pressure-augmented mouse designs.
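
    As one hedged illustration of the kind of design parameter such experiments inform (number of distinguishable pressure levels, thresholds between them), pressure input is commonly discretised into levels with hysteresis so that sensor noise near a boundary does not flicker between selections. Everything below is assumed, not taken from the thesis:

        def pressure_level(raw, prev_level, bands=(100, 300, 600), slack=20):
            # Count how many band thresholds the raw reading exceeds...
            level = sum(raw > b for b in bands)
            # ...but hold the previous level while the reading sits inside
            # a slack zone around any threshold, suppressing flicker.
            for b in bands:
                if abs(raw - b) < slack:
                    return prev_level
            return level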