Combining Bimanual Interaction and Teleportation for 3D Manipulation on Multi-Touch Wall-sized Displays
While multi-touch devices are well established in our everyday life, they are becoming larger and larger. Large screens such as wall-sized displays are now equipped with multi-touch capabilities, and multi-touch wall-sized displays will become widespread in the near future in settings such as public places and meeting rooms. These new devices are an interesting opportunity for interacting with 3D virtual environments: the large display surface offers good immersion, while the multi-touch capabilities could make interaction with 3D content accessible to the general public. In this paper, we explore touch-based 3D interaction in the situation where users are immersed in a 3D virtual environment and move in front of a vertical wall-sized display. We design In(SITE), a bimanual touch-based technique combined with object teleportation features that enables users to interact on a large wall-sized display. This technique is compared with a standard 3D interaction technique for performing 6-degrees-of-freedom manipulation tasks on a wall-sized display. The results of two controlled experiments show that participants reach the same level of performance in completion time and better precision for fine adjustments of object position with the In(SITE) technique. They also suggest that combining object teleportation with both techniques improves translations in terms of ease of use, fatigue, and user preference.
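The abstract does not describe In(SITE)'s implementation, but the contrast it draws between continuous touch-driven translation (for fine adjustment) and object teleportation (for large movements) can be sketched roughly as follows. This is a minimal Python illustration under our own assumptions; the function names and the way a 3D target would be derived from a touch on the wall are hypothetical, not the authors' design.

```python
import numpy as np

# A minimal sketch (not the authors' In(SITE) implementation) contrasting
# continuous translation with "object teleportation" for positioning an object.

def continuous_translate(position: np.ndarray, drag_delta: np.ndarray) -> np.ndarray:
    """Fine adjustment: move the object by a small, touch-driven offset."""
    return position + drag_delta

def teleport(target_point: np.ndarray) -> np.ndarray:
    """Coarse placement: jump the object directly to a designated 3D target.
    How that target is derived (e.g. from a touch point on the wall-sized
    display plus a depth cue) is an assumption; the abstract does not say."""
    return target_point.copy()

# Usage: teleport across the scene first, then refine with small translations.
pos = np.array([0.0, 1.0, 2.0])
pos = teleport(np.array([5.0, 1.2, -3.0]))
pos = continuous_translate(pos, np.array([0.02, 0.0, -0.01]))
```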
Memorability of pre-designed and user-defined gesture sets
We studied the memorability of free-form gesture sets for invoking actions. We compared three types of gesture sets: user-defined gesture sets, gesture sets designed by the authors, and random gesture sets in three studies with 33 participants in total. We found that user-defined gestures are easier to remember, both immediately after creation and on the next day (up to a 24% difference in recall rate compared to pre-designed gestures). We also discovered that the differences between gesture sets are mostly due to association errors (rather than gesture form errors), that participants prefer user-defined sets, and that they think user-defined gestures take less time to learn. Finally, we contribute a qualitative analysis of the tradeoffs involved in gesture type selection and share our data and a video corpus of 66 gestures for replicability and further analysis.
The design and empirical evaluations of 3D positioning techniques for pressure-based touch control on mobile devices
Previous three-degrees-of-freedom (3-DOF) 3D touch translation techniques require more than one finger (usually two hands), which limits their usability on mobile devices that must typically be held in one hand. Given that pressure-sensitive touch screens will become ubiquitous in the near future, we present a pressure-based 3-DOF 3D positioning technique that uses only one finger. Our technique measures the normal force of the touch and uses it to represent the depth value during 3D translation. We then conducted several groups of tightly controlled user studies to determine (1) how different pressure-recognition strategies affect 3D translation and (2) how pressure-based manipulation performs compared with the previous two-fingered technique. Finally, we discuss guidelines to help developers design pressure-sensing techniques for 3D manipulation.
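The abstract states that the normal force of a touch is mapped to the depth axis so that a single finger can perform 3-DOF positioning. The sketch below illustrates two plausible pressure-recognition strategies; the strategy names (absolute position control vs. rate control) and parameters such as rest_pressure and max_speed are our assumptions for illustration, not the paper's actual design.

```python
# A minimal sketch of mapping normalized touch pressure to the depth axis,
# in the spirit of the technique described above. Both strategies shown are
# illustrative assumptions; the abstract does not name the strategies tested.

def depth_position_control(pressure: float, depth_range: float = 1.0) -> float:
    """Absolute mapping: normalized pressure in [0, 1] sets the depth directly."""
    pressure = min(max(pressure, 0.0), 1.0)
    return pressure * depth_range

def depth_rate_control(current_depth: float, pressure: float, dt: float,
                       max_speed: float = 0.5, rest_pressure: float = 0.3) -> float:
    """Rate mapping: pressure above/below a resting level pushes the object
    deeper/shallower at a proportional speed, integrated over the frame time dt."""
    pressure = min(max(pressure, 0.0), 1.0)
    velocity = (pressure - rest_pressure) * max_speed
    return current_depth + velocity * dt

# Usage: one finger drags in x/y on the screen while its normal force drives z.
z = 0.0
for frame_pressure in (0.2, 0.5, 0.8, 0.4):
    z = depth_rate_control(z, frame_pressure, dt=1 / 60)
```

A rate-control mapping like the second function avoids tying the object's depth to a pressure level the finger must hold constant, which is one design choice a pressure-based technique would have to weigh against the directness of absolute mapping.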