
    Integrating 2D Mouse Emulation with 3D Manipulation for Visualizations on a Multi-Touch Table

    We present the Rizzo, a multi-touch virtual mouse designed to provide fine-grained interaction for information visualization on a multi-touch table. Our solution enables touch interaction for existing mouse-based visualizations. Previously, this transition to a multi-touch environment was difficult because the mouse emulation offered by touch surfaces is often insufficient to provide full information visualization functionality. We present a unified design combining many Rizzos, which have been designed not only to provide mouse capabilities but also to act as zoomable lenses that make precise information access feasible. The Rizzos and the information visualizations all exist within a touch-enabled 3D window management system. Our approach permits touch interaction both with the 3D windowing environment and with the contents of the individual windows contained therein. We describe an implementation of our technique that augments the VisLink 3D visualization environment to demonstrate how to enable multi-touch capabilities on all visualizations written with the popular prefuse visualization toolkit.

    Gaze-shifting: direct-indirect input with pen and touch modulated by gaze

    Modalities such as pen and touch are associated with direct input but can also be used for indirect input. We propose to combine the two modes for direct-indirect input modulated by gaze. We introduce gaze-shifting as a novel mechanism for switching the input mode based on the alignment of manual input and the user's visual attention. Input in the user's area of attention results in direct manipulation, whereas input offset from the user's gaze is redirected to the visual target. The technique is generic and can be used in the same manner with different input modalities. We show how gaze-shifting enables novel direct-indirect techniques with pen, touch, and combinations of pen and touch input.
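    The mode-switching rule the abstract describes can be sketched as a simple spatial test. This is an illustrative assumption, not the paper's implementation: the alignment radius, `Point`, and `select_input_mode` are hypothetical names and values.

    ```python
    # Hypothetical sketch of gaze-shifting mode selection. The threshold
    # value and all names are illustrative assumptions.
    import math
    from dataclasses import dataclass

    # Assumed radius (pixels) around the gaze point within which manual
    # input counts as being in the user's area of attention.
    GAZE_ALIGNMENT_RADIUS_PX = 120.0

    @dataclass
    class Point:
        x: float
        y: float

    def distance(a: Point, b: Point) -> float:
        return math.hypot(a.x - b.x, a.y - b.y)

    def select_input_mode(touch: Point, gaze: Point) -> str:
        """Return 'direct' when input lands near the user's gaze,
        otherwise 'indirect' (input is redirected to the gazed-at target)."""
        if distance(touch, gaze) <= GAZE_ALIGNMENT_RADIUS_PX:
            return "direct"
        return "indirect"
    ```

    A per-event check like this is what makes the technique modality-agnostic: the same test applies whether `touch` comes from a finger or a pen tip.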

    InfoVis experience enhancement through mediated interaction

    Information visualization is an experience to which both aesthetic representation and interaction contribute. Such an experience can be augmented through close consideration of its major components. Interaction is crucial to the experience, yet it has seldom been adequately explored in the field. We claim that direct mediated interaction can augment such an experience. This paper discusses the reasons behind this claim and proposes a mediated interactive manipulation scheme based on the notion of directness. It also describes the ways in which the claim will be validated. The Literature Knowledge Domain (LKD) is used as the concrete domain around which the discussion is held.

    Improving Pre-Kindergarten Touch Performance

    © ACM, 2014. Author's version of the work, published in Proceedings of the Ninth ACM International Conference on Interactive Tabletops and Surfaces (pp. 163-166). http://doi.acm.org/10.1145/2669485.2669498

    Multi-touch technology provides users with a more intuitive way of interacting. However, pre-kindergarten children, a growing group of potential users, have problems with some basic gestures according to previous studies. This is particularly the case for the double-tap and long-press gestures, which suffer from spurious entry events and time-constrained interactions, respectively. In this paper, we empirically test specific strategies to deal with these issues by evaluating off-the-shelf implementations of these gestures against alternative implementations that follow our design guidelines. The study shows that implementing these guidelines has a positive effect on the success rates of both gestures, making their inclusion feasible in future multi-touch applications targeted at pre-kindergarten children.
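    The two guideline-driven adjustments the abstract names (tolerating spurious entry events for double tap, relaxing the time constraint for long press) can be sketched as a pair of recognizers. All names and the specific timing and slop values below are assumptions for illustration, not the paper's parameters.

    ```python
    # Illustrative gesture recognizers tuned for small children:
    # generous timing windows and a large spatial tolerance ("slop").
    DOUBLE_TAP_MAX_INTERVAL_S = 0.6   # assumed generous gap between taps
    LONG_PRESS_HOLD_S = 0.8           # assumed relaxed hold requirement
    TAP_SLOP_PX = 40.0                # assumed tolerance for small fingers

    def is_double_tap(taps):
        """taps: list of (timestamp_s, x, y). Spurious touches far from
        the first tap are filtered out instead of cancelling the gesture."""
        filtered = [taps[0]] if taps else []
        for t, x, y in taps[1:]:
            px, py = filtered[-1][1], filtered[-1][2]
            if abs(x - px) <= TAP_SLOP_PX and abs(y - py) <= TAP_SLOP_PX:
                filtered.append((t, x, y))
        return (len(filtered) >= 2 and
                filtered[1][0] - filtered[0][0] <= DOUBLE_TAP_MAX_INTERVAL_S)

    def is_long_press(press_t, release_t, moved_px):
        """A press held long enough, without drifting too far, counts."""
        return (release_t - press_t) >= LONG_PRESS_HOLD_S and moved_px <= TAP_SLOP_PX
    ```

    The key design choice is that a stray touch (e.g. a resting palm) between the two taps is ignored rather than resetting the recognizer, which is one plausible reading of "dealing with spurious entry events".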

    An Inertial Device-based User Interaction with Occlusion-free Object Handling in a Handheld Augmented Reality

    Augmented Reality (AR) is a technology that merges virtual objects with real environments in real time. In AR, the interaction between the end user and the AR system has always been a frequently discussed topic. Handheld AR is a newer approach that delivers enriched 3D virtual objects when the user looks through the device's video camera. Among the most widely adopted handheld devices today are smartphones, which are equipped with powerful processors, cameras for capturing still images and video, and a range of sensors capable of tracking the user's location, orientation, and motion; these modern smartphones offer a sophisticated platform for implementing handheld AR applications. However, handheld displays often inherit interaction metaphors developed for head-mounted displays, which can be tied to hardware that is inappropriate for handheld use. This paper therefore discusses a proposed real-time inertial device-based interaction technique for 3D object manipulation and explains the methods used for selection, holding, translation, and rotation. The technique aims to overcome a limitation of 3D object manipulation by letting the user hold the device with both hands, without needing to stretch out one hand to manipulate the 3D object. The paper also reviews previous work in AR and handheld AR. Finally, it provides experimental results offering new metaphors for manipulating 3D objects with handheld devices.

    Enhanced device-based 3D object manipulation technique for handheld mobile augmented reality

    3D object manipulation is one of the most important tasks for handheld mobile Augmented Reality (AR) to reach its practical potential, especially for real-world assembly support. In this context, the techniques used to manipulate 3D objects are an important research area. This study therefore developed an improved device-based interaction technique within handheld mobile AR interfaces to solve the large-range 3D object rotation problem, as well as issues related to 3D object position and orientation deviations during manipulation. The research first enhanced the existing device-based 3D object rotation technique with a control structure that uses the handheld device's tilting and skewing amplitudes to determine the rotation axes and directions of the 3D object. Whenever the device is tilted or skewed beyond the threshold amplitudes, the 3D object rotates continuously at a pre-defined angular speed per second, preventing over-rotation of the handheld device itself. Such over-rotation is common when using the existing technique to perform large-range 3D object rotations, and it must be avoided because it causes 3D object registration errors and a display issue in which the 3D object does not appear consistently within the user's field of view. Secondly, the existing device-based 3D object manipulation technique was restructured by separating the degrees of freedom (DOF) of 3D object translation and rotation, preventing the position and orientation deviations caused by integrating both sets of DOF into the same control structure. The result is an improved device-based interaction technique with better task completion times, both for 3D object rotation in isolation and for 3D object manipulation as a whole, within handheld mobile AR interfaces.
    A pilot test was carried out before the main tests to determine several pre-defined values used in the control structure of the proposed 3D object rotation technique. A series of 3D object rotation and manipulation tasks was then designed and developed as separate experiments to benchmark the proposed rotation and manipulation techniques against existing ones on task completion time (s). Two groups of participants aged 19 to 24 were recruited, one per experiment, with sixteen participants in each group. Each participant completed twelve trials, for a total of 192 trials per experiment across all participants. Repeated-measures analysis was used to analyze the data. The results show statistically that the developed 3D object rotation technique markedly outpaced the existing technique, with task completion times 2.04 s shorter on easy tasks and 3.09 s shorter on hard tasks when comparing mean times over all successful trials. For the failed trials, the proposed rotation technique was also 4.99% more accurate on easy tasks and 1.78% more accurate on hard tasks than the existing technique. Similar results were obtained for the 3D object manipulation tasks, where the proposed manipulation technique achieved an overall task completion time 9.529 s shorter than the existing technique. Based on these findings, an improved device-based interaction technique was successfully developed to address the insufficient functionality of the current technique.
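    The control structure described above (tilt/skew amplitudes select the rotation axis; past a threshold, the object rotates at a constant pre-defined angular speed rather than tracking the device angle) can be sketched per frame. The threshold, speed, and names are illustrative assumptions, not the study's calibrated values.

    ```python
    # Minimal sketch of threshold-triggered, constant-speed rotation.
    # Values below are assumed placeholders for the pilot-tested ones.
    TILT_THRESHOLD_DEG = 15.0     # assumed dead zone before rotation starts
    ANGULAR_SPEED_DEG_S = 45.0    # assumed pre-defined angular speed

    def rotation_step(tilt_deg, skew_deg, dt_s):
        """Return (axis, signed_angle_deg) to apply this frame, or
        (None, 0.0) while the device stays inside the dead zone."""
        # The dominant amplitude picks the axis; its sign picks the direction.
        if abs(tilt_deg) >= abs(skew_deg):
            amplitude, axis = tilt_deg, "x"
        else:
            amplitude, axis = skew_deg, "y"
        if abs(amplitude) < TILT_THRESHOLD_DEG:
            return None, 0.0
        direction = 1.0 if amplitude > 0 else -1.0
        return axis, direction * ANGULAR_SPEED_DEG_S * dt_s
    ```

    Because the rotation rate is constant once the threshold is crossed, the user can hold a modest, comfortable tilt for a large-range rotation instead of rotating the device itself through a large angle, which is the over-rotation problem the abstract describes.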

    Combining Multi-touch Input and Device Movement for 3D Manipulations in Mobile Augmented Reality Environments

    Nowadays, handheld devices are capable of displaying augmented environments in which virtual content overlaps reality. Interacting with these environments requires a manipulation technique, whose objective is to define how the input data modify the properties of the virtual objects. Current devices have multi-touch screens that can serve as input. Additionally, the position and rotation of the device itself can also be used as input, creating both an opportunity and a design challenge. In this paper we compare three manipulation techniques that employ multi-touch input, device movement, and a combination of both. A user evaluation on a docking task revealed that combining multi-touch and device movement yields the best task completion time and efficiency. Nevertheless, using only device movement and orientation is more intuitive, and it performs worse only for large rotations.
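    One plausible split of duties for the combined technique the study found fastest is: touch drags translate the object in the view plane, while the device's change in orientation rotates it. The gain, state layout, and function names below are assumptions for illustration, not the paper's implementation.

    ```python
    # Hedged sketch: per-frame update combining touch and device pose.
    from dataclasses import dataclass

    @dataclass
    class ObjectState:
        position: tuple   # (x, y, z) in world units
        rotation: tuple   # (rx, ry, rz) Euler angles in degrees

    TRANSLATE_GAIN = 0.01  # assumed world units per screen pixel

    def apply_combined_input(state, touch_delta_px, device_rotation_delta_deg):
        """touch_delta_px: (dx, dy) finger drag on the screen;
        device_rotation_delta_deg: (drx, dry, drz) change in the
        device's orientation since the last frame."""
        x, y, z = state.position
        dx, dy = touch_delta_px
        # Screen y grows downward, world y upward, hence the sign flip.
        new_pos = (x + dx * TRANSLATE_GAIN, y - dy * TRANSLATE_GAIN, z)
        new_rot = tuple(r + dr for r, dr in
                        zip(state.rotation, device_rotation_delta_deg))
        return ObjectState(new_pos, new_rot)
    ```

    Separating translation (touch) from rotation (device movement) also echoes the DOF-separation argument made in the preceding abstract: each input channel controls one set of degrees of freedom, so the two tasks cannot interfere.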
