39 research outputs found

    3DTouch: A wearable 3D input device with an optical sensor and a 9-DOF inertial measurement unit

    Full text link
    We present 3DTouch, a novel wearable 3D input device worn on the fingertip for 3D manipulation tasks. 3DTouch is designed to fill the gap for a 3D input device that is self-contained, mobile, and works universally across various 3D platforms. This paper presents a low-cost approach to designing and implementing such a device, relying on a relative positioning technique that combines an optical laser sensor with a 9-DOF inertial measurement unit. The device employs touch input for the benefits of passive haptic feedback and movement stability; as a result, 3DTouch is conceptually less fatiguing to use over many hours than free-space 3D spatial input devices. We propose a set of 3D interaction techniques, including selection, translation, and rotation, using 3DTouch. An evaluation also demonstrates the device's tracking accuracy of 1.10 mm and 2.33 degrees for subtle touch interaction in 3D space. Modular solutions like 3DTouch open up a whole new design space of interaction techniques to build on. Comment: 8 pages, 7 figures
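    The relative positioning idea described above can be illustrated with a minimal sketch: per-frame 2D displacements from the optical sensor are lifted into the sensor plane and rotated into the world frame using the IMU's orientation estimate, then accumulated into a 3D path. The function names, the (w, x, y, z) quaternion convention, and the assumption that the optical sensor plane coincides with the IMU's local x-y plane are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def quat_rotate(q, v):
        """Rotate vector v by the unit quaternion q = (w, x, y, z)."""
        w, x, y, z = q
        r = np.array([x, y, z])
        return v + 2.0 * np.cross(r, np.cross(r, v) + w * v)

    def integrate_touch(displacements_2d, orientations):
        """Accumulate a 3D fingertip path from per-frame 2D optical
        displacements (in the sensor plane) and per-frame IMU
        orientation quaternions."""
        position = np.zeros(3)
        path = [position.copy()]
        for (dx, dy), q in zip(displacements_2d, orientations):
            # Lift the planar optical delta into the sensor frame...
            delta_sensor = np.array([dx, dy, 0.0])
            # ...then rotate it into the world frame via the IMU orientation.
            position = position + quat_rotate(q, delta_sensor)
            path.append(position.copy())
        return np.array(path)
    ```

    With the identity orientation the path simply integrates the raw optical deltas; a 90-degree yaw maps a sensor-frame x displacement onto the world y axis.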

    Evaluating body tracking interaction in floor projection displays with an elderly population

    Get PDF
    The recent development of affordable full-body tracking sensors has made this technology accessible to millions of users and opens the opportunity to develop new natural user interfaces. In this paper we focus on developing two natural user interfaces that an elderly population can easily use to interact with a floor projection display. One interface uses the positions of the feet to control a cursor and the distance between the feet to activate interaction. In the second interface, the cursor is controlled by ray-casting the forearm onto the projection, and interaction is activated by hand pose. The interfaces were tested by 19 elderly participants in a point-and-click and a drag-and-drop task using a between-subjects experimental design. We assessed usability and perceived workload for each interface, as well as performance indicators. Results show a clear preference among participants for the feet-controlled interface, along with marginally better performance for this method.
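    The forearm ray-casting interface described above reduces to a simple ray-plane intersection: extend the elbow-to-wrist segment until it hits the floor plane and use the hit point as the cursor. A minimal sketch, assuming skeleton joints in meters with z as height and the floor at z = 0 (the function name and coordinate convention are illustrative):

    ```python
    import numpy as np

    def forearm_cursor(elbow, wrist, floor_z=0.0):
        """Intersect the forearm ray (elbow -> wrist) with the floor
        plane z = floor_z; return the 2D cursor position (x, y), or
        None when the arm points away from or parallel to the floor."""
        elbow = np.asarray(elbow, dtype=float)
        wrist = np.asarray(wrist, dtype=float)
        direction = wrist - elbow
        if abs(direction[2]) < 1e-9:      # forearm parallel to the floor
            return None
        t = (floor_z - elbow[2]) / direction[2]
        if t <= 0:                        # intersection behind the arm
            return None
        hit = elbow + t * direction
        return hit[:2]                    # (x, y) on the floor
    ```

    For example, an elbow at height 2 m with the wrist one step forward and down intersects the floor two forearm-lengths ahead; a level or upward-pointing arm yields no cursor, which is a natural place to deactivate interaction.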

    Laid-Back, Touchless Collaboration around Wall-size Displays: Visual Feedback and Affordances

    Get PDF
    To facilitate interaction and collaboration around ultra-high-resolution wall-size displays (WSDs), post-WIMP interaction modes such as touchless and multi-touch have opened up new, unprecedented opportunities. Yet to fully harness this potential, we still need to understand the fundamental design factors behind successful WSD experiences. These include visual feedback for touchless interactions, novel interface affordances for at-a-distance, high-bandwidth input, and the technosocial ingredients that support laid-back, relaxed collaboration around WSDs. This position paper highlights our progress in a long-term research program that examines these issues and spurs new, exciting research directions. We recently completed a study investigating the properties of visual feedback in touchless WSD interaction, and we discuss some of our findings here. Our work exemplifies how research in WSD interaction calls for re-conceptualizing basic, first principles of Human-Computer Interaction (HCI) to pioneer a suite of next-generation interaction environments.

    A Design Space Centered on Body Functions for Generating and Evaluating Novel Interaction Techniques

    Get PDF
    This thesis introduces BodyScape, a body-centric design space that accounts for how users coordinate movements within and across their own limbs when interacting with a wide range of input devices and across multiple display surfaces. It introduces a graphical notation that describes interaction techniques in terms of (1) motor assemblies responsible for performing an atomic control task (input motor assemblies) or for bringing the body into position to visually perceive system output (output motor assemblies), and (2) the coordination of these motor assemblies' movements, relative to the user's body or fixed in the world, with respect to the interactive environment. The thesis applies BodyScape to: 1) characterize the role of device support in a set of novel bimanual interaction techniques for hand-held devices; 2) analyze interference among concurrent movements when an interaction and its support involve the same limb; and 3) compare twelve pan-and-zoom techniques on a wall-sized display to determine the roles of guidance and interference on performance. Characterizing interaction with BodyScape clarifies the role of device support on the user's balance, and thus on comfort and performance. It also allows designers to identify situations in which multiple body movements interfere with each other, with a corresponding decrease in performance. Finally, it reveals the trade-offs to consider when combining several interaction techniques, enabling the analysis and generation of a variety of multi-surface interaction techniques. More generally, I argue that adopting a perspective centered on the body functions engaged during interaction makes it possible to manage the complexity of designing interaction techniques in multi-surface environments, and beyond.

    Understanding Visual Feedback in Large-Display Touchless Interactions: An Exploratory Study

    Get PDF
    Touchless interactions synthesize input and output from physically disconnected motor and display spaces without any haptic feedback. In its absence, touchless interactions rely primarily on visual cues, yet the properties of this visual feedback remain unexplored. This paper systematically investigates how large-display touchless interactions are affected by (1) the type of visual feedback (discrete, partial, or continuous); (2) alternative forms of touchless cursors; (3) approaches to visualizing target selection; and (4) persistent visual cues that support out-of-range and drag-and-drop gestures. Results suggest that continuous visual feedback was more effective than partial feedback; users disliked opaque cursors, and efficiency did not increase when cursors were larger than the display artifacts they targeted. Semantic visual feedback located at the display border improved users' efficiency in returning to the display range; however, echoing the path of movement during drag-and-drop operations decreased efficiency. Our findings contribute key ingredients for designing suitable visual feedback in large-display touchless environments. This work was partially supported by an IUPUI Research Support Funds Grant (RSFG).
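    The border cue for out-of-range gestures mentioned above can be sketched as a small helper that clamps an off-display cursor to the nearest border point and reports which edge cue to highlight. This is an illustrative sketch only; the function name, coordinate convention (origin at top-left, y growing downward), and string labels are assumptions, not the paper's implementation.

    ```python
    def border_feedback(x, y, width, height):
        """Clamp an off-display touchless cursor to the display border
        and list the edge cues to highlight (empty when in range)."""
        cx = min(max(x, 0.0), float(width))
        cy = min(max(y, 0.0), float(height))
        cues = []
        if x < 0:
            cues.append("left")
        elif x > width:
            cues.append("right")
        if y < 0:
            cues.append("top")
        elif y > height:
            cues.append("bottom")
        return (cx, cy), cues
    ```

    A cursor drifting past the left edge, for instance, stays pinned at the border with a "left" cue, giving the user a persistent hint of which direction returns the hand to the tracked range.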

    An Interaction Model for Visualizations Beyond The Desktop

    Full text link