
    glueTK: A Framework for Multi-modal, Multi-display Interaction

    This thesis describes glueTK, a framework for human-machine interaction that allows the integration of multiple input modalities and interaction across different displays. Building on the framework, several contributions toward integrating pointing gestures into interactive systems are presented. To address the design of interfaces for the wide range of supported displays, a concept for transferring interaction performance from one system to another is defined.

    Summon and Select: Rapid Interaction with Interface Controls in Mid-air

    Current freehand interaction with large displays relies on point & select as the dominant paradigm. However, constant in-air hand movement for pointer navigation quickly leads to hand fatigue. We introduce summon & select, a new model for freehand interaction in which, instead of navigating to a control, the user summons it into focus and then manipulates it. Summon & select solves the problems of constant pointer navigation, the need for precise selection, and the out-of-bounds gestures that plague point & select. We describe the design and conduct two studies: one to evaluate the design, and one to compare it against point & select in a multi-button selection task. The results show that summon & select is significantly faster and imposes less physical and mental demand than point & select.
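    The summon & select dialogue is a two-phase interaction: a summon gesture brings a named control into focus, and a subsequent select gesture manipulates it, with no pointer travel in between. A hypothetical sketch of that two-phase flow as a small state machine (the class, gesture handlers, and control names are illustrative assumptions, not the authors' implementation):

        from enum import Enum, auto

        class State(Enum):
            IDLE = auto()       # no control in focus
            SUMMONED = auto()   # a control has been summoned into focus

        class SummonSelect:
            """Two-phase mid-air interaction: summon a control, then select it."""
            def __init__(self, controls):
                self.controls = controls   # e.g. {"play", "volume", "next"}
                self.state = State.IDLE
                self.focused = None

            def on_summon(self, control_id):
                # The summon gesture brings the named control into focus,
                # so the hand never has to travel to the control's location.
                if control_id in self.controls:
                    self.focused = control_id
                    self.state = State.SUMMONED

            def on_select(self):
                # The select gesture acts on whatever control is in focus.
                if self.state is State.SUMMONED:
                    print(f"activated {self.focused}")
                    self.state = State.IDLE
                    self.focused = None

        ui = SummonSelect({"play", "volume", "next"})
        ui.on_summon("volume")
        ui.on_select()   # -> activated volume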

    Mid-Air Gestural Interaction with a Large Fogscreen

    Projected walk-through fogscreens have been built, but there is little research evaluating interaction performance with them. The present study investigated mid-air hand gestures for interaction with a large fogscreen. Participants (N = 20) selected objects on a fogscreen using tapping and dwell-based gestural techniques, with and without vibrotactile feedback. In terms of Fitts’ law, throughput was about 1.4 bps to 2.6 bps, suggesting that gestural interaction with a large fogscreen is a suitable and effective input method. Our results also suggest that tapping without haptic feedback performs well and has potential for fogscreen interaction, and that tactile feedback is not necessary for effective mid-air interaction. These findings have implications for the design of gestural interfaces suitable for interaction with fogscreens.
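    The reported throughput figures follow the standard Fitts' law computation: effective index of difficulty divided by movement time. A minimal sketch of that calculation (illustrative only; the study's exact effective-width correction is not shown here):

        import math

        def throughput(distance, effective_width, movement_time_s):
            """Fitts' law throughput in bits/s: ID_e / MT, using the
            Shannon formulation of the index of difficulty."""
            id_e = math.log2(distance / effective_width + 1)  # bits
            return id_e / movement_time_s

        # Example: a 60 cm reach to a 10 cm-wide target selected in 1.2 s
        print(throughput(60, 10, 1.2))  # ~2.3 bps, within the reported 1.4-2.6 bps range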

    Barehand Mode Switching in Touch and Mid-Air Interfaces

    Raskin defines a mode as a distinct setting within an interface where the same user input produces results different from those it would produce in other settings. Most interfaces have multiple modes in which input is mapped to different actions, and mode switching is simply the transition from one mode to another. In touch interfaces, the current mode can change how a single touch is interpreted: for example, it could draw a line, pan the canvas, select a shape, or enter a command. In Virtual Reality (VR), a hand-gesture-based 3D modelling application may have different modes for object creation, selection, and transformation; depending on the mode, the movement of the hand is interpreted differently. One of the crucial factors determining the effectiveness of an interface is user productivity, and the mode-switching time of different input techniques, whether in a touch interface or a mid-air interface, affects it. Moreover, when touch and mid-air interfaces such as VR are combined, making informed decisions about mode assignment becomes even more complicated. This thesis provides an empirical investigation characterizing mode switching in barehand touch-based and mid-air interfaces. It explores the potential of using these input spaces together for a productivity application in VR, and it concludes with a step toward defining and evaluating the multi-faceted mode concept, its characteristics, and its utility when designing user interfaces more generally.
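    Raskin's definition, the same input producing different results in different settings, maps directly onto code: the active mode selects how one input event is interpreted. A minimal illustrative sketch (the mode names echo the touch example above; the function itself is hypothetical):

        # One drag input, interpreted differently depending on the active mode.
        def handle_drag(mode, start, end):
            if mode == "draw":
                return f"line from {start} to {end}"
            if mode == "pan":
                dx, dy = end[0] - start[0], end[1] - start[1]
                return f"canvas panned by ({dx}, {dy})"
            if mode == "select":
                return f"shapes in rect {start}-{end} selected"
            raise ValueError(f"unknown mode: {mode}")

        print(handle_drag("draw", (0, 0), (4, 3)))    # line from (0, 0) to (4, 3)
        print(handle_drag("select", (0, 0), (4, 3)))  # shapes in rect (0, 0)-(4, 3) selected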

    Detection of Mid-air Tap Gestures with a Fingertip and Its Applications

    University of Tsukuba, 201

    Human Pose Estimation with Supervoxels

    This thesis investigates how segmentation as a preprocessing step can reduce both the search space and the complexity of human pose estimation in the context of smart environments. A 3D reconstruction is computed with a voxel carving algorithm. Based on a superpixel algorithm, the voxels are segmented into supervoxels, which are then applied to pictorial structures in 3D to efficiently estimate the human pose. Both static and dynamic gesture recognition applications were developed.
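    The first stage of the pipeline, voxel carving (shape-from-silhouette), keeps only voxels whose projection falls inside the foreground silhouette in every camera view. A highly simplified sketch under assumed inputs (the projection functions and silhouette masks are placeholders, not the thesis implementation):

        import numpy as np

        def carve(voxel_centers, cameras):
            """Keep voxels that project into the foreground silhouette
            of every camera.

            voxel_centers: (N, 3) array of voxel center coordinates
            cameras: list of (project, mask) pairs, where project maps
                     (N, 3) points to (N, 2) pixel coordinates and mask
                     is a binary silhouette image of shape (H, W).
            """
            keep = np.ones(len(voxel_centers), dtype=bool)
            for project, mask in cameras:
                px = np.round(project(voxel_centers)).astype(int)
                inside = ((px[:, 0] >= 0) & (px[:, 0] < mask.shape[1]) &
                          (px[:, 1] >= 0) & (px[:, 1] < mask.shape[0]))
                fg = np.zeros(len(voxel_centers), dtype=bool)
                fg[inside] = mask[px[inside, 1], px[inside, 0]].astype(bool)
                keep &= fg   # carve away voxels outside any silhouette
            return voxel_centers[keep]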
