A handle bar metaphor for virtual object manipulation with mid-air interaction

Abstract

Commercial 3D scene acquisition systems such as the Microsoft Kinect sensor can reduce the cost barrier of realizing mid-air interaction. However, since such sensors can robustly sense hand position but not hand orientation, current mid-air interaction methods for 3D virtual object manipulation often require contextual and mode switching to perform translation, rotation, and scaling, thus preventing natural continuous gestural interactions. A novel handle bar metaphor is proposed as an effective visual control metaphor between the user's hand gestures and the corresponding virtual object manipulation operations. It mimics a familiar situation of handling objects that are skewered with a bimanual handle bar. The use of the relative 3D motion of the two hands to design the mid-air interaction allows us to provide precise controllability despite the Kinect sensor's low image resolution. A comprehensive repertoire of 3D manipulation operations is proposed to manipulate single objects, perform fast constrained rotation, and pack/align multiple objects along a line. Three user studies were devised to demonstrate the efficacy and intuitiveness of the proposed interaction techniques in different virtual manipulation scenarios.

Author Keywords

3D manipulation; bimanual gestures; user interaction
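The abstract describes deriving translation, rotation, and scaling from the relative 3D motion of the two tracked hands: the midpoint of the hands drives translation, the change in hand separation drives scaling, and the change in direction of the hand-to-hand "bar" drives rotation. A minimal per-frame sketch of such a mapping (function names and conventions are illustrative assumptions, not taken from the paper):

```python
import math

# Small 3D vector helpers (tuples of floats).
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def mid(a, b): return tuple((x + y) / 2 for x, y in zip(a, b))
def norm(v): return math.sqrt(sum(x * x for x in v))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def handle_bar_update(prev_l, prev_r, cur_l, cur_r):
    """Map bimanual hand motion between two frames to manipulation terms.

    Hypothetical sketch: returns (translation vector, scale factor,
    rotation axis, rotation angle in radians) computed only from the
    two hand positions, so no hand-orientation sensing is needed.
    """
    # Translation: motion of the midpoint of the two hands.
    translation = sub(mid(cur_l, cur_r), mid(prev_l, prev_r))

    # Scaling: ratio of hand separations (the "bar" length).
    prev_bar = sub(prev_r, prev_l)
    cur_bar = sub(cur_r, cur_l)
    scale = norm(cur_bar) / norm(prev_bar)

    # Rotation: axis and angle that align the previous bar with the
    # current bar (axis-angle form; axis is unnormalized).
    axis = cross(prev_bar, cur_bar)
    cos_a = dot(prev_bar, cur_bar) / (norm(prev_bar) * norm(cur_bar))
    angle = math.acos(max(-1.0, min(1.0, cos_a)))
    return translation, scale, axis, angle
```

Because every quantity is a difference or ratio between the two hands, absolute sensing noise that affects both hands equally largely cancels out, which is one plausible reading of why the relative-motion design tolerates the Kinect's low image resolution.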

This paper was published in CiteSeerX.
