
    An inertial measurement unit for user interfaces

    Thesis (S.M.), Massachusetts Institute of Technology, Program in Media Arts & Sciences, 2000. Includes bibliographical references (p. 131-135). By Ari Yosef Benbasat.

    Inertial measurement components, which sense either acceleration or angular rate, are being embedded into common user interface devices more frequently as their cost continues to drop dramatically. These components hold a number of advantages over other sensing technologies: they measure parameters directly relevant to human interfaces, and they can easily be embedded into wireless, mobile platforms. The work in this dissertation demonstrates that inertial measurement can be used to acquire rich data about human gestures, that efficient algorithms can be derived for using this data in gesture recognition, and that the concept of a parameterized atomic gesture has merit. Further, we show that a framework combining these three levels of description can readily be used by designers to create robust applications.

    A wireless six-degree-of-freedom inertial measurement unit (IMU) with a cubical form factor (1.25 inches on a side) was constructed to collect the data, providing updates at 15 ms intervals. The data are analyzed for periods of activity using a windowed variance algorithm whose thresholds can be set analytically. These active segments are then examined by the gesture recognition algorithms, which are applied to the data on an axis-by-axis basis. The recognized gestures are considered atomic (i.e., they cannot be decomposed) and are parameterized in terms of magnitude and duration. Given these atomic gestures, a simple scripting language is developed that allows designers to combine them into full gestures of interest; it matches recognized atomic gestures to prototypes based on their type, parameters, and time of occurrence. Because our goal is eventually to create stand-alone devices, the algorithms designed for this framework have both low algorithmic complexity and low latency, at the price of a small loss in generality.

    To demonstrate this system, the gesture recognition portion of (void*): A Cast of Characters, an installation that used a pair of hand-held IMUs to capture gestural inputs, was reimplemented using this framework. This version ran much faster than the original (based on Hidden Markov Models), used less processing power, and performed at least as well.
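    The windowed variance segmentation described in the abstract is simple to sketch. The Python fragment below is a minimal illustration only, not the thesis's implementation; the function name, the window length, and the fixed threshold are placeholders (the thesis derives its thresholds analytically).

```python
import numpy as np

def detect_activity(samples, window=10, threshold=0.5):
    """Mark samples that fall inside a window of above-threshold variance.

    samples:   (N, 6) array of accelerometer and gyro readings,
               one row per 15 ms update
    window:    number of samples per sliding analysis window
    threshold: variance level separating rest from motion
               (placeholder; the thesis sets this analytically)
    """
    active = np.zeros(len(samples), dtype=bool)
    for start in range(len(samples) - window + 1):
        segment = samples[start:start + window]
        # A window counts as active if any single axis exceeds the
        # variance threshold, matching the axis-by-axis treatment above.
        if segment.var(axis=0).max() > threshold:
            active[start:start + window] = True
    return active
```

    Contiguous runs of active samples would then form the segments handed to the per-axis gesture recognizers.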

    3DTouch: A wearable 3D input device with an optical sensor and a 9-DOF inertial measurement unit

    We present 3DTouch, a novel wearable 3D input device worn on the fingertip for 3D manipulation tasks. 3DTouch is designed to fill the gap left by the lack of a 3D input device that is self-contained, mobile, and works universally across various 3D platforms. This paper presents a low-cost solution to designing and implementing such a device. Our approach relies on a relative positioning technique using an optical laser sensor and a 9-DOF inertial measurement unit. The device employs touch input for the benefits of passive haptic feedback and movement stability; because of this touch interaction, 3DTouch is conceptually less fatiguing to use over many hours than 3D spatial input devices. We propose a set of 3D interaction techniques, including selection, translation, and rotation, using 3DTouch. An evaluation demonstrates tracking accuracy of 1.10 mm and 2.33 degrees for subtle touch interaction in 3D space. Modular solutions like 3DTouch open up a whole new design space for interaction techniques to build on.

    Comment: 8 pages, 7 figures
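    The abstract does not detail how the optical and inertial readings are combined. One plausible scheme, sketched below purely as an assumption (all names are hypothetical), is to treat the laser sensor's 2D displacement as a vector in the device's surface plane and rotate it into the world frame using the IMU's orientation estimate.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def update_position(position, optical_dxdy, orientation_quat):
    """Advance a 3D fingertip position estimate by one sensor update.

    position:         current (x, y, z) estimate in world coordinates
    optical_dxdy:     2D displacement reported by the optical laser
                      sensor, measured in the sensor's own surface plane
    orientation_quat: (x, y, z, w) quaternion from the 9-DOF IMU giving
                      the sensor's orientation in the world frame
    """
    # Lift the planar optical displacement into the sensor frame ...
    delta_sensor = np.array([optical_dxdy[0], optical_dxdy[1], 0.0])
    # ... then rotate it into the world frame via the IMU orientation.
    delta_world = Rotation.from_quat(orientation_quat).apply(delta_sensor)
    return position + delta_world
```

    A real implementation would also need drift correction and surface-contact detection, both of which this sketch omits.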

    Compact, configurable inertial gesture recognition


    Combining inertial and visual sensing for human action recognition in tennis

    In this paper, we present a framework for automatically extracting the temporal location of tennis strokes within a match and subsequently classifying each stroke as a serve, forehand, or backhand. We employ low-cost visual sensing and low-cost inertial sensing to achieve these aims: a single modality can be used, or the two classification strategies can be fused when both modalities are available in a given capture scenario. This flexibility makes the framework applicable to a variety of user scenarios and hardware infrastructures. Our proposed approach is quantitatively evaluated using data captured from elite tennis players, and the results show highly accurate performance irrespective of the input modality configuration.
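    The abstract leaves the fusion strategy unspecified; a common minimal choice is score-level (late) fusion of the two classifiers' per-class probabilities. The sketch below is an assumption, not the paper's method, with illustrative names and an arbitrary 50/50 weighting.

```python
import numpy as np

STROKES = ["serve", "forehand", "backhand"]

def fuse_predictions(p_visual, p_inertial, w_visual=0.5):
    """Late fusion of per-stroke class probabilities from two modalities.

    p_visual, p_inertial: arrays of class probabilities over STROKES,
                          or None if that modality is unavailable
    w_visual:             weight given to the visual classifier; the
                          remainder goes to the inertial classifier
    """
    if p_visual is None:
        fused = np.asarray(p_inertial)
    elif p_inertial is None:
        fused = np.asarray(p_visual)
    else:
        fused = (w_visual * np.asarray(p_visual)
                 + (1 - w_visual) * np.asarray(p_inertial))
    return STROKES[int(np.argmax(fused))]
```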

    Detection of bimanual gestures everywhere: why it matters, what we need and what is missing

    Bimanual gestures are of the utmost importance for the study of motor coordination in humans and in everyday activities. Reliable detection of bimanual gestures in unconstrained environments is fundamental for their clinical study and for assessing common activities of daily living. This paper investigates techniques for reliable, unconstrained detection and classification of bimanual gestures. It assumes the availability of inertial data originating from the two hands/arms, builds upon a previously developed technique for gesture modelling based on Gaussian Mixture Modelling (GMM) and Gaussian Mixture Regression (GMR), and compares different modelling and classification techniques based on a number of assumptions inspired by the literature on how bimanual gestures are represented and modelled in the brain. Experiments report results for 5 everyday bimanual activities, selected on the basis of three main parameters: whether or not the two hands are constrained by a physical tool, whether or not a specific sequence of single-hand gestures is required, and whether or not the gesture is recursive. In the best-performing combination of modelling approach and classification technique, all five activities are recognized with accuracy up to 97%, precision of 82%, and recall of 100%.

    Comment: Submitted to Robotics and Autonomous Systems (Elsevier)
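    As a rough illustration of the GMM side of such a pipeline (GMR and the paper's specific modelling assumptions are omitted), one can fit one mixture per activity and label a segment by log-likelihood. This is a hedged sketch, not the authors' code; all names and parameters are placeholders.

```python
from sklearn.mixture import GaussianMixture

def train_gesture_models(training_data, n_components=5):
    """Fit one GMM per bimanual activity.

    training_data: dict mapping activity name -> (N, D) array of
                   concatenated inertial features from both hands/arms
    """
    return {
        name: GaussianMixture(n_components=n_components).fit(samples)
        for name, samples in training_data.items()
    }

def classify(models, segment):
    """Label a segment with the activity whose GMM explains it best."""
    # score() returns the mean log-likelihood of the samples under the model.
    return max(models, key=lambda name: models[name].score(segment))
```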