    Multi-Level Representation of Gesture as Command for Human Computer Interaction

    The paper addresses the multiple forms of representation that human gesture takes at different levels of human computer interaction, ranging from gesture acquisition to mathematical models for analysis, patterns for recognition, and database records, up to end-level application event triggers. A mathematical model for gesture as command is presented. We also identify and provide particular models for four different types of gestures, considering both posture information and the underlying motion trajectories. The problem of constructing gesture dictionaries is further addressed, taking into account similarity measures and the discriminative features of dictionary entries.
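The dictionary idea described above can be sketched in a few lines. This is an illustrative reading, not the paper's actual model: gestures are reduced here to 2-D motion trajectories, the similarity measure is a mean point-wise Euclidean distance, and the names (`GestureDictionary`, `trajectory_distance`, the threshold) are all assumptions for the sketch.

```python
import math

def trajectory_distance(a, b):
    """Mean point-wise Euclidean distance between two equal-length trajectories.

    Assumes both trajectories have been resampled to the same number of points.
    """
    assert len(a) == len(b), "trajectories must be resampled to equal length"
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

class GestureDictionary:
    """Maps gesture templates to command identifiers; recognition is
    nearest-template lookup under the similarity measure above."""

    def __init__(self):
        self.entries = []  # list of (template_trajectory, command) pairs

    def add(self, template, command):
        self.entries.append((template, command))

    def recognize(self, observed, threshold=0.5):
        """Return the command of the closest template, or None if even the
        closest template exceeds the similarity threshold."""
        best_template, best_command = min(
            self.entries, key=lambda e: trajectory_distance(observed, e[0]))
        if trajectory_distance(observed, best_template) <= threshold:
            return best_command
        return None

d = GestureDictionary()
d.add([(0, 0), (1, 0), (2, 0)], "swipe_right")
d.add([(0, 0), (0, 1), (0, 2)], "swipe_up")
print(d.recognize([(0, 0), (1.1, 0.1), (2.0, 0.0)]))  # → swipe_right
```

A discriminative dictionary, in this framing, is one whose templates are far apart under the same distance, so the threshold can separate them reliably.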

    Device Independence and Extensibility in Gesture Recognition

    Gesture recognition techniques often suffer from being highly device-dependent and hard to extend. If a system is trained using data from a specific glove input device, that system is typically unusable with any other input device. The set of gestures that a system is trained to recognize is typically not extensible without retraining the entire system. We propose a novel gesture recognition framework to address these problems. This framework is based on a multi-layered view of gesture recognition. Only the lowest layer is device-dependent; it converts the raw sensor values produced by the glove into a glove-independent semantic description of the hand. The higher layers of our framework can be reused across gloves and are easily extensible to include new gestures. We have experimentally evaluated our framework and found that it yields performance comparable to conventional techniques, while substantiating our claims of device independence and extensibility.
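    The layered separation described in the abstract can be sketched as follows. This is not the authors' actual code: the semantic description (per-finger flexion normalized to [0, 1]), the class names, and the rule-based posture layer are all assumptions chosen to illustrate the design, in which only the lowest layer knows the glove's raw sensor format.

    ```python
    class GloveDriver:
        """Device-dependent layer: converts raw sensor values into a
        semantic hand description (per-finger flexion in [0, 1])."""

        FINGERS = ("thumb", "index", "middle", "ring", "pinky")

        def __init__(self, sensor_min=0, sensor_max=1023):
            # Swapping in a different glove only means changing these
            # device-specific calibration values (or this whole class).
            self.lo, self.hi = sensor_min, sensor_max

        def to_semantic(self, raw):
            """raw: one reading per finger, in device units."""
            return {f: (v - self.lo) / (self.hi - self.lo)
                    for f, v in zip(self.FINGERS, raw)}

    class PostureRecognizer:
        """Device-independent layer: postures are defined over the semantic
        description, so new ones can be added without retraining or any
        knowledge of the glove hardware."""

        def __init__(self):
            self.rules = {}  # posture name -> predicate over semantic hand

        def add_posture(self, name, predicate):
            self.rules[name] = predicate

        def recognize(self, hand):
            for name, pred in self.rules.items():
                if pred(hand):
                    return name
            return None

    driver = GloveDriver()
    rec = PostureRecognizer()
    rec.add_posture("fist", lambda h: all(v > 0.8 for v in h.values()))
    rec.add_posture("point", lambda h: h["index"] < 0.2 and
                    all(v > 0.8 for f, v in h.items() if f != "index"))

    hand = driver.to_semantic([1000, 100, 1010, 990, 1005])
    print(rec.recognize(hand))  # → point
    ```

    Because `PostureRecognizer` never sees raw sensor values, the same posture definitions work unchanged with any driver that emits the shared semantic description.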