
    Learning event patterns for gesture detection

    Usability often plays a key role when software is brought to market, including clearly structured workflows, the way information is presented to the user, and, last but not least, how the user interacts with the application. In this context, input devices such as 3D cameras or (multi-)touch displays have become omnipresent as a means of defining new, intuitive ways of user interaction. State-of-the-art systems tightly couple application logic with separate gesture detection components for the supported devices. Hard-coded rules or static models obtained by applying machine learning algorithms to many training samples are used to robustly detect a predefined set of gesture patterns. If it is possible at all, extending these sets with new patterns or modifying existing ones is difficult for both application developers and end users. Furthermore, adding gesture support for legacy software or for additional devices becomes difficult with this hardwired approach. In previous research we demonstrated how the database community can contribute to this challenge by leveraging complex event processing on data streams to express gesture patterns. While this declarative approach decouples application logic from gesture detection components, its major drawback was the non-intuitive definition of gesture queries. In this paper, we present an approach related to density-based clustering for finding declarative gesture descriptions using only a few samples. We demonstrate the algorithms by mining definitions for multi-dimensional gestures from the sensor data stream delivered by a Microsoft Kinect 3D camera, and provide a way for non-expert users to intuitively customize gesture-controlled user interfaces even at runtime.
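The abstract does not spell out the mining procedure, but the idea of deriving a declarative gesture description from a few samples via density-based clustering can be illustrated with a minimal sketch. The sketch below is an assumption-laden toy, not the paper's algorithm: it resamples a handful of recorded hand-joint trajectories to a fixed length, clusters the positions at each time step with DBSCAN, and emits an ordered sequence of spatial regions that a stream query (e.g. a CEP pattern) could then match against. Function names, array shapes, and parameters (`eps`, `min_samples`, `n_points`) are illustrative choices.

```python
# Hypothetical sketch: mine a region-sequence gesture description from a few
# Kinect-style hand-joint trajectories using density-based clustering (DBSCAN).
# This is NOT the paper's method; names and parameters are assumptions.
import numpy as np
from sklearn.cluster import DBSCAN


def resample(trajectory, n_points=20):
    """Resample a (T, 3) joint trajectory to a fixed number of time steps."""
    t_old = np.linspace(0.0, 1.0, len(trajectory))
    t_new = np.linspace(0.0, 1.0, n_points)
    return np.stack(
        [np.interp(t_new, t_old, trajectory[:, d]) for d in range(3)], axis=1
    )


def mine_gesture_pattern(samples, eps=0.08, min_samples=2, n_points=20):
    """Turn a few gesture samples (each a (T, 3) array of hand positions)
    into an ordered list of bounding-box regions, one per time step."""
    resampled = np.stack(
        [resample(np.asarray(s, dtype=float), n_points) for s in samples]
    )  # shape: (n_samples, n_points, 3)
    pattern = []
    for step in range(n_points):
        points = resampled[:, step, :]  # same time step across all samples
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit(points).labels_
        # keep the dense core points; fall back to all points if none are core
        core = points[labels != -1] if (labels != -1).any() else points
        lo, hi = core.min(axis=0), core.max(axis=0)
        pattern.append({"step": step, "min": lo.tolist(), "max": hi.tolist()})
    # Each box could be translated into one predicate of a declarative
    # (CEP-style) gesture query over the sensor data stream.
    return pattern
```

In a sketch like this, each mined region becomes a predicate in an ordered event pattern, which is one plausible way a declarative gesture query could be assembled from only a few user-provided samples at runtime.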