
    Spotting Human Activities and Gestures in Continuous Data Streams

    In this thesis we apply algorithms to data from body-worn sensors to detect physical gestures and activities. Gesture recognition is a promising, emerging alternative for interacting explicitly with computers in a mobile setting, while the user's activity is an important part of his or her context that can help computer applications adapt automatically to the user's situation. Context-aware applications range from industrial to medical to educational domains. A particular emphasis of this thesis is the recognition of short activities or quick actions, which often occur amid large quantities of irrelevant data. Embedded in different application scenarios, we focus on four challenges in gesture and activity recognition: the multiplicity and diversity of activity types, high variance in execution combined with user independence, spotting in continuous data streams dominated by background data, and activity recognition at different levels of abstraction. We make several contributions to overcome these challenges. We start with a method that uses short, fixed wrist postures to spot activities in a continuous data stream: postures mark candidate segments from which short activities are recognized in continuous recordings. To evaluate how distinctive gestures remain in continuous recordings of daily life, we present a new approach to the important and challenging problem of user-independent gesture recognition. Beyond the recognition aspects, we pay particular attention to the social acceptability of the evaluated gestures, conducting user interviews to find adequate control gestures for five scenarios. Activity recognition is typically challenged by the need to spot a large number of activities amid irrelevant data in a user-independent manner. We therefore present a model-based approach using joint boosting that enables the automatic discovery of important high-level primitives derived from a model of the human body.
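The posture-based spotting idea above can be sketched as a sliding-window search for stretches where the wrist stays roughly still, which then serve as candidate segments for recognition. This is a minimal sketch, not the thesis's actual method: the window length, the variance threshold, and the use of variance as a stillness feature are illustrative assumptions.

```python
import numpy as np

def spot_still_segments(accel, win=32, var_thresh=0.05):
    """Return (start, end) index pairs where a 3-axis wrist-acceleration
    stream (shape: n_samples x 3) stays roughly still, i.e. candidate
    'posture' segments in a continuous recording.

    Illustrative only: window size and threshold are assumptions.
    """
    n = len(accel)
    still = np.zeros(n, dtype=bool)
    for start in range(0, n - win + 1):
        window = accel[start:start + win]
        # mean variance across the three axes as a simple stillness feature
        if window.var(axis=0).mean() < var_thresh:
            still[start:start + win] = True
    # merge consecutive still samples into (start, end) segments
    segments, seg_start = [], None
    for i, flag in enumerate(still):
        if flag and seg_start is None:
            seg_start = i
        elif not flag and seg_start is not None:
            segments.append((seg_start, i))
            seg_start = None
    if seg_start is not None:
        segments.append((seg_start, n))
    return segments
```

In a full pipeline, each returned segment would then be passed to a classifier that decides which short activity, if any, it contains.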
Subsequently, we systematically analyze the benefit of body-model-derived primitives in different sensor settings for multi-activity recognition. Furthermore, we propose a new body-model-based approach that uses only accelerometer sensors, thereby reducing the sensor requirements significantly. The proposed methods recognize 'atomic' activities such as drilling, handshaking, or walking, but they do not scale well to high-level tasks composed of multiple activities: a prohibitive amount of training data would be required to cover the high variability and the large number of ways such tasks can be executed. To address this, an approach that exploits temporal constraints encoded in UML diagrams enables reliable recognition of composed activities, or high-level tasks, without requiring large amounts of training data. We demonstrate the validity of the approach by introducing a realistic and challenging data set.
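The idea of constraining composed activities by the ordering encoded in a UML diagram can be sketched as checking a sequence of recognized atomic activities against a set of precedence constraints. This is a minimal sketch under stated assumptions: representing the diagram as (before, after) pairs and the activity names themselves are illustrative choices, and real activity diagrams also allow branches and loops.

```python
def satisfies_order(sequence, constraints):
    """Check whether a sequence of recognized atomic activities respects
    a set of precedence constraints, given as (before, after) pairs such
    as could be read off the edges of a UML activity diagram.

    Illustrative only: ignores branching, loops, and repetition.
    """
    # record the first occurrence index of each activity
    first_seen = {}
    for i, act in enumerate(sequence):
        first_seen.setdefault(act, i)
    # every constrained pair that appears must appear in the right order
    for before, after in constraints:
        if before in first_seen and after in first_seen:
            if first_seen[before] > first_seen[after]:
                return False
    return True
```

A high-level task recognizer could use such a check to discard candidate interpretations of the sensor stream that violate the task's encoded structure, instead of learning every valid ordering from training data.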