
    Discriminative vision-based recovery and recognition of human motion

    The automatic analysis of human motion from images opens the way for applications in the domains of security and surveillance, human-computer interaction, animation, retrieval, and sports motion analysis. This dissertation focuses on robust and fast human pose recovery and action recognition. The former is a regression task whose aim is to determine the locations of key joints in the human body, given an image of a human figure. The latter is the process of labeling image sequences with action labels, a classification task.

    An example-based pose recovery approach is introduced in which histograms of oriented gradients (HOG) are used as the image descriptor. From a database containing thousands of HOG-pose pairs, the visually closest examples are selected, and weighted interpolation of the corresponding poses yields the pose estimate. This approach is fast due to the use of a low-cost distance function. To cope with partial occlusions of the human figure, the normalization and matching of the HOG descriptors were changed from the global to the cell level. When occlusion areas in the image are predicted, only part of the descriptor is used for recovery, thus avoiding adaptation of the database to the occlusion setting.

    For the recognition of human actions, simple functions are used to discriminate between two classes after applying a common spatial patterns (CSP) transform to sequences of HOG descriptors. The transform maximizes the difference in variance between the two classes. Each discriminative function softly votes for the two classes; after evaluation of all pairwise functions, the action class that receives most of the voting mass is the estimated class. By combining the two approaches, actions could be recognized by considering sequences of recovered, rotation-normalized poses. Thanks to this normalization, actions could be recognized from arbitrary viewpoints. By handling occlusions in the pose recovery step, actions could also be recognized from image observations in which occlusion was simulated.
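    The example-based recovery step described above can be sketched as a nearest-neighbour lookup with distance-weighted interpolation. This is a minimal illustration, not the dissertation's implementation: the function and parameter names, the sum-of-absolute-differences distance, and the inverse-distance weighting scheme are our assumptions.

    ```python
    import numpy as np

    def recover_pose(query_hog, db_hogs, db_poses, k=5):
        """Hypothetical sketch: estimate a pose by weighted interpolation
        of the k visually closest HOG-pose examples in the database."""
        # Low-cost distance: sum of absolute differences per descriptor.
        dists = np.abs(db_hogs - query_hog).sum(axis=1)
        nearest = np.argsort(dists)[:k]
        # Weight each neighbour inversely to its distance; the small
        # epsilon avoids division by zero for exact matches.
        weights = 1.0 / (dists[nearest] + 1e-8)
        weights /= weights.sum()
        # Weighted interpolation of the corresponding joint locations.
        return weights @ db_poses[nearest]
    ```

    For occlusion handling, the same distance would be accumulated only over the HOG cells predicted to be visible, which is what makes the cell-level normalization useful.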

    Speaker Prediction based on Head Orientations


    Iterative Perceptual Learning for Social Behavior Synthesis

    We introduce Iterative Perceptual Learning (IPL), a novel approach for learning computational models for social behavior synthesis from corpora of human-human interactions. The IPL approach combines perceptual evaluation with iterative model refinement. Human observers rate the appropriateness of synthesized individual behaviors in the context of a conversation. These ratings are in turn used to refine the machine learning models. As the ratings correspond to those moments in the conversation where the production of a specific social behavior is inappropriate, we can regard features extracted at these moments as negative samples for the training of a machine learning classifier. This is an advantage over traditional corpus-based approaches, in which negative samples are extracted at random from moments in the conversation where the specific social behavior does not occur. We perform a comparison between the IPL approach and the traditional corpus-based approach on the timing of backchannels for a listener in speaker-listener dialogs. While both models perform similarly in terms of precision and recall scores, the results of the IPL model are rated as more appropriate in the perceptual evaluation. We additionally investigate the effect of the amount and variation of the available training data on the outcome of the models.
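    The difference between the two negative-sampling schemes can be illustrated in a few lines. This is a hypothetical sketch under our own naming: frames rated inappropriate by observers supply IPL-style negatives, while the corpus-based baseline samples at random from frames without the behavior.

    ```python
    import random

    def select_negatives(features, positive_frames, rated_inappropriate, n, seed=0):
        """Hypothetical sketch of the two negative-sampling schemes.
        IPL: draw negatives from frames rated inappropriate by observers.
        Baseline: draw at random from frames where the behavior is absent."""
        rng = random.Random(seed)
        if rated_inappropriate:          # IPL-style negatives
            pool = list(rated_inappropriate)
        else:                            # traditional corpus-based negatives
            positives = set(positive_frames)
            pool = [f for f in range(len(features)) if f not in positives]
        chosen = rng.sample(pool, min(n, len(pool)))
        return [features[f] for f in chosen]
    ```

    The point of the IPL scheme is that its negatives are frames where a behavior was actually produced and judged wrong, rather than arbitrary behavior-free frames.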

    Online behavior evaluation with the switching wizard of Oz

    Advances in animation and sensor technology allow us to engage in face-to-face conversations with virtual agents [1]. One major challenge is to generate the virtual agent's appropriate, human-like behavior contingent with that of the human conversational partner. Models of (nonverbal) behavior are predominantly learned from corpora of dialogs between human subjects [2], or based on simple observations from the literature (e.g. [3,4,5,6]).

    Twente Debate Corpus - A Multimodal Corpus for Head Movement Analysis

    This paper introduces a multimodal discussion corpus for the study of head movement and turn-taking patterns in debates. Because participants acted either alone or in a pair, cooperation and competition and their nonverbal correlates can be analyzed. In addition to the video and audio of the recordings, the corpus contains automatically estimated head movements, and manual annotations of who is speaking and who is looking where. The corpus consists of over 2 hours of debates, in 6 groups with 18 participants in total. We describe the recording setup and present initial analyses of the recorded data. We found that the person who acted as single debater speaks more and also receives more attention than the other debaters, even when corrected for speaking time. We also found that a single debater was more likely to speak after a team debater. Future work will be aimed at further analysis of the relation between speaking and looking patterns, the outcome of the debate, and the perceived dominance of the debaters.

    Backchannel Strategies for Artificial Listeners

    We evaluate multimodal rule-based strategies for backchannel (BC) generation in face-to-face conversations. Such strategies can be used by artificial listeners to determine when to produce a BC in dialogs with human speakers. In this research, we consider features from the speaker's speech and gaze. We used six rule-based strategies to determine the placement of BCs. The BCs were performed by an intelligent virtual agent using nods and vocalizations. In a user perception experiment, participants were shown video fragments of a human speaker together with an artificial listener who produced BC behavior according to one of the strategies. Participants were asked to rate how likely they thought the BC behavior had been performed by a human listener. We found that the number, timing, and type of BCs had a significant effect on how human-like the BC behavior was perceived.
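    A rule-based BC strategy of the kind described can be sketched as a predicate over speech and gaze features. The specific rule below (a sufficiently long speaker pause while the speaker gazes at the listener) is only one plausible example of such a rule; the function name, threshold, and interval representation are our assumptions, not the paper's six strategies.

    ```python
    def backchannel_moments(pauses, gaze_at_listener, min_pause=0.5):
        """Hypothetical rule-based strategy: trigger a nod/vocalization at
        the start of each speaker pause of at least `min_pause` seconds
        that begins while the speaker gazes at the listener.
        `pauses` and `gaze_at_listener` are lists of (start, end) tuples
        in seconds."""
        moments = []
        for start, end in pauses:
            long_enough = (end - start) >= min_pause
            gazing = any(g0 <= start <= g1 for g0, g1 in gaze_at_listener)
            if long_enough and gazing:
                moments.append(start)
        return moments
    ```

    Strategies built from different feature combinations (pause only, gaze only, pitch contours, and so on) can then be compared in a perception experiment, as the paper does.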