11 research outputs found

    Contribution à l'analyse et à l'interprétation du mouvement humain: application à la reconnaissance de postures

    No full text
    This Ph.D. thesis is dedicated to the analysis and interpretation of human motion, with an application to posture recognition. Human motion analysis and interpretation in computer vision have numerous application domains, such as video surveillance, mixed-reality applications and advanced human-machine interfaces. We propose a real-time system for human motion analysis and interpretation. Human motion analysis involves several image-processing steps, such as moving-object segmentation, temporal tracking, skin detection, human body modelling, and action or posture recognition. We propose a two-stage temporal tracking method that can follow one or several persons over time, even when they occlude each other. This method is based on the computation of bounding-box overlap and on partial Kalman filtering. We then describe a color-based skin detection method used to localize faces and hands. All these preliminary steps provide a large amount of low-level data. In a final part, we use some of these data to recognize the static body posture of each person among the four following postures: standing, sitting, squatting and lying. Several results illustrate the advantages and limitations of the proposed methods, as well as their efficiency and robustness.
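    A minimal sketch, not the thesis implementation, of the bounding-box overlap measure on which a two-stage temporal tracker of this kind can be based. Boxes are assumed to be (x_min, y_min, x_max, y_max) tuples; the association strategy and the partial Kalman filtering are not reproduced here.

```python
def box_overlap(box_a, box_b):
    """Return the intersection area normalised by the smaller box area,
    so that a box fully contained in another scores 1.0 (an assumed choice)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # intersection height
    inter = iw * ih
    smaller = min((ax2 - ax1) * (ay2 - ay1), (bx2 - bx1) * (by2 - by1))
    return inter / smaller if smaller > 0 else 0.0
```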

    Inner and outer lip contour tracking using cubic curve parametric models

    No full text
    The first step in lip-reading applications is mouth contour extraction, which provides the link between lip shape and the oral message. In our approach, the lip contours are detected in the first image with the two algorithms developed in [1] and [2] for static images. On subsequent images of the sequence, several key points (mouth corners and inner and outer middle contour points) are tracked with the Lucas-Kanade method to define an initial parametric lip model of the mouth. Using a combined luminance and chrominance gradient, the model is optimized and precisely locked onto the lip contours. The algorithm's performance is evaluated with regard to a lip-reading application.
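    An illustrative sketch, not the authors' code, of how mouth key points can be tracked between consecutive frames with the pyramidal Lucas-Kanade method as implemented in OpenCV; window size and termination criteria are assumed values.

```python
import cv2
import numpy as np

def track_key_points(prev_gray, next_gray, key_points):
    """Track lip key points (e.g. mouth corners, middle contour points)
    from one grayscale frame to the next."""
    pts = np.float32(key_points).reshape(-1, 1, 2)
    new_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, pts, None,
        winSize=(15, 15), maxLevel=2,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 20, 0.03))
    # Keep only the points that were successfully tracked.
    return [tuple(p.ravel()) for p, ok in zip(new_pts, status) if ok]
```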

    Lip contour segmentation and tracking compliant with lip-reading application constraints

    No full text
    We propose to use both active contours and parametric models for lip contour extraction and tracking. In the first image, jumping snakes are used to detect outer and inner contour key points. These points initialize a parametric lip model composed of several cubic curves that are well suited to mouth deformations. Using a combined luminance and chrominance gradient, the initial model is optimized and precisely locked onto the lip contours. On subsequent images, the segmentation is based on the mouth bounding box and on key-point tracking. Quantitative and qualitative evaluations show the effectiveness of the algorithm for lip-reading applications.
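    A minimal sketch, with details assumed rather than taken from the paper, of representing one lip contour segment as a cubic curve fitted through several detected key points (e.g. the two mouth corners and intermediate contour points), which can then be sampled and matched against a gradient map.

```python
import numpy as np

def fit_cubic_segment(xs, ys):
    """Fit y = a*x^3 + b*x^2 + c*x + d through the key points (least squares)."""
    return np.polyfit(xs, ys, deg=3)

def sample_segment(coeffs, x_start, x_end, n=50):
    """Sample the fitted cubic so it can be drawn or scored against a
    luminance/chrominance gradient map."""
    xs = np.linspace(x_start, x_end, n)
    return xs, np.polyval(coeffs, xs)
```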

    Hands detection and tracking for interactive multimedia applications

    No full text
    The context of this work is the European project art.live, which aims at mixing real and virtual worlds for multimedia applications. This paper focuses on an algorithm for detecting and tracking the face and both hands of segmented persons standing in front of a camera. The first step is the detection of skin pixels based on skin colour: the HSI and YCbCr colour spaces are compared, and the one that allows both fast detection and accurate results is selected. The second step is the identification of the face and both hands among all detected skin patches, using spatial criteria related to human morphology together with temporal tracking. The third step is the parameter adaptation of the skin detection algorithm. Several results show the efficiency of the method, which has been integrated and validated in a global real-time interactive multimedia system.
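    An illustrative sketch, not the paper's exact parameters, of skin-pixel detection by thresholding the chrominance channels in the YCbCr colour space; the Cb/Cr ranges below are commonly used values assumed here for illustration only.

```python
import cv2
import numpy as np

def detect_skin_ycbcr(bgr_image):
    """Return a binary mask of candidate skin pixels."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    # OpenCV orders the channels Y, Cr, Cb.
    lower = np.array([0, 133, 77], dtype=np.uint8)    # Y, Cr, Cb lower bounds (assumed)
    upper = np.array([255, 173, 127], dtype=np.uint8)  # Y, Cr, Cb upper bounds (assumed)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Small morphological opening to remove isolated false detections.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```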

    A belief theory-based static posture recognition system for real-time videosurveillance applications

    No full text
    This paper presents a system that can automatically recognize four different static human body postures for video surveillance applications. The considered postures are standing, sitting, squatting, and lying. The data come from the persons' 2D segmentation and from their face localization; they consist of distance measurements relative to a reference posture (standing, arms stretched horizontally). The recognition is based on data fusion using the belief theory, because this theory allows the modelling of imprecision and uncertainty. The efficiency and the limits of the recognition system are highlighted through the processing of several thousand frames. One target application is the monitoring of elderly people in hospitals or at home. The system allows real-time processing.
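    A minimal sketch of the kind of data fusion that belief (Dempster-Shafer) theory provides; the mass assignments below are made up for illustration and are not the paper's actual models.

```python
from itertools import product

POSTURES = ("standing", "sitting", "squatting", "lying")

def dempster_combine(m1, m2):
    """Combine two mass functions defined on subsets (frozensets) of POSTURES
    with Dempster's rule, normalising out the conflict."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are fully contradictory")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Hypothetical example: one measurement favours 'sitting' or 'squatting',
# another favours 'sitting'; the rest of the mass goes to ignorance.
m_height = {frozenset({"sitting", "squatting"}): 0.7, frozenset(POSTURES): 0.3}
m_width = {frozenset({"sitting"}): 0.6, frozenset(POSTURES): 0.4}
fused = dempster_combine(m_height, m_width)
```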

    Real-time tracking of multiple persons by Kalman filtering and face pursuit for multimedia applications

    No full text
    We present an algorithm that can track multiple persons and their faces simultaneously in a video sequence, even when they are completely occluded from the camera's point of view. The algorithm is based on the detection and tracking of persons' masks and of their faces. Face localization uses skin detection based on color information with adaptive thresholding. In order to handle occlusions, a Kalman filter is defined for each person; it predicts the person's bounding box, the face bounding box and their speeds. In case of incomplete measurements (for instance, during a partial occlusion), partial Kalman filtering is performed. Several results show the efficiency of this method, which allows real-time processing.
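    A hedged sketch of a constant-velocity Kalman filter used to predict a person's bounding box; the state layout and noise levels are assumptions, not the paper's tuning, and the "partial filtering" idea is only hinted at by skipping the update when no measurement is available.

```python
import numpy as np

class BoxKalman:
    """State x = [cx, cy, w, h, vx, vy]: box centre, size and centre velocity."""

    def __init__(self, cx, cy, w, h, dt=1.0):
        self.x = np.array([cx, cy, w, h, 0.0, 0.0], dtype=float)
        self.P = np.eye(6) * 10.0            # state covariance
        self.F = np.eye(6)                   # constant-velocity transition
        self.F[0, 4] = self.F[1, 5] = dt
        self.H = np.eye(4, 6)                # we only measure cx, cy, w, h
        self.Q = np.eye(6) * 1e-2            # process noise (assumed)
        self.R = np.eye(4) * 1.0             # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:4]                    # predicted box

    def update(self, z):
        """z = measured [cx, cy, w, h]; skip this call when the person is
        occluded and only the prediction is available."""
        y = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
```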

    Contribution à l'analyse et à l'interprétation du mouvement humain (application à la reconnaissance de postures)

    No full text
    The research work presented in this thesis is dedicated to the analysis and interpretation of human motion, with an application to posture recognition. Human motion analysis and interpretation in computer vision have numerous application domains, such as video surveillance, mixed-reality applications and advanced human-machine interfaces. We propose a real-time system for human motion analysis and interpretation. Human motion analysis involves several image-processing steps, such as moving-object segmentation, temporal tracking, skin detection, human body modelling, and action or posture recognition. We propose a two-stage temporal tracking method that can follow one or several persons over time, even when they occlude each other. This method is based on the computation of rectangular bounding-box overlap and on partial Kalman filtering. We then describe a color-based skin detection method used to localize faces and hands. All these preliminary steps provide a large amount of low-level data. In a final part, we use some of these data to recognize the static posture of each person among the following postures: standing, sitting, squatting and lying. Numerous results illustrate the advantages and limitations of the proposed methods, as well as their efficiency and robustness.

    Static human body postures recognition in video sequences using the belief theory

    No full text
    This paper presents a system that can automatically recognize four different static human body postures in video sequences. The considered postures are standing, sitting, squatting, and lying. The recognition is based on data fusion using the belief theory. The data come from the persons' 2D segmentation and from their face localization; they consist of distance measurements relative to a reference posture (the “Da Vinci posture”: standing, arms stretched horizontally). The segmentation is based on an adaptive background removal algorithm. The face localization process uses skin detection based on color information with adaptive thresholding. The efficiency and the limits of the recognition system are highlighted through the analysis of a large number of results. The system allows real-time processing.
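    A hypothetical sketch of the kind of measurement the abstract describes: distances derived from the 2D silhouette and the face position, normalised by the same quantities stored for a reference ("Da Vinci") posture. The feature names and normalisation scheme are assumptions made for illustration.

```python
import numpy as np

def posture_features(silhouette_mask, face_center, reference):
    """silhouette_mask: binary HxW array; face_center: (x, y) in pixels;
    reference: dict of the same measurements taken in the reference posture."""
    ys, xs = np.nonzero(silhouette_mask)
    height = ys.max() - ys.min()          # silhouette height in pixels
    width = xs.max() - xs.min()           # silhouette width in pixels
    face_to_feet = ys.max() - face_center[1]
    return {
        "height_ratio": height / reference["height"],
        "width_ratio": width / reference["width"],
        "face_to_feet_ratio": face_to_feet / reference["face_to_feet"],
    }
```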

    Belief theory-based classifiers comparison for static human body postures recognition in video

    No full text
    This paper compares the results of several classifiers within a system that can automatically recognize four different static human body postures in video sequences. The considered postures are standing, sitting, squatting, and lying. The three classifiers considered are a naïve one and two based on the belief theory. The belief theory-based classifiers use either a classic or a restricted plausibility criterion to make a decision after data fusion. The data come from the people's 2D segmentation and from their face localization; measurements consist of distances relative to a reference posture. The efficiency and the limits of the different classifiers within the recognition system are highlighted through the analysis of a large number of results. The system allows real-time processing.
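    An illustrative sketch, not the papers' exact decision rules, of choosing a posture from a fused mass function by maximising the plausibility of each singleton; the "restricted" variant shown here, which rejects ambiguous cases below an assumed margin, is only one possible interpretation.

```python
POSTURES = ("standing", "sitting", "squatting", "lying")

def plausibility(mass, posture):
    """Pl({posture}) = sum of the masses of all subsets containing the posture."""
    return sum(w for subset, w in mass.items() if posture in subset)

def decide(mass, margin=None):
    """Pick the posture with maximum plausibility; with a margin, reject
    decisions where the runner-up is too close (hypothetical restriction)."""
    scores = {p: plausibility(mass, p) for p in POSTURES}
    best = max(scores, key=scores.get)
    if margin is not None:
        runner_up = max(s for p, s in scores.items() if p != best)
        if scores[best] - runner_up < margin:
            return None
    return best
```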

    Interactions and systems for augmenting a live dance performance

    Get PDF
    The context of this work is to develop, adapt and integrate augmented-reality tools to enhance the emotion involved in cultural performances. Part of the work was dedicated to augmenting the stage in a live performance, with dance as an application case. In this paper, we present a milestone of this work: an augmented dance show that brings together several tools and technologies developed over the project's lifetime. It is the result of mixing an artistic process with scientific research and development. This augmented show brings to the stage issues from the research fields of Human-Machine Interaction (HMI) and Augmented Reality (AR). Virtual elements (visual and audio) are added on stage, and the dancer is able to interact with them in real time, using different interaction techniques. The originality of this work is threefold. Firstly, we propose a set of movement-based interaction techniques that can be used independently, on stage or in another context. In this set, some techniques are direct, while others go through a high level of abstraction: we performed movement-based emotion recognition on the dancer, and used the recognized emotions to generate emotional music pieces and emotional poses for a humanoid robot. Secondly, those interaction techniques rely on various interconnected systems that can be reassembled. We hence propose an integrated, interactive system for augmenting a live performance, a context where system failure is not tolerated. The final system can be adapted to the artist's preferences. Finally, those systems were validated through an on-field experiment, the show itself, after which we gathered and analyzed the feedback from both the audience and the choreographer.