1,292 research outputs found

    Automated Markerless Extraction of Walking People Using Deformable Contour Models

    No full text
    We develop a new automated markerless motion capture system for the analysis of walking people. We employ global evidence gathering techniques guided by biomechanical analysis to robustly extract articulated motion. This forms a basis for new deformable contour models, using local image cues to capture shape and motion at a more detailed level. We extend the greedy snake formulation to include temporal constraints and occlusion modelling, increasing the capability of this technique when dealing with cluttered and self-occluding extraction targets. This approach is evaluated on a large database of indoor and outdoor video data, demonstrating fast and autonomous motion capture for walking people.
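
    The greedy snake extension described above can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the 8-neighbourhood search and the energy weights (`alpha`, `beta`, `gamma`, and the temporal weight `tau`) are assumptions, and occlusion modelling is omitted.

```python
import numpy as np

def greedy_snake_step(contour, prev_contour, edge_map,
                      alpha=1.0, beta=1.0, gamma=1.0, tau=0.1):
    """One greedy-snake iteration. Each contour point moves to the
    neighbouring pixel minimising continuity + curvature + image
    energy, plus a temporal term tying it to the previous frame."""
    n = len(contour)
    new = contour.copy()
    # mean spacing between consecutive points (with wrap-around)
    diffs = np.diff(contour, axis=0, append=contour[:1])
    mean_spacing = np.mean(np.linalg.norm(diffs, axis=1))
    h, w = edge_map.shape
    for i in range(n):
        best, best_e = contour[i], np.inf
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                p = contour[i] + np.array([dy, dx])
                if not (0 <= p[0] < h and 0 <= p[1] < w):
                    continue
                e_cont = alpha * (np.linalg.norm(p - new[i - 1]) - mean_spacing) ** 2
                e_curv = beta * np.sum((new[i - 1] - 2 * p + contour[(i + 1) % n]) ** 2)
                e_img = -gamma * edge_map[p[0], p[1]]           # strong edges attract
                e_temp = tau * np.sum((p - prev_contour[i]) ** 2)  # temporal constraint
                e = e_cont + e_curv + e_img + e_temp
                if e < best_e:
                    best_e, best = e, p
        new[i] = best
    return new
```

    The temporal term is what distinguishes this from a plain greedy snake: it penalises points that stray far from their position in the previous frame, which stabilises the contour on a walking subject.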

    Object Segmentation from Motion Discontinuities and Temporal Occlusions–A Biologically Inspired Model

    Get PDF
    BACKGROUND: Optic flow is an important cue for object detection. Humans are able to perceive objects in a scene using only kinetic boundaries, and can perform the task even when other shape cues are not provided. These kinetic boundaries are characterized by the presence of motion discontinuities in a local neighbourhood. In addition, temporal occlusions appear along the boundaries as the object in front covers the background and the objects that are spatially behind it. METHODOLOGY/PRINCIPAL FINDINGS: From a technical point of view, the detection of motion boundaries for segmentation based on optic flow is a difficult task. This is due to the problem that flow detected along such boundaries is generally not reliable. We propose a model derived from mechanisms found in visual areas V1, MT, and MSTl of human and primate cortex that achieves robust detection along motion boundaries. It includes two separate mechanisms for the detection of motion discontinuities and of occlusion regions, based on how neurons respond to spatial and temporal contrast, respectively. The mechanisms are embedded in a biologically inspired architecture that integrates information from different model components of visual processing via feedback connections. In particular, mutual interactions between the detection of motion discontinuities and temporal occlusions allow a considerable improvement of the kinetic boundary detection. CONCLUSIONS/SIGNIFICANCE: A new model is proposed that uses optic flow cues to detect motion discontinuities and object occlusion. We suggest that by combining these results for motion discontinuities and object occlusion, object segmentation within the model can be improved. This idea could also be applied in other models for object segmentation. In addition, we discuss how this model is related to neurophysiological findings. The model was successfully tested both with artificial and real sequences including self and object motion.
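
    A kinetic boundary lies wherever the optic flow changes sharply in a local neighbourhood. The model above detects this with cortical mechanisms for spatial flow contrast; a crude algorithmic analogue, assuming a dense flow field and a hand-picked threshold (both assumptions, not from the paper), is:

```python
import numpy as np

def motion_boundaries(flow, thresh=0.25):
    """Mark motion discontinuities as the pixels where the spatial
    contrast (gradient magnitude) of a flow field (H, W, 2) is large."""
    gy_u, gx_u = np.gradient(flow[..., 0])   # spatial derivatives of u
    gy_v, gx_v = np.gradient(flow[..., 1])   # spatial derivatives of v
    contrast = np.sqrt(gy_u**2 + gx_u**2 + gy_v**2 + gx_v**2)
    return contrast > thresh
```

    Such a detector illustrates exactly the weakness the paper addresses: flow estimates along the boundary itself are unreliable, which is why the model adds a second, occlusion-based mechanism and feedback between the two.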

    A graphical model based solution to the facial feature point tracking problem

    Get PDF
    In this paper a facial feature point tracker that is motivated by applications such as human-computer interfaces and facial expression analysis systems is proposed. The proposed tracker is based on a graphical model framework. The facial features are tracked through video streams by incorporating statistical relations in time as well as spatial relations between feature points. By exploiting the spatial relationships between feature points, the proposed method provides robustness in real-world conditions such as arbitrary head movements and occlusions. A Gabor feature-based occlusion detector is developed and used to handle occlusions. The performance of the proposed tracker has been evaluated on real video data under various conditions including occluded facial gestures and head movements. It is also compared to two popular methods, one based on Kalman filtering exploiting temporal relations, and the other based on active appearance models (AAM). Improvements provided by the proposed approach are demonstrated through both visual displays and quantitative analysis.
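
    The idea behind a Gabor feature-based occlusion detector can be approximated as follows: build a signature of Gabor response magnitudes at the tracked point, and declare occlusion when the signature's similarity to a reference drops. The kernel parameters, orientation bank, and cosine-similarity threshold below are assumptions for illustration, not the paper's values.

```python
import numpy as np

def gabor_kernel(size=9, theta=0.0, lam=4.0, sigma=2.0):
    """Complex Gabor kernel: a complex sinusoid under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return env * np.exp(2j * np.pi * xr / lam)

def gabor_signature(patch, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Response magnitudes of a patch (same size as the kernels)."""
    return np.array([abs(np.sum(patch * gabor_kernel(patch.shape[0], t)))
                     for t in thetas])

def is_occluded(sig_ref, sig_now, sim_thresh=0.8, eps=1e-9):
    """Declare occlusion when the cosine similarity of the current
    Gabor signature to the reference drops below a threshold."""
    sim = np.dot(sig_ref, sig_now) / (
        np.linalg.norm(sig_ref) * np.linalg.norm(sig_now) + eps)
    return sim < sim_thresh
```

    When a point is flagged as occluded, a tracker of this kind can fall back on the graphical model's spatial relations to neighbouring points instead of the corrupted local appearance.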

    Reconstructing depth from spatiotemporal curves

    Get PDF
    We present a novel approach for 3D reconstruction based on multiple video frames taken from a static scene. Our solution emerges from the spatiotemporal analysis of video frames. The method is based on a best fitting scheme for spatiotemporal depth curves, which allows us to compute 3D world coordinates of the objects within the scene. As opposed to a large number of current methods, our technique deals with random camera movements in a transparent way, and even performs better in these cases than with pure translation. Robustness against occlusion and aliasing is inherent to the method as well.
    Fundação para a Ciência e a Tecnologia – PRAXIS XXI/BD/20322/99
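
    For the special case of pure lateral camera translation at known speed (a simplification; the abstract stresses that the method handles arbitrary camera movement), a static point traces a straight spatiotemporal line whose slope is inversely proportional to depth, so a best-fit line recovers depth directly. The focal length `f` and camera speed `v` below are assumed known.

```python
import numpy as np

def depth_from_track(xs, ts, f, v):
    """Fit x(t) = x0 - (f*v/Z)*t to an image-column trajectory and
    solve for the depth Z. Valid only for pure lateral translation
    at known speed v and focal length f."""
    slope, _ = np.polyfit(ts, xs, 1)   # least-squares line fit
    return -f * v / slope
```

    The least-squares fit over many frames is also what gives this family of methods its robustness: an occluded or aliased observation in a few frames perturbs the fitted slope only slightly.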

    Exploiting motion-related visual cues for interpreting the content of video sequences

    Get PDF
    Interpreting the content of video sequences is one of the main research areas in computer vision. To enrich the information provided by visual cues specific to a single image, one can draw on cues derived from motion between images. This motion can be caused by a change in the orientation or position of the acquisition system, by the movement of objects in the scene, and by many other factors. I focused on two phenomena arising from motion in video sequences. First, the motion caused by the camera, and how it can be interpreted through a combination of the apparent motion between images and the displacement of vanishing points in those images. Second, the detection and classification of occlusion, which is caused by motion in a complex scene, using a geometric model in the spatiotemporal volume. These two works are presented through two articles submitted for publication in scientific journals.
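
    A much simplified flavour of camera-motion interpretation from apparent motion (the thesis additionally uses vanishing-point displacement, which is not modelled here) is to compare the uniform component of a dense flow field against its divergence: a pan produces a large uniform component, a zoom a large divergence. The two-way label and thresholding rule are assumptions for illustration.

```python
import numpy as np

def classify_camera_motion(flow):
    """Crude camera-motion label from a dense flow field (H, W, 2):
    a pan gives a dominant uniform component, a zoom a dominant divergence."""
    mean_flow = np.linalg.norm(flow.mean(axis=(0, 1)))
    # divergence = du/dx + dv/dy
    div = np.gradient(flow[..., 0], axis=1) + np.gradient(flow[..., 1], axis=0)
    mean_div = abs(div.mean())
    return 'pan' if mean_flow > mean_div else 'zoom'
```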

    Multigranularity Representations for Human Inter-Actions: Pose, Motion and Intention

    Get PDF
    Tracking people and their body pose in videos is a central problem in computer vision. Standard tracking representations reason about temporal coherence of detected people and body parts. They have difficulty tracking targets under partial occlusions or rare body poses, where detectors often fail, since the number of training examples is often too small to cover the exponential variability of such configurations. We propose tracking representations that track and segment people and their body pose in videos by exploiting information at multiple detection and segmentation granularities when available: whole body, parts, or point trajectories. Detections and motion estimates provide contradictory information in the case of false-alarm detections or leaking motion affinities. We consolidate contradictory information via graph steering, an algorithm for simultaneous detection and co-clustering in a two-granularity graph of motion trajectories and detections, which corrects motion leakage between correctly detected objects while being robust to false alarms or spatially inaccurate detections. We first present a motion segmentation framework that exploits the long-range motion of point trajectories and the large spatial support of image regions. We show that the resulting video segments adapt to targets under partial occlusions and deformations. Second, we augment motion-based representations with object detection to deal with motion leakage. We demonstrate how to combine dense optical-flow trajectory affinities with repulsions from confident detections to reach a global consensus of detection and tracking in crowded scenes. Third, we study human motion and pose estimation. We segment hard-to-detect, fast-moving body limbs from their surrounding clutter and match them against pose exemplars to detect body pose under fast motion. We employ on-the-fly human body kinematics to improve tracking of body joints under wide deformations. We use motion segmentability of body parts to re-rank a set of body-joint candidate trajectories and jointly infer multi-frame body pose and video segmentation. We show empirically that such a multi-granularity tracking representation is worthwhile, obtaining significantly more accurate multi-object tracking and detailed body pose estimation on popular datasets.
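
    Graph steering itself jointly detects and co-clusters in a two-granularity graph; as a toy analogue of one ingredient, trajectories can be split into objects from a signed affinity matrix in which motion similarity attracts and confident, conflicting detections repel. Everything here (the two-way split, the eigenvector rule) is an illustrative assumption, not the thesis algorithm.

```python
import numpy as np

def split_trajectories(signed_affinity):
    """Two-way grouping of trajectories from a symmetric signed affinity
    matrix: positive entries attract (similar motion), negative entries
    repel (e.g. covered by different confident detections). Group by the
    sign of the leading eigenvector."""
    vals, vecs = np.linalg.eigh(signed_affinity)
    lead = vecs[:, np.argmax(vals)]   # eigenvector of the largest eigenvalue
    return lead > 0
```

    The repulsion entries are what stop "motion leakage": two trajectories with similar flow but contradictory detection evidence end up on opposite sides of the split.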

    Recovering metric properties of objects through spatiotemporal interpolation

    Get PDF
    Spatiotemporal interpolation (STI) refers to perception of complete objects from fragmentary information across gaps in both space and time. It differs from static interpolation in that requirements for interpolation are not met in any static frame. It has been found that STI produced objective performance advantages in a shape discrimination paradigm for both illusory and occluded objects when contours met conditions of spatiotemporal relatability. Here we report psychophysical studies testing whether spatiotemporal interpolation allows recovery of metric properties of objects. Observers viewed virtual triangles specified only by sequential partial occlusions of background elements by their vertices (the STI condition) and made forced choice judgments of the object’s size relative to a reference standard. We found that length could often be accurately recovered for conditions where fragments were relatable and formed illusory triangles. In the first control condition, three moving dots located at the vertices provided the same spatial and timing information as the virtual object in the STI condition but did not induce perception of interpolated contours or a coherent object. In the second control condition oriented line segments were added to the dots and mid-points between the dots in a way that did not induce perception of interpolated contours. Control stimuli did not lead to accurate size judgments. We conclude that spatiotemporal interpolation can produce representations, from fragmentary information, of metric properties in addition to shape.

    Vision-based techniques for gait recognition

    Full text link
    Global security concerns have driven a proliferation of video surveillance devices. Intelligent surveillance systems seek to discover possible threats automatically and raise alerts. Being able to identify the surveyed object can help determine its threat level. The current generation of devices provides digital video data to be analysed for time-varying features to assist in the identification process. Commonly, people queue up to access a facility and approach a video camera in full frontal view. In this environment, a variety of biometrics are available: for example, gait, which includes temporal features like stride period. Gait can be measured unobtrusively at a distance. The video data will also include face features, which are short-range biometrics. In this way, one can combine biometrics naturally using one set of data. In this paper we survey current techniques of gait recognition and modelling, together with the environment in which the research was conducted. We also discuss in detail the issues arising from deriving gait data, such as perspective and occlusion effects, together with the associated computer vision challenges of reliable tracking of human movement. Then, after highlighting these issues and challenges related to gait processing, we proceed to discuss frameworks combining gait with other biometrics. We then provide motivations for a novel paradigm in biometrics-based human recognition, i.e. the use of the fronto-normal view of gait as a far-range biometric combined with biometrics operating at a near distance.
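
    As an example of the temporal gait features mentioned above, stride period can be estimated from a per-frame scalar signal, such as silhouette width, via its autocorrelation peak. This sketch assumes the width signal has already been extracted from tracked silhouettes, and the minimum-lag guard is an assumption.

```python
import numpy as np

def stride_period(signal, fps, min_lag=5):
    """Estimate the dominant period (in seconds) of a per-frame gait
    signal (e.g. silhouette width) from its autocorrelation peak."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                                  # remove DC offset
    ac = np.correlate(x, x, mode='full')[len(x) - 1:]  # lags 0..N-1
    # skip small lags so the zero-lag peak is not picked
    lag = min_lag + np.argmax(ac[min_lag:len(x) // 2])
    return lag / fps
```

    Because it only needs a coarse periodic signal rather than precise limb localisation, this kind of feature is measurable at the distances where face biometrics fail, which is the complementarity the survey argues for.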