
    Robust Temporally Coherent Laplacian Protrusion Segmentation of 3D Articulated Bodies

    In motion analysis and understanding it is important to be able to fit a suitable model or structure to the temporal series of observed data, in order to describe motion patterns in a compact way and to discriminate between them. In an unsupervised context, i.e., when no prior model of the moving object(s) is available, such a structure has to be learned from the data in a bottom-up fashion. In recent times, volumetric approaches, in which the motion is captured from a number of cameras and a voxel-set representation of the body is built from the camera views, have gained ground due to attractive features such as inherent view-invariance and robustness to occlusions. Automatic, unsupervised segmentation of moving bodies along entire sequences, in a temporally coherent and robust way, has the potential to provide a means of constructing a bottom-up model of the moving body and of tracking motion cues that may later be exploited for motion classification. Spectral methods such as locally linear embedding (LLE) can be useful in this context, as they preserve the "protrusions" of articulated shapes, i.e., high-curvature regions of the 3D volume, while improving their separation in a lower-dimensional space, making them easier to cluster. In this paper we therefore propose a spectral approach to unsupervised and temporally coherent body-protrusion segmentation along time sequences. Volumetric shapes are clustered in an embedding space, clusters are propagated in time to ensure coherence, and merged or split to accommodate changes in the body's topology. Experiments on both synthetic and real sequences of dense voxel-set data are shown, supporting the ability of the proposed method to cluster body parts consistently over time in a totally unsupervised fashion, its robustness to sampling density and shape quality, and its potential for bottom-up model construction. (31 pages, 26 figures.)
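    The core pipeline described in this abstract, an LLE embedding followed by clustering in the embedded space, can be illustrated compactly. The sketch below is a minimal, hypothetical reconstruction, not the authors' code: it assumes scikit-learn, the voxel data, parameter values, and the use of k-means are illustrative choices, and the paper's temporal propagation and cluster split/merge steps are omitted.

    import numpy as np
    from sklearn.manifold import LocallyLinearEmbedding
    from sklearn.cluster import KMeans

    def segment_protrusions(voxels, n_clusters=5, n_neighbors=10, n_components=3):
        """voxels: (N, 3) array of occupied voxel centers for one frame."""
        # LLE unfolds the articulated shape, so high-curvature protrusions
        # (limbs, head) become better separated in the embedding space.
        lle = LocallyLinearEmbedding(n_neighbors=n_neighbors,
                                     n_components=n_components)
        embedded = lle.fit_transform(voxels)
        # Cluster in the embedding space rather than in raw 3D coordinates.
        return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embedded)

    # Toy usage: a random point cloud stands in for a dense voxel set.
    labels = segment_protrusions(np.random.rand(500, 3))

    Per-frame labelings like this would still need to be matched across frames (the paper's temporal-coherence step) before they constitute a body-part segmentation of a sequence.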

    Parametric Human Movements: Learning, Synthesis, Recognition, and Tracking


    Augmenting the Creation of 3D Character Motion By Learning from Video Data

    When it comes to character motion, and articulated character animation in particular, most of the effort is spent on accurately capturing low-level and high-level action styles. Among the many techniques that have evolved over the years, motion capture (mocap) and keyframe animation are the two most popular choices. Both can capture the low-level and high-level action styles of a particular individual, but at great expense in terms of human effort. In this thesis, we make use of performance data in video format to augment the process of character animation, considerably decreasing the human effort required for both style preservation and motion regeneration. Two new methods, one for high-level and one for low-level character animation, both based on learning from video data to augment the motion-creation process, constitute the major contribution of this research. In the first, we take advantage of recent advances in the field of action recognition to automatically recognize human actions from video data; high-level action patterns are learned and captured using hidden Markov models (HMMs) and used to generate action sequences with the same pattern. For low-level action style, we present a completely different approach that utilizes user-identified transition frames in a video to enhance transition construction in the standard motion-graph technique for creating smooth action sequences. Both methods have been implemented, and a number of results illustrating the concept and applicability of the proposed approach are presented.
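    To make the high-level method concrete, the sketch below shows a simplified, hypothetical version of the idea, not the thesis implementation: learn transition statistics over recognized action labels, then sample new sequences with the same pattern. A full HMM would add hidden states; a first-order Markov chain over the observed labels is enough to show the mechanics. The action labels and toy training sequences are invented for illustration.

    import numpy as np

    def learn_transitions(sequences, n_actions):
        """sequences: lists of integer action labels recognized from video."""
        counts = np.ones((n_actions, n_actions))  # add-one smoothing
        for seq in sequences:
            for a, b in zip(seq[:-1], seq[1:]):
                counts[a, b] += 1
        # Normalize each row into a transition probability distribution.
        return counts / counts.sum(axis=1, keepdims=True)

    def sample_sequence(trans, start, length, seed=None):
        rng = np.random.default_rng(seed)
        seq = [start]
        for _ in range(length - 1):
            seq.append(int(rng.choice(len(trans), p=trans[seq[-1]])))
        return seq

    # Toy usage with three hypothetical actions: 0=walk, 1=turn, 2=wave.
    trans = learn_transitions([[0, 0, 1, 0, 2], [0, 1, 1, 0, 0]], n_actions=3)
    print(sample_sequence(trans, start=0, length=8))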

    Vision-Based 2D and 3D Human Activity Recognition
