13 research outputs found

    A Comparison Framework for Walking Performances using aSpaces

    Get PDF
    In this paper, we address the analysis of human actions by comparing different performances of the same action executed by different actors. Specifically, we present a comparison procedure applied to the walking action, but the scheme can be applied to other actions, such as bending or running. To achieve fair comparison results, we define a novel human body model based on joint angles, which maximizes the differences between human postures and, moreover, reflects the anatomical structure of human beings. Subsequently, a human action space, called aSpace, is built in order to represent each performance (i.e., each predefined sequence of postures) as a parametric manifold. The final human action representation is called p-action, which is based on the most characteristic human body postures found during several walking performances. These postures are found automatically by means of a predefined distance function, and they are called key-frames. By using key-frames, we synchronize any performance with respect to the p-action. Furthermore, by considering an arc-length parameterization, independence from the speed at which performances are played is attained. As a result, the style of human walking can be successfully analysed by establishing the differences of the joints between male and female walkers.
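
    The abstract above mentions an arc-length parameterization that makes the comparison independent of playback speed. As a rough illustration of that single idea (not the authors' implementation; the function name and the use of plain Euclidean distances between joint-angle vectors are assumptions), a posture sequence can be resampled uniformly along its cumulative path length:

```python
import numpy as np

def resample_by_arc_length(poses, n_samples=100):
    """Resample a sequence of joint-angle vectors uniformly by arc length.

    poses: (T, D) array, one joint-angle vector per frame.
    Returns an (n_samples, D) array of postures spaced evenly along the
    cumulative path length of the trajectory, so the result no longer
    depends on the speed at which the performance was recorded.
    """
    poses = np.asarray(poses, dtype=float)
    step = np.linalg.norm(np.diff(poses, axis=0), axis=1)  # frame-to-frame distance
    s = np.concatenate(([0.0], np.cumsum(step)))           # cumulative arc length
    s /= s[-1]                                              # normalize to [0, 1]
    u = np.linspace(0.0, 1.0, n_samples)                    # uniform arc-length grid
    return np.stack(
        [np.interp(u, s, poses[:, d]) for d in range(poses.shape[1])], axis=1
    )
```

    With two performances resampled this way, per-joint differences (for example between a male and a female walker, as in the abstract) can be read off frame by frame without being distorted by differing walking speeds.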

    Automatic learning of 3D pose variability in walking performances for gait analysis

    Get PDF
    This paper proposes an action-specific model which automatically learns the variability of 3D human postures observed in a set of training sequences. First, a Dynamic Programming synchronization algorithm is presented in order to establish a mapping between postures from different walking cycles, so the whole training set can be synchronized to a common time pattern. Then, the model is trained using the public CMU motion capture dataset for the walking action, and a mean walking performance is automatically learnt. Additionally, statistics about the observed variability of the postures and motion direction are computed at each time step. As a result, in this work we have extended a similar action model successfully used for tracking, by providing facilities for gait analysis and gait recognition applications.
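
    The synchronization step described above is, in essence, a dynamic-programming alignment of posture sequences from different walking cycles. A minimal sketch of such an alignment (a generic DTW-style formulation, not necessarily the paper's exact algorithm; the function name and the Euclidean posture distance are assumptions):

```python
import numpy as np

def dtw_alignment(seq_a, seq_b):
    """Dynamic-programming (DTW-style) alignment of two posture sequences.

    seq_a: (Ta, D) and seq_b: (Tb, D) arrays of joint-angle vectors.
    Returns the list of (i, j) frame-index pairs along the minimum-cost
    warping path, i.e. a mapping between postures of the two cycles.
    """
    seq_a, seq_b = np.asarray(seq_a, float), np.asarray(seq_b, float)
    Ta, Tb = len(seq_a), len(seq_b)
    cost = np.linalg.norm(seq_a[:, None, :] - seq_b[None, :, :], axis=2)
    acc = np.full((Ta + 1, Tb + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]
            )
    # Backtrack the optimal warping path.
    path, i, j = [], Ta, Tb
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        k = int(np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]]))
        if k == 0:
            i, j = i - 1, j - 1
        elif k == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]
```

    Once every training cycle is mapped onto a common reference cycle this way, a per-time-step mean posture (the "mean walking performance") and the corresponding variability statistics can be computed directly over the aligned frames.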

    Principal Deformations Modes of Articulated Models for the Analysis of 3D Spine Deformities

    Get PDF
    Articulated models are commonly used for recognition tasks in robotics and in gait analysis, but can also be extremely useful to develop analytical methods targeting spinal deformities studies. The three-dimensional analysis of these deformities is critical since they are complex and not restricted to a given plane. Thus, they cannot be assessed as a two-dimensional phenomenon. However, analyzing large databases of 3D spine models is a difficult and time-consuming task. In this context, a method that automatically extracts the most important deformation modes from sets of articulated spine models is proposed. The spine was modeled with two levels of detail. In the first level, the global shape of the spine was expressed using a set of rigid transformations that superpose local coordinate systems of neighboring vertebrae. In the second level, anatomical landmarks measured with respect to a vertebra's local coordinate system were used to quantify vertebra shape. These articulated spine models do not naturally belong to a vector space because of the vertebral rotations. The Fréchet mean, which is a generalization of the conventional mean to Riemannian manifolds, was thus used to compute the mean spine shape. Moreover, a generalized covariance computed in the tangent space of the Fréchet mean was used to construct a statistical shape model of the scoliotic spine. The principal deformation modes were then extracted by performing a principal component analysis (PCA) on the generalized covariance matrix. The principal deformation modes were computed for a large database of untreated scoliotic patients. The obtained results indicate that combining rotation, translation and local vertebra shape into a unified framework leads to an effective and meaningful analysis method for articulated anatomical structures. The computed deformation modes also revealed clinically relevant information. For instance, the first mode of deformation is associated with patients' growth, the second is a double thoraco-lumbar curve and the third is a thoracic curve. Other experiments were performed on patients classified by orthopedists with respect to a widely used two-dimensional surgical planning system (the Lenke classification) and patterns relevant to the definition of a new three-dimensional classification were identified. Finally, relationships between local vertebra shapes and global spine shape (such as vertebra wedging) were demonstrated using a sample of 3D spine reconstructions with 14 anatomical landmarks per vertebra.
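
    To make the Fréchet-mean and tangent-space PCA idea concrete, here is a small sketch for the rotational part of a single intervertebral transform (using SciPy's Rotation type; the paper's method operates on full articulated spine models, so the names and the reduced scope here are illustrative assumptions):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def frechet_mean_rotations(rotations, n_iter=20, tol=1e-10):
    """Iterative Fréchet (Karcher) mean of a set of 3D rotations.

    Each iteration maps the samples to the tangent space at the current
    estimate (log map), averages them there, and maps the average back
    to the rotation manifold (exp map).
    """
    mean = rotations[0]
    for _ in range(n_iter):
        tangent = np.array([(mean.inv() * r).as_rotvec() for r in rotations])
        update = tangent.mean(axis=0)
        if np.linalg.norm(update) < tol:
            break
        mean = mean * R.from_rotvec(update)   # exp map back onto the manifold
    return mean

def principal_deformation_modes(rotations, mean):
    """PCA on the generalized covariance of log-mapped residuals at the mean."""
    X = np.array([(mean.inv() * r).as_rotvec() for r in rotations])
    cov = X.T @ X / len(X)                    # residuals are centered at the Fréchet mean
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]
    return eigvals[order], eigvecs[:, order]  # modes sorted by explained variance
```

    In the paper, the same construction is applied to the whole chain of intervertebral transforms together with the local vertebra landmarks, so each eigenvector of the generalized covariance describes a deformation mode of the entire spine rather than of a single joint.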

    Automatic Video-based Analysis of Human Motion

    Get PDF

    Vision-Based 2D and 3D Human Activity Recognition

    Get PDF

    On-line locomotion synthesis for virtual humans

    Get PDF
    Ever since the development of Computer Graphics in the industrial and academic worlds in the seventies, public knowledge and expertise have grown in a tremendous way, notably because of the increasing fascination with Computer Animation. This specific field of Computer Graphics gathers numerous techniques, especially for the animation of characters or virtual humans in movies and video games. To create such high-fidelity animations, particular interest has been dedicated to motion capture, a technology which makes it possible to record the 3D movement of a live performer. The realism of the resulting motion is convincing. However, this technique offers little control to animators, as the recorded motion can only be played back. Recently, many advances based on motion capture have been published, concerning slight but precise modifications of an original motion or the parameterization of large motion databases. The challenge consists in combining motion realism with an intuitive on-line motion control, while preserving real-time performance. In the first part of this thesis, we would like to add a brick to the wall of motion parameterization techniques based on motion capture, by introducing a generic motion model for locomotion and jump activities. For this purpose, we simplify the motion representation using a statistical method in order to facilitate the elaboration of an efficient parametric model. This model is structured in hierarchical levels, allowing an intuitive motion synthesis with high-level parameters. In addition, we present a space and time normalization process to adapt our model to characters of various sizes. In the second part, we integrate this motion model into an animation engine, thus allowing for the generation of a continuous stream of motion for virtual humans. We provide two additional tools to improve the flexibility of our engine. Based on the concept of motion anticipation, we first introduce an on-line method for detecting and enforcing foot-ground constraints. Hence, a straight-line walking motion can be smoothly modified to a curved one. Secondly, we propose an approach for the automatic and coherent synthesis of transitions between locomotion and jump motions (in both directions), by taking into account their respective properties. Finally, we consider the interaction of a virtual human with its environment. Given initial and final conditions set on the locomotion speed and foot positions, we propose a method which computes the corresponding trajectory. To illustrate this method, we propose a case study which mirrors as closely as possible the behavior of a human confronted with an obstacle: at any time, obstacles may be interactively created in front of a moving virtual human. Our method computes a trajectory allowing the virtual human to precisely jump over the obstacle in an on-line manner.
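
    The abstract describes simplifying the motion representation with "a statistical method" and normalizing clips in time before building the parametric model. As an illustration only (PCA is my assumption for that statistical step, and the function names are invented for this sketch), one way such a compression could look:

```python
import numpy as np

def time_normalize(clip, n_frames=60):
    """Resample a motion clip (T, D) to a fixed number of frames."""
    clip = np.asarray(clip, dtype=float)
    t_src = np.linspace(0.0, 1.0, len(clip))
    t_dst = np.linspace(0.0, 1.0, n_frames)
    return np.stack(
        [np.interp(t_dst, t_src, clip[:, d]) for d in range(clip.shape[1])], axis=1
    )

def fit_motion_pca(clips, n_components=8):
    """Compress a set of time-normalized clips into a few coefficients each.

    Every clip is flattened into one long vector; an SVD-based PCA keeps
    the dominant modes of variation, giving a compact parametric space in
    which new motions can be synthesized by varying the coefficients.
    """
    X = np.stack([time_normalize(c).ravel() for c in clips])
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:n_components]          # principal motion modes
    coeffs = Xc @ basis.T              # low-dimensional coordinates per clip
    return mean, basis, coeffs
```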

    2001-2002 Graduate Catalog

    Get PDF