4 research outputs found

    Parametric improvement of lateral interaction in accumulative computation in motion-based segmentation

    Get PDF
    Segmentation of moving objects is an essential component of any vision system. However, it is hard to accomplish due to challenges such as handling occlusions or detecting objects with deformable appearance. In this paper an artificial neural network approach for moving object segmentation, called lateral interaction in accumulative computation (LIAC), which uses accumulative computation and recurrent lateral interaction, is revisited. Although the results reported for this approach so far may be considered relevant, the particulars of each problem faced (environment, objects of interest, etc.) cause the system outcome to vary. Hence, our aim is to improve the segmentation provided by LIAC in two ways: by removing detected objects that do not satisfy size or compactness constraints, and by learning, through a genetic algorithm, parameters that improve the segmentation behavior.
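
    The post-processing step described in the abstract can be pictured as a connected-component filter followed by a parameter search. The sketch below illustrates only the filtering part, under stated assumptions: it uses NumPy/SciPy rather than the authors' LIAC implementation, and the thresholds and the compactness measure (4*pi*area / perimeter^2) are hypothetical choices, not values from the paper.

```python
"""Minimal sketch of a size/compactness post-filter for a binary
segmentation mask.  MIN_AREA and MIN_COMPACTNESS are hypothetical
thresholds; the paper learns its parameters with a genetic algorithm."""
import numpy as np
from scipy import ndimage

MIN_AREA = 50          # assumed minimum object size in pixels
MIN_COMPACTNESS = 0.2  # assumed lower bound, 1.0 = perfect disc

def filter_segmentation(mask):
    """Keep only connected components that satisfy both constraints."""
    labels, n = ndimage.label(mask)
    keep = np.zeros_like(mask, dtype=bool)
    for i in range(1, n + 1):
        blob = labels == i
        area = blob.sum()
        # perimeter estimate: object pixels touching the background
        eroded = ndimage.binary_erosion(blob)
        perimeter = (blob & ~eroded).sum()
        compactness = 4.0 * np.pi * area / max(perimeter, 1) ** 2
        if area >= MIN_AREA and compactness >= MIN_COMPACTNESS:
            keep |= blob
    return keep
```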

    A procedure for automatically estimating model parameters in optical motion capture

    No full text
    Model-based optical motion capture systems require knowledge of the position of the markers relative to the underlying skeleton, the lengths of the skeleton's limbs, and which limb each marker is attached to. These model parameters are typically assumed and entered into the system manually, although techniques exist for calculating some of them, such as the position of the markers relative to the skeleton's joints. We present a fully automatic procedure for determining these model parameters. It tracks the 2D positions of the markers on the cameras' image planes and determines which markers lie on each limb before calculating the position of the underlying skeleton. The only assumption is that the skeleton consists of rigid limbs connected with ball joints. The proposed system is demonstrated on a number of real data examples and is shown to calculate good estimates of the model parameters in each. © 2004 Elsevier B.V. All rights reserved.
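
    The key assumption named in the abstract is that the skeleton consists of rigid limbs, so markers attached to the same limb keep nearly constant mutual distances over time. The sketch below illustrates only that grouping idea; it is not the authors' procedure, it assumes 3D marker trajectories rather than the 2D image-plane tracks used in the paper, and the variance threshold and the greedy grouping are hypothetical choices.

```python
"""Illustrative grouping of markers by the rigid-limb assumption:
pairwise distances between markers on the same limb barely vary."""
import numpy as np

def group_markers_by_rigidity(traj, std_threshold=0.01):
    """traj: array of shape (frames, markers, 3).  Greedily groups
    markers whose mutual distances have a small standard deviation."""
    _, n_markers, _ = traj.shape
    # pairwise distances per frame, then their std across frames
    d = np.linalg.norm(traj[:, :, None, :] - traj[:, None, :, :], axis=-1)
    dist_std = d.std(axis=0)                 # (markers, markers)
    rigid = dist_std < std_threshold         # "same rigid limb" adjacency
    groups, unassigned = [], set(range(n_markers))
    while unassigned:
        seed = unassigned.pop()
        limb = {seed} | {m for m in unassigned if rigid[seed, m]}
        unassigned -= limb
        groups.append(sorted(limb))
    return groups
```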

    Developing a 3D multi-body simulation tool to study dynamic behaviour of human scoliosis

    Get PDF
    Ph.D. (Doctor of Philosophy)

    Using biomechanical constraints to improve video-based motion capture

    Get PDF
    In motion capture applications that aim to recover human body postures from various inputs, the high dimensionality of the problem makes it desirable to reduce the size of the search space by eliminating a priori impossible configurations. This can be done by constraining the posture recovery process in various ways. Most recent work in this area has focused on applying camera viewpoint-related constraints to eliminate erroneous solutions. When camera calibration parameters are available, they provide an extremely efficient tool for disambiguating not only posture estimation but also 3D reconstruction and data segmentation. Enforcing such constraints indeed increases robustness, which we demonstrate in the context of an optical motion capture framework. Our contribution in this respect is to apply such constraints consistently to each main step of the motion capture process, namely marker reconstruction and segmentation followed by posture recovery; these steps are made interdependent, each one constraining the other. A more application-independent approach is to encode constraints directly within the human body model, such as limits on the rotational joints. Since this subject has been largely unexplored, our efforts were mainly directed at devising a new method for measuring, representing and applying such joint limits. To date, the few existing range-of-motion boundary representations have severe drawbacks that call for an alternative formulation. The joint limits paradigm we propose not only overcomes these drawbacks but also captures intra- and inter-joint rotation dependencies, which are essential to a realistic representation of joint motion. The range-of-motion boundary is defined by an implicit surface whose analytical expression enables us to readily establish whether a given joint rotation is valid. Furthermore, its continuous and differentiable nature provides a means of elegantly incorporating such a constraint within an optimisation process for posture recovery. Applying constrained optimisation to our body model and to stereo data extracted from video sequences, we demonstrate a clear decrease in posture estimation errors. As a bonus, we have integrated our joint limits representation into character animation packages to show how motion can be naturally constrained in this manner.
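
    The abstract describes the range-of-motion boundary as an implicit, differentiable surface f(q) = 0: a joint rotation q is valid when f(q) <= 0, and the same function can act as a constraint during posture optimisation. The sketch below illustrates that mechanism with a stand-in ellipsoidal boundary and scipy.optimize; the radii, the cost function and the solver choice are assumptions for illustration, not the representation developed in the thesis.

```python
"""Sketch of implicit-surface joint limits used both as a validity test
and as an inequality constraint in posture optimisation."""
import numpy as np
from scipy.optimize import minimize

RADII = np.radians([60.0, 45.0, 30.0])   # hypothetical per-axis joint limits

def boundary(q):
    """Implicit surface value: <= 0 inside the valid range of motion."""
    return np.sum((np.asarray(q) / RADII) ** 2) - 1.0

def is_valid(q):
    return boundary(q) <= 0.0

def recover_posture(q_init, data_cost):
    """Minimise a data term subject to the joint-limit constraint."""
    cons = {"type": "ineq", "fun": lambda q: -boundary(q)}   # -f(q) >= 0
    return minimize(data_cost, q_init, constraints=cons).x

# usage sketch: pull a 3-DOF joint toward an out-of-range target rotation;
# the recovered rotation is projected back inside the valid region
target = np.radians([70.0, 10.0, 5.0])
q = recover_posture(np.zeros(3), lambda q: np.sum((q - target) ** 2))
print(is_valid(q))
```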