    Human motion reconstruction using wearable accelerometers

    We address the problem of capturing human motion in scenarios where a traditional optical motion capture system is impractical. Such scenarios are commonplace: in large spaces, outdoors, or at competitive sporting events, the limitations of optical systems become apparent, namely the small physical area in which motion can be captured and the lack of robustness to lighting changes and occlusions. In this paper, we advocate the use of body-worn wireless accelerometers for reconstructing human motion, and to this end we outline a system that is more portable than traditional optical motion capture systems whilst producing naturalistic motion. Additionally, if information on the person's root position is available, an extended version of our algorithm can use it to correct positional drift.
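    The drift problem arises because double-integrating acceleration makes any sensor bias grow quadratically in position. The sketch below is a generic illustration of that correction step, not the paper's algorithm; it assumes gravity-compensated, world-frame acceleration and sparse known root positions (both hypothetical inputs), and removes drift by linear interpolation between the fixes.

        import numpy as np

        def integrate_acceleration(acc, dt):
            """Naively double-integrate world-frame acceleration.

            acc: (T, 3) accelerometer readings with gravity already removed
            dt:  sampling interval in seconds
            """
            vel = np.cumsum(acc * dt, axis=0)   # first integration: velocity
            return np.cumsum(vel * dt, axis=0)  # second integration: position

        def correct_drift(pos, anchors):
            """Remove accumulated drift using sparse root-position fixes.

            anchors: dict mapping frame index -> known 3D root position;
            drift between consecutive anchors is assumed to grow linearly.
            """
            frames = sorted(anchors)
            corrected = pos.copy()
            for a, b in zip(frames[:-1], frames[1:]):
                err_a = anchors[a] - pos[a]
                err_b = anchors[b] - pos[b]
                t = np.linspace(0.0, 1.0, b - a + 1)[:, None]
                corrected[a:b + 1] += (1 - t) * err_a + t * err_b
            return corrected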

    Visuo-vestibular interaction in the reconstruction of travelled trajectories

    We recently published a study of the reconstruction of passively travelled trajectories from optic flow. Perception was prone to illusions in a number of conditions, and not always veridical in the others. Some of the illusory reconstructed trajectories could be explained by assuming that subjects base their reconstruction on the ego-motion percept built during the stimulus's initial moments. In the current paper, we test this hypothesis using a novel paradigm: if the final reconstruction is governed by the initial percept, then providing additional, extra-retinal information that modifies the initial percept should predictably alter the final reconstruction. The extra-retinal stimulus was tuned to supplement the information that was under-represented or ambiguous in the optic flow: the subjects were physically displaced or rotated at the onset of the visual stimulus. A highly asymmetric velocity profile (high acceleration, very low deceleration) was used. Subjects were required to guide an input device (in the form of a model vehicle; we measured position and orientation) along the perceived trajectory. We show for the first time that a vestibular stimulus of short duration can influence the perception of a much longer-lasting visual stimulus. Perception of the ego-motion translation component in the visual stimulus was improved by a linear physical displacement; perception of the ego-motion rotation component, by a physical rotation. This led to a more veridical reconstruction in some conditions, but to a less veridical reconstruction in others.

    GraMMaR: Ground-aware Motion Model for 3D Human Motion Reconstruction

    Demystifying complex human-ground interactions is essential for accurate and realistic 3D human motion reconstruction from RGB videos, as it ensures consistency between the humans and the ground plane. Prior methods have modeled human-ground interactions either implicitly or in a sparse manner, often resulting in unrealistic and incorrect motions when faced with noise and uncertainty. In contrast, our approach explicitly represents these interactions in a dense and continuous manner. To this end, we propose a novel Ground-aware Motion Model for 3D Human Motion Reconstruction, named GraMMaR, which jointly learns the distribution of transitions in both the pose and the interaction between every joint and the ground plane at each time step of a motion sequence. It is trained to explicitly promote consistency between the motion and the change in distance towards the ground. After training, we establish a joint optimization strategy that utilizes GraMMaR as a dual prior, regularizing the optimization towards the space of plausible ground-aware motions. This leads to realistic and coherent motion reconstruction, irrespective of the assumed or learned ground plane. Through extensive evaluation on the AMASS and AIST++ datasets, our model demonstrates good generalization and discriminating abilities in challenging cases, including complex and ambiguous human-ground interactions. The code will be available at https://github.com/xymsh/GraMMaR. (Accepted to ACM Multimedia 2023.)
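    The dense joint-to-ground representation can be made concrete with two generic hand-crafted penalties. This is a minimal sketch, not the learned GraMMaR prior or its released code; the function name, inputs, threshold, and weights are all illustrative assumptions, with ground-penetration and foot-skating terms standing in for the learned transition distribution.

        import numpy as np

        def ground_aware_penalty(joints, plane_normal, plane_offset,
                                 contact_eps=0.02, w_pen=1.0, w_skate=1.0):
            """Generic ground-aware regularizers for a motion sequence.

            joints:       (T, J, 3) joint positions over T frames
            plane_normal: (3,) normal of the ground plane
            plane_offset: scalar d so the plane is {x : n.x + d = 0}
            """
            n = plane_normal / np.linalg.norm(plane_normal)
            dist = joints @ n + plane_offset   # (T, J) signed distances

            # 1) Penetration: joints should never go below the ground plane.
            penetration = np.mean(np.minimum(dist, 0.0) ** 2)

            # 2) Foot skating: joints within contact_eps of the ground
            #    should not slide between consecutive frames.
            in_contact = dist[:-1] < contact_eps                       # (T-1, J)
            slide = np.linalg.norm(joints[1:] - joints[:-1], axis=-1)  # (T-1, J)
            skating = np.mean((slide * in_contact) ** 2)

            return w_pen * penetration + w_skate * skating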

    Three dimensional transparent structure segmentation and multiple 3D motion estimation from monocular perspective image sequences

    A three dimensional scene can be segmented using different cues, such as boundaries, texture, motion, discontinuities of the optical flow, stereo, models for structure, etc. We investigate segmentation based upon one of these cues, namely three dimensional motion. If the scene contains transparent objects, the two dimensional (local) cues are inconsistent, since neighboring points with similar optical flow can correspond to different objects. We present a method for performing three dimensional motion-based segmentation of (possibly) transparent scenes, together with recursive estimation of the motion of each independent rigid object from monocular perspective images. Our algorithm is based on a recently proposed method for rigid motion reconstruction and a validation test which allows us to initialize the scheme and detect outliers during the motion estimation procedure. The scheme is tested on challenging real and synthetic image sequences. Segmentation is performed for Ullman's experiment of two transparent cylinders rotating about the same axis in opposite directions.
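    The assign-and-validate step can be sketched generically: fit candidate rigid motions, label each tracked point with the motion that best predicts it, and reject points that no hypothesis explains. The code below is not the paper's recursive estimator; for simplicity it assumes known 3D correspondences (the paper works from monocular perspective images), and the residual threshold and hypothesis setup are illustrative assumptions.

        import numpy as np

        def assign_points_to_motions(pts_t0, pts_t1, motions, reject_thresh=0.05):
            """Label each tracked point with the rigid motion that best predicts it.

            pts_t0, pts_t1: (N, 3) corresponding 3D points at two time instants
            motions:        list of (R, t) rigid-motion hypotheses
            Returns labels in {0, ..., len(motions) - 1}, or -1 for points no
            hypothesis explains within reject_thresh (the validation test).
            """
            residuals = []
            for R, t in motions:
                pred = pts_t0 @ R.T + t   # apply the hypothesis to frame 0
                residuals.append(np.linalg.norm(pred - pts_t1, axis=1))
            residuals = np.stack(residuals, axis=1)   # (N, num_motions)
            labels = np.argmin(residuals, axis=1)
            best = residuals[np.arange(len(labels)), labels]
            labels[best > reject_thresh] = -1         # outlier rejection
            return labels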