34 research outputs found

    Markerless structure-based multi-sensor calibration for free viewpoint video capture

    Free-viewpoint capture technologies have recently started to demonstrate impressive results. Capturing human performances in full 3D is a promising technology for a variety of applications. However, the capturing infrastructure is usually expensive to set up and requires trained personnel. In this work we focus on one practical aspect of setting up a free-viewpoint capturing system: the spatial alignment of the sensors. Our work aims to simplify the external calibration process, which typically requires significant human intervention and technical knowledge. Our method uses an easy-to-assemble structure and, unlike similar works, does not rely on markers or features. Instead, we exploit a priori knowledge of the structure's geometry to establish correspondences for the minimally overlapping viewpoints typically found in free-viewpoint capture setups. These correspondences establish an initial sparse alignment that is then densely optimized. At the same time, our pipeline improves robustness to assembly errors, allowing non-technical users to calibrate multi-sensor setups. Our results showcase the feasibility of our approach, which can make the tedious calibration process easier and less error-prone.
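    The sparse-then-dense pipeline described above ultimately reduces to estimating a rigid transform between corresponding 3D points seen by pairs of sensors. A minimal sketch of that core step, assuming correspondences are already established, is the standard Kabsch (SVD-based) least-squares alignment; the function name `kabsch` and the NumPy usage here are illustrative, not the paper's actual implementation:

    ```python
    import numpy as np

    def kabsch(src, dst):
        """Rigid transform (R, t) minimizing ||R @ src_i + t - dst_i||^2.

        src, dst: (N, 3) arrays of corresponding 3D points."""
        cs, cd = src.mean(axis=0), dst.mean(axis=0)
        # Cross-covariance of the centered point sets.
        H = (src - cs).T @ (dst - cd)
        U, _, Vt = np.linalg.svd(H)
        # Reflection guard: force det(R) = +1 so R is a proper rotation.
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cd - R @ cs
        return R, t
    ```

    In a multi-sensor setup this would be run per sensor pair to obtain the initial sparse alignment, which a dense refinement stage (e.g. ICP-style optimization) then improves.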

    Intelligent Sensors for Human Motion Analysis

    The book "Intelligent Sensors for Human Motion Analysis" contains 17 articles published in the Special Issue of the Sensors journal. These articles deal with many aspects of the analysis of human movement. New techniques and methods for pose estimation, gait recognition, and fall detection have been proposed and verified. Some of them will trigger further research, and some may become the backbone of commercial systems.

    MoCapDeform: Monocular 3D Human Motion Capture in Deformable Scenes

    3D human motion capture from monocular RGB images respecting interactions of a subject with complex and possibly deformable environments is a very challenging, ill-posed and under-explored problem. Existing methods address it only weakly and do not model the surface deformations that often occur when humans interact with scene surfaces. In contrast, this paper proposes MoCapDeform, a new framework for monocular 3D human motion capture that is the first to explicitly model non-rigid deformations of a 3D scene for improved 3D human pose estimation and deformable environment reconstruction. MoCapDeform accepts a monocular RGB video and a 3D scene mesh aligned in the camera space. It first localises a subject in the input monocular video along with dense contact labels using a new raycasting-based strategy. Next, our human-environment interaction constraints are leveraged to jointly optimise global 3D human poses and non-rigid surface deformations. MoCapDeform achieves higher accuracy than competing methods on several datasets, including our newly recorded one with deforming background scenes.
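    The raycasting-based contact labelling mentioned above relies on intersecting rays (e.g. from body points toward the scene mesh) with mesh triangles. A minimal sketch of that geometric primitive, using the standard Möller–Trumbore ray/triangle test in NumPy (the function name and API are illustrative, not MoCapDeform's actual strategy):

    ```python
    import numpy as np

    def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
        """Möller–Trumbore intersection test.

        Returns the distance t along the ray to the hit point,
        or None if the ray misses the triangle (v0, v1, v2)."""
        e1, e2 = v1 - v0, v2 - v0
        p = np.cross(direction, e2)
        det = e1 @ p
        if abs(det) < eps:
            return None              # ray parallel to triangle plane
        inv = 1.0 / det
        s = origin - v0
        u = (s @ p) * inv            # first barycentric coordinate
        if u < 0.0 or u > 1.0:
            return None
        q = np.cross(s, e1)
        v = (direction @ q) * inv    # second barycentric coordinate
        if v < 0.0 or u + v > 1.0:
            return None
        t = (e2 @ q) * inv
        return t if t > eps else None
    ```

    Thresholding such hit distances against a small contact tolerance is one plausible way to derive binary contact labels for body points near scene surfaces.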

    Deformable Objects for Virtual Environments


    The Acquisition, Modelling and Estimation of Canine 3D Shape and Pose
