    Fast Optimal Joint Tracking-Registration for Multi-Sensor Systems

    Sensor fusion of multiple sources plays an important role in vehicular systems for achieving refined target position and velocity estimates. In this article, we address the general registration problem, a key module that lets a fusion system accurately correct systematic sensor errors. A fast maximum a posteriori (FMAP) algorithm for joint registration-tracking (JRT) is presented. The algorithm uses a recursive two-step optimization involving orthogonal factorization to ensure numerical stability. A statistical efficiency analysis based on Cramér-Rao lower bound theory shows the asymptotic optimality of FMAP. Also, Givens rotations are used to derive a fast implementation with complexity O(n), where n is the number of tracked targets. Simulations and experiments demonstrate the promise and effectiveness of FMAP.
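    The O(n) implementation in the abstract rests on Givens rotations, which zero out one matrix entry at a time using a cheap 2x2 rotation. As an illustrative stdlib-only sketch (not the paper's solver), here is the standard computation of such a rotation:

    ```python
    import math

    def givens(a, b):
        """Compute (c, s) of a Givens rotation such that
        [ c  s] [a]   [r]
        [-s  c] [b] = [0]
        i.e. the rotation annihilates the second component."""
        if b == 0.0:
            return 1.0, 0.0
        r = math.hypot(a, b)
        return a / r, b / r

    # Applying the rotation to the vector it was built from:
    c, s = givens(3.0, 4.0)
    r = c * 3.0 + s * 4.0    # rotated first component (the norm, 5.0)
    z = -s * 3.0 + c * 4.0   # second component, driven to zero
    ```

    Chaining such rotations is how orthogonal factorizations (e.g. QR updates) are maintained incrementally at low cost as new measurements arrive.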

    GASP: Geometric Association with Surface Patches

    A fundamental challenge to sensory processing tasks in perception and robotics is the problem of obtaining data associations across views. We present a robust solution for ascertaining potentially dense surface patch (superpixel) associations, requiring just range information. Our approach decomposes a view into regularized surface patches. We represent them as sequences that express geometry invariantly over their superpixel neighborhoods, as uniquely consistent partial orderings. We match these representations through an optimal sequence comparison metric based on the Damerau-Levenshtein distance, enabling robust association with quadratic complexity (in contrast to the hitherto employed joint matching formulations, which are NP-complete). The approach performs under wide baselines, heavy rotations, partial overlaps, significant occlusions and sensor noise. The technique does not require any priors -- motion or otherwise -- and does not make restrictive assumptions about scene structure or sensor movement. It does not require appearance, and is hence more widely applicable than appearance-reliant methods and invulnerable to related ambiguities such as textureless or aliased content. We present promising qualitative and quantitative results under diverse settings, along with comparisons against popular approaches based on range as well as RGB-D data.
    Comment: International Conference on 3D Vision, 201
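    The sequence comparison metric named above is the Damerau-Levenshtein distance. As a hedged illustration of why the matching is quadratic (not a reconstruction of GASP's exact metric), here is the common optimal-string-alignment variant, which counts insertions, deletions, substitutions, and adjacent transpositions:

    ```python
    def osa_distance(a, b):
        """Optimal string alignment variant of Damerau-Levenshtein distance.
        Runs in O(len(a) * len(b)) time via dynamic programming."""
        m, n = len(a), len(b)
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            d[i][0] = i                     # delete all of a[:i]
        for j in range(n + 1):
            d[0][j] = j                     # insert all of b[:j]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,        # deletion
                              d[i][j - 1] + 1,        # insertion
                              d[i - 1][j - 1] + cost) # substitution / match
                if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                    d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
        return d[m][n]

    osa_distance("abcd", "acbd")  # swapping adjacent "bc" costs one edit -> 1
    ```

    For GASP the "characters" would be elements of the patch sequences rather than letters, but the dynamic program has the same shape and the same quadratic cost.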

    A tracker alignment framework for augmented reality

    Get PDF
    To achieve accurate registration, the transformations that locate the tracking system components with respect to the environment must be known. These transformations relate the base of the tracking system to the virtual world and the tracking system's sensor to the graphics display. In this paper we present a unified, general calibration method for calculating these transformations. A user is asked to align the display with objects in the real world. Using this method, the sensor-to-display and tracker-base-to-world transformations can be determined with as few as three measurements.
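    Calibrations of this kind are, at their core, rigid-transform estimation problems. As an illustrative sketch (not the paper's method), the Kabsch algorithm recovers a rotation R and translation t in the least-squares sense from as few as three non-collinear point correspondences:

    ```python
    import numpy as np

    def kabsch(P, Q):
        """Find R, t minimizing sum ||R @ P_i + t - Q_i||^2 over
        corresponding point rows of P and Q (each shape (N, 3))."""
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)          # cross-covariance of centered sets
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        D = np.diag([1.0, 1.0, d])          # guard against reflections
        R = Vt.T @ D @ U.T
        t = cq - R @ cp
        return R, t

    # Example: recover a 90-degree rotation about z plus a translation
    # from three non-collinear correspondences.
    R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
    t_true = np.array([1., 2., 3.])
    P = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
    Q = P @ R_true.T + t_true
    R, t = kabsch(P, Q)
    ```

    The three-measurement minimum in the abstract matches the general fact that a rigid transform in 3D is determined by three non-collinear correspondences.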

    LiveCap: Real-time Human Performance Capture from Monocular Video

    We present the first real-time human performance capture approach that reconstructs dense, space-time coherent deforming geometry of entire humans in general everyday clothing from just a single RGB video. We propose a novel two-stage analysis-by-synthesis optimization whose formulation and implementation are designed for high performance. In the first stage, a skinned template model is jointly fitted to background-subtracted input video, 2D and 3D skeleton joint positions found using a deep neural network, and a set of sparse facial landmark detections. In the second stage, dense non-rigid 3D deformations of skin and even loose apparel are captured based on a novel real-time capable algorithm for non-rigid tracking using dense photometric and silhouette constraints. Our novel energy formulation leverages automatically identified material regions on the template to model the differing non-rigid deformation behavior of skin and apparel. The two resulting non-linear optimization problems are solved per frame with specially tailored data-parallel Gauss-Newton solvers. To achieve real-time performance of over 25 Hz, we design a pipelined parallel architecture using the CPU and two commodity GPUs. Our method is the first real-time monocular approach for full-body performance capture. It yields accuracy comparable to off-line performance capture techniques while being orders of magnitude faster.
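    The solvers named above are Gauss-Newton solvers for non-linear least squares. As a toy one-parameter illustration on synthetic data (nothing like LiveCap's data-parallel GPU solver, just the iteration itself), fitting the rate a in the model y = exp(a*x):

    ```python
    import math

    # Gauss-Newton for one scalar parameter: with residuals
    # r_i = y_i - exp(a * x_i) and Jacobian J_i = x_i * exp(a * x_i),
    # each step is  a <- a + (J^T r) / (J^T J).
    xs = [0.0, 0.5, 1.0, 1.5, 2.0]
    ys = [math.exp(0.5 * x) for x in xs]   # synthetic data, true a = 0.5

    a = 0.0                                # initial guess
    for _ in range(20):
        preds = [math.exp(a * x) for x in xs]
        r = [y - p for y, p in zip(ys, preds)]       # residuals
        J = [x * p for x, p in zip(xs, preds)]       # d(pred)/da
        a += sum(j * ri for j, ri in zip(J, r)) / sum(j * j for j in J)
    ```

    In a real capture system the parameter vector is large (pose and per-vertex deformation), so each step solves a normal-equations system J^T J dx = J^T r in parallel rather than a scalar division, but the structure of the iteration is the same.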