114 research outputs found

    Efficient illumination independent appearance-based face tracking

    One of the major challenges that visual tracking algorithms face nowadays is coping with changes in the appearance of the target during tracking. Linear subspace models have been extensively studied and are possibly the most popular way of modelling target appearance. We introduce a linear subspace representation in which the appearance of a face is represented by the addition of two approximately independent linear subspaces modelling facial expressions and illumination respectively. This model is more compact than previous bilinear or multilinear approaches. The independence assumption notably simplifies system training: we only require two image sequences, one in which a single facial expression is subject to all possible illuminations, and another in which the face adopts all facial expressions under one particular illumination. This simple model enables us to train the system with no manual intervention. We also revisit the problem of efficiently fitting a linear subspace-based model to a target image and introduce an additive procedure for solving this problem. We prove that Matthews and Baker’s Inverse Compositional Approach makes a smoothness assumption on the subspace basis that is equivalent to Hager and Belhumeur’s, which worsens convergence. Our approach differs from Hager and Belhumeur’s additive and Matthews and Baker’s compositional approaches in that we make no smoothness assumptions on the subspace basis. In the experiments conducted we show that the model introduced accurately represents the appearance variations caused by illumination changes and facial expressions. We also verify experimentally that our fitting procedure is more accurate and has a better convergence rate than the other related approaches, albeit at the expense of a slight increase in computational cost. Our approach can be used to track a human face at standard video frame rates on an average personal computer.
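
    As a rough illustration of the additive appearance model described above, the following Python sketch reconstructs a face image as a mean plus independent expression and illumination contributions and recovers both coefficient vectors with a joint least-squares fit. The basis matrices, dimensions and fitting step are hypothetical stand-ins; the paper's own additive fitting procedure and the motion parameters it estimates are not reproduced here.

        import numpy as np

        # Hypothetical dimensions: d pixels per vectorized face image,
        # k_e expression basis vectors, k_i illumination basis vectors.
        d, k_e, k_i = 4096, 10, 6

        rng = np.random.default_rng(0)
        mean_face = rng.standard_normal(d)
        B_expr = np.linalg.qr(rng.standard_normal((d, k_e)))[0]   # expression subspace basis
        B_illum = np.linalg.qr(rng.standard_normal((d, k_i)))[0]  # illumination subspace basis

        def synthesize(c_expr, c_illum):
            # Appearance = mean + independent expression and illumination contributions.
            return mean_face + B_expr @ c_expr + B_illum @ c_illum

        def fit(target):
            # Joint least-squares fit of both coefficient vectors to a target image.
            B = np.hstack([B_expr, B_illum])
            c, *_ = np.linalg.lstsq(B, target - mean_face, rcond=None)
            return c[:k_e], c[k_e:]

        # Round trip on a synthetic target: both coefficient vectors are recovered.
        c_e, c_i = rng.standard_normal(k_e), rng.standard_normal(k_i)
        c_e_hat, c_i_hat = fit(synthesize(c_e, c_i))
        print(np.allclose(c_e, c_e_hat), np.allclose(c_i, c_i_hat))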

    Robust direct vision-based pose tracking using normalized mutual information

    This paper presents a novel visual tracking approach that combines the normalized mutual information (NMI) metric and the traditional sum-of-squared-differences (SSD) metric within a gradient-based optimization framework, which can be used for direct visual odometry and SLAM. We first derive closed-form expressions for the first- and second-order analytical NMI derivatives under the assumption of rigid-body transformations, which can then be used by subsequent Newton-like optimization methods. We then develop a robust tracking scheme that exploits the robustness of the NMI metric while keeping the optimization characteristics of SSD-based Lucas-Kanade (LK) tracking methods. To validate the robustness and accuracy of the proposed approach, several experiments are performed on synthetic datasets as well as real image datasets. The experimental results demonstrate that our approach provides fast, accurate pose estimation and obtains better tracking performance than standard SSD-based methods in most cases. © 2018 SPIE
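
    The Python sketch below shows how a normalized mutual information score of the kind used in such trackers can be computed from a joint intensity histogram, alongside a plain SSD score for comparison. The use of Studholme's (H(X)+H(Y))/H(X,Y) form, the bin count and the value range are assumptions; the paper's analytical NMI derivatives and its NMI/SSD blending scheme are not reproduced.

        import numpy as np

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log(p))

        def nmi(img_ref, img_cur, bins=32):
            # Joint intensity histogram of two equal-sized 8-bit patches.
            joint, _, _ = np.histogram2d(img_ref.ravel(), img_cur.ravel(),
                                         bins=bins, range=[[0, 256], [0, 256]])
            p_joint = joint / joint.sum()
            p_ref, p_cur = p_joint.sum(axis=1), p_joint.sum(axis=0)
            # Normalized mutual information: (H(X) + H(Y)) / H(X, Y).
            return (entropy(p_ref) + entropy(p_cur)) / entropy(p_joint)

        def ssd(img_ref, img_cur):
            r = (img_ref.astype(np.float64) - img_cur.astype(np.float64)).ravel()
            return r @ r

        # NMI is unchanged by an intensity reversal of one patch, while SSD is not,
        # which illustrates the robustness the paper exploits.
        rng = np.random.default_rng(0)
        ref = rng.integers(0, 256, (64, 64))
        print(nmi(ref, ref), nmi(ref, 255 - ref), ssd(ref, 255 - ref))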

    RGBDTAM: A Cost-Effective and Accurate RGB-D Tracking and Mapping System

    Simultaneous Localization and Mapping using RGB-D cameras has been a fertile research topic in the last decade, due to the suitability of such sensors for indoor robotics. In this paper we propose a direct RGB-D SLAM algorithm with state-of-the-art accuracy and robustness at a low cost. Our experiments on the TUM RGB-D dataset [34] show better accuracy and robustness, in real time on a CPU, than direct RGB-D SLAM systems that make use of the GPU. The key ingredients of our approach are twofold. First, the combination of a semi-dense photometric error and a dense geometric error for pose tracking (see Figure 1), which we demonstrate to be the most accurate alternative. Second, a model of the multi-view constraints and their errors in the mapping and tracking threads, which adds extra information over other approaches. We release the open-source implementation of our approach 1 . The reader is referred to a video with our results 2 for a more illustrative visualization of its performance.
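
    The sketch below illustrates, under simplifying assumptions, the kind of combined cost such a tracker minimizes: a photometric term over reprojected semi-dense points plus a geometric term comparing predicted and measured depth. The function names, the nearest-pixel lookup and the fixed weights are illustrative only; the actual system minimizes this type of cost over the camera pose with robust weighting and multi-view constraints, which are omitted here.

        import numpy as np

        def project(K, points_cam):
            # Pinhole projection of 3-D camera-frame points to pixel coordinates.
            uvw = (K @ points_cam.T).T
            return uvw[:, :2] / uvw[:, 2:3]

        def combined_cost(T, points_ref, intens_ref, img_cur, depth_cur, K,
                          w_photo=1.0, w_geom=1.0):
            # points_ref: Nx3 back-projected high-gradient (semi-dense) points from the
            # reference keyframe; intens_ref: their reference intensities.
            R, t = T[:3, :3], T[:3, 3]
            p_cam = points_ref @ R.T + t                  # reference points in the current frame
            front = p_cam[:, 2] > 0                       # keep points in front of the camera
            p_cam, i_ref = p_cam[front], intens_ref[front]
            uv = np.round(project(K, p_cam)).astype(int)  # nearest-pixel lookup for brevity
            u, v = uv[:, 0], uv[:, 1]
            ok = (u >= 0) & (u < img_cur.shape[1]) & (v >= 0) & (v < img_cur.shape[0])
            u, v = u[ok], v[ok]
            # Photometric residual: intensity constancy of the reprojected points.
            r_photo = img_cur[v, u].astype(np.float64) - i_ref[ok]
            # Geometric residual: predicted point depth vs. depth measured in the current frame.
            r_geom = depth_cur[v, u] - p_cam[ok, 2]
            return w_photo * (r_photo @ r_photo) + w_geom * (r_geom @ r_geom)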

    Distribution Fields with Adaptive Kernels for Large Displacement Image Alignment


    Keyframe-based monocular SLAM: design, survey, and future directions

    Extensive research in the field of monocular SLAM over the past fifteen years has yielded workable systems that have found their way into various applications in robotics and augmented reality. Although filter-based monocular SLAM systems were common for some time, the more efficient keyframe-based solutions are becoming the de facto methodology for building a monocular SLAM system. The objective of this paper is threefold: first, the paper serves as a guideline for people seeking to design their own monocular SLAM according to specific environmental constraints. Second, it presents a survey that covers the various keyframe-based monocular SLAM systems in the literature, detailing the components of their implementation and critically assessing the specific strategies adopted in each proposed solution. Third, the paper provides insight into the direction of future research in this field, to address the major limitations still facing monocular SLAM, namely the issues of illumination changes, initialization, highly dynamic motion, poorly textured scenes, repetitive textures, map maintenance, and failure recovery.

    Digital Image Stabilization


    Sparse variational regularization for visual motion estimation

    The computation of visual motion is a key component in numerous computer vision tasks such as object detection, visual object tracking and activity recognition. Despite extensive research effort, efficient handling of motion discontinuities, occlusions and illumination changes still remains elusive in visual motion estimation. The work presented in this thesis utilizes variational methods to handle the aforementioned problems because these methods allow the integration of various mathematical concepts into a single energy minimization framework. This thesis applies concepts from signal sparsity to the variational regularization for visual motion estimation. The regularization is designed in such a way that it handles motion discontinuities and can detect object occlusions.
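
    As a generic illustration of a variational energy with a sparsity-promoting regularizer, the Python sketch below evaluates a TV-L1-style optical-flow energy: an L1 brightness-constancy data term plus an L1 penalty on the flow gradients, which preserves motion discontinuities instead of smearing them. This is a textbook stand-in under assumed names and weights, not the specific sparse regularizer or occlusion model developed in the thesis.

        import numpy as np

        def grad(f):
            # Forward-difference spatial gradients of a 2-D field.
            gx = np.diff(f, axis=1, append=f[:, -1:])
            gy = np.diff(f, axis=0, append=f[-1:, :])
            return gx, gy

        def flow_energy(u, v, Ix, Iy, It, lam=0.1):
            # Data term: linearized brightness-constancy residual, penalized in L1.
            data = np.abs(Ix * u + Iy * v + It).sum()
            # Regularizer: L1 (total-variation-like) penalty on the flow gradients,
            # which keeps motion boundaries sharp.
            ux, uy = grad(u)
            vx, vy = grad(v)
            reg = np.abs(ux).sum() + np.abs(uy).sum() + np.abs(vx).sum() + np.abs(vy).sum()
            return data + lam * reg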