
    Physical simulation for monocular 3D model based tracking

    The problem of model-based object tracking in three dimensions is addressed. Most previous work on tracking assumes simple motion models, and consequently tracking typically fails in a variety of situations. Our insight is that incorporating physics models of object behaviour improves tracking performance in these cases. In particular, it allows us to handle tracking in the face of rigid-body interactions where there is also occlusion and fast object motion. We show how to incorporate rigid-body physics simulation into a particle filter, and present two methods for this based on pose noise and force noise. The improvements are tested on four videos of a robot pushing an object, and the results indicate that our approach performs considerably better than a plain particle filter tracker, with the force-noise method producing the best results over the range of test videos.
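    To make the two noise-injection variants concrete, here is a minimal sketch of the prediction and weighting steps of such a physics-augmented particle filter (assuming a generic `physics_step` rigid-body simulator and an application-specific `observation_likelihood`; both are hypothetical placeholders, not the paper's actual components):

    ```python
    import numpy as np

    def predict_particles(particles, forces, dt, physics_step,
                          force_noise_std=0.5, pose_noise_std=0.0):
        """Propagate tracking particles through a rigid-body physics simulator.

        particles    : (N, 6) array of object poses (position + orientation parameters)
        forces       : (N, 3) array of nominal contact/push forces, one per particle
        physics_step : callable(pose, force, dt) -> new pose (hypothetical simulator hook)
        """
        new_particles = np.empty_like(particles)
        for i, (pose, force) in enumerate(zip(particles, forces)):
            # Force-noise variant: perturb the force that drives the simulation.
            noisy_force = force + np.random.normal(0.0, force_noise_std, size=force.shape)
            pose = physics_step(pose, noisy_force, dt)
            # Pose-noise variant: perturb the simulated pose directly.
            pose = pose + np.random.normal(0.0, pose_noise_std, size=pose.shape)
            new_particles[i] = pose
        return new_particles

    def update_weights(particles, frame, observation_likelihood):
        """Re-weight particles against the current video frame."""
        weights = np.array([observation_likelihood(p, frame) for p in particles])
        weights += 1e-12  # guard against all-zero weights
        return weights / weights.sum()
    ```

    Setting `force_noise_std` to zero and `pose_noise_std` to a positive value corresponds to the pose-noise variant; the reverse corresponds to the force-noise variant, which the abstract reports as performing best.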

    Integration of the 3D Environment for UAV Onboard Visual Object Tracking

    Single visual object tracking from an unmanned aerial vehicle (UAV) poses fundamental challenges such as object occlusion, small-scale objects, background clutter, and abrupt camera motion. To tackle these difficulties, we propose to integrate the 3D structure of the observed scene into a detection-by-tracking algorithm. We introduce a pipeline that combines a model-free visual object tracker, a sparse 3D reconstruction, and a state estimator. The 3D reconstruction of the scene is computed with an image-based Structure-from-Motion (SfM) component that enables us to leverage a state estimator in the corresponding 3D scene during tracking. By representing the position of the target in 3D space rather than in image space, we stabilize the tracking during ego-motion and improve the handling of occlusions, background clutter, and small-scale objects. We evaluated our approach on prototypical image sequences, captured from a UAV with low-altitude oblique views. For this purpose, we adapted an existing dataset for visual object tracking and reconstructed the observed scene in 3D. The experimental results demonstrate that the proposed approach outperforms methods using plain visual cues as well as approaches leveraging image-space-based state estimations. We believe that our approach can be beneficial for traffic monitoring, video surveillance, and navigation. Comment: Accepted in MDPI Journal of Applied Science.
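    As a rough sketch of the state-estimation component (not the authors' implementation; lifting the 2D tracker output into the SfM reconstruction is assumed to be available), a constant-velocity Kalman filter over the target's world-frame 3D position illustrates why filtering in 3D rather than in image space is more robust to ego-motion and short occlusions:

    ```python
    import numpy as np

    class ConstantVelocity3DKalman:
        """Constant-velocity Kalman filter over a target's 3D world-frame position.

        State x = [px, py, pz, vx, vy, vz]; measurements are 3D points obtained by
        lifting the 2D tracker output into the SfM reconstruction (assumed given).
        """

        def __init__(self, dt=1.0 / 30.0, process_std=1.0, meas_std=0.5):
            self.x = np.zeros(6)
            self.P = np.eye(6) * 10.0                  # large initial uncertainty
            self.F = np.eye(6)
            self.F[:3, 3:] = np.eye(3) * dt            # position += velocity * dt
            self.Q = np.eye(6) * process_std ** 2      # crude process noise
            self.H = np.hstack([np.eye(3), np.zeros((3, 3))])
            self.R = np.eye(3) * meas_std ** 2

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.x[:3]                          # predicted 3D position

        def update(self, z):
            """z: lifted 3D measurement; skip this call while the target is occluded."""
            y = z - self.H @ self.x
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(6) - K @ self.H) @ self.P
    ```

    During occlusion or tracker failure only `predict()` would be run, and the predicted 3D position can be re-projected into the current camera view to re-initialize the 2D tracker once the target reappears.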

    Image processing and analysis: applications and trends

    The computational analysis of images is challenging, as it usually involves tasks such as segmentation, extraction of representative features, matching, alignment, tracking, motion analysis, deformation estimation, and 3D reconstruction. Carrying out each of these tasks in a fully automatic, efficient and robust manner is generally demanding. The quality of the input images plays a crucial role in the success of any image analysis task: the higher their quality, the easier and simpler the tasks are. Hence, suitable image processing methods such as noise removal, geometric correction, edge and contrast enhancement or illumination correction are required. Despite the challenges, computational methods of image processing and analysis are suitable for a wide range of applications. In this paper, the methods that we have developed for processing and analyzing objects in images are introduced. Furthermore, their use in applications from medicine and biomechanics to engineering and materials sciences is presented.

    Streaming Monte Carlo Pose Estimation for Autonomous Object Modeling

    This work contributes the optimization of a streaming pose estimation particle filter and its integration into an autonomous object modeling approach. The particle filter is improved by an additional pose optimization in the particle weighting step. Integrating the method into the autonomous object modeling approach enables the repositioning of objects, which is often necessary in order to acquire complete models. Experiments show that using iterative closest point alone is too restrictive for general transformations. The Monte Carlo method used enables robust pose estimation with high precision and without a loss of speed. Furthermore, it is shown that the overall modeling results are clearly improved.
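    A rough illustration of "pose optimization in the particle weighting step" (hypothetical helper names, not the paper's code): before a particle is weighted, its pose is locally refined against the current sensor data, so the weight reflects the best pose in the particle's neighbourhood rather than the raw sampled one:

    ```python
    import numpy as np

    def weight_with_refinement(particles, scan_points, refine_pose, score_pose,
                               refine_iters=5):
        """Weight pose particles after a local refinement (hypothetical helpers).

        particles   : list of candidate object poses
        scan_points : current sensor point cloud
        refine_pose : callable(pose, scan_points, iters) -> refined pose,
                      e.g. a few damped Gauss-Newton or ICP-like steps
        score_pose  : callable(pose, scan_points) -> non-negative alignment score
        """
        refined, weights = [], []
        for pose in particles:
            pose_star = refine_pose(pose, scan_points, refine_iters)  # local pose optimization
            refined.append(pose_star)
            weights.append(score_pose(pose_star, scan_points))
        weights = np.asarray(weights, dtype=float) + 1e-12            # avoid all-zero weights
        return refined, weights / weights.sum()
    ```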

    Computer analysis of objects’ movement in image sequences: methods and applications

    Computer analysis of objects’ movement in image sequences is a very complex problem, considering that it usually involves tasks for automatic detection, matching, tracking, motion analysis and deformation estimation. In spite of its complexity, this computational analysis has a wide range of important applications; for instance, in surveillance systems, clinical analysis of human gait, object recognition, pose estimation and deformation analysis. Owing to the breadth of these purposes, several difficulties arise, such as the simultaneous tracking of multiple objects, their possible temporary occlusion or definitive disappearance from the image scene, changes of the viewpoints considered in image acquisition or of the illumination conditions, or even the non-rigid deformations that objects may undergo in image sequences. In this paper, we present an overview of several methods that may be considered to analyze objects’ movement; namely, for their segmentation, tracking and matching in images, and for estimation of the deformation involved between images. This paper was partially done in the scope of the project “Segmentation, Tracking and Motion Analysis of Deformable (2D/3D) Objects using Physical Principles”, with reference POSC/EEA-SRI/55386/2004, financially supported by FCT - Fundação para a Ciência e a Tecnologia, Portugal. The fourth, fifth and seventh authors would also like to acknowledge the support of their PhD grants from FCT, with references SFRH/BD/29012/2006, SFRH/BD/28817/2006 and SFRH/BD/12834/2003, respectively.

    Occlusion-Robust MVO: Multimotion Estimation Through Occlusion Via Motion Closure

    Visual motion estimation is an integral and well-studied challenge in autonomous navigation. Recent work has focused on addressing multimotion estimation, which is especially challenging in highly dynamic environments. Such environments not only comprise multiple, complex motions but also tend to exhibit significant occlusion. Previous work in object tracking focuses on maintaining the integrity of object tracks but usually relies on specific appearance-based descriptors or constrained motion models. These approaches are very effective in specific applications but do not generalize to the full multimotion estimation problem. This paper presents a pipeline for estimating multiple motions, including the camera egomotion, in the presence of occlusions. This approach uses an expressive motion prior to estimate the SE(3) trajectory of every motion in the scene, even during temporary occlusions, and identify the reappearance of motions through motion closure. The performance of this occlusion-robust multimotion visual odometry (MVO) pipeline is evaluated on real-world data and the Oxford Multimotion Dataset. Comment: To appear at the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). An earlier version of this work first appeared at the Long-term Human Motion Planning Workshop (ICRA 2019). 8 pages, 5 figures. Video available at https://www.youtube.com/watch?v=o_N71AA6FR
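    A simplified sketch of the extrapolate-and-reassociate idea behind motion closure (illustrative only; the actual pipeline uses a richer continuous-time motion prior and closure criterion): during an occlusion, each motion's SE(3) pose is extrapolated under a constant body-frame velocity, and a newly appearing track is re-associated with an occluded motion when its pose agrees with the extrapolation within a tolerance:

    ```python
    import numpy as np
    from scipy.linalg import expm, logm

    def body_velocity(T_prev, T_curr, dt):
        """Estimate a constant body-frame velocity (se(3) twist matrix) from two poses."""
        return logm(np.linalg.inv(T_prev) @ T_curr).real / dt

    def extrapolate(T_curr, xi_hat, dt, steps):
        """Extrapolate an SE(3) pose through an occlusion under a constant-velocity prior."""
        poses, T = [], T_curr
        for _ in range(steps):
            T = T @ expm(xi_hat * dt)
            poses.append(T)
        return poses

    def motion_closure(T_predicted, T_candidate, trans_tol=0.5, rot_tol=0.3):
        """Associate a reappearing motion with an occluded one if the predicted and
        observed poses agree within translation (m) and rotation (rad) tolerances."""
        E = np.linalg.inv(T_predicted) @ T_candidate
        trans_err = np.linalg.norm(E[:3, 3])
        rot_err = np.arccos(np.clip((np.trace(E[:3, :3]) - 1.0) / 2.0, -1.0, 1.0))
        return trans_err < trans_tol and rot_err < rot_tol
    ```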