
    Computer analysis of objects’ movement in image sequences: methods and applications

    Computer analysis of objects’ movement in image sequences is a very complex problem, as it usually involves tasks of automatic detection, matching, tracking, motion analysis and deformation estimation. In spite of its complexity, this computational analysis has a wide range of important applications; for instance, in surveillance systems, clinical analysis of human gait, object recognition, pose estimation and deformation analysis. Given the breadth of these applications, several difficulties arise, such as the simultaneous tracking of multiple objects, their possible temporary occlusion or definitive disappearance from the image scene, changes in the viewpoints considered during image acquisition or in the illumination conditions, or even nonrigid deformations that objects may undergo in image sequences. In this paper, we present an overview of several methods that may be considered to analyze objects’ movement; namely, for their segmentation, tracking and matching in images, and for estimation of the deformation involved between images. This paper was partially done in the scope of the project “Segmentation, Tracking and Motion Analysis of Deformable (2D/3D) Objects using Physical Principles”, with reference POSC/EEA-SRI/55386/2004, financially supported by FCT - Fundação para a Ciência e a Tecnologia, Portugal. The fourth, fifth and seventh authors would also like to thank the support of their PhD grants from FCT, with references SFRH/BD/29012/2006, SFRH/BD/28817/2006 and SFRH/BD/12834/2003, respectively.

    Image processing and analysis : applications and trends

    The computational analysis of images is challenging, as it usually involves tasks such as segmentation, extraction of representative features, matching, alignment, tracking, motion analysis, deformation estimation, and 3D reconstruction. Carrying out each of these tasks in a fully automatic, efficient and robust manner is generally demanding. The quality of the input images plays a crucial role in the success of any image analysis task: the higher their quality, the easier and simpler the tasks are. Hence, suitable methods of image processing, such as noise removal, geometric correction, edge and contrast enhancement or illumination correction, are required. Despite the challenges, computational methods of image processing and analysis are suitable for a wide range of applications. In this paper, the methods that we have developed for processing and analyzing objects in images are introduced. Furthermore, their use in applications ranging from medicine and biomechanics to engineering and materials science is presented.

    Data Fusion for Vision-Based Robotic Platform Navigation

    Data fusion has become an active research topic in recent years. Growing computational performance has allowed the use of redundant sensors to measure a single phenomenon. While Bayesian fusion approaches are common in general applications, the computer vision community has largely set this approach aside. Most object-following algorithms have moved towards pure machine-learning fusion techniques that tend to lack flexibility. Consequently, a more general data fusion scheme is needed. The motivation for this work is to propose methods that allow for the development of simple, cost-effective, yet robust visual following robots capable of tracking a general object with limited restrictions on target characteristics. With that purpose in mind, a hierarchical adaptive Bayesian fusion approach is proposed that outperforms individual trackers by using redundant measurements. The adaptive framework is achieved by relying on each measurement's local statistics and a global softened majority voting. Several approaches for robots that can follow targets have been proposed in recent years; however, many require several expensive sensors, and often most of the image processing and other calculations are performed independently. In the proposed approach, objects are detected by several state-of-the-art vision-based tracking algorithms, which are then used within a Bayesian framework to filter and fuse the measurements and generate the robot control commands. Target scale variations and, in one of the platforms, a time-of-flight (ToF) depth camera are used to determine the relative distance between the target and the robotic platforms. The algorithms execute in real time (approximately 30 fps). The proposed approaches were validated in a simulated application and on several robotic platforms: a stationary pan-tilt system, a small unmanned aerial vehicle, and a ground robot with a Jetson TK1 embedded computer. Experiments were conducted with different target objects to validate the system in scenarios including occlusions and varied illumination conditions, and to show how the data fusion improves the overall robustness of the system.
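    The abstract's hierarchical adaptive scheme (local statistics plus softened majority voting) is not spelled out here, but its Bayesian core can be illustrated with the standard fusion rule for redundant measurements: precision-weighted averaging of independent Gaussian observations. The function name and sensor values below are hypothetical, chosen only to sketch the idea of fusing several trackers' reports of one quantity.

```python
import numpy as np

def fuse_measurements(measurements, variances):
    """Fuse redundant scalar measurements of one quantity by
    inverse-variance (precision) weighting, the Bayesian fusion
    rule for independent Gaussian observations."""
    m = np.asarray(measurements, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                       # precision of each tracker
    fused = np.sum(w * m) / np.sum(w) # precision-weighted mean
    fused_var = 1.0 / np.sum(w)       # fused estimate is more certain
    return fused, fused_var

# Three trackers report the target's x-position; the noisier third
# tracker (variance 2.0) contributes less to the fused estimate.
x, var = fuse_measurements([10.2, 9.8, 10.5], [0.5, 0.5, 2.0])
```

    Note that the fused variance is smaller than any individual tracker's variance, which is why redundant measurements can outperform the best single tracker.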

    Outlier-robust Kalman filtering through generalised Bayes

    We derive a novel, provably robust, closed-form Bayesian update rule for online filtering in state-space models in the presence of outliers and misspecified measurement models. Our method combines generalised Bayesian inference with filtering methods such as the extended and ensemble Kalman filters; we use the former to show robustness and the latter to ensure computational efficiency for nonlinear models. Our method matches or outperforms other robust filtering methods (such as those based on variational Bayes) at a much lower computational cost. We show this empirically on a range of filtering problems with outlier measurements, such as object tracking, state estimation in high-dimensional chaotic systems, and online learning of neural networks.
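    The generalised-Bayes update itself is not reproduced in this abstract, but the problem it addresses can be illustrated with a simpler, commonly used stand-in: a Kalman measurement update that inflates the measurement-noise covariance when the normalized innovation is implausibly large, so an outlier barely moves the state. All names and values below are illustrative, not the paper's method.

```python
import numpy as np

def robust_kalman_update(x, P, z, H, R, threshold=3.0):
    """One Kalman measurement update with a simple outlier guard:
    if the Mahalanobis distance of the innovation exceeds the
    threshold, R is inflated so the update down-weights the
    measurement. (Illustrative stand-in for a generalised-Bayes rule.)"""
    y = z - H @ x                            # innovation
    S = H @ P @ H.T + R                      # innovation covariance
    d2 = float(y.T @ np.linalg.inv(S) @ y)   # squared Mahalanobis distance
    if d2 > threshold ** 2:                  # likely outlier: inflate R
        R = R * (d2 / threshold ** 2)
        S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# 1-D position track hit by a gross outlier measurement (z = 50
# while the state believes it is near 0): the guarded update moves
# the state only slightly, where a standard update would jump to ~45.
x = np.array([[0.0]]); P = np.array([[1.0]])
H = np.array([[1.0]]); R = np.array([[0.1]])
x, P = robust_kalman_update(x, P, np.array([[50.0]]), H, R)
```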