
    Object Tracking from Unstabilized Platforms by Particle Filtering with Embedded Camera Ego Motion

    Visual tracking with moving cameras is a challenging task. The global motion induced by the moving camera pushes the target object outside the search area expected from the object dynamics. The typical approach is to use a registration algorithm to compensate for the camera motion. However, in situations involving several moving objects and backgrounds strongly affected by the aperture problem, image registration quality may be very low, dramatically degrading tracking performance. In this work, a novel approach is proposed to successfully tackle tracking with moving cameras in complex situations involving several independently moving objects. The key idea is to compute several hypotheses for the camera motion, instead of deterministically estimating only one. These hypotheses are combined with the object dynamics in a particle filter framework to predict the most probable object locations. Each hypothetical object location is then evaluated by the measurement model using a spatiogram, a region descriptor based on color and spatial distributions. Experimental results show that the proposed strategy accurately tracks an object in complex situations affected by strong ego motion.
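    The core idea of the abstract, propagating particles under several camera-motion hypotheses rather than one deterministic registration, can be sketched roughly as follows. This is an illustrative simplification under assumed details: pure 2-D translational ego-motion hypotheses, Gaussian object dynamics, and a uniform choice of hypothesis per particle; the paper's actual dynamics and measurement model (spatiograms) are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def predict_particles(particles, motion_hypotheses, dynamics_std=2.0):
        """Propagate 2-D particle positions by combining object dynamics with
        several camera ego-motion hypotheses (one hypothesis drawn uniformly
        per particle), instead of a single deterministic registration."""
        n = len(particles)
        # assign each particle one of the hypothetical global motions
        idx = rng.integers(0, len(motion_hypotheses), size=n)
        ego = np.asarray(motion_hypotheses)[idx]            # (n, 2) shifts
        noise = rng.normal(0.0, dynamics_std, size=(n, 2))  # object dynamics
        return particles + ego + noise

    particles = np.zeros((100, 2))
    hypotheses = [(5.0, 0.0), (4.0, 1.0), (6.0, -1.0)]  # candidate camera shifts
    pred = predict_particles(particles, hypotheses)
    print(pred.mean(axis=0))   # clusters around the mean hypothesized shift
    ```

    In a full tracker, each predicted particle would then be weighted by comparing a spatiogram extracted at its location against the target model.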

    A robust motion estimation and segmentation approach to represent moving images with layers

    The paper provides a robust representation of moving images based on layers. To that end, we have designed efficient motion estimation and segmentation techniques based on affine model fitting, suitable for the construction of layers. Layered representations, originally introduced by Wang and Adelson (see IEEE Transactions on Image Processing, vol. 3, no. 5, pp. 625-38, 1994), are important in several applications. In particular, they are very appropriate for object tracking, object manipulation, and content-based scalability, which are among the main functionalities of the future MPEG-4 standard. In addition, a variety of examples are provided that give deep insight into the performance bounds of representing moving images using layers.
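    The affine model fitting at the heart of layer construction can be illustrated with a minimal least-squares sketch. This assumes point correspondences are already available (in practice they come from dense motion estimation, and fitting is done robustly per layer, which is not shown here).

    ```python
    import numpy as np

    def fit_affine(src, dst):
        """Least-squares fit of a 6-parameter affine motion model
        dst ≈ A @ src + t, the per-layer model used in layered
        representations of moving images."""
        src = np.asarray(src, float)
        dst = np.asarray(dst, float)
        n = len(src)
        # design matrix rows: [x y 1 0 0 0] and [0 0 0 x y 1] per point
        M = np.zeros((2 * n, 6))
        M[0::2, 0:2] = src
        M[0::2, 2] = 1.0
        M[1::2, 3:5] = src
        M[1::2, 5] = 1.0
        b = dst.reshape(-1)
        p, *_ = np.linalg.lstsq(M, b, rcond=None)
        A = p[[0, 1, 3, 4]].reshape(2, 2)
        t = p[[2, 5]]
        return A, t

    # synthetic check: recover a known affine motion from 4 correspondences
    src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
    A_true = np.array([[1.1, 0.0], [0.0, 0.9]])
    t_true = np.array([2.0, -1.0])
    dst = src @ A_true.T + t_true
    A, t = fit_affine(src, dst)
    print(np.round(A, 3), np.round(t, 3))
    ```

    Segmentation into layers then amounts to assigning each pixel to whichever fitted affine model best predicts its observed motion.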

    3D Tracking Using Multi-view Based Particle Filters

    Visual surveillance and monitoring of indoor environments using multiple cameras has become a field of great activity in computer vision. Usual 3D tracking and positioning systems rely on several independent 2D tracking modules applied over individual camera streams, fused using geometrical relationships across cameras. As 2D tracking systems suffer inherent difficulties due to point-of-view limitations (perceptually similar foreground and background regions causing fragmentation of moving objects, occlusions), 3D tracking based on partially erroneous 2D tracks is likely to fail when handling multiple-people interaction. To overcome this problem, this paper proposes a Bayesian framework for combining 2D low-level cues from multiple cameras directly in the 3D world through 3D particle filters. This method makes it possible to estimate the probability that a certain volume is occupied by a moving object, and thus to segment and track multiple people across the monitored area. The proposed method is developed on the basis of simple, binary 2D moving-region segmentation on each camera, considered as different state observations. In addition, the method proves well suited for integrating additional 2D low-level cues to increase system robustness to occlusions: along this line, a naïve color-based (HSI) appearance model has been integrated, resulting in clear performance improvements when dealing with complex scenarios.
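    The fusion step described above can be sketched as weighting each 3-D particle by the binary foreground evidence it gathers across the camera views. This is a toy version under assumed details: simple orthographic projections standing in for real camera calibration, and a product of mask lookups standing in for the paper's Bayesian observation model.

    ```python
    import numpy as np

    def weight_particles(particles_3d, cameras, masks):
        """Weight 3-D volume hypotheses by the binary foreground evidence
        they gather across all camera views (fusion done directly in 3-D,
        not by combining per-camera 2-D tracks)."""
        w = np.ones(len(particles_3d))
        for project, mask in zip(cameras, masks):
            for i, p in enumerate(particles_3d):
                u, v = project(p)
                inside = 0 <= u < mask.shape[1] and 0 <= v < mask.shape[0]
                # a particle keeps weight only if every view sees foreground
                w[i] *= mask[v, u] if inside else 0.0
        s = w.sum()
        return w / s if s > 0 else np.full_like(w, 1.0 / len(w))

    # two toy orthographic views: camera 1 sees the x-y plane, camera 2 x-z
    mask1 = np.zeros((10, 10), int); mask1[4:7, 4:7] = 1
    mask2 = np.zeros((10, 10), int); mask2[4:7, 4:7] = 1
    cams = [lambda p: (int(p[0]), int(p[1])), lambda p: (int(p[0]), int(p[2]))]
    parts = np.array([[5, 5, 5], [1, 1, 1]], float)
    w = weight_particles(parts, cams, masks=[mask1, mask2])
    print(w)   # the particle consistent with both views takes all the weight
    ```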

    Active Contour-Based Visual Tracking by Integrating Colors, Shapes, and Motions Using Level Sets

    Visual object tracking, locating a moving object over time with a camera, is one of the most important processes in computer vision, and it becomes difficult when the object moves fast relative to the frame rate. We present a framework for active contour-based visual object tracking using level sets, which makes no assumption about whether the camera is stationary or moving. The main components of our framework are contour-based tracking initialization, color-based contour evolution, adaptive shape-based contour evolution for non-periodic motions, dynamic shape-based contour evolution for periodic motions, and the handling of abrupt motions. For tracking initialization, we use an optical flow-based algorithm to automatically initialize contours at the first frame. In the color-based contour evolution, we use Markov random field theory to measure correlations between the values of neighboring pixels for posterior probability estimation. In the adaptive shape-based contour evolution, global shape information and local color information are combined to evolve the contour hierarchically, and a flexible shape-updating model is built. In the dynamic shape-based contour evolution, a shape-mode transition matrix is learned to characterize the temporal correlations of object shapes. To handle abrupt motions, particle swarm optimization (PSO) is used to capture the global motion, which is applied to the contour in the current frame to produce an initial contour in the next frame. DOI: 10.17762/ijritcc2321-8169.15013
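    The abrupt-motion step, using PSO to recover the global motion applied to the contour, can be sketched with a minimal particle swarm optimizer. Everything here is an assumed simplification: a 2-D translation as the motion model, a synthetic quadratic matching cost standing in for the contour/appearance cost, and textbook PSO parameters; the paper's actual cost function and search space are not given in the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def pso(cost, dim=2, n=30, iters=60, bounds=(-20.0, 20.0)):
        """Tiny particle swarm optimizer: searches for the global
        translation minimizing a matching cost, as a stand-in for the
        abrupt-motion handling step."""
        lo, hi = bounds
        x = rng.uniform(lo, hi, (n, dim))     # particle positions
        v = np.zeros((n, dim))                # particle velocities
        pbest = x.copy()
        pcost = np.array([cost(p) for p in x])
        g = pbest[pcost.argmin()].copy()      # global best
        for _ in range(iters):
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            c = np.array([cost(p) for p in x])
            improved = c < pcost
            pbest[improved], pcost[improved] = x[improved], c[improved]
            g = pbest[pcost.argmin()].copy()
        return g

    # synthetic cost whose minimum is the "true" abrupt shift of the object
    true_shift = np.array([7.0, -3.0])
    best = pso(lambda p: np.sum((p - true_shift) ** 2))
    print(np.round(best, 2))
    ```

    The recovered translation would then shift the current contour to seed the level-set evolution in the next frame.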

    Model-based vision for space applications

    This paper describes a method for tracking moving image features by combining spatial and temporal edge information with model-based feature information. The algorithm updates the two-dimensional position of object features by correlating predicted model features with current image data. The results of the correlation process are used to compute an updated model. The algorithm makes use of a high temporal sampling rate with respect to spatial changes of the image features and operates in a real-time multiprocessing environment. Preliminary results demonstrate successful tracking for image feature velocities between 1.1 and 4.5 pixels per image frame. This work has applications for docking, assembly, retrieval of floating objects, and a host of other space-related tasks.
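    The predict-then-correlate update loop can be illustrated with a toy local search: predict where a feature should be, then match a template in a small window around the prediction. This is a generic sketch (sum-of-squared-differences matching over a small search radius), not the paper's actual edge-based correlation.

    ```python
    import numpy as np

    def track_feature(image, template, predicted, radius=3):
        """Search a small window around the predicted feature position and
        return the (row, col) with the best sum-of-squared-differences
        match, mimicking a predict-then-correlate update step."""
        th, tw = template.shape
        best, best_cost = predicted, np.inf
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                y, x = predicted[0] + dy, predicted[1] + dx
                patch = image[y:y + th, x:x + tw]
                if patch.shape != template.shape:
                    continue  # candidate window falls outside the image
                cost = np.sum((patch - template) ** 2)
                if cost < best_cost:
                    best, best_cost = (y, x), cost
        return best

    img = np.zeros((20, 20))
    img[8:11, 12:15] = 1.0           # bright 3x3 feature at (8, 12)
    tmpl = np.ones((3, 3))
    found = track_feature(img, tmpl, predicted=(7, 11))
    print(found)                     # → (8, 12)
    ```

    The high temporal sampling rate the paper relies on is what keeps the true position inside this small search window from frame to frame.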

    Fast and Resource-Efficient Object Tracking on Edge Devices: A Measurement Study

    Object tracking is an important functionality of edge video analytics systems and services. Multi-object tracking (MOT) detects moving objects and tracks their locations frame by frame as real scenes are captured into a video. However, it is well known that real-time object tracking on the edge poses critical technical challenges, especially on edge devices with heterogeneous computing resources. This paper examines the performance issues and edge-specific optimization opportunities for object tracking. We show that even a well-trained and optimized MOT model may still suffer from random frame-dropping problems when edge devices have insufficient computation resources. We present several edge-specific performance optimization strategies, collectively coined EMO, to speed up real-time object tracking, ranging from window-based optimization to similarity-based optimization. Extensive experiments on popular MOT benchmarks demonstrate that our EMO approach is competitive with representative on-device object tracking techniques in terms of run-time performance and tracking accuracy. EMO is released on GitHub at https://github.com/git-disl/EMO.
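    One way similarity-based optimization can reduce compute on an edge device is to gate the expensive detector on inter-frame similarity, reusing previous tracks when little has changed. The sketch below is a generic illustration of that idea under assumed details (cosine similarity between flattened frames, a fixed threshold); EMO's actual strategies are described only at a high level in the abstract and are not reproduced here.

    ```python
    import numpy as np

    def select_frames(frames, sim_threshold=0.95):
        """Similarity-gated frame selection: run the expensive detector only
        when a frame differs enough from the last processed one; otherwise
        reuse the previous detections/tracks."""
        processed = [0]                      # always process the first frame
        last = frames[0].astype(float)
        for i, f in enumerate(frames[1:], start=1):
            f = f.astype(float)
            # cosine similarity between flattened frames as a cheap proxy
            sim = (last * f).sum() / (np.linalg.norm(last) * np.linalg.norm(f) + 1e-9)
            if sim < sim_threshold:
                processed.append(i)
                last = f
        return processed

    frames = [np.ones((4, 4))] * 3 + [np.eye(4)] * 2   # scene changes at frame 3
    print(select_frames(frames))                        # → [0, 3]
    ```

    Frames 1, 2, and 4 are skipped because they match the last processed frame, so the detector runs on only two of the five frames.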