4,930 research outputs found

    Real-time visual tracking using image processing and filtering methods

    The main goal of this thesis is to develop real-time computer vision algorithms that detect and track targets in uncertain, complex environments using only a visual sensor. Two major subjects are addressed by this work: (1) the development of fast and robust image segmentation algorithms that can search for and automatically detect targets in a given image, and (2) the development of sound filtering algorithms that reduce the effects of noise in the signals produced by the image processing. The main constraint of this research is that the algorithms must run in real time with the limited computing power of an onboard computer in an aircraft. In particular, we focus on contour tracking, which tracks the outline of the target represented by contours in the image plane. The thesis is concerned with three specific categories: image segmentation, shape modeling, and signal filtering.

    We have designed image segmentation algorithms based on geometric active contours implemented via level set methods. Geometric active contours are deformable contours that automatically track the outlines of objects in images. In this approach, the contour in the image plane is represented as the zero level set of a higher-dimensional function (for example, a three-dimensional surface for a two-dimensional contour). This representation handles topological changes of the contour, such as merging and splitting, naturally. Although geometric active contours prevail in many fields of computer vision, they suffer from the high computational cost of level set methods, so simplified versions such as fast marching methods are often used in real-time visual tracking. This thesis presents the development of a fast and robust segmentation algorithm based on up-to-date extensions of level set methods and geometric active contours, namely a fast implementation of the Chan-Vese active contour model (FICVM).

    The shape prior is a useful cue for recognizing the true target, since the outline found by a contour tracker is easily disrupted by noise. In geometric active contours, deviations from the true outline are handled by constructing a higher-dimensional function from the shape prior; the contour then tracks the outline of an object by considering the difference between the higher-dimensional functions obtained from the shape prior and from a measurement in a given image. The higher-dimensional function is often a distance map, which is expensive to construct. This thesis instead extracts shape information from only the zero level set of the higher-dimensional function, a strategy that compensates for the inaccuracies in the calculation of the shape difference that occur when a simplified higher-dimensional function is used. This is named contour-based shape modeling.

    Filtering is an essential element of tracking problems because of the noise present in system models and measurements. The well-known Kalman filter provides an exact solution only for problems with linear models and Gaussian distributions (linear/Gaussian problems). For nonlinear/non-Gaussian problems, particle filters have received much attention in recent years, as particle filtering is useful for approximating complicated posterior probability distributions. However, the computational burden of particle filtering prevents it from performing at full capacity in real-time applications. This thesis concentrates on improving the processing time of particle filtering for real-time applications, following the particle filter in the geometric active contour framework. It proposes an advanced blob tracking scheme in which a blob contains shape prior information of the target. This scheme simplifies the sampling process and quickly suggests the samples that have a high probability of being the target; only for these samples is the contour tracking algorithm applied to obtain a more detailed state estimate. Curve evolution in the contour tracking is realized by the FICVM. The dissimilarity measure is calculated by the contour-based shape modeling method, and the shape prior is updated when certain conditions are satisfied.

    The new particle filter is applied to low-contrast and severe daylight conditions, to cluttered environments, and to tracking appearing/disappearing targets. We have also demonstrated the utility of the filtering algorithm for multiple-target tracking in the presence of occlusions. The thesis presents several test results from simulations and flight tests; in these tests, the proposed algorithms demonstrated promising results in varied tracking situations.

    Ph.D. Committee Chair: Eric N. Johnson; Committee Co-Chair: Allen R. Tannenbaum; Committee Member: Anthony J. Calise; Committee Member: Eric Feron; Committee Member: Patricio A. Vel
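
    Because the abstract above leans on the zero-level-set representation and a Chan-Vese-style region model, a minimal illustrative sketch of one region-competition update for such a level set is included below. It is not the thesis's FICVM: the smooth Heaviside/Dirac approximations, the omission of the curvature (length) term, and all array names and parameters are assumptions made for brevity.

```python
import numpy as np

def chan_vese_step(phi, image, dt=0.5, eps=1.0):
    """One simplified Chan-Vese region-competition update.

    phi   : 2-D level-set function; the contour is its zero level set.
    image : 2-D grayscale image (float).
    Note: the curvature (length) term of the full model is omitted here.
    """
    # Smooth Heaviside of phi: ~1 inside the contour, ~0 outside.
    H = 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))
    # Mean intensities inside (c1) and outside (c2) the contour.
    c1 = (image * H).sum() / (H.sum() + 1e-9)
    c2 = (image * (1.0 - H)).sum() / ((1.0 - H).sum() + 1e-9)
    # Smooth Dirac delta concentrates the update near the zero level set.
    delta = (1.0 / np.pi) * (eps / (eps**2 + phi**2))
    # Region competition: move the contour toward the better-fitting region.
    dphi = delta * (-(image - c1) ** 2 + (image - c2) ** 2)
    return phi + dt * dphi

# Toy usage: a bright square on a dark background, circular initialization.
img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0
y, x = np.mgrid[:64, :64]
phi = 16.0 - np.sqrt((x - 32) ** 2 + (y - 32) ** 2)   # >0 inside a circle
for _ in range(200):
    phi = chan_vese_step(phi, img)
mask = phi > 0   # segmented region after evolution
```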

    Computer analysis of objects’ movement in image sequences: methods and applications

    Computer analysis of objects’ movement in image sequences is a very complex problem, since it usually involves tasks for automatic detection, matching, tracking, motion analysis and deformation estimation. In spite of its complexity, this computational analysis has a wide range of important applications; for instance, in surveillance systems, clinical analysis of human gait, object recognition, pose estimation and deformation analysis. Given this breadth of purposes, several difficulties arise, such as the simultaneous tracking of multiple objects, their possible temporary occlusion or definitive disappearance from the image scene, changes in the viewpoints considered during image acquisition or in the illumination conditions, or even the non-rigid deformations that objects may undergo in image sequences. In this paper, we present an overview of several methods that may be considered to analyze objects’ movement; namely, for their segmentation, tracking and matching in images, and for estimation of the deformation involved between images. This paper was partially done in the scope of the project “Segmentation, Tracking and Motion Analysis of Deformable (2D/3D) Objects using Physical Principles”, with reference POSC/EEA-SRI/55386/2004, financially supported by FCT - Fundação para a Ciência e a Tecnologia from Portugal. The fourth, fifth and seventh authors would also like to acknowledge the support of their PhD grants from FCT with references SFRH/BD/29012/2006, SFRH/BD/28817/2006 and SFRH/BD/12834/2003, respectively

    A Generic Framework for Tracking Using Particle Filter With Dynamic Shape Prior

    ©2007 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or distribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder. DOI: 10.1109/TIP.2007.894244

    Tracking deforming objects involves estimating the global motion of the object and its local deformations as functions of time. Tracking algorithms using Kalman filters or particle filters (PFs) have been proposed for tracking such objects, but these have limitations due to the lack of dynamic shape information. In this paper, we propose a novel method based on employing a locally linear embedding in order to incorporate dynamic shape information into the particle filtering framework for tracking highly deformable objects in the presence of noise and clutter. The PF also models image statistics such as mean and variance of the given data, which can be useful in obtaining proper separation of object and background
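
    As a generic illustration of the particle-filter framework such trackers build on (not the paper's LLE-based dynamic shape prior), a minimal bootstrap (SIR) particle-filter step is sketched below; the random-walk motion model, Gaussian likelihood, and all parameter names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, measurement,
                         motion_std=2.0, meas_std=5.0):
    """One predict-weight-resample step of a bootstrap (SIR) particle filter.

    particles   : (N, 2) array of hypothesised object positions (x, y).
    weights     : (N,) normalised importance weights.
    measurement : observed position (2,), e.g. from a detector.
    """
    n = len(particles)
    # Predict: propagate each particle through a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Update: weight particles by a Gaussian likelihood of the measurement.
    d2 = ((particles - measurement) ** 2).sum(axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std**2)
    weights /= weights.sum() + 1e-12
    # Resample when the effective sample size collapses (degeneracy).
    if 1.0 / (weights ** 2).sum() < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    # State estimate: weighted mean of the particle cloud.
    estimate = (particles * weights[:, None]).sum(axis=0)
    return particles, weights, estimate
```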

    A survey on 2d object tracking in digital video

    This paper presents object tracking methods in video. Different algorithms for rigid, non-rigid and articulated object tracking are studied. The goal of this article is to review state-of-the-art tracking methods, classify them into different categories, and identify new trends. Tracking objects in consecutive frames is often supported by a prediction scheme: based on information extracted from previous frames and any available high-level information, the state (location) of the object is predicted. An excellent framework for prediction is the Kalman filter, which additionally estimates the prediction error. In complex scenes, multiple hypotheses maintained by a particle filter can be used instead of a single hypothesis. Different techniques are given for different types of constraints in video
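
    Where the survey above mentions Kalman-filter prediction of an object's location, a minimal constant-velocity predict/update sketch may make the idea concrete; the state layout, time step, and noise levels are illustrative assumptions, not values from the paper.

```python
import numpy as np

# State: [x, y, vx, vy]; measurement: [x, y] from a detector.
F = np.array([[1, 0, 1, 0],      # constant-velocity transition (dt = 1)
              [0, 1, 0, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0],      # only the position is observed
              [0, 1, 0, 0]], float)
Q = np.eye(4) * 0.01             # process noise (assumed)
R = np.eye(2) * 4.0              # measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle; returns the new state and covariance."""
    # Predict the next state and its uncertainty.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z; the gain K balances prediction vs. observation.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```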

    Adaptive object segmentation and tracking

    Efficient tracking of deformable objects moving with variable velocities is an important current research problem. In this thesis a robust tracking model is proposed for the automatic detection, recognition and tracking of target objects that are subject to variable orientations and velocities and are viewed under variable ambient lighting conditions. The tracking model can be applied to efficiently track fast-moving vehicles and other objects in various complex scenarios. It is evaluated on both colour visible-band and infra-red band video sequences acquired from the air by the Sussex police helicopter and other collaborators, and the observations made validate its improved performance over existing methods. The thesis is divided into three major sections: the first details the development of an enhanced active contour for object segmentation, the second describes an implementation of a global active contour orientation model, and the third describes the tracking model and assesses its performance on the aerial video sequences.

    In the first part of the thesis, an enhanced active contour snake model using the difference of Gaussian (DoG) filter is reported and discussed in detail. An acquisition method based on this enhanced active contour model, intended to assist the proposed tracking system, is also developed and tested. The active contour model is further enhanced by a disambiguation framework designed to assist multiple object segmentation, which is used to demonstrate that the model supports robust multiple object segmentation and tracking. The active contour model developed not only facilitates the efficient update of the tracking filter but also decreases the latency involved in tracking targets in real time. As far as computational effort is concerned, the active contour model presented reduces the computational cost by 85% compared to existing active contour models.

    The second part of the thesis introduces the global active contour orientation (GACO) technique for statistical measurement of contoured object orientation. It is an overall object orientation measurement method that uses the proposed active contour model along with statistical measurement techniques. The use of the GACO technique, incorporating the active contour model, to measure object orientation angle is discussed in detail. A real-time door surveillance application based on the GACO technique is developed and evaluated on the i-LIDS door surveillance dataset provided by the UK Home Office; in this evaluation GACO achieves a success rate of 92%.

    Finally, a combined approach involving the proposed active contour model and an optimal trade-off maximum average correlation height (OT-MACH) filter for tracking is presented. The implementation of methods for controlling the area of support of the OT-MACH filter is discussed in detail. Using the proposed active contour method as the area of support for the OT-MACH filter is shown to significantly improve the filter's ability to track vehicles moving within highly cluttered visible and infra-red band video sequences
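
    Since the first part of the thesis above builds on a difference-of-Gaussian (DoG) filter, a minimal sketch of computing a DoG response with SciPy follows; the sigma values and the way the response would feed the active contour are assumptions, not the thesis's configuration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma_small=1.0, sigma_large=2.0):
    """Band-pass the image by subtracting two Gaussian blurs.

    The response is strongest near edges and blobs at the scale between the
    two sigmas, which is why DoG is often used to emphasise object
    boundaries before contour-based segmentation.
    """
    img = image.astype(float)
    return gaussian_filter(img, sigma_small) - gaussian_filter(img, sigma_large)

# Toy usage: a bright disc produces a ring-like DoG response at its edge.
y, x = np.mgrid[:128, :128]
disc = ((x - 64) ** 2 + (y - 64) ** 2 < 30 ** 2).astype(float)
edges = difference_of_gaussians(disc)
```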

    Comparison of different integral histogram based tracking algorithms

    Object tracking is an important subject in computer vision with a wide range of applications – security and surveillance, motion-based recognition, driver assistance systems, and human-computer interaction. The proliferation of high-powered computers, the availability of high-quality and inexpensive video cameras, and the increasing need for automated video analysis have generated a great deal of interest in object tracking algorithms. Tracking is usually performed in the context of high-level applications that require the location and/or shape of the object in every frame. Research on object tracking algorithms has been conducted for decades, and a number of approaches have been proposed; these approaches differ from each other in object representation, feature selection, and modeling of the shape and appearance of the object. Histogram-based tracking has proved to be an efficient approach in many applications. The integral histogram is a method that allows histograms of multiple rectangular regions of an image to be extracted very efficiently. In recent years a number of tracking algorithms have built on this representation, each attempting to use the integral histogram more effectively. In this paper, algorithms that use this method as part of their tracking function are evaluated by comparing their tracking results, and an effort is made to modify some of them for better performance. The sequences used for the tracking experiments are grayscale and exhibit significant shape and appearance variations for evaluating the performance of the algorithms. Extensive experimental results on these challenging sequences are presented, which demonstrate the tracking abilities of these algorithms
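
    A minimal sketch of the integral-histogram idea referenced above: build one cumulative (integral) image per histogram bin, after which the histogram of any axis-aligned rectangle is obtained with four lookups per bin instead of rescanning its pixels. The bin count and array names are assumptions.

```python
import numpy as np

def build_integral_histogram(image, n_bins=16):
    """Per-bin integral images for a grayscale image with values in [0, 255]."""
    h, w = image.shape
    bins = np.minimum((image.astype(int) * n_bins) // 256, n_bins - 1)
    # One indicator plane per bin, then 2-D cumulative sums over rows and columns.
    planes = np.zeros((n_bins, h, w))
    planes[bins, np.arange(h)[:, None], np.arange(w)[None, :]] = 1.0
    return planes.cumsum(axis=1).cumsum(axis=2)

def rect_histogram(integral, top, left, bottom, right):
    """Histogram of rows [top, bottom) and columns [left, right) via four corners per bin."""
    hist = integral[:, bottom - 1, right - 1].copy()
    if top > 0:
        hist -= integral[:, top - 1, right - 1]
    if left > 0:
        hist -= integral[:, bottom - 1, left - 1]
    if top > 0 and left > 0:
        hist += integral[:, top - 1, left - 1]
    return hist

# Toy usage: histogram of a 20x30 window without touching its pixels again.
img = (np.random.default_rng(0).random((100, 100)) * 255).astype(np.uint8)
ih = build_integral_histogram(img)
window_hist = rect_histogram(ih, 10, 40, 30, 70)   # sums to 600 pixels
```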

    Automated visual tracking for studying the ontogeny of zebrafish swimming

    The zebrafish Danio rerio is a widely used model organism in studies of genetics, developmental biology, and recently, biomechanics. In order to quantify changes in swimming during all stages of development, we have developed a visual tracking system that estimates the posture of fish. Our current approach assumes planar motion of the fish, given image sequences taken from a top view. An accurate geometric fish model is automatically designed and fit to the images at each time frame. Our approach works across a range of fish shapes and sizes and is therefore well suited for studying the ontogeny of fish swimming, while also being robust to common environmental occlusions. Our current analysis focuses on measuring the influence of vertebra development on the swimming capabilities of zebrafish. We examine wild-type zebrafish and mutants with stiff vertebrae (stocksteif) and quantify their body kinematics as a function of their development from larvae to adult (mutants made available by the Hubrecht laboratory, The Netherlands). By tracking the fish, we are able to measure the curvature and net acceleration along the body that result from the fish's body wave. Here, we demonstrate the capabilities of the tracking system for the escape response of wild-type zebrafish and stocksteif mutant zebrafish. The response was filmed with a digital high-speed camera at 1500 frames s⁻¹. Our approach enables biomechanists and ethologists to process much larger datasets than possible at present. Our automated tracking scheme can therefore accelerate insight into the swimming behavior of many species of (developing) fish
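
    As a small illustration of the curvature measurement mentioned above, a generic discrete-curvature calculation along a tracked 2-D midline is sketched; this is not the paper's kinematics pipeline, and the point spacing and names are assumptions.

```python
import numpy as np

def midline_curvature(points):
    """Signed curvature along a tracked 2-D midline.

    points : (N, 2) array of (x, y) positions sampled along the fish body.
    Uses kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2) with finite
    differences; units are 1/length of the input coordinates.
    """
    x, y = points[:, 0], points[:, 1]
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

# Toy usage: points on a circle of radius 10 give curvature ~0.1 everywhere.
t = np.linspace(0, np.pi, 50)
circle = np.column_stack([10 * np.cos(t), 10 * np.sin(t)])
kappa = midline_curvature(circle)
```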

    Image Segmentation using PDE, Variational, Morphological and Probabilistic Methods

    The research in this dissertation has focused on image segmentation and related areas, using techniques from partial differential equations, variational methods, mathematical morphology and probabilistic methods. An integrated segmentation method using both curve evolution and anisotropic diffusion is presented that utilizes both gradient and region information in images. A bottom-up image segmentation method is proposed to minimize the Mumford-Shah functional. Preferential image segmentation methods are presented that are based on the tree of shapes from mathematical morphology and the Kullback-Leibler distance from information theory. A thorough evaluation of the morphological preferential image segmentation method is provided, and a web interface is described. A probabilistic model based on particle filters is presented for image segmentation.

    These methods may be incorporated as components of an integrated image processing system. The system utilizes Internet Protocol (IP) cameras for data acquisition and image databases to provide prior information and store image processing results. Image preprocessing, image segmentation and object recognition are integrated in one stage of the system, using various methods developed in several areas. Interactions between data acquisition, integrated image processing and image databases are handled smoothly. A framework for the integrated system is implemented using Perl, C++, MySQL and CGI. The integrated system works for various applications such as video tracking, medical image processing and facial image processing; experimental results for these applications are provided in the dissertation. Efficient computations such as multi-scale computing and parallel computing using graphics processors are also presented
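
    As a small illustration of the Kullback-Leibler distance used above to compare regions, a sketch computing a symmetrized KL divergence between two normalized intensity histograms is given; the binning and the symmetrization choice are assumptions, not the dissertation's exact formulation.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) for two discrete distributions (e.g. intensity histograms)."""
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def symmetric_kl(p, q):
    """Symmetrized KL, usable as a dissimilarity between two image regions."""
    return 0.5 * (kl_divergence(p, q) + kl_divergence(q, p))

# Toy usage: histograms of two regions drawn from different distributions.
rng = np.random.default_rng(0)
h1, _ = np.histogram(rng.normal(100, 10, 5000), bins=32, range=(0, 255))
h2, _ = np.histogram(rng.normal(150, 10, 5000), bins=32, range=(0, 255))
d = symmetric_kl(h1.astype(float), h2.astype(float))
```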

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: Instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in challenging scenarios for traditional cameras, such as low-latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world
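
    To make the event-stream output format described above concrete, a sketch that accumulates a stream of (t, x, y, polarity) events into a simple frame over a time window is included; the event tuple layout and window length are assumptions, and real event-vision pipelines use more sophisticated representations.

```python
import numpy as np

def events_to_frame(events, height, width, t_start, t_window):
    """Accumulate signed events into an image over [t_start, t_start + t_window).

    events : (N, 4) array of (t, x, y, polarity) with polarity in {-1, +1};
             each event signals a per-pixel brightness change, not a frame.
    Returns a frame where bright pixels saw positive changes and dark
    pixels saw negative changes inside the window.
    """
    t, x, y, pol = events.T
    mask = (t >= t_start) & (t < t_start + t_window)
    frame = np.zeros((height, width))
    # np.add.at handles repeated (y, x) indices, summing multiple events per pixel.
    np.add.at(frame, (y[mask].astype(int), x[mask].astype(int)), pol[mask])
    return frame

# Toy usage: three synthetic events inside a 10 ms window.
ev = np.array([[0.001, 5, 7, +1],
               [0.004, 5, 7, +1],
               [0.009, 2, 3, -1]])
img = events_to_frame(ev, height=16, width=16, t_start=0.0, t_window=0.01)
```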

    Learning the dynamics and time-recursive boundary detection of deformable objects

    We propose a principled framework for recursively segmenting deformable objects across a sequence of frames. We demonstrate the usefulness of this method on left ventricular segmentation across a cardiac cycle. The approach involves a technique for learning the system dynamics together with methods of particle-based smoothing as well as non-parametric belief propagation on a loopy graphical model capturing the temporal periodicity of the heart. The dynamic system state is a low-dimensional representation of the boundary, and the boundary estimation involves incorporating curve evolution into recursive state estimation. By formulating the problem as one of state estimation, the segmentation at each particular time is based not only on the data observed at that instant, but also on predictions based on past and future boundary estimates. Although the paper focuses on left ventricle segmentation, the method generalizes to temporally segmenting any deformable object
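
    As an illustration of the low-dimensional boundary representation that the recursive state estimation above relies on, a sketch that projects training contours onto their principal modes with PCA and reconstructs a boundary from a few coefficients is shown; the training data, number of modes, and boundary sampling are assumptions rather than the paper's exact construction.

```python
import numpy as np

def fit_shape_basis(contours, n_modes=4):
    """PCA shape model from training boundaries.

    contours : (M, 2K) array, each row a boundary sampled at K points and
               flattened as (x1, y1, ..., xK, yK).
    Returns the mean shape and the top n_modes principal directions.
    """
    mean = contours.mean(axis=0)
    u, s, vt = np.linalg.svd(contours - mean, full_matrices=False)
    return mean, vt[:n_modes]

def encode(contour, mean, basis):
    """Low-dimensional state: coefficients of the contour in the shape basis."""
    return basis @ (contour - mean)

def decode(coeffs, mean, basis):
    """Reconstruct a full boundary from the low-dimensional coefficients."""
    return mean + basis.T @ coeffs

# Toy usage: ellipses of varying width stand in for training boundaries.
rng = np.random.default_rng(0)
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
train = np.stack([np.column_stack([(2 + 0.5 * rng.random()) * np.cos(theta),
                                   np.sin(theta)]).ravel()
                  for _ in range(30)])
mean, basis = fit_shape_basis(train)
state = encode(train[0], mean, basis)       # e.g. the tracker's dynamic state
boundary = decode(state, mean, basis)       # boundary implied by that state
```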