Edge-based motion segmentation
Motion segmentation is the process of dividing video frames into regions which have different motions, providing a cut-out of the moving objects. Such a segmentation is a necessary first stage in many video analysis applications, but providing an accurate, efficient motion segmentation still presents a challenge. This dissertation proposes a novel approach to motion segmentation, using the image edges in a frame. Using edges, a motion can be calculated for each object. Edges provide good motion information, and it is shown that a set of edges, labelled according to the object motion that they obey, is sufficient to completely determine the labelling of the whole frame, up to unresolvable ambiguities. The areas of the frame between edges are divided into regions, grouping together pixels of similar colour, and these regions can each be assigned to different motion layers by reference to the edges. The depth ordering of these layers can also be deduced. A Bayesian framework is presented, which determines the most likely region labelling and depth ordering, given edges labelled with their probability of obeying each of the object motions. An efficient implementation of this framework is presented, initially for segmenting two motions (foreground and background) using two frames. The Expectation-Maximisation algorithm is used to determine the two motions and calculate the label probability for each edge. The frame is then segmented into regions. The best motion labelling for these regions is determined using simulated annealing. Extensions of this simple implementation are then presented. It is demonstrated how, by tracking the edges into further frames, the statistics may be accumulated to provide an even more accurate and robust segmentation. This also allows a complete sequence to be segmented. It is then demonstrated that the framework can be extended to a larger number of motions.
A new hierarchical method of initialising the Expectation-Maximisation algorithm is described, which also determines the best number of motions. These techniques have been extensively tested on thirty-four real sequences, covering a wide range of genres. The results demonstrate that the proposed edge-based approach is an accurate and efficient method of obtaining a motion segmentation.
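The core two-motion step described above (Expectation-Maximisation assigning each edge a probability of obeying one of two motions) can be illustrated with a minimal sketch. This is not the dissertation's implementation: here the motions are simplified to pure 2-D translations fitted to per-edge displacement vectors, with an assumed isotropic Gaussian noise model, and the function and variable names are illustrative.

```python
# Illustrative sketch (not the dissertation's code): EM fitting two
# translational motions to edge displacements, yielding per-edge label
# probabilities of the kind the Bayesian framework consumes.
import numpy as np

def em_two_motions(displacements, n_iter=50, sigma=0.5):
    """Fit a two-component mixture of 2-D translations to per-edge
    displacement vectors; return the motions and, for each edge, the
    probability that it obeys each motion."""
    d = np.asarray(displacements, dtype=float)      # shape (N, 2)
    # Initialise the two motion hypotheses from extreme samples.
    motions = np.stack([d.min(axis=0), d.max(axis=0)])
    weights = np.full(2, 0.5)
    for _ in range(n_iter):
        # E-step: responsibility of each motion for each edge.
        sq = ((d[:, None, :] - motions[None, :, :]) ** 2).sum(axis=2)
        lik = weights * np.exp(-sq / (2 * sigma ** 2))
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: each translation becomes a responsibility-weighted mean.
        motions = (resp.T @ d) / resp.sum(axis=0)[:, None]
        weights = resp.mean(axis=0)
    return motions, resp

# Synthetic edges obeying two different translations, plus noise.
rng = np.random.default_rng(0)
fg = np.array([3.0, 0.0]) + 0.1 * rng.standard_normal((30, 2))
bg = np.array([0.0, 0.0]) + 0.1 * rng.standard_normal((30, 2))
motions, probs = em_two_motions(np.vstack([fg, bg]))
```

The resulting `probs` plays the role of the edge label probabilities: rather than hard-assigning each edge, EM leaves a soft labelling that the region-labelling stage can weigh.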
Differential Tracking through Sampling and Linearizing the Local Appearance Manifold
Recovering motion information from input camera image sequences is a classic problem of computer vision. Conventional approaches estimate motion from either dense optical flow or sparse feature correspondences identified across successive image frames. Among other things, performance depends on the accuracy of the feature detection, which can be problematic in scenes that exhibit view-dependent geometric or photometric behaviors such as occlusion, semitransparency, specularity and curved reflections. Beyond feature measurements, researchers have also developed approaches that directly utilize appearance (intensity) measurements. Such appearance-based approaches eliminate the need for feature extraction and avoid the difficulty of identifying correspondences. However, the simplicity of on-line processing of image features is usually traded for complexity in off-line modeling of the appearance function. Because the appearance function is typically very nonlinear, learning it usually requires an impractically large number of training samples. I will present a novel appearance-based framework that can be used to estimate rigid motion in a manner that is computationally simple and does not require global modeling of the appearance function. The basic idea is as follows. An n-pixel image can be considered as a point in an n-dimensional appearance space. When an object in the scene or the camera moves, the image point moves along a low-dimensional appearance manifold. While globally nonlinear, the appearance manifold can be locally linearized using a small number of nearby image samples. This linear approximation of the local appearance manifold defines a mapping between the images and the underlying motion parameters, allowing the motion estimation to be formulated as solving a linear system.
I will address three key issues related to motion estimation: how to acquire local appearance samples, how to derive a local linear approximation given appearance samples, and whether the linear approximation is sufficiently close to the real local appearance manifold. In addition, I will present a novel approach to motion segmentation that utilizes the same appearance-based framework to classify individual image pixels into groups associated with different underlying rigid motions.
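The linear-system formulation described above can be sketched in a few lines. This is a simplified illustration, not the author's implementation: images are treated as flat vectors, the local Jacobian of the appearance manifold is fitted by least squares from a handful of nearby samples, and all function names (`linearize_manifold`, `estimate_motion`) are hypothetical. The toy check at the end uses an exactly linear synthetic appearance function so that the recovery is exact.

```python
# Illustrative sketch (not the author's code): locally linearize an
# appearance manifold from nearby image samples, then estimate motion
# parameters by solving a linear system.
import numpy as np

def linearize_manifold(base, samples, params):
    """Fit a linear map J such that (sample - base) ~= J @ param,
    using a small set of nearby image samples with known motions."""
    D = np.stack([s - base for s in samples], axis=1)   # (n_pixels, k)
    P = np.stack(params, axis=1)                        # (n_params, k)
    # Least-squares Jacobian estimate: D ~= J P  =>  J = D P^+
    return D @ np.linalg.pinv(P)

def estimate_motion(J, base, image):
    """Recover motion parameters by solving J p = image - base."""
    p, *_ = np.linalg.lstsq(J, image - base, rcond=None)
    return p

# Toy check on a synthetic, exactly linear appearance function.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 2))        # true pixel-vs-parameter Jacobian
base = rng.standard_normal(100)          # reference image (flattened)
sample_params = [np.array([1.0, 0.0]), np.array([0.0, 1.0]),
                 np.array([1.0, 1.0])]
samples = [base + A @ p for p in sample_params]
J = linearize_manifold(base, samples, sample_params)
p_est = estimate_motion(J, base, base + A @ np.array([0.3, -0.7]))
```

On a real appearance manifold the map is only locally linear, so the quality of `p_est` depends on how close the new image lies to the sampled neighbourhood, which is exactly the approximation question the abstract raises.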