
    Integrated 2-D Optical Flow Sensor

    I present a new focal-plane analog VLSI sensor that estimates optical flow in two visual dimensions. The chip significantly improves on previous approaches, both with respect to the applied model of optical flow estimation and with respect to the actual hardware implementation. Its distributed computational architecture consists of an array of locally connected motion units that collectively solve for the unique optimal optical flow estimate. The novel gradient-based motion model assumes visual motion to be translational, smooth and biased. The model guarantees that the estimation problem is computationally well-posed regardless of the visual input. Model parameters can be globally adjusted, leading to rich output behavior. Varying the smoothness strength, for example, provides a continuous spectrum of motion estimates, ranging from normal to global optical flow. Unlike approaches that rely on the explicit matching of brightness edges in space or time, the applied gradient-based model assumes spatiotemporal continuity of the visual information. The non-linear coupling of the individual motion units improves the resulting optical flow estimate because it reduces spatial smoothing across large velocity differences. Extended measurements of a 30x30 array prototype sensor under real-world conditions demonstrate the validity of the model and the robustness and functionality of the implementation.
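
    As a point of reference for the computation sketched in the abstract, the following software example performs a comparable gradient-based flow estimate with adjustable smoothness and bias (a Horn-and-Schunck-style Jacobi iteration, not the chip's analog circuit dynamics); the parameter names sigma, beta and v0 are illustrative stand-ins for the globally adjustable smoothness and bias parameters, not quantities taken from the paper.

        import numpy as np

        def gradient_flow(Ix, Iy, It, sigma=1.0, beta=0.05, v0=(0.0, 0.0), n_iter=200):
            # Ix, Iy, It: spatial and temporal image derivatives (2-D arrays).
            # sigma: smoothness strength -- large values pull the solution from
            #        normal flow toward a single global flow estimate.
            # beta:  bias strength toward the reference motion v0; it keeps the
            #        problem well-posed even where the image gradient vanishes.
            u = np.full(Ix.shape, v0[0], dtype=float)
            v = np.full(Ix.shape, v0[1], dtype=float)
            avg = lambda f: 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                                    np.roll(f, 1, 1) + np.roll(f, -1, 1))
            alpha = sigma + beta
            for _ in range(n_iter):
                # combine the neighbourhood average (smoothness) with the bias
                u_t = (sigma * avg(u) + beta * v0[0]) / alpha
                v_t = (sigma * avg(v) + beta * v0[1]) / alpha
                # Jacobi update of the regularized brightness-constancy equations
                r = (Ix * u_t + Iy * v_t + It) / (alpha + Ix ** 2 + Iy ** 2)
                u = u_t - Ix * r
                v = v_t - Iy * r
            return u, v

    With sigma large relative to the image gradients, the estimate approaches a single global flow vector; with sigma small, it stays close to the per-pixel normal flow, mirroring the continuum of outputs described above.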

    Segmentation Based on Graphical Models

    A three-frame digital image correlation (DIC) method for the measurement of small displacements and strains

    Digital image correlation (DIC) has become a well-established approach for the calculation of full-field displacements and strains within the field of experimental mechanics. Since their introduction, DIC methods have relied on only two images to measure the displacements and strains that materials undergo under load. It can be foreseen that using additional image information in the calculation of displacements and strains, although computationally more expensive, can positively impact DIC accuracy in both ideal and challenging experimental conditions. Such accuracy improvements are especially important when measuring very small deformations, which still constitutes a great challenge: small displacements and strains translate into equally small digital image intensity changes on the material’s surface, which are affected by the digitization processes of the imaging hardware and by other image acquisition effects such as image noise. This paper proposes a new three-frame Newton-Raphson DIC method and evaluates it from the accuracy and speed standpoints. The method models the deformations to be measured under the assumption that the deformation occurs at approximately the same rate between each pair of consecutive images in the three-image sequences that are employed. The aim is to investigate how the use of image data from more than two images affects accuracy and what the effect is on computational speed. The proposed method is compared with the classic two-frame Newton-Raphson method in three experiments. Two experiments rely on numerically deformed images that simulate heterogeneous deformations. The third experiment uses images from a real deformation experiment. Results indicate that although it is computationally more demanding, the three-frame method significantly improves displacement and strain accuracy and is less sensitive to image noise.
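
    To make the equal-rate assumption concrete, here is a minimal, purely illustrative sketch (integer-pixel SSD matching, not the paper's subpixel Newton-Raphson solver or its full shape-function model) in which a single displacement couples both image pairs of a three-frame sequence; all function and parameter names are invented for the example.

        import numpy as np

        def three_frame_ssd(frames, top_left, size, d):
            # Sum-of-squared-differences cost for one square subset, sharing a
            # single displacement d between frame 0 -> frame 1 and 2*d between
            # frame 0 -> frame 2, i.e. the equal-rate assumption above.
            y, x = top_left
            dy, dx = d
            ref = frames[0][y:y + size, x:x + size].astype(float)
            cur1 = frames[1][y + dy:y + dy + size, x + dx:x + dx + size].astype(float)
            cur2 = frames[2][y + 2 * dy:y + 2 * dy + size, x + 2 * dx:x + 2 * dx + size].astype(float)
            return np.sum((cur1 - ref) ** 2) + np.sum((cur2 - ref) ** 2)

        def match_subset(frames, top_left, size, search=5):
            # Brute-force integer-pixel search for the shared displacement that
            # minimizes the combined two-pair cost (subset must stay in bounds).
            best_cost, best_d = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cost = three_frame_ssd(frames, top_left, size, (dy, dx))
                    if cost < best_cost:
                        best_cost, best_d = cost, (dy, dx)
            return best_d

    Because both image pairs constrain the same displacement, noise in either image has less influence on the estimate than in a two-frame match, which is the intuition behind the accuracy gains reported above.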

    Motion representation using composite energy features

    This work tackles the segmentation of apparent motion from a bottom-up perspective. When no information is available to build prior high-level models, the only alternative is bottom-up techniques. Hence, the whole segmentation process relies on the suitability of the low-level features selected to describe motion. A wide variety of low-level spatio-temporal features have been proposed so far; however, all of them suffer from diverse drawbacks. Here, we propose the use of composite energy features in bottom-up motion segmentation to solve several of these problems. Composite energy features are clusters of energy filters (pairs of band-pass filters in quadrature), each one sensitive to a different scale, orientation, direction of motion and speed. They are grouped in order to reconstruct independent motion patterns in a video sequence. A composite energy feature, that is, the response of one of these clusters of filters, can be built as a combination of the responses of the individual filters. Therefore, it inherits the desirable properties of energy filters while providing a more complete representation of motion patterns. In this paper, we will present our approach for the integration of composite features based on the concept of Phase Congruence. We will show some results that illustrate the capabilities of this low-level motion representation and its usefulness in bottom-up motion segmentation and tracking. This work has been financially supported by the Ministry of Education and Science of the Spanish Government, through the Research Project TIN2006-08447.
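
    As a rough illustration of what a single energy filter in such a cluster computes, the sketch below builds a quadrature pair of band-pass filters and combines their responses into an oriented spatio-temporal energy (an Adelson-Bergen-style motion energy written with separable filters). The function names, the Gaussian window and the filter sizes are assumptions made for the example, not the authors' filter bank or their Phase Congruence combination rule.

        import numpy as np
        from scipy.ndimage import convolve1d

        def gabor_pair(length, freq):
            # Quadrature (even/odd) band-pass pair: a Gaussian window times a
            # cosine and a sine at the requested frequency (cycles per sample).
            t = np.arange(length) - (length - 1) / 2.0
            window = np.exp(-t ** 2 / (2 * (length / 4.0) ** 2))
            return window * np.cos(2 * np.pi * freq * t), window * np.sin(2 * np.pi * freq * t)

        def motion_energy(volume, f_x, f_t, size=9):
            # Oriented spatio-temporal energy of a (t, y, x) video volume, tuned
            # to spatial frequency f_x and temporal frequency f_t (and therefore
            # to speed f_t / f_x along x). A composite feature would combine many
            # such energies across scales, orientations, directions and speeds.
            ex, ox = gabor_pair(size, f_x)
            et, ot = gabor_pair(size, f_t)
            ee = convolve1d(convolve1d(volume, ex, axis=2), et, axis=0)
            oo = convolve1d(convolve1d(volume, ox, axis=2), ot, axis=0)
            oe = convolve1d(convolve1d(volume, ox, axis=2), et, axis=0)
            eo = convolve1d(convolve1d(volume, ex, axis=2), ot, axis=0)
            even = ee + oo   # direction-selective even filter
            odd = oe - eo    # direction-selective odd (quadrature) filter
            return even ** 2 + odd ** 2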

    Approximation algorithm for the kinetic robust K-center problem

    Two complications frequently arise in real-world applications: motion, and the contamination of data by outliers. We consider a fundamental clustering problem, the k-center problem, within the context of these two issues. We are given a finite point set S of size n and an integer k. In the standard k-center problem, the objective is to compute a set of k center points that minimizes the maximum distance from any point of S to its closest center, or equivalently, the smallest radius such that S can be covered by k disks of this radius. In the discrete k-center problem the disk centers are drawn from the points of S, and in the absolute k-center problem the disk centers are unrestricted. We generalize this problem in two ways. First, we assume that points are in continuous motion, and the objective is to maintain a solution over time. Second, we assume that a robustness parameter 0 < t ⩽ 1 is given, and the objective is to compute the smallest radius such that there exist k disks of this radius that cover at least ⌈tn⌉ points of S. We present a kinetic data structure (in the KDS framework) that maintains a (3+ε)-approximation for the robust discrete k-center problem and a (4+ε)-approximation for the robust absolute k-center problem, both under the assumption that k is a constant. We also improve on a previous 8-approximation for the non-robust discrete kinetic k-center problem, for arbitrary k, and show that our data structure achieves a (4+ε)-approximation. All these results hold in any metric space of constant doubling dimension, which includes Euclidean space of constant dimension.
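
    To pin down the robust objective being approximated, here is a small brute-force sketch of the static robust discrete k-center problem itself: an exhaustive search over candidate radii and center subsets, usable only for tiny inputs and unrelated to the paper's kinetic data structure or its approximation guarantees; the function names are invented for the example.

        import numpy as np
        from itertools import combinations
        from math import ceil

        def covers(points, centers, radius, t):
            # True if disks of the given radius centred at `centers` cover at
            # least ceil(t * n) of the n input points.
            d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
            return int((d.min(axis=1) <= radius).sum()) >= ceil(t * len(points))

        def robust_discrete_k_center(points, k, t):
            # Try candidate radii (pairwise distances) in ascending order and all
            # k-subsets of the points as centres; return the first radius that
            # admits a covering of ceil(t * n) points. Exponential in k.
            n = len(points)
            radii = np.unique(np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2))
            for r in radii:
                for idx in combinations(range(n), k):
                    centers = points[list(idx)]
                    if covers(points, centers, r, t):
                        return r, centers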

    Generalized least squares-based parametric motion estimation and segmentation

    Motion analysis is one of the most important fields of computer vision. This is because the real world is in continuous motion, and it is clear that far more information can be obtained from moving scenes than from static ones. This thesis focuses mainly on developing motion estimation algorithms for application to image registration and motion segmentation problems. One of the main objectives of this work is to develop a highly accurate image registration technique that is tolerant to outliers and able to operate even in the presence of large deformations such as translations, rotations, scale changes, and global as well as spatially non-uniform illumination changes. Another objective of this thesis is to address motion estimation and segmentation in sequences of two images quasi-simultaneously and without a priori knowledge of the number of motion models present. The experiments reported in this work show that the proposed algorithms obtain highly accurate results.

    This thesis proposes several techniques related to the motion estimation problem. In particular, it deals with global motion estimation for image registration and with motion segmentation. In the first case, we suppose that the majority of the pixels of the image follow the same motion model, although the possibility of a large number of outliers is also considered. In the motion segmentation problem, the presence of more than one motion model is considered. In both cases, sequences of two consecutive grey-level images are used. A new generalized least squares-based motion estimator is proposed. The proposed formulation of the motion estimation problem provides an additional constraint that helps to match the pixels using image gradient information. This is achieved through the use of a weight for each observation, assigning high weight values to the observations considered inliers and low values to those considered outliers. To avoid falling into a local minimum, the proposed motion estimator uses a feature-based (SIFT-based) method to obtain good initial motion parameters. Therefore, it can deal with large motions such as translations, rotations, scale changes, viewpoint changes, etc. The accuracy of our approach has been tested on challenging real images using both affine and projective motion models. Two motion estimation techniques, which use M-estimators to deal with outliers within an iteratively reweighted least squares-based strategy, were selected to compare the accuracy of our approach. The results show that the proposed motion estimator obtains results as accurate as M-estimator-based techniques, and even better in most cases. The problem of accurately estimating motion under non-uniform illumination changes is also considered. A modification of the proposed global motion estimator is proposed to deal with this kind of illumination change. In particular, a dynamic image model in which the illumination factors are functions of the localization is used in place of the brightness constancy assumption, allowing for a more general and accurate image model. Experiments using challenging images will be presented, showing that the combination of both techniques is feasible and provides accurate estimates of the motion parameters even in the presence of strong illumination changes between the images. The last part of the thesis deals with the joint motion estimation and segmentation problem. The proposed algorithm uses temporal information, through the proposed generalized least squares motion estimation process, and spatial information, through an iterative region-growing algorithm that classifies regions of pixels into the different motion models present in the sequence. In addition, it can extract the different moving regions of the scene while estimating their motion quasi-simultaneously and without a priori information on the number of moving objects in the scene. The performance of the algorithm will be tested on synthetic and real images with multiple objects undergoing different types of motion.
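
    As a simple, hypothetical illustration of weighting each observation so that inliers dominate the fit, the sketch below runs an M-estimator-style iteratively reweighted least squares over an affine motion model. It is not the thesis's generalized least squares formulation, its SIFT-based initialization, or its illumination model; all names and constants are chosen for the example.

        import numpy as np

        def affine_irls(Ix, Iy, It, X, Y, n_iter=10, c=4.685):
            # Robust weighted least-squares fit of an affine flow
            #   u = a0 + a1*x + a2*y,  v = a3 + a4*x + a5*y
            # to the linearized brightness-constancy residuals Ix*u + Iy*v + It = 0.
            # Ix, Iy, It, X, Y are flattened per-pixel arrays. At each iteration a
            # per-observation weight is recomputed so that pixels with large
            # residuals (outliers) contribute little to the next solve.
            A = np.column_stack([Ix, Ix * X, Ix * Y, Iy, Iy * X, Iy * Y])
            b = -It
            w = np.ones(len(b))
            params = np.zeros(6)
            for _ in range(n_iter):
                sw = np.sqrt(w)
                params, *_ = np.linalg.lstsq(A * sw[:, None], sw * b, rcond=None)
                r = A @ params - b
                scale = 1.4826 * np.median(np.abs(r)) + 1e-12   # robust residual scale
                u = np.abs(r) / (c * scale)
                w = np.where(u < 1.0, (1.0 - u ** 2) ** 2, 0.0)  # Tukey biweight
            return params, w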