13 research outputs found

    Applications of two dimensional multiscale stochastic models. Mark R. Luettgen.

    Caption title. Includes bibliographical references (p. 33-34). Supported by AFOSR (AFOSR-88-0032), NSF (MIP-9015281, INT-9002393), and ONR (N00014-91-J-100).

    Line search multilevel optimization as computational methods for dense optical flow

    We evaluate the performance of different optimization techniques developed in the context of optical flow computation with different variational models. In particular, building on truncated Newton (TN) methods, which have been an effective approach for large-scale unconstrained optimization, we develop efficient multilevel schemes for computing the optical flow. More precisely, we compare the performance of a standard unidirectional multilevel algorithm, called multiresolution optimization (MR/OPT), with that of a bidirectional multilevel algorithm, called full multigrid optimization (FMG/OPT). The FMG/OPT algorithm treats the coarse grid correction as an optimization search direction and scales it using a line search. Experimental results on different image sequences using four models of optical flow computation show that the FMG/OPT algorithm outperforms both the TN and MR/OPT algorithms in terms of computational work and the quality of the optical flow estimates.
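    As a rough illustration of the FMG/OPT idea summarized above (not the paper's implementation), the sketch below treats a coarse-grid correction as a search direction on the fine grid and scales it with a one-dimensional line search. The callables energy, restrict, prolong, and coarse_solve are hypothetical stand-ins for the variational flow energy and the intergrid transfer and coarse-level solve operators.

```python
# Hypothetical sketch of a bidirectional (FMG/OPT-style) correction step:
# the coarse-grid correction is used as a search direction and its step
# length is chosen by a bounded 1-D line search on the fine-grid energy.
import numpy as np
from scipy.optimize import minimize_scalar

def fmg_opt_step(u, energy, restrict, prolong, coarse_solve):
    """One coarse-grid correction step applied to the fine-grid flow estimate u."""
    u_coarse = restrict(u)                    # current estimate on the coarse grid
    u_coarse_new = coarse_solve(u_coarse)     # (approximately) optimize the coarse problem
    d = prolong(u_coarse_new - u_coarse)      # correction lifted back to the fine grid
    # Scale the correction by a line search instead of accepting it directly.
    res = minimize_scalar(lambda a: energy(u + a * d),
                          bounds=(0.0, 2.0), method="bounded")
    return u + res.x * d
```

    In this toy version the step length is restricted to [0, 2]; a unit step would recover the plain coarse-grid correction of a standard multigrid cycle.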

    Efficient multiscale regularization with applications to the computation of optical flow

    Includes bibliographical references (p. 28-31). Supported by the Air Force Office of Scientific Research (AFOSR-92-J-0002), the Draper Laboratory IR&D Program (DL-H-418524), the Office of Naval Research (N00014-91-J-1004), and the Army Research Office (DAAL03-92-G-0115). Mark R. Luettgen, W. Clem Karl, Alan S. Willsky

    A general framework for nonlinear multigrid inversion

    Motion estimation using optical flow field

    Over the last decade, many low-level vision algorithms have been devised for extracting depth from intensity images. Most of them are based on the motion of a rigid observer, with translation and rotation constant with respect to the space coordinates. When multiple objects move and/or the objects change shape, these algorithms cannot be used. In this dissertation, we develop a new robust framework for the determination of dense 3-D position and motion fields from a stereo image sequence. The framework is based on the unified optical flow field (UOFF). In the UOFF approach, a four-frame model is used to compute six dense 3-D position and velocity fields, whose accuracy depends on the accuracy of the optical flow field computation. The approach can estimate rigid and/or nonrigid motion as well as observer and/or object motion. Here, a novel approach to optical flow field computation is developed, named the correlation-feedback approach. It differs from existing approaches in three features: feedback, a rubber window, and a special refinement step. With these three features, errors are reduced, boundaries are preserved, subpixel estimation accuracy is increased, and the system is robust. Convergence of the algorithm is proved in general. Since the UOFF is computed at each pixel, it is sensitive to noise or uncertainty at each pixel. To improve its performance, we apply two Kalman filters. Our analysis indicates that different image areas need different convergence rates; for instance, areas along boundaries have a faster convergence rate than interior areas. The first Kalman filter is developed to preserve moving boundaries in optical flow determination by applying the needed nonhomogeneous iterations. The second Kalman filter is devised to compute 3-D motion and structure based on a stereo image sequence. Since multi-object motion is allowed, newly visible areas may be exposed in the images; how to detect and handle these newly visible areas is addressed. The system and measurement noise covariance matrices, Q and R, in the two Kalman filters are analyzed in detail. Numerous experiments demonstrate the efficiency of our approach.
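    The dissertation's two Kalman filters are tailored to the UOFF setting; the generic sketch below, in which every matrix is a hypothetical placeholder, only illustrates where the process and measurement noise covariances Q and R enter a single predict/update cycle, for a state that could be, for example, a per-pixel flow or depth estimate.

```python
# Generic Kalman predict/update step showing the role of Q (process noise)
# and R (measurement noise); not the dissertation's specific filters.
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """x: state estimate, P: its covariance, z: new measurement."""
    # Predict with the process model F.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the observation model H and measurement z.
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(P.shape[0]) - K @ H) @ P_pred
    return x_new, P_new
```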

    Object-based 3-d motion and structure analysis for video coding applications

    Ankara : Department of Electrical and Electronics Engineering and the Institute of Engineering and Sciences of Bilkent University, 1997. Thesis (Ph.D.) -- Bilkent University, 1997. Includes bibliographical references (leaves 102-115). Novel 3-D motion analysis tools, which can be used in object-based video codecs, are proposed. In these tools, the movements of the objects, which are observed through 2-D video frames, are modeled in 3-D space. Segmentation of the 2-D frames into objects and dense 2-D motion vectors for each object are necessary as inputs for the proposed 3-D analysis. 2-D motion-based object segmentation is obtained by a Gibbs formulation; the initialization is achieved by a fast graph-theory-based region segmentation algorithm which is further improved to utilize the motion information. Moreover, the same Gibbs formulation gives the needed dense 2-D motion vector field. Formulations for the 3-D motion models are given for both rigid and non-rigid moving objects. Deformable motion is modeled by a Markov random field which permits elastic relations between neighbors, whereas rigid 3-D motion parameters are estimated using the E-matrix method. Some improvements on the E-matrix method are proposed to make this algorithm more robust to gross errors, such as those that result from incorrect segmentation of the 2-D correspondences between frames. Two algorithms are proposed to obtain dense depth estimates that are robust to input errors and suitable for encoding, respectively; the former simply gives a MAP estimate, while the latter uses rate-distortion theory. Finally, the 3-D motion models are further utilized for occlusion detection and motion-compensated temporal interpolation, and it is observed that for both applications the 3-D motion models have superiority over their 2-D counterparts. Simulation results on artificial and real data show the advantages of the 3-D motion models in object-based video coding algorithms. Alatan, A. Aydin. Ph.D.
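    For readers unfamiliar with the E-matrix method mentioned above, the sketch below is the textbook eight-point linear estimate of the essential matrix from calibrated point correspondences; it omits the robustness improvements proposed in the thesis, and the inputs x1 and x2 are assumed to be N-by-2 arrays of calibrated (normalized) image coordinates.

```python
# Plain eight-point estimate of the essential matrix E from calibrated
# correspondences x1 <-> x2, followed by projection onto the essential
# manifold (two equal singular values, one zero). Illustration only.
import numpy as np

def essential_matrix(x1, x2):
    u1, v1 = x1[:, 0], x1[:, 1]
    u2, v2 = x2[:, 0], x2[:, 1]
    ones = np.ones(len(x1))
    # Each correspondence gives one row of the linear system A e = 0,
    # where e is the row-major flattening of E in x2^T E x1 = 0.
    A = np.column_stack([u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, ones])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)                  # null vector of A as a 3x3 matrix
    U, S, Vt = np.linalg.svd(E)
    s = (S[0] + S[1]) / 2.0
    return U @ np.diag([s, s, 0.0]) @ Vt
```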

    Signal Processing on Textured Meshes

    In this thesis we extend signal processing techniques originally formulated in the context of image processing to techniques that can be applied to signals on arbitrary triangle meshes. We develop methods for the two most common representations of signals on triangle meshes: signals sampled at the vertices of a finely tessellated mesh, and signals mapped to a coarsely tessellated mesh through texture maps. Our first contribution is the combination of Lagrangian Integration and the Finite Element Method in the formulation of two signal processing tasks: Shock Filters for texture and geometry sharpening, and Optical Flow for texture registration. Our second contribution is the formulation of Gradient-Domain processing within the texture atlas. We define a function space that handles chart discontinuities, and linear operators that capture the metric distortion introduced by the parameterization. Our third contribution is the construction of a spatiotemporal atlas parameterization for evolving meshes. Our method introduces localized remeshing operations and a compact parameterization that improves geometry and texture video compression. We show temporally coherent signal processing using partial correspondences.
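    As a loose, self-contained illustration of gradient-domain processing on a vertex-sampled mesh signal, the sketch below solves a screened-Poisson-style system with a uniform graph Laplacian; the operators developed in the thesis additionally handle chart discontinuities and the metric distortion of the texture parameterization, which this toy version ignores. The edges argument is an assumed E-by-2 array of vertex index pairs.

```python
# Toy gradient-domain smoothing of a per-vertex signal on a triangle mesh,
# using a combinatorial graph Laplacian L = D - A as a stand-in for the
# thesis's FEM operators. Solves (I + lam * L) x = signal.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def smooth_vertex_signal(signal, edges, n_vertices, lam=1.0):
    i, j = edges[:, 0], edges[:, 1]
    w = np.ones(len(edges))
    # Symmetric adjacency matrix built from the (undirected) edge list.
    A = sp.coo_matrix((np.r_[w, w], (np.r_[i, j], np.r_[j, i])),
                      shape=(n_vertices, n_vertices)).tocsr()
    L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A
    return spla.spsolve(sp.identity(n_vertices, format="csr") + lam * L, signal)
```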