
    Model-based Optical Flow: Layers, Learning, and Geometry

    The estimation of motion in video sequences establishes temporal correspondences between pixels and surfaces and allows reasoning about a scene using multiple frames. Despite being a focus of research for over three decades, computing motion, or optical flow, remains challenging due to a number of difficulties, including the treatment of motion discontinuities and occluded regions, and the integration of information from more than two frames. One reason for these issues is that most optical flow algorithms only reason about the motion of pixels on the image plane, without taking the image formation pipeline or the 3D structure of the world into account. One approach to address this uses layered models, which represent the occlusion structure of a scene and provide an approximation to its geometry. The goal of this dissertation is to show ways to inject additional knowledge about the scene into layered methods, making them more robust, faster, and more accurate.

    First, this thesis demonstrates the modeling power of layers using the example of motion blur in videos, which is caused by fast motion relative to the exposure time of the camera. Layers segment the scene into regions that move coherently while preserving their occlusion relationships. The motion of each layer therefore directly determines its motion blur. At the same time, the layered model captures complex blur overlap effects at motion discontinuities. Using layers, we can thus formulate a generative model for blurred video sequences, and use this model to simultaneously deblur a video and compute accurate optical flow for highly dynamic scenes containing motion blur.

    Next, we consider the representation of the motion within layers. Since, in a layered model, important motion discontinuities are captured by the segmentation into layers, the flow within each layer varies smoothly and can be approximated using a low-dimensional subspace. We show how this subspace can be learned from training data using principal component analysis (PCA), and that flow estimation using this subspace is computationally efficient. The combination of the layered model and the low-dimensional subspace gives the best of both worlds: sharp motion discontinuities from the layers and computational efficiency from the subspace.

    Lastly, we show how layered methods can be dramatically improved using simple semantics. Instead of treating all layers equally, a semantic segmentation divides the scene into its static parts and moving objects. Static parts of the scene constitute a large majority of what is shown in typical video sequences; yet, in such regions optical flow is fully constrained by the depth structure of the scene and the camera motion. After segmenting out moving objects, we consider only static regions, and explicitly reason about the structure of the scene and the camera motion, yielding much better optical flow estimates. Furthermore, computing the structure of the scene allows us to better combine information from multiple frames, resulting in high accuracy even in occluded regions. For moving regions, we compute the flow using a generic optical flow method, and combine it with the flow computed for the static regions to obtain a full optical flow field.

    By combining layered models of the scene with reasoning about the dynamic behavior of the real, three-dimensional world, the methods presented herein push the envelope of optical flow computation in terms of robustness, speed, and accuracy, giving state-of-the-art results on benchmarks and pointing to important future research directions for the estimation of motion in natural scenes.
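
    To make the subspace representation concrete, the following minimal NumPy sketch illustrates the general idea of learning a PCA flow basis from example flow fields and encoding a flow with a few coefficients; the training data, the basis size K, and all variable names are illustrative assumptions rather than the dissertation's implementation.

        import numpy as np

        # Hypothetical training set: N example flow fields for one layer, each of
        # shape (H, W, 2) for the u and v components, flattened into rows.
        N, H, W = 200, 64, 64
        rng = np.random.default_rng(0)
        flows = rng.normal(size=(N, H * W * 2))   # placeholder data

        # Learn a low-dimensional flow basis with PCA (SVD of the centered data).
        mean_flow = flows.mean(axis=0)
        centered = flows - mean_flow
        _, _, Vt = np.linalg.svd(centered, full_matrices=False)
        K = 20                                    # number of basis flows kept (assumption)
        basis = Vt[:K]                            # (K, H*W*2) orthonormal flow basis

        # A layer's flow is then represented by only K coefficients.
        coeffs = centered[0] @ basis.T            # project one example onto the basis
        reconstructed = mean_flow + coeffs @ basis
        print(coeffs.shape, reconstructed.reshape(H, W, 2).shape)

    Restricting the per-layer flow to such a basis reduces the number of unknowns per layer from 2·H·W to K, which is the source of the computational efficiency mentioned above.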

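    The claim that optical flow in static regions is fully constrained by the depth structure and the camera motion corresponds to standard rigid-scene projection geometry: back-project each pixel using its depth, apply the relative camera motion, and re-project. The NumPy sketch below illustrates this relation; the function name, calibration conventions, and the assumption of a known depth map and calibrated camera are mine, not the dissertation's.

        import numpy as np

        def flow_from_depth_and_camera(depth, K, R, t):
            # Flow of a purely static scene induced by camera motion alone.
            # depth : (H, W) depth map of the first frame (assumed known)
            # K     : (3, 3) camera intrinsics
            # R, t  : rotation (3, 3) and translation (3,) of the second camera
            #         relative to the first
            H, W = depth.shape
            u, v = np.meshgrid(np.arange(W), np.arange(H))
            pix = np.stack([u, v, np.ones_like(u)], -1).reshape(-1, 3).T  # (3, H*W)

            rays = np.linalg.inv(K) @ pix                 # back-project to unit-depth rays
            points = rays * depth.reshape(1, -1)          # scale rays by depth
            points2 = R @ points + t[:, None]             # move into the second camera
            proj = K @ points2
            proj = proj[:2] / proj[2:3]                   # perspective division

            return (proj - pix[:2]).T.reshape(H, W, 2)    # per-pixel displacement
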
    Modeling of Locally Scaled Spatial Point Processes, and Applications in Image Analysis

    Spatial point processes provide a statistical framework for modeling random arrangements of objects, which is of relevance in a variety of scientific disciplines, including ecology, spatial epidemiology and material science. Describing systematic spatial variations within this framework and developing methods for estimating parameters from empirical data constitute an active area of research. Image analysis, in particular, provides a range of scenarios to which point process models are applicable. Typical examples are images of trees in remote sensing, cells in biology, or composite structures in material science. Due to its real-world orientation and versatility, the recently developed class of locally scaled point processes appears particularly suitable for the modeling of spatial object patterns. An unknown normalizing constant in the likelihood, however, makes inference complicated and requires elaborate techniques. This work presents an efficient Bayesian inference concept for locally scaled point processes. The suggested optimization procedure is applied to images of cross-sections through the stems of maize plants, where the goal is to accurately describe and classify different genotypes based on the spatial arrangement of their vascular bundles.

    A further spatial point process framework is developed specifically for the estimation of shape from texture. Texture learning and the estimation of surface orientation are two important tasks in pattern analysis and computer vision. Given the image of a scene in three-dimensional space, a frequent goal is to derive global geometrical knowledge, e.g. information on camera positioning and angle, from the local textural characteristics in the image. The statistical framework proposed comprises locally scaled point process strategies as well as a draft Bayesian marked point process model for inferring shape from texture.
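
    As a loose illustration of how a local scaling function induces systematic spatial variation, the Python sketch below simulates the simplest scaled case, an inhomogeneous Poisson pattern, by thinning. The inverse-square intensity form, function name, and parameters are assumptions made for illustration only; the locally scaled interaction models treated in this work, whose likelihoods involve the intractable normalizing constant mentioned above, are not reproduced here.

        import numpy as np

        def simulate_scaled_poisson(beta, scale_fn, width=1.0, height=1.0, seed=0):
            # Simulate an inhomogeneous Poisson pattern on a rectangle by thinning.
            # beta     : intensity of the homogeneous template process
            # scale_fn : local scaling function c(x, y); the intensity is taken here
            #            as beta / c(x, y)**2, so small c means densely packed points
            rng = np.random.default_rng(seed)

            # Grid-based upper bound on the intensity (adequate for smooth c > 0).
            gx, gy = np.meshgrid(np.linspace(0, width, 64), np.linspace(0, height, 64))
            lam_max = (beta / scale_fn(gx, gy) ** 2).max()

            # Homogeneous proposal with intensity lam_max, thinned to the target.
            n = rng.poisson(lam_max * width * height)
            pts = rng.uniform(size=(n, 2)) * np.array([width, height])
            lam = beta / scale_fn(pts[:, 0], pts[:, 1]) ** 2
            return pts[rng.uniform(size=n) < lam / lam_max]

        # Example: points packed more densely towards the left edge of the window.
        pattern = simulate_scaled_poisson(200.0, lambda x, y: 0.5 + x)
        print(len(pattern), "points")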