9 research outputs found

    Hierarchical Object Parsing from Structured Noisy Point Clouds

    Full text link
    Object parsing and segmentation from point clouds are challenging tasks because the relevant data is available only as thin structures along object boundaries or other features, and is corrupted by large amounts of noise. To handle this kind of data, flexible shape models are desired that can accurately follow the object boundaries. Popular models such as Active Shape and Active Appearance models lack the necessary flexibility for this task, while recent approaches such as the Recursive Compositional Models make model simplifications in order to obtain computational guarantees. This paper investigates a hierarchical Bayesian model of shape and appearance in a generative setting. The input data is explained by an object parsing layer, which is a deformation of a hidden PCA shape model with a Gaussian prior. The paper also introduces a novel efficient inference algorithm that uses informed data-driven proposals to initialize local searches for the hidden variables. Applied to the problem of object parsing from structured point clouds such as edge detection images, the proposed approach obtains state-of-the-art parsing errors on two standard datasets without using any intensity information. Comment: 13 pages, 16 figures
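
    As a rough illustration of the generative structure described above, the sketch below draws PCA coefficients from a Gaussian prior, forms the hidden PCA shape, and perturbs it to obtain an object parsing layer. All array sizes, variable names, and noise scales are illustrative assumptions, not the paper's actual model or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

n_points = 64            # 2D boundary points describing the shape
n_modes = 8              # number of PCA deformation modes

mean_shape = rng.normal(size=(n_points, 2))        # stand-in mean contour
basis = rng.normal(size=(n_modes, n_points, 2))    # stand-in PCA basis
eigvals = np.linspace(1.0, 0.1, n_modes)           # prior variances per mode

# Hidden variables: PCA coefficients with a Gaussian prior N(0, diag(eigvals)).
coeffs = rng.normal(scale=np.sqrt(eigvals))

# Hidden PCA shape: mean plus a linear combination of the modes.
pca_shape = mean_shape + np.tensordot(coeffs, basis, axes=1)

# Object parsing layer: a deformation of the PCA shape, modelled here as small
# Gaussian perturbations of every contour point.
parsing_layer = pca_shape + rng.normal(scale=0.05, size=pca_shape.shape)

# The observed point cloud would then be explained by points scattered near the
# parsing layer plus structured noise.
print(parsing_layer.shape)   # (64, 2)
```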

    PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume

    Full text link
    We present a compact but effective CNN model for optical flow, called PWC-Net. PWC-Net has been designed according to simple and well-established principles: pyramidal processing, warping, and the use of a cost volume. Cast in a learnable feature pyramid, PWC-Net uses the current optical flow estimate to warp the CNN features of the second image. It then uses the warped features and features of the first image to construct a cost volume, which is processed by a CNN to estimate the optical flow. PWC-Net is 17 times smaller in size and easier to train than the recent FlowNet2 model. Moreover, it outperforms all published optical flow methods on the MPI Sintel final pass and KITTI 2015 benchmarks, running at about 35 fps on Sintel resolution (1024x436) images. Our models are available at https://github.com/NVlabs/PWC-Net. Comment: CVPR 2018 camera-ready version (with GitHub link to Caffe and PyTorch code)
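
    Two of the three design principles, warping the second image's features by the current flow estimate and building a correlation cost volume, can be sketched in a few lines of PyTorch. This is a simplified illustration and not the released PWC-Net code; the search range `max_disp` and the bilinear `grid_sample` warping are assumptions about a typical implementation.

```python
import torch
import torch.nn.functional as F

def warp(feat2, flow):
    """Warp features of the second image towards the first using the current flow.

    feat2: (B, C, H, W) features of image 2
    flow:  (B, 2, H, W) flow from image 1 to image 2, in pixels (x, y)
    """
    b, _, h, w = feat2.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(feat2.device)   # (2, H, W)
    coords = grid.unsqueeze(0) + flow                               # sample positions
    # Normalize sample positions to [-1, 1] for grid_sample.
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid_norm = torch.stack((coords_x, coords_y), dim=-1)           # (B, H, W, 2)
    return F.grid_sample(feat2, grid_norm, align_corners=True)

def cost_volume(feat1, feat2_warped, max_disp=4):
    """Correlation cost volume over a (2*max_disp+1)^2 search window."""
    b, c, h, w = feat1.shape
    padded = F.pad(feat2_warped, [max_disp] * 4)
    costs = []
    for dy in range(2 * max_disp + 1):
        for dx in range(2 * max_disp + 1):
            shifted = padded[:, :, dy:dy + h, dx:dx + w]
            costs.append((feat1 * shifted).mean(dim=1, keepdim=True))
    return torch.cat(costs, dim=1)   # (B, (2*max_disp+1)^2, H, W)
```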

    Simultaneous motion detection and background reconstruction with a conditional mixed-state Markov random field

    Get PDF
    In this work we present a new way of simultaneously solving the problems of motion detection and background image reconstruction. An accurate estimation of the background is only possible if we locate the moving objects; meanwhile, a correct motion detection is achieved if a good background model is available. The key of our joint approach is to define a single random process that can take two types of values, instead of defining two different processes, one symbolic (motion detection) and one numeric (background intensity estimation). It thus allows us to exploit the (spatio-temporal) interaction between a decision problem (motion detection) and an estimation problem (intensity reconstruction). Consequently, solving both tasks jointly means obtaining a single optimal estimate of such a process. The intrinsic interaction and simultaneity between both problems is shown to be better modeled within the so-called mixed-state statistical framework, which is extended here to account for symbolic states and conditional random fields. Experiments on real sequences and comparisons with existing motion detection methods support our proposal. Further implications for video sequence inpainting are also discussed. © 2011 Springer Science+Business Media, LLC.
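
    The core idea of a single mixed-state process can be illustrated with a toy per-pixel update in which one value is either the symbolic label "motion" or a numeric background intensity. The costs, weights, and update rule below are invented for illustration only and are not the conditional mixed-state Markov random field or its optimization from the paper.

```python
# Toy mixed-state update: each pixel carries one value that is either the
# symbolic state MOTION or a numeric background intensity in [0, 1].
MOTION = -1.0   # symbolic state encoded as a reserved value outside [0, 1]

def update_pixel(obs, prev_background, neighbor_states, lam=0.1, tau=0.2):
    """Choose between the symbolic 'motion' state and a numeric background value.

    obs:             current observed intensity at the pixel
    prev_background: previous background estimate at the pixel
    neighbor_states: mixed-state values of the 4-neighbours
    """
    n_moving = sum(1 for s in neighbor_states if s == MOTION)

    # Data evidence: an observation far from the background estimate favours
    # motion; neighbouring labels add a spatial interaction term.
    motion_cost = -abs(obs - prev_background) + lam * (4 - n_moving)
    still_cost = abs(obs - prev_background) + lam * n_moving

    if motion_cost + tau < still_cost:
        return MOTION                           # symbolic decision: pixel is moving
    # Numeric estimation: refine the background intensity with the observation.
    return 0.9 * prev_background + 0.1 * obs
```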

    Dense Correspondence Estimation for Image Interpolation

    Get PDF
    We evaluate the current state of the art in dense correspondence estimation for use in multi-image interpolation algorithms. The evaluation is carried out on three real-world scenes and one synthetic scene, each featuring varying challenges for dense correspondence estimation. The primary focus of our study is the perceptual quality of the interpolation sequences created from the estimated flow fields. Perceptual plausibility is assessed by means of a psychophysical user study. Our results show that the current state of the art in dense correspondence estimation does not produce visually plausible interpolations.
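
    For context, the sketch below shows one simple way a dense correspondence (flow) field is typically turned into an in-between frame: both input images are warped toward the intermediate time and blended. The backward-warping scheme and nearest-neighbour sampling are assumptions chosen for brevity, not the interpolation algorithms evaluated in the study.

```python
import numpy as np

def interpolate_frame(img0, img1, flow01, t=0.5):
    """Create an in-between frame by warping both images along a dense flow field.

    img0, img1: (H, W) grayscale frames
    flow01:     (H, W, 2) flow from img0 to img1, in pixels (x, y)
    t:          temporal position of the in-between frame in [0, 1]
    """
    h, w = img0.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)

    # Sample img0 a fraction t back along the flow and img1 the remaining
    # fraction forward, assuming the flow varies slowly across the image.
    x0 = np.clip(xs - t * flow01[..., 0], 0, w - 1)
    y0 = np.clip(ys - t * flow01[..., 1], 0, h - 1)
    x1 = np.clip(xs + (1 - t) * flow01[..., 0], 0, w - 1)
    y1 = np.clip(ys + (1 - t) * flow01[..., 1], 0, h - 1)

    warped0 = img0[y0.round().astype(int), x0.round().astype(int)]
    warped1 = img1[y1.round().astype(int), x1.round().astype(int)]

    # Linear blend of the two warped images.
    return (1 - t) * warped0 + t * warped1
```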

    Learning Random Field Models For Computer Vision

    Full text link
    Random fields are among the most popular models in computer vision due to their ability to model statistical interdependence between individual variables. Three key issues in the application of random fields to a given problem are (i) defining appropriate graph structures that represent the underlying task, (ii) finding suitable functions over the graph that encode certain preferences, and (iii) performing inference efficiently on the resulting model to obtain a solution. While a large body of recent research has been devoted to the last issue, this thesis focuses on the first two. We first study them in the context of three well-known low-level vision problems, namely image denoising, stereo vision, and optical flow, and demonstrate the benefit of using more appropriate graph structures and learning more suitable potential functions. Moreover, we extend our study to landmark classification, a problem in the high-level vision domain where random field models have rarely been used. We show that higher classification accuracy can be achieved by considering multiple images jointly as a random field instead of regarding them as separate entities.
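
    The first two issues, choosing a graph structure and defining functions over it, can be made concrete with a toy pairwise random field for image denoising on a 4-connected pixel grid. The quadratic data term, truncated-quadratic smoothness term, and hand-set weights below are illustrative assumptions; the thesis is precisely about learning such structures and potentials from data rather than fixing them by hand.

```python
import numpy as np

def mrf_energy(labels, observed, lam=2.0, trunc=10.0):
    """Energy of a pairwise random field for image denoising.

    Graph structure: 4-connected pixel grid.
    Unary potential:  quadratic agreement with the observed noisy image.
    Pairwise potential: truncated quadratic smoothness between neighbours.
    """
    data_term = np.sum((labels - observed) ** 2)

    dx = np.minimum((labels[:, 1:] - labels[:, :-1]) ** 2, trunc)
    dy = np.minimum((labels[1:, :] - labels[:-1, :]) ** 2, trunc)
    smoothness_term = np.sum(dx) + np.sum(dy)

    return data_term + lam * smoothness_term
```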

    Modeling Pedestrian Behavior in Video

    Get PDF
    The purpose of this dissertation is to address the problem of predicting pedestrian movement and behavior in and among crowds. Specifically, we focus on an agent-based approach where pedestrians are treated individually and the parameters of an energy model are trained on real-world video data. These learned pedestrian models are useful in applications such as tracking, simulation, and artificial intelligence. The applications of this method are explored, and experimental results show that our trained pedestrian motion model is beneficial for predicting unseen or lost tracks as well as for guiding appearance-based tracking algorithms. The method we have developed for training such a pedestrian model operates by optimizing a set of weights governing an aggregate energy function in order to minimize a loss function computed between a model's prediction and annotated ground-truth pedestrian tracks. The formulation of the underlying energy function is such that, using tight convex upper bounds, we are able to efficiently approximate the derivative of the loss function with respect to the parameters of the model. Once this is accomplished, the model parameters are updated using straightforward gradient descent techniques in order to achieve an optimal solution. This formulation also lends itself to the development of a multiple-behavior model. Multiple pedestrian behavior styles, informally referred to as "stereotypes", are common in real data. In our model we show that it is possible, due to the unique ability to compute the derivative of the loss function, to build a new model which utilizes a soft-minimization of single behavior models. This allows unsupervised training of multiple different behavior models in parallel. This novel extension makes our method unique among other methods in the attempt to accurately describe human pedestrian behavior for the myriad applications that exist. The ability to describe multiple behaviors shows significant improvements in the task of pedestrian motion prediction.
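
    A highly simplified sketch of the training loop described above: the model predicts a pedestrian's next step by minimizing a weighted sum of energy features, and the feature weights are updated by gradient descent on a loss against the ground-truth position. The features, the coarse grid search used as the minimizer, the finite-difference gradient, and the soft-minimum over behavior models are all illustrative assumptions; in particular, the dissertation differentiates the loss via tight convex upper bounds rather than numerically.

```python
import numpy as np

def features(pos, goal, others):
    """Energy features of a candidate next position (toy examples)."""
    goal_dist = np.linalg.norm(goal - pos)                              # pull towards goal
    collision = sum(np.exp(-np.linalg.norm(o - pos)) for o in others)   # push away from others
    return np.array([goal_dist, collision])

def predict(weights, current, goal, others, step=0.25):
    """Pick the candidate step with the lowest weighted energy (coarse grid search)."""
    candidates = [current + step * np.array([dx, dy])
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    energies = [weights @ features(c, goal, others) for c in candidates]
    return candidates[int(np.argmin(energies))]

def soft_min(energies, alpha=5.0):
    """Smooth minimum over the energies of several behavior models."""
    e = np.asarray(energies)
    return -np.log(np.sum(np.exp(-alpha * e))) / alpha

def train_step(weights, current, goal, others, truth, lr=0.01, eps=1e-4):
    """One gradient-descent step on the loss w.r.t. the weights (numerical gradient)."""
    def loss(w):
        return np.linalg.norm(predict(w, current, goal, others) - truth) ** 2
    grad = np.array([(loss(weights + eps * np.eye(len(weights))[i]) - loss(weights)) / eps
                     for i in range(len(weights))])
    return weights - lr * grad
```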

    Algorithmen zur Korrespondenzschätzung und Bildinterpolation für die photorealistische Bildsynthese [Algorithms for Correspondence Estimation and Image Interpolation for Photorealistic Image Synthesis]

    Get PDF
    Free-viewpoint video is a new form of visual medium that has received considerable attention in the last 10 years. Most systems reconstruct the geometry of the scene, thus restricting themselves to synchronized multi-view footage and Lambertian scenes. In this thesis we follow a different approach and describe contributions to a purely image-based end-to-end system operating on sparse, unsynchronized multi-view footage. In particular, we focus on dense correspondence estimation and the synthesis of in-between views. In contrast to previous approaches, our correspondence estimation is specifically tailored to the needs of image interpolation; our multi-image interpolation technique advances the state of the art by dispensing with the conventional blending step in favor of solving a labeling problem. Both algorithms are put to work in an image-based free-viewpoint video system, and we demonstrate their applicability to space-time visual effects production as well as to stereoscopic content creation.
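
    The "no blending" idea can be caricatured as a per-pixel label decision: each output pixel is copied from exactly one warped source image instead of averaging the candidates. The greedy, unary-only decision below is only a stand-in for the spatially regularized labeling problem the thesis formulates; the cost maps and function name are assumptions for illustration.

```python
import numpy as np

def select_labels(warped0, warped1, cost0, cost1):
    """Per-pixel label decision between two warped source images.

    warped0, warped1: (H, W) candidate images warped to the novel view
    cost0, cost1:     (H, W) per-pixel matching costs of the two candidates
    """
    take_first = cost0 <= cost1                  # boolean label map
    return np.where(take_first, warped0, warped1)
```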