
    Full Flow: Optical Flow Estimation By Global Optimization over Regular Grids

    We present a global optimization approach to optical flow estimation. The approach optimizes a classical optical flow objective over the full space of mappings between discrete grids. No descriptor matching is used. The highly regular structure of the space of mappings enables optimizations that reduce the computational complexity of the algorithm's inner loop from quadratic to linear and support efficient matching of tens of thousands of nodes to tens of thousands of displacements. We show that one-shot global optimization of a classical Horn-Schunck-type objective over regular grids at a single resolution is sufficient to initialize continuous interpolation and achieve state-of-the-art performance on challenging modern benchmarks. Comment: To be presented at CVPR 2016.
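
    As an illustration of the kind of objective being optimized, the sketch below evaluates a Horn-Schunck-type energy (absolute data term plus quadratic smoothness) for an integer-valued displacement field on a regular grid. It is a minimal NumPy sketch, not the authors' implementation; the frames I0, I1 and the lam weight are made-up placeholders.

        import numpy as np

        def discrete_flow_energy(I0, I1, flow, lam=0.1):
            # Data term: intensity difference after applying the integer displacement.
            H, W = I0.shape
            ys, xs = np.mgrid[0:H, 0:W]
            xw = np.clip(xs + flow[..., 0], 0, W - 1)
            yw = np.clip(ys + flow[..., 1], 0, H - 1)
            data = np.abs(I0 - I1[yw, xw]).sum()
            # Quadratic (Horn-Schunck-style) smoothness over grid neighbors.
            smooth = ((flow[:, 1:] - flow[:, :-1]) ** 2).sum() \
                   + ((flow[1:, :] - flow[:-1, :]) ** 2).sum()
            return data + lam * smooth

        # Toy check: the true 2-pixel shift scores far lower than zero flow.
        I0 = np.random.rand(48, 64)
        I1 = np.roll(I0, 2, axis=1)
        flow = np.zeros((48, 64, 2), dtype=int)
        print(discrete_flow_energy(I0, I1, flow))   # zero displacement
        flow[..., 0] = 2
        print(discrete_flow_energy(I0, I1, flow))   # true displacement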

    Efficient MRF Energy Propagation for Video Segmentation via Bilateral Filters

    Segmentation of an object from a video is a challenging task in multimedia applications. Depending on the application, automatic or interactive methods are desired; however, regardless of the application type, efficient computation of video object segmentation is crucial for time-critical applications; in particular, mobile and interactive applications require near real-time performance. In this paper, we address the problem of video segmentation from the perspective of efficiency. We first redefine the problem of video object segmentation as the propagation of MRF energies along the temporal domain. For this purpose, a novel and efficient method is proposed to propagate MRF energies throughout the frames via bilateral filters without using any global texture, color or shape model. The recently presented bi-exponential filter is utilized for efficiency, and a novel technique is developed to dynamically solve graph cuts for varying, non-lattice graphs in a general linear filtering scenario. These improvements are evaluated in both automatic and interactive video segmentation scenarios. In addition to efficiency, segmentation quality is tested both quantitatively and qualitatively. Indeed, for some challenging examples, significant time savings are observed without loss of segmentation quality. Comment: IEEE Transactions on Multimedia (Volume: 16, Issue: 5, Aug. 2014).
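
    The sketch below illustrates the core idea of propagating per-pixel MRF unary energies from one frame to the next with bilateral (joint space and color) weights. It is a brute-force cross-bilateral average written for clarity, not the paper's efficient bi-exponential filter; all array names and parameters are illustrative assumptions.

        import numpy as np

        def propagate_unaries(unary_prev, guide_prev, guide_cur,
                              radius=3, sigma_s=2.0, sigma_r=0.1):
            # unary_prev: (H, W, L) label energies from frame t.
            # guide_prev, guide_cur: (H, W) grayscale guides for frames t and t+1.
            H, W, L = unary_prev.shape
            out = np.zeros_like(unary_prev)
            wsum = np.zeros((H, W))
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    u = np.roll(unary_prev, (dy, dx), axis=(0, 1))
                    g = np.roll(guide_prev, (dy, dx), axis=(0, 1))
                    # Spatial closeness times color similarity (bilateral weight).
                    w = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)
                               - (guide_cur - g) ** 2 / (2 * sigma_r ** 2))
                    out += u * w[..., None]
                    wsum += w
            return out / wsum[..., None]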

    Finding Temporally Consistent Occlusion Boundaries in Videos using Geometric Context

    We present an algorithm for finding temporally consistent occlusion boundaries in videos to support segmentation of dynamic scenes. We learn occlusion boundaries in a pairwise Markov random field (MRF) framework. We first estimate the probability of a spatio-temporal edge being an occlusion boundary by using appearance, flow, and geometric features. Next, we enforce occlusion boundary continuity in an MRF model by learning pairwise occlusion probabilities using a random forest. Then, we temporally smooth boundaries to remove temporal inconsistencies in occlusion boundary estimation. Our proposed framework provides an efficient approach for finding temporally consistent occlusion boundaries in video by utilizing causality, redundancy in videos, and the semantic layout of the scene. We have developed a dataset with fully annotated ground-truth occlusion boundaries for over 30 videos (~5000 frames). This dataset is used to evaluate temporal occlusion boundaries and provides a much needed baseline for future studies. We perform experiments to demonstrate the role of scene layout and temporal information for occlusion reasoning in dynamic scenes. Comment: 2015 IEEE Winter Conference on Applications of Computer Vision (WACV).
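
    As a sketch of one step described above, the snippet below trains a random forest to predict per-edge occlusion probabilities from edge features, which could then serve as pairwise terms in the MRF. It uses scikit-learn with made-up stand-in features and labels; the real feature set (appearance, flow, geometry) is the authors'.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 8))                 # stand-in per-edge features
        y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # stand-in boundary labels

        forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
        # Per-edge occlusion probability, usable as a pairwise potential weight.
        p_occ = forest.predict_proba(X)[:, 1]
        print(p_occ[:5])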

    Multi-Cue Structure Preserving MRF for Unconstrained Video Segmentation

    Video segmentation is a stepping stone to understanding video context. Video segmentation enables one to represent a video by decomposing it into coherent regions which comprise whole or parts of objects. However, the challenge originates from the fact that most video segmentation algorithms are based on unsupervised learning, owing to the expensive cost of pixelwise video annotation and the intra-class variability within similar unconstrained video classes. We propose a Markov Random Field model for unconstrained video segmentation that relies on the tight integration of multiple cues: vertices are defined from contour-based superpixels, unary potentials from temporally smooth label likelihoods, and pairwise potentials from the global structure of the video. This multi-cue structure is a breakthrough in extracting coherent object regions for unconstrained videos in the absence of supervision. Our experiments on the VSB100 dataset show that the proposed model significantly outperforms competing state-of-the-art algorithms. Qualitative analysis illustrates that the video segmentation results of the proposed model are consistent with human perception of objects.
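
    The sketch below shows how such an MRF energy over a superpixel graph might be assembled: a unary cost per superpixel label plus Potts-style pairwise penalties on structure edges. It is a minimal illustrative energy function, not the authors' model; labels, unary, edges and weights are assumed inputs, and an actual method would minimize this energy (e.g., with graph cuts) rather than merely evaluate it.

        import numpy as np

        def mrf_energy(labels, unary, edges, weights):
            # labels: (N,) label index per superpixel.
            # unary: (N, L) per-superpixel label costs (e.g., from label likelihoods).
            # edges: list of (i, j) superpixel pairs; weights: matching Potts weights.
            energy = unary[np.arange(len(labels)), labels].sum()
            for w, (i, j) in zip(weights, edges):
                energy += w * (labels[i] != labels[j])   # penalize label disagreement
            return energy

        # Tiny example: 3 superpixels, 2 labels, two structure edges.
        print(mrf_energy(np.array([0, 0, 1]),
                         np.array([[0.1, 0.9], [0.2, 0.8], [0.7, 0.3]]),
                         [(0, 1), (1, 2)], [1.0, 0.5]))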

    A discriminative view of MRF pre-processing algorithms

    While Markov Random Fields (MRFs) are widely used in computer vision, they present a challenging inference problem. MRF inference can be accelerated by pre-processing techniques like Dead End Elimination (DEE) or QPBO-based approaches, which compute the optimal labeling of a subset of variables. These techniques are guaranteed to never wrongly label a variable, but they often leave a large number of variables unlabeled. We address this shortcoming by interpreting pre-processing as a classification problem, which allows us to trade off false positives (i.e., giving a variable an incorrect label) against false negatives (i.e., failing to label a variable). We describe an efficient discriminative rule that finds optimal solutions for a subset of variables. Our technique provides both per-instance and worst-case guarantees concerning the quality of the solution. Empirical studies were conducted over several benchmark datasets. We obtain a speedup factor of 2 to 12 over expansion moves without preprocessing, and on difficult non-submodular energy functions we produce slightly lower energy. Comment: ICCV 2017.
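
    For context, the snippet below implements the classical Goldstein-style Dead End Elimination test this line of work builds on: label a at a variable can be pruned when some other label b is better even under the worst-case choice of neighbor labels. It is a hedged illustration of the pre-processing idea for a pairwise MRF, not the paper's discriminative rule.

        import numpy as np

        def dee_prune(unary, pairwise):
            # unary: (L,) costs for one variable; pairwise: list of (L, L) cost
            # matrices, one per incident edge (rows indexed by this variable's label).
            L = len(unary)
            prune = np.zeros(L, dtype=bool)
            for a in range(L):
                for b in range(L):
                    if a == b:
                        continue
                    # Worst-case advantage of b over a across all neighbor labelings.
                    gain = unary[a] - unary[b]
                    gain += sum((P[a] - P[b]).min() for P in pairwise)
                    if gain > 0:          # a can never beat b, so eliminate a
                        prune[a] = True
                        break
            return prune

        print(dee_prune(np.array([0.0, 5.0]), [np.eye(2)]))   # label 1 is pruned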

    Markov mezƑk a kĂ©pmodellezĂ©sben, alkalmazĂĄsuk az automatikus kĂ©pszegmentĂĄlĂĄs terĂŒletĂ©n = Markovian Image Models: Applications in Unsupervised Image Segmentation

    1) We have proposed a monogrid MRF model which is able to combine color and texture features in order to improve the quality of segmentation results. We have also solved the estimation of the model parameters. This work has been published in the Image and Vision Computing journal. 2) We have proposed an RJMCMC sampling method which is able to identify multi-dimensional Gaussian mixtures. Using this technique, we have developed a fully automatic color image segmentation algorithm. Our results have been published at the BMVC 2004 international conference and in the Image and Vision Computing journal. 3) A new multilayer MRF model has been proposed which is able to segment an image based on multiple cues (such as color, texture, or motion). This work has been published at the HACIPPR 2005 and ACCV 2006 international conferences. The related work on optic flow computation and on color-, texture-, and motion-based GVF active contours, done with my student, Mr. Peter Horvath, won first prize at the local Student Research Competition in 2004; the results were presented at the KEPAF 2004 conference. 4) A new shape prior, called 'gas of circles', has been introduced using active contour models. This work was done in collaboration with the Ariana group of INRIA, France, and my PhD student, Mr. Peter Horvath. The results are published at the ICPR 2006 and ICCVGIP 2006 conferences. A preliminary study on active contour models using shape moments has also been done; these results are published at HACIPPR 2005.

    Detection of dirt impairments from archived film sequences: survey and evaluations

    Film dirt is the most commonly encountered artifact in archive restoration applications. Since dirt usually appears as a temporally impulsive event, motion-compensated interframe processing is widely applied for its detection. However, motion-compensated prediction requires a high degree of complexity and can be unreliable when motion estimation fails. Consequently, many techniques using spatial or spatiotemporal filtering without motion have also been proposed as alternatives. A comprehensive survey and evaluation of existing methods is presented, in which both qualitative and quantitative performance is compared in terms of accuracy, robustness, and complexity. After analyzing these algorithms and identifying their limitations, we conclude with guidance on choosing among these algorithms and with promising directions for future research.
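
    As a toy illustration of the temporally impulsive nature of dirt mentioned above, the snippet below flags pixels that depart from both temporal neighbors in the same direction. It deliberately ignores motion compensation (the crux of the surveyed methods) and uses an assumed threshold, so it is only a sketch of the basic idea.

        import numpy as np

        def detect_impulsive_dirt(prev, cur, nxt, thresh=0.15):
            # Frames as float arrays in [0, 1]; assumes negligible motion.
            d_prev = cur - prev
            d_next = cur - nxt
            same_sign = np.sign(d_prev) == np.sign(d_next)
            # Dirt candidates: large, same-signed departure from both neighbors.
            return same_sign & (np.abs(d_prev) > thresh) & (np.abs(d_next) > thresh)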

    Fast Multi-frame Stereo Scene Flow with Motion Segmentation

    We propose a new multi-frame method for efficiently computing scene flow (dense depth and optical flow) and camera ego-motion for a dynamic scene observed from a moving stereo camera rig. Our technique also segments out moving objects from the rigid scene. In our method, we first estimate the disparity map and the 6-DOF camera motion using stereo matching and visual odometry. We then identify regions inconsistent with the estimated camera motion and compute per-pixel optical flow only at these regions. This flow proposal is fused with the camera motion-based flow proposal using fusion moves to obtain the final optical flow and motion segmentation. This unified framework benefits all four tasks (stereo, optical flow, visual odometry, and motion segmentation), leading to overall higher accuracy and efficiency. Our method is currently ranked third on the KITTI 2015 scene flow benchmark. Furthermore, our CPU implementation runs in 2-3 seconds per frame, which is 1-3 orders of magnitude faster than the top six methods. We also report a thorough evaluation on challenging Sintel sequences with fast camera and object motion, where our method consistently outperforms OSF [Menze and Geiger, 2015], which is currently ranked second on the KITTI benchmark. Comment: 15 pages. To appear at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017). Our results were submitted to the KITTI 2015 Stereo Scene Flow Benchmark in November 2016.
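
    To illustrate the proposal-fusion step in spirit, the sketch below greedily picks, per pixel, whichever of two flow proposals has the lower data cost. A genuine fusion move would instead solve a binary MRF (e.g., with QPBO) so that smoothness is also respected; all inputs here are assumed placeholders.

        import numpy as np

        def fuse_flow_proposals(cost_rigid, cost_flow, flow_rigid, flow_est):
            # cost_*: (H, W) per-pixel matching costs for each proposal.
            # flow_*: (H, W, 2) flow proposals (camera-motion-based vs. estimated).
            choose_est = cost_flow < cost_rigid
            fused = np.where(choose_est[..., None], flow_est, flow_rigid)
            # Pixels preferring the non-rigid proposal act as a crude motion mask.
            return fused, choose_est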
    • 
