
    Dense Correspondence Estimation for Image Interpolation

    We evaluate the current state of the art in dense correspondence estimation for use in multi-image interpolation algorithms. The evaluation is carried out on three real-world scenes and one synthetic scene, each featuring varying challenges for dense correspondence estimation. The primary focus of our study is on the perceptual quality of the interpolation sequences created from the estimated flow fields. Perceptual plausibility is assessed by means of a psychophysical user study. Our results show that the current state of the art in dense correspondence estimation does not produce visually plausible interpolations.
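    The flow-based interpolation pipeline this study evaluates can be sketched in a few lines: each input frame is backward-warped toward an intermediate time t along the estimated flow, and the two warps are blended. The function and parameter names below are illustrative assumptions, not any evaluated method's API, and nearest-neighbour sampling stands in for proper bilinear warping.

```python
import numpy as np

def interpolate_frame(frame_a, frame_b, flow_ab, t=0.5):
    """Synthesize an in-between frame from a dense flow field (toy sketch).

    frame_a, frame_b: (H, W) grayscale frames
    flow_ab: (H, W, 2) dense (dx, dy) correspondences from A to B
    t: interpolation time in [0, 1]
    """
    h, w = frame_a.shape
    ys, xs = np.mgrid[0:h, 0:w]

    def sample(img, dx, dy):
        # nearest-neighbour backward warp, clipped at the borders
        sx = np.clip(np.round(xs + dx).astype(int), 0, w - 1)
        sy = np.clip(np.round(ys + dy).astype(int), 0, h - 1)
        return img[sy, sx]

    # warp A forward by t*flow, warp B backward by (1-t)*flow, then blend
    warp_a = sample(frame_a, -t * flow_ab[..., 0], -t * flow_ab[..., 1])
    warp_b = sample(frame_b, (1 - t) * flow_ab[..., 0], (1 - t) * flow_ab[..., 1])
    return (1 - t) * warp_a + t * warp_b

# with zero flow the midpoint frame is a plain 50/50 blend
frame_a = np.zeros((4, 4))
frame_b = np.ones((4, 4))
flow = np.zeros((4, 4, 2))
mid = interpolate_frame(frame_a, frame_b, flow, t=0.5)
```

    Perceptual artifacts in such sequences typically come from errors in flow_ab, which is exactly what the user study above measures.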

    Semantic-aware One-shot Face Re-enactment with Dense Correspondence Estimation

    One-shot face re-enactment is a challenging task due to the identity mismatch between source and driving faces. Specifically, the suboptimally disentangled identity information of driving subjects would inevitably interfere with the re-enactment results and lead to face shape distortion. To solve this problem, this paper proposes to use a 3D Morphable Model (3DMM) for explicit facial semantic decomposition and identity disentanglement. Instead of using 3D coefficients alone for re-enactment control, we take advantage of the generative ability of the 3DMM to render textured face proxies. These proxies contain abundant yet compact geometric and semantic information of human faces, which enables us to compute the face motion field between source and driving images by estimating the dense correspondence. In this way, we can approximate re-enactment results by warping source images according to the motion field, and a Generative Adversarial Network (GAN) is adopted to further improve the visual quality of the warping results. Extensive experiments on various datasets demonstrate the advantages of the proposed method over existing state-of-the-art benchmarks in both identity preservation and re-enactment fulfillment.

    There and Back Again: Self-supervised Multispectral Correspondence Estimation

    Across a wide range of applications, from autonomous vehicles to medical imaging, multi-spectral images provide an opportunity to extract information not present in color images. One of the most important steps in making this information readily available is the accurate estimation of dense correspondences between different spectra. Due to the nature of cross-spectral images, most correspondence-solving techniques for the visual domain are simply not applicable. Furthermore, most cross-spectral techniques utilize spectra-specific characteristics to perform the alignment. In this work, we aim to address the dense correspondence estimation problem in a way that generalizes to more than one spectrum. We do this by introducing a novel cycle-consistency metric that allows us to self-supervise. This, combined with our spectra-agnostic loss functions, allows us to train the same network across multiple spectra. We demonstrate our approach on the challenging task of dense RGB-FIR correspondence estimation. We also show the performance of our unmodified network on the cases of RGB-NIR and RGB-RGB, where we achieve higher accuracy than similar self-supervised approaches. Our work shows that cross-spectral correspondence estimation can be solved in a common framework that learns to generalize alignment across spectra.
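    The cycle-consistency idea behind this kind of self-supervision can be sketched as follows: follow each pixel from image A to image B along the forward flow, then back along the backward flow; the distance to the starting pixel is the error signal, requiring no ground-truth correspondences. This is a minimal illustrative sketch, not the paper's actual loss; names and the nearest-neighbour rounding are assumptions.

```python
import numpy as np

def cycle_error(flow_ab, flow_ba):
    """Per-pixel cycle-consistency error for dense correspondences.

    flow_ab: (H, W, 2) (dx, dy) flow from image A to image B
    flow_ba: (H, W, 2) flow from B back to A
    """
    h, w, _ = flow_ab.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # positions in B reached from each pixel of A (rounded, clipped)
    bx = np.clip(np.round(xs + flow_ab[..., 0]).astype(int), 0, w - 1)
    by = np.clip(np.round(ys + flow_ab[..., 1]).astype(int), 0, h - 1)
    # return trip via the backward flow sampled at those positions
    rx = bx + flow_ba[by, bx, 0]
    ry = by + flow_ba[by, bx, 1]
    # Euclidean distance back to the start pixel
    return np.hypot(rx - xs, ry - ys)

# perfectly inverse flows give zero error at interior pixels;
# border clipping leaves a residual in the last column
f_ab = np.zeros((4, 4, 2)); f_ab[..., 0] = 1.0
f_ba = np.zeros((4, 4, 2)); f_ba[..., 0] = -1.0
err = cycle_error(f_ab, f_ba)
```

    Because the metric only compares positions within and between the two images, it never touches pixel intensities, which is what makes it spectra-agnostic.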

    DCTM: Discrete-Continuous Transformation Matching for Semantic Flow

    Techniques for dense semantic correspondence have provided limited ability to deal with the geometric variations that commonly exist between semantically similar images. While variations due to scale and rotation have been examined, practical solutions are lacking for more complex deformations such as affine transformations because of the tremendous size of the associated solution space. To address this problem, we present a discrete-continuous transformation matching (DCTM) framework where dense affine transformation fields are inferred through a discrete label optimization in which the labels are iteratively updated via continuous regularization. In this way, our approach draws solutions from the continuous space of affine transformations in a manner that can be computed efficiently through constant-time edge-aware filtering and a proposed affine-varying CNN-based descriptor. Experimental results show that this model outperforms the state-of-the-art methods for dense semantic correspondence on various benchmarks.
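    The discrete-continuous alternation described above can be illustrated with a toy loop: a discrete argmin picks the best candidate transformation label per pixel, a continuous smoothing step regularizes the resulting field, and the field is snapped back to the label set. This is a deliberately simplified sketch with scalar labels and a box-filter in place of the paper's edge-aware filtering and affine fields; all names are assumptions.

```python
import numpy as np

def dctm_toy(cost, labels, n_iters=3, lam=0.5):
    """Toy discrete-continuous matching loop (illustrative, not the paper's DCTM).

    cost:   (H, W, K) data cost of assigning each of K candidate
            transformation labels to each pixel
    labels: (K,) candidate transformation parameters (scalars here)
    """
    field = labels[np.argmin(cost, axis=-1)]  # initial discrete step
    for _ in range(n_iters):
        # continuous regularization: average each pixel with its 4-neighbours
        padded = np.pad(field, 1, mode="edge")
        smooth = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        field = (1 - lam) * field + lam * smooth
        # discrete step: snap back to the nearest candidate label
        field = labels[np.argmin(np.abs(field[..., None] - labels), axis=-1)]
    return field

# label 0 is cheaper everywhere except one outlier pixel;
# regularization pulls the outlier back to the consensus label
labels = np.array([0.0, 1.0])
cost = np.zeros((3, 3, 2))
cost[..., 1] = 1.0
cost[1, 1] = [1.0, 0.0]
out = dctm_toy(cost, labels)
```

    The alternation matters because the discrete step alone would keep the noisy outlier, while the continuous step alone could drift off the valid label set.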