Multiview Regenerative Morphing with Dual Flows
This paper aims to address a new task of image morphing under a multiview
setting, which takes two sets of multiview images as the input and generates
intermediate renderings that not only exhibit smooth transitions between the
two input sets but also ensure visual consistency across different views at any
transition state. To achieve this goal, we propose a novel approach called
Multiview Regenerative Morphing that formulates the morphing process as an
optimization to solve for rigid transformation and optimal-transport
interpolation. Given the multiview input images of the source and target
scenes, we first learn a volumetric representation that models the geometry and
appearance for each scene to enable the rendering of novel views. Then, the
morphing between the two scenes is obtained by solving optimal transport
between the two volumetric representations under the Wasserstein metric. Our approach
does not rely on user-specified correspondences or 2D/3D input meshes, and we
do not assume any predefined categories of the source and target scenes. The
proposed view-consistent interpolation scheme works directly on multiview
images to yield a novel and visually plausible effect of multiview free-form
morphing.
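The abstract describes interpolation obtained by solving optimal transport between two learned volumetric representations. The following is a minimal, self-contained sketch of that idea under simplifying assumptions: two small weighted point clouds stand in for the volumetric representations, and entropic-regularized Sinkhorn iterations plus displacement interpolation stand in for the paper's solver. All function names and parameters here are illustrative, not the authors' implementation.

```python
import numpy as np

def sinkhorn(a, b, cost, reg=0.05, n_iters=200):
    """Entropic-regularized OT plan between mass histograms a and b."""
    K = np.exp(-cost / reg)               # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                 # scale columns to match b
        u = a / (K @ v)                   # scale rows to match a
    return u[:, None] * K * v[None, :]    # transport plan; rows sum to a

def interpolate(src_pts, tgt_pts, plan, t):
    """Displacement interpolation: move each source point a fraction t
    toward its barycentric target position under the transport plan."""
    bary = (plan @ tgt_pts) / plan.sum(axis=1, keepdims=True)
    return (1.0 - t) * src_pts + t * bary

# Toy example: two random 3D point clouds with uniform mass.
rng = np.random.default_rng(0)
src = rng.normal(size=(64, 3))
tgt = rng.normal(size=(64, 3)) + 2.0
a = np.full(64, 1 / 64)
b = np.full(64, 1 / 64)
cost = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)  # squared Euclidean
cost = cost / cost.max()                  # normalize to avoid kernel underflow
plan = sinkhorn(a, b, cost)
halfway = interpolate(src, tgt, plan, t=0.5)  # morphing state at t = 0.5
```

Sweeping t from 0 to 1 produces the intermediate states; the paper additionally solves for a rigid transformation and operates on full volumetric scene representations rather than point clouds.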
DiffusionAtlas: High-Fidelity Consistent Diffusion Video Editing
We present a diffusion-based video editing framework, namely DiffusionAtlas,
which can achieve both frame consistency and high fidelity in editing video
object appearance. Despite the success in image editing, diffusion models still
encounter significant hindrances when it comes to video editing due to the
challenge of maintaining spatiotemporal consistency in the object's appearance
across frames. On the other hand, atlas-based techniques allow propagating
edits on the layered representations consistently back to frames. However, they
often struggle to create editing effects that adhere correctly to the
user-provided textual or visual conditions due to the limitation of editing the
texture atlas on a fixed UV mapping field. Our method leverages a
visual-textual diffusion model to edit objects directly on the diffusion
atlases, ensuring coherent object identity across frames. We design a loss term
with atlas-based constraints and employ a pretrained text-driven diffusion model
as pixel-wise guidance for refining shape distortions and correcting texture
deviations. Qualitative and quantitative experiments show that our method
outperforms state-of-the-art methods in achieving consistent high-fidelity
video-object editing.
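The abstract combines atlas-based constraints with pixel-wise guidance from a pretrained text-driven diffusion model. The PyTorch sketch below illustrates one way such a combined objective could look; the atlas tensor, UV mapping grid, guidance frames, and loss weights are all hypothetical placeholders, since the abstract does not specify the actual loss formulation or model interfaces.

```python
import torch
import torch.nn.functional as F

def atlas_edit_loss(edited_atlas, uv_grid, original_frames, guidance_frames,
                    w_recon=1.0, w_guide=0.1):
    """edited_atlas:    (1, 3, H_a, W_a) layered texture being optimized
       uv_grid:         (T, H, W, 2) per-frame UV coordinates in [-1, 1]
       original_frames: (T, 3, H, W) input video frames
       guidance_frames: (T, 3, H, W) pixel-wise targets from a pretrained
                        text-driven diffusion model (assumed available)."""
    T = uv_grid.shape[0]
    # Map the edited atlas back into every frame through the UV field.
    rendered = F.grid_sample(edited_atlas.expand(T, -1, -1, -1), uv_grid,
                             align_corners=True)
    # Atlas-based constraint: rendered frames stay close to the originals.
    recon = F.l1_loss(rendered, original_frames)
    # Pixel-wise guidance: pull rendered frames toward the diffusion-edited targets.
    guide = F.mse_loss(rendered, guidance_frames)
    return w_recon * recon + w_guide * guide
```

Because every frame is reconstructed from the same edited atlas, any edit that lowers this loss is propagated consistently across frames, which is the property the abstract emphasizes.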