Parallax-Tolerant Image Stitching with Epipolar Displacement Field
Large parallax image stitching is a challenging task. Existing methods often
struggle to maintain both the local and global structures of the image while
reducing alignment artifacts and warping distortions. In this paper, we propose
a novel approach that utilizes epipolar geometry to establish a warping
technique based on the epipolar displacement field. Initially, the warping rule
for pixels in the epipolar geometry is established through the infinite
homography. Subsequently, the epipolar displacement field, which
represents the sliding distance of the warped pixel along the epipolar line, is
formulated by thin plate splines based on the principle of local elastic
deformation. The stitching result can be generated by inversely warping the
pixels according to the epipolar displacement field. This method incorporates
the epipolar constraints in the warping rule, which ensures high-quality
alignment and maintains the projectivity of the panorama. Qualitative and
quantitative comparative experiments demonstrate the competitiveness of the
proposed method in stitching images with large parallax.
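The warping rule above can be sketched in a few lines: map a pixel with the infinite homography, then slide it along its epipolar line by a scalar displacement. This is an illustrative sketch only; `H_inf`, `F`, and the `displacement` argument (a stand-in for the paper's TPS-based displacement field) are assumed inputs, and the sign of the sliding direction is a convention.

```python
import numpy as np

def warp_along_epipolar(pt, H_inf, F, displacement):
    """Warp a source pixel with the infinite homography, then slide it
    along its epipolar line in the target image (illustrative sketch;
    the scalar `displacement` stands in for the paper's TPS field)."""
    x = np.array([pt[0], pt[1], 1.0])
    warped = H_inf @ x
    warped = warped[:2] / warped[2]      # point after the homography warp
    line = F @ x                         # epipolar line a*u + b*v + c = 0
    d = np.array([line[1], -line[0]])    # a direction along that line
    d = d / np.linalg.norm(d)            # (sign is a convention)
    return warped + displacement * d     # slide along the epipolar line
```

For a pure horizontal-translation camera pair, the epipolar lines are horizontal and the pixel slides only in x.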
Implicit Neural Image Stitching With Enhanced and Blended Feature Reconstruction
Existing frameworks for image stitching often produce visually reasonable
results. However, they suffer from blurry artifacts and disparities in
illumination, depth level, etc. Although recent learning-based stitching
methods relax such disparities, they sacrifice image quality and fail to
capture high-frequency details in the stitched images. To address this
problem, we propose a novel approach, Implicit Neural Image Stitching (NIS),
that extends arbitrary-scale super-resolution. Our method estimates Fourier
coefficients of images for quality-enhancing warps. Then, the suggested model
blends color mismatches and misalignment in the latent space and decodes the
features into RGB values of stitched images. Our experiments show that our
approach resolves the low-definition output of previous deep image stitching
while offering favorably fast image enhancement. Our source code is available
at https://github.com/minshu-kim/NIS
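NIS builds on implicit neural representations, which typically encode pixel coordinates with Fourier features so a network can recover high-frequency detail. The sketch below shows a generic sin/cos positional encoding of this kind; it is not the NIS code, and `n_freq` and the geometric frequency ladder are illustrative choices.

```python
import numpy as np

def fourier_encode(coords, n_freq=4):
    """Sin/cos Fourier features for 2-D pixel coordinates in [0, 1],
    the kind of positional encoding implicit image models use to
    recover high-frequency detail (generic sketch, not the NIS code)."""
    coords = np.asarray(coords, dtype=float)    # (N, 2)
    freqs = (2.0 ** np.arange(n_freq)) * np.pi  # geometric frequency ladder
    angles = coords[:, :, None] * freqs         # (N, 2, n_freq)
    feats = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return feats.reshape(len(coords), -1)       # (N, 4 * n_freq)
```

Each coordinate contributes `n_freq` sine and `n_freq` cosine channels, so a 2-D point maps to a `4 * n_freq` feature vector.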
Learning Thin-Plate Spline Motion and Seamless Composition for Parallax-Tolerant Unsupervised Deep Image Stitching
Traditional image stitching approaches tend to leverage increasingly complex
geometric features (point, line, edge, etc.) for better performance. However,
these hand-crafted features are only suitable for specific natural scenes with
adequate geometric structures. In contrast, deep stitching schemes overcome the
adverse conditions by adaptively learning robust semantic features, but they
cannot handle large-parallax cases due to homography-based registration. To
solve these issues, we propose UDIS++, a parallax-tolerant unsupervised deep
image stitching technique. First, we propose a robust and flexible warp to
model the image registration from global homography to local thin-plate spline
motion. It provides accurate alignment for overlapping regions and shape
preservation for non-overlapping regions by joint optimization concerning
alignment and distortion. Subsequently, to improve the generalization
capability, we design a simple but effective iterative strategy to enhance the
warp adaption in cross-dataset and cross-resolution applications. Finally, to
further eliminate the parallax artifacts, we propose to composite the stitched
image seamlessly by unsupervised learning for seam-driven composition masks.
Compared with existing methods, our solution is parallax-tolerant and free from
laborious designs of complicated geometric features for specific scenes.
Extensive experiments show our superiority over the SoTA methods, both
quantitatively and qualitatively. The code will be available at
https://github.com/nie-lang/UDIS2
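The local thin-plate spline motion mentioned above is a standard interpolation problem once control-point motions are known. Below is a minimal 2-D TPS solve and evaluation in NumPy; in UDIS++ the control motions are predicted by a network, which this sketch replaces with given correspondences, and `reg` is an illustrative regularizer.

```python
import numpy as np

def tps_fit(src, dst, reg=1e-8):
    """Fit a 2-D thin-plate spline mapping control points src -> dst."""
    n = len(src)
    d2 = np.sum((src[:, None] - src[None]) ** 2, axis=-1)
    K = np.zeros_like(d2)
    m = d2 > 0
    K[m] = d2[m] * np.log(d2[m])           # TPS radial kernel U(r) = r^2 log r^2
    P = np.hstack([np.ones((n, 1)), src])  # affine part: 1, x, y
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K + reg * np.eye(n)
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(A, b)           # (n+3, 2): kernel weights + affine

def tps_apply(params, src, pts):
    """Evaluate the fitted spline at query points pts."""
    d2 = np.sum((pts[:, None] - src[None]) ** 2, axis=-1)
    K = np.zeros_like(d2)
    m = d2 > 0
    K[m] = d2[m] * np.log(d2[m])
    P = np.hstack([np.ones((len(pts), 1)), pts])
    n = len(src)
    return K @ params[:n] + P @ params[n:]
```

When the control motions are a pure translation, the kernel weights vanish and the spline reduces to that translation everywhere, which is an easy sanity check.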
Content-preserving image stitching with piecewise rectangular boundary constraints
This paper proposes an approach to content-preserving image stitching with regular boundary constraints, which aims to stitch multiple images into a panoramic image with a piecewise rectangular boundary. Existing methods treat image stitching and rectangling as two separate steps, which may lead to suboptimal results because the stitching process is unaware of the further warping needed for rectangling. We address these limitations by formulating image stitching with regular boundaries as a unified optimization. Starting from the initial stitching results produced by traditional warping-based optimization, we obtain the irregular boundary from the warped meshes by polygon Boolean operations, which robustly handle arbitrary mesh compositions. By analyzing the irregular boundary, we construct a piecewise rectangular boundary. Based on this, we further incorporate line and regular-boundary preservation constraints into the image stitching framework and conduct iterative optimization to obtain an optimal piecewise rectangular boundary. Thus we can make the boundary of the stitching results as close as possible to a rectangle while reducing unwanted distortions. We further extend our method to video stitching by integrating temporal coherence into the optimization. Experiments show that our method efficiently produces visually pleasing panoramas with regular boundaries and unnoticeable distortions.
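As a rough illustration of turning an irregular boundary into a piecewise rectangular one, the sketch below snaps each polygon edge toward horizontal or vertical by averaging the coordinate its two endpoints should share. This is a crude stand-in for the paper's iterative optimization, not its method.

```python
import numpy as np

def snap_axis_aligned(boundary):
    """Snap each edge of an irregular boundary polygon toward horizontal
    or vertical, depending on its dominant direction (crude stand-in
    for the paper's optimized piecewise rectangular construction)."""
    out = np.asarray(boundary, dtype=float).copy()
    n = len(out)
    for i in range(n):
        j = (i + 1) % n
        p, q = out[i].copy(), out[j].copy()
        if abs(q[0] - p[0]) > abs(q[1] - p[1]):      # mostly horizontal edge
            out[i, 1] = out[j, 1] = (p[1] + q[1]) / 2
        else:                                        # mostly vertical edge
            out[i, 0] = out[j, 0] = (p[0] + q[0]) / 2
    return out
```

For a slightly perturbed quadrilateral with alternating horizontal and vertical edges, the result is an axis-aligned rectangle.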
Deep Rectangling for Image Stitching: A Learning Baseline
Stitched images provide a wide field-of-view (FoV) but suffer from unpleasant
irregular boundaries. To deal with this problem, existing image rectangling
methods are devoted to searching for an initial mesh and optimizing a target
mesh to form the mesh deformation in two stages. Rectangular images can then
be generated by warping the stitched images. However, these solutions only work for
images with rich linear structures, leading to noticeable distortions for
portraits and landscapes with non-linear objects. In this paper, we address
these issues by proposing the first deep learning solution to image
rectangling. Concretely, we predefine a rigid target mesh and only estimate an
initial mesh to form the mesh deformation, contributing to a compact one-stage
solution. The initial mesh is predicted using a fully convolutional network
with a residual progressive regression strategy. To obtain results with high
content fidelity, a comprehensive objective function is proposed to
simultaneously encourage the boundary to be rectangular, the mesh to be
shape-preserving, and the content to be perceptually natural. Besides, we
build the first image stitching rectangling dataset with a large diversity in
irregular boundaries and scenes. Experiments demonstrate our superiority over
traditional methods both quantitatively and qualitatively.
Comment: Accepted by CVPR 2022 (oral); codes and dataset:
https://github.com/nie-lang/DeepRectanglin
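The one-stage formulation predefines a rigid target mesh and lets the network predict only the initial mesh. A minimal sketch of such a predefined rigid mesh (a uniform grid of vertex coordinates for an h x w image) might look like this; the grid resolution is an illustrative choice.

```python
import numpy as np

def rigid_target_mesh(h, w, rows, cols):
    """Predefine the rigid (uniform) target mesh for an h x w image:
    a (rows+1, cols+1, 2) grid of (x, y) vertex coordinates. In the
    one-stage design the network predicts only the initial mesh; this
    target grid stays fixed."""
    xs = np.linspace(0, w, cols + 1)
    ys = np.linspace(0, h, rows + 1)
    return np.stack(np.meshgrid(xs, ys), axis=-1)
```

The rectangular image is then produced by warping each stitched-image mesh cell onto the corresponding rigid cell.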
An improved adaptive triangular mesh-based image warping method
It is of vital importance to stitch the two images into a panorama in many computer vision applications of motion detection and tracking and virtual reality, panoramic photography, and virtual tours. To preserve more local details and with few artifacts in panoramas, this article presents an improved mesh-based joint optimization image stitching model. Since the uniform vertices are usually used in mesh-based warps, we consider the matched feature points and uniform points as grid vertices to strengthen constraints on deformed vertices. Simultaneously, we define an improved energy function and add a color similarity term to perform the alignment. In addition to good alignment and minimal local distortion, a regularization parameter strategy of combining our method with an as-projective-as-possible (APAP) warp is introduced. Then, controlling the proportion of each part by calculating the distance between the vertex and the nearest matched feature point to the vertex. This ensures a more natural stitching effect in non-overlapping areas. A comprehensive evaluation shows that the proposed method achieves more accurate image stitching, with significantly reduced ghosting effects in the overlapping regions and more natural results in the other areas. The comparative experiments demonstrate that the proposed method outperforms the state-of-the-art image stitching warps and achieves higher precision panorama stitching and less distortion in the overlapping. The proposed algorithm illustrates great application potential in image stitching, which can achieve higher precision panoramic image stitching