30 research outputs found

    Stitching for multi-view videos with large parallax based on adaptive pixel warping

    Conventional stitching techniques for images and videos are based on smooth warping models and therefore often fail on multi-view images and videos with large parallax captured by cameras with wide baselines. In this paper, we propose a novel video stitching algorithm for such challenging multi-view videos. We reliably estimate the ground-plane homography, the fundamental matrix, and the vertical vanishing points using both appearance-based and activity-based feature matches validated by geometric constraints. We alleviate parallax artifacts in stitching by adaptively warping off-plane pixels to geometrically accurate matching positions through their ground-plane pixels based on the epipolar geometry. We also exploit inter-view and inter-frame correspondence information jointly to estimate the ground-plane pixels reliably, which are then refined by energy minimization. Experimental results show that the proposed algorithm provides geometrically accurate stitching results for multi-view videos with large parallax and outperforms state-of-the-art stitching methods qualitatively and quantitatively.
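    The geometric estimation this abstract describes can be illustrated with standard tools. The snippet below is a minimal sketch, not the authors' implementation: it assumes hypothetical `pts_src`/`pts_dst` arrays of matched pixel coordinates and uses OpenCV's RANSAC-based estimators for the ground-plane homography and the fundamental matrix, plus a small hypothetical helper for the epipolar line of an off-plane pixel.

```python
import numpy as np
import cv2

def estimate_plane_and_epipolar(pts_src: np.ndarray, pts_dst: np.ndarray):
    """pts_src, pts_dst: (N, 2) arrays of matched pixel coordinates (hypothetical inputs)."""
    # Ground-plane homography: the dominant planar motion between the two views.
    H, h_inliers = cv2.findHomography(pts_src, pts_dst, cv2.RANSAC, 3.0)
    # Fundamental matrix: the full epipolar geometry, valid for off-plane points too.
    F, f_inliers = cv2.findFundamentalMat(pts_src, pts_dst, cv2.FM_RANSAC, 1.0, 0.999)
    return H, F, h_inliers, f_inliers

def epipolar_line(F: np.ndarray, x):
    # Epipolar line l' = F x in the target view for a source pixel x = (u, v).
    # Warping an off-plane pixel through its ground-plane pixel, as the paper
    # describes, then fixes its position along this line.
    return F @ np.array([x[0], x[1], 1.0])
```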

    [Korean-language title, garbled in source: a study of universal stitching parameters and conditions for selecting image pairs]

    Image stitching is a well-known method for producing panoramic images with a wide field-of-view and high resolution. It has been used in various fields such as digital mapping, gigapixel imaging, and 360-degree cameras. However, commercial stitching tools often fail, require long processing times, and work only on certain images. These problems are mainly caused by attempting to stitch unsuitable image pairs, so it is important to select suitable image pairs in advance. Nevertheless, there is no universal standard for judging good image pairs, and existing stitching algorithms are not compatible with one another because each relies on its own criteria. Here, we present universal stitching parameters and the conditions they must satisfy for selecting good image pairs. The proposed parameters can be computed easily from the analysis of corresponding features and the homography, which are basic elements of feature-based image stitching algorithms. To specify the conditions on these parameters, we devised a new method for computing stitching accuracy that classifies stitching results into three classes: good, bad, and fail. With the classified results, we examined how the values of the stitching parameters differ across classes. Experiments on large datasets identify the most informative parameter for each class as the filtering level, which is computed during corresponding-feature analysis. Supplemental experiments on various datasets further demonstrate the validity of the filtering level. As a result, the universal stitching parameters can predict whether stitching will succeed, making it possible to prevent stitching errors in advance through a parameter verification test. This work can serve as a guide for building high-performance, high-efficiency stitching software by applying the proposed stitching conditions.
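    As an illustration of deriving pair-selection parameters from corresponding features and a homography, the sketch below computes a RANSAC inlier ratio as a stand-in stitchability score. The abstract does not define the filtering level, so this function, its name, and its thresholds (`min_matches`, `min_inlier_ratio`) are assumptions for illustration, not the paper's parameter.

```python
import numpy as np
import cv2

def pair_stitchability(img1, img2, min_matches=30, min_inlier_ratio=0.4):
    """Return (is_good_pair, inlier_ratio) for two grayscale images (illustrative check)."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)

    # Lowe's ratio test filters the putative matches (corresponding-feature analysis).
    raw = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [p[0] for p in raw if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    if len(good) < min_matches:
        return False, 0.0

    src = np.float32([k1[m.queryIdx].pt for m in good])
    dst = np.float32([k2[m.trainIdx].pt for m in good])
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    if H is None:
        return False, 0.0

    inlier_ratio = float(mask.sum()) / len(good)   # illustrative stand-in parameter
    return inlier_ratio >= min_inlier_ratio, inlier_ratio
```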

    An improved adaptive triangular mesh-based image warping method

    Stitching two images into a panorama is of vital importance in many computer vision applications, such as motion detection and tracking, virtual reality, panoramic photography, and virtual tours. To preserve more local detail and introduce fewer artifacts in panoramas, this article presents an improved mesh-based joint-optimization image stitching model. Whereas mesh-based warps usually use only uniform grid vertices, we treat both the matched feature points and the uniform points as grid vertices to strengthen the constraints on the deformed vertices. We also define an improved energy function with an added color-similarity term to perform the alignment. In addition to good alignment and minimal local distortion, we introduce a regularization strategy that combines our method with an as-projective-as-possible (APAP) warp, controlling the proportion of each part by the distance between a vertex and its nearest matched feature point. This ensures a more natural stitching effect in non-overlapping areas. A comprehensive evaluation shows that the proposed method achieves more accurate image stitching, with significantly reduced ghosting in the overlapping regions and more natural results elsewhere. Comparative experiments demonstrate that the proposed method outperforms state-of-the-art image stitching warps, achieving higher-precision panorama stitching with less distortion in the overlapping regions, which illustrates its strong application potential in panoramic image stitching.
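    The distance-based weighting between the locally fitted mesh warp and the global APAP-style warp can be sketched as follows. The function names, the Gaussian falloff, and the `sigma` parameter are assumptions used for illustration, not the authors' formulation.

```python
import numpy as np

def vertex_blend_weights(vertices, feature_pts, sigma=50.0):
    # vertices: (V, 2) grid vertex coordinates; feature_pts: (N, 2) matched feature points.
    d = np.linalg.norm(vertices[:, None, :] - feature_pts[None, :, :], axis=2)
    d_min = d.min(axis=1)                      # distance to the nearest matched feature point
    w_local = np.exp(-(d_min ** 2) / (2.0 * sigma ** 2))
    return w_local                             # in (0, 1]; the remainder goes to the global warp

def blend_vertex_positions(v_local, v_global, w_local):
    # v_local / v_global: (V, 2) vertex positions under the local and global warps.
    w = w_local[:, None]
    return w * v_local + (1.0 - w) * v_global  # trust the local warp near feature points
```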

    Deep Rectangling for Image Stitching: A Learning Baseline

    Stitched images provide a wide field-of-view (FoV) but suffer from unpleasant irregular boundaries. To deal with this problem, existing image rectangling methods search for an initial mesh and then optimize a target mesh to form the mesh deformation in two stages, after which rectangular images can be generated by warping the stitched images. However, these solutions only work for images with rich linear structures, leading to noticeable distortions for portraits and landscapes with non-linear objects. In this paper, we address these issues by proposing the first deep learning solution to image rectangling. Concretely, we predefine a rigid target mesh and estimate only an initial mesh to form the mesh deformation, yielding a compact one-stage solution. The initial mesh is predicted using a fully convolutional network with a residual progressive regression strategy. To obtain results with high content fidelity, a comprehensive objective function is proposed that simultaneously encourages rectangular boundaries, shape-preserving meshes, and perceptually natural content. In addition, we build the first image-stitching rectangling dataset with a large diversity of irregular boundaries and scenes. Experiments demonstrate our superiority over traditional methods both quantitatively and qualitatively.
    Comment: Accepted by CVPR 2022 (oral); code and dataset: https://github.com/nie-lang/DeepRectanglin
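    The one-stage formulation (a predefined rigid target mesh plus predicted per-vertex offsets) can be sketched as below. The grid resolution, array shapes, and function names are assumptions for illustration; this is not the code released with the paper.

```python
import numpy as np

def rigid_target_mesh(height, width, grid_h=8, grid_w=8):
    # Predefined rigid (regular) mesh over the rectangular output image.
    ys = np.linspace(0, height - 1, grid_h + 1)
    xs = np.linspace(0, width - 1, grid_w + 1)
    gy, gx = np.meshgrid(ys, xs, indexing="ij")
    return np.stack([gx, gy], axis=-1)         # (grid_h + 1, grid_w + 1, 2) of (x, y)

def initial_mesh_from_offsets(target_mesh, predicted_offsets):
    # predicted_offsets: per-vertex offsets regressed by a network (same shape as
    # target_mesh); the warp maps the initial mesh on the stitched image to the
    # rigid target mesh, producing the rectangular result in a single stage.
    return target_mesh + predicted_offsets
```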