
    Voronoi Centerline-Based Seamline Network Generation Method

    Seamline network generation is a crucial step in mosaicking multiple orthoimages: it determines the topological relationships and the mosaic contribution area of each orthoimage. Previous methods, such as the Voronoi-based and AVOD (area Voronoi)-based approaches, may generate mosaic holes for low-overlap and irregularly shaped orthoimages. This paper proposes a Voronoi centerline-based seamline network generation method to address this problem. The first step is to detect the edge vector of the valid orthoimage region; the second step is to construct a Voronoi triangle network from the edge vector points and extract the centerline of the network; the third step is to segment each orthoimage by the generated centerlines to construct the image's effective mosaic polygon (EMP). The final segmented EMP is the mosaic contribution region, and all EMPs interconnect to form a seamline network. The main contribution of the proposed method is that it eliminates the mosaic holes produced by the Voronoi-based method under low overlap and removes the polygon-shape restriction of the AVOD-based method, so a complete mosaic can be generated for any overlap and any orthoimage shape. Five sets of experiments were conducted, and the results show that the proposed method surpasses a well-known state-of-the-art method and commercial software in adaptability and effectiveness.
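    The core geometric idea, extracting a centerline from a Voronoi diagram built on densely sampled boundary points, can be sketched as follows. This is only an illustrative approximation of the paper's centerline step, not its exact triangle-network construction; `voronoi_centerline` and its parameters are hypothetical names:

    ```python
    import numpy as np
    from scipy.spatial import Voronoi
    from shapely.geometry import Polygon, LineString, Point

    def voronoi_centerline(polygon, n_samples=200):
        """Approximate a polygon's centerline by keeping the Voronoi edges
        whose endpoints both lie strictly inside the polygon."""
        # Densely sample the polygon boundary as Voronoi sites.
        boundary = polygon.exterior
        ts = np.linspace(0, boundary.length, n_samples, endpoint=False)
        pts = np.array([boundary.interpolate(t).coords[0] for t in ts])
        vor = Voronoi(pts)
        segments = []
        for v0, v1 in vor.ridge_vertices:
            if v0 == -1 or v1 == -1:  # skip ridges extending to infinity
                continue
            p0, p1 = vor.vertices[v0], vor.vertices[v1]
            if polygon.contains(Point(p0)) and polygon.contains(Point(p1)):
                segments.append(LineString([p0, p1]))
        return segments

    # Overlap region of two hypothetical orthoimage footprints.
    overlap = Polygon([(0, 0), (10, 0), (10, 2), (0, 2)])
    centerline = voronoi_centerline(overlap)
    ```

    For this elongated rectangle the retained Voronoi edges cluster along the medial axis, which is exactly the kind of curve a seamline should follow through an overlap region.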

    Automatic Matching of High Resolution Satellite Images Based on RFM

    A matching method for high-resolution satellite images based on the Rational Function Model (RFM) is presented. Firstly, the RFM parameters are used to predict the initial parallax of corresponding points, and the prediction accuracy is analyzed. Secondly, an approximate epipolar equation is constructed based on projection tracking, and its accuracy is analyzed. Thirdly, approximate 1D image matching is performed on the pyramid images and least-squares matching on the base images. Finally, RANSAC is embedded to eliminate mismatched points and obtain the matching results. Test results verify that the method is more robust and achieves a higher matching rate than 2D gray-correlation matching and the popular SIFT matching method, and that it effectively solves the problem of matching high-resolution satellite images across different stereo models, different acquisition times, and large rotations.
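    The final RANSAC step can be illustrated with a minimal sketch. Real pipelines estimate a richer model (e.g., a relative orientation or affine transform); a 2D translation model is used here only to show the hypothesize-and-verify mechanism, and all names are illustrative, not from the paper:

    ```python
    import numpy as np

    def ransac_translation(src, dst, thresh=2.0, iters=500, seed=0):
        """Estimate a dominant 2D translation between matched point sets
        and flag inliers; a stand-in for the RANSAC step that removes
        mismatched points."""
        rng = np.random.default_rng(seed)
        best_inliers = np.zeros(len(src), dtype=bool)
        for _ in range(iters):
            i = rng.integers(len(src))
            t = dst[i] - src[i]                     # 1-point model hypothesis
            err = np.linalg.norm(src + t - dst, axis=1)
            inliers = err < thresh
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        # Refine the translation over the consensus set.
        t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
        return t, best_inliers

    # Synthetic matches: true shift (5, -3) plus a few gross mismatches.
    rng = np.random.default_rng(1)
    src = rng.uniform(0, 100, (50, 2))
    dst = src + np.array([5.0, -3.0]) + rng.normal(0, 0.3, (50, 2))
    dst[:5] += rng.uniform(20, 40, (5, 2))          # inject 5 outliers
    shift, inliers = ransac_translation(src, dst)
    ```

    The gross mismatches fall far outside the inlier threshold and are rejected, while the refined translation converges to the true shift.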

    Large Aerial Image Tie Point Matching in Real and Difficult Survey Areas via Deep Learning Method

    Image tie point matching is an essential task in real aerial photogrammetry, especially for model tie points. In current photogrammetric production, SIFT is still the main matching algorithm because of its high robustness for most aerial image tie point matching. However, when a surveying area contains a certain number of weak-texture images (mountain, grassland, woodland, etc.), the corresponding models often lack tie points, causing the construction of the airline network to fail. Some studies have shown that deep-learning-based image matching outperforms SIFT and other traditional methods to some extent, even on weak-texture images. Unfortunately, these methods are usually applied only to small images and cannot be directly used for large-image tie point matching in real photogrammetry. Considering actual photogrammetric needs, and motivated by Block-SIFT and SuperGlue, this paper proposes a SuperGlue-based matching method, LR-Superglue, for large aerial image tie point matching, which makes learned image matching practical in photogrammetric applications and moves photogrammetry toward artificial intelligence. Experiments on real and difficult aerial surveying areas show that LR-Superglue obtains more model tie points in the forward direction (on average, 60 more model points per model) and more image tie points between airlines (on average, 36 more tie points per adjacent image pair). Most importantly, the LR-Superglue method guarantees a certain number of points between each adjacent model, whereas the Block-SIFT method left a few models with no tie points at all. At the same time, the relative orientation accuracy of the image tie points matched by the proposed method is significantly better than that of Block-SIFT, improving from 3.64 μm to 2.85 μm on average per model (the camera pixel size is 4.6 μm).
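    The key engineering idea, running a learned matcher that only accepts small inputs over a very large aerial image, rests on block-wise processing in the spirit of Block-SIFT. A minimal sketch of the tiling step (the function name and parameters are assumptions, not taken from the paper):

    ```python
    def tile_blocks(width, height, block, overlap):
        """Tile a large image into overlapping blocks so a learned matcher
        limited to small inputs can be run per block; overlapping the
        blocks avoids losing matches that straddle block boundaries."""
        step = block - overlap
        blocks = []
        for y in range(0, max(height - overlap, 1), step):
            for x in range(0, max(width - overlap, 1), step):
                x1 = min(x + block, width)    # clamp blocks at image edges
                y1 = min(y + block, height)
                blocks.append((x, y, x1, y1))
        return blocks

    # A 10000 x 8000 px aerial image cut into 1024 px blocks with 128 px overlap.
    blocks = tile_blocks(10000, 8000, block=1024, overlap=128)
    ```

    Per-block matches would then be mapped back to full-image coordinates by adding each block's `(x, y)` offset before relative orientation.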

    Fully automatic DOM generation method based on optical flow field dense image matching

    Automatic Digital Orthophoto Map (DOM) generation plays an important role in many downstream tasks such as land use and cover detection, urban planning, and disaster assessment. Existing DOM generation methods can produce promising results but generally require a ground-object-filtered DEM before ortho-rectification, which consumes much time and yields results that still contain building facades. To address this problem, a pixel-by-pixel digital differential rectification-based automatic DOM generation method is proposed in this paper. Firstly, textured 3D point clouds are generated for each stereo pair by dense image matching based on an optical flow field. Then, for matched points, the grayscale of the digitally differentially rectified image is extracted element by element directly from the point cloud using the nearest-neighbor method. Subsequently, for void areas of the point cloud, the elevation is repaired grid by grid using multi-layer Locally Refined B-spline (LR-B) interpolation with a triangular mesh constraint, and the grayscale is obtained by the indirect scheme of digital differential rectification, producing the pixel-by-pixel differentially rectified image of a single image slice. Finally, a seamline network is automatically searched using a disparity map optimization algorithm, and the DOM is mosaicked. Qualitative and quantitative experiments on three datasets confirm the feasibility of the proposed method, with DOM accuracy reaching the 1 Ground Sample Distance (GSD) level. A comparison with state-of-the-art commercial software shows that the DOM generated by the proposed method has better visual quality at building boundaries and roof completeness, with comparable accuracy and computational efficiency.
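    The nearest-neighbor grayscale extraction over the rectification grid can be sketched with a KD-tree query. This is a minimal sketch under assumed names and parameters (none are from the paper), and it leaves voids as holes rather than reproducing the LR-B interpolation step:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def rectify_from_point_cloud(points_xy, gray, x0, y0, gsd, cols, rows,
                                 max_dist):
        """Indirect differential rectification sketch: each DOM grid cell
        takes the grayscale of its nearest textured point; cells with no
        point within max_dist stay NaN (holes for later interpolation)."""
        tree = cKDTree(points_xy)
        # Ground coordinates of cell centers on a north-up grid.
        xs = x0 + (np.arange(cols) + 0.5) * gsd
        ys = y0 - (np.arange(rows) + 0.5) * gsd
        gx, gy = np.meshgrid(xs, ys)
        grid = np.column_stack([gx.ravel(), gy.ravel()])
        dist, idx = tree.query(grid)
        dom = np.where(dist <= max_dist, gray[idx], np.nan)
        return dom.reshape(rows, cols)

    # Tiny synthetic cloud on a 4 x 4 grid at 1 m GSD.
    pts = np.array([[0.5, 3.5], [1.5, 2.5], [3.5, 0.5]])
    vals = np.array([10.0, 20.0, 30.0])
    dom = rectify_from_point_cloud(pts, vals, x0=0.0, y0=4.0, gsd=1.0,
                                   cols=4, rows=4, max_dist=0.1)
    ```

    The `max_dist` threshold is what separates matched-point cells (filled directly) from void cells, which the paper instead repairs with LR-B interpolation before applying the indirect rectification scheme.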
