RFVTM: A Recovery and Filtering Vertex Trichotomy Matching for Remote Sensing Image Registration
Reliable feature point matching is a vital yet challenging process in
feature-based image registration. In this paper, a robust feature point matching
algorithm called Recovery and Filtering Vertex Trichotomy Matching (RFVTM) is
proposed to remove outliers and retain sufficient inliers for remote sensing
images. A novel affine invariant descriptor called vertex trichotomy descriptor
is proposed on the basis of that geometrical relations between any of vertices
and lines are preserved after affine transformations, which is constructed by
mapping each vertex into trichotomy sets. The outlier removals in Vertex
Trichotomy Matching (VTM) are implemented by iteratively comparing the
disparity of corresponding vertex trichotomy descriptors. Some inliers
mistakenly validated by a large amount of outliers are removed in VTM
iterations, and several residual outliers close to correct locations cannot be
excluded with the same graph structures. Therefore, a recovery and filtering
strategy is designed to recover some inliers based on identical vertex
trichotomy descriptors and restricted transformation errors. Assisted with the
additional recovered inliers, residual outliers can also be filtered out during
the process of reaching identical graph for the expanded vertex sets.
Experimental results demonstrate the superior precision and stability of this
algorithm under various conditions, such as remote sensing images with large
transformations, duplicated patterns, or inconsistent spectral content.
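The affine invariance underpinning the vertex trichotomy descriptor can be illustrated with a minimal numpy sketch (hypothetical points and transform, not the authors' implementation): the side of a line on which a vertex lies is preserved by any orientation-preserving affine map.

```python
import numpy as np

def side_of_line(p, a, b):
    # Sign of the 2D cross product: which side of the line a->b point p lies on
    return np.sign((b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]))

# Hypothetical vertices and an orientation-preserving affine map (det > 0)
A = np.array([[1.5, 0.3], [0.2, 0.9]])
t = np.array([4.0, -2.0])
pts = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 3.0]])
warped = pts @ A.T + t

# The side relation of vertex 2 w.r.t. the line through vertices 0 and 1
# is unchanged by the transform
before = side_of_line(pts[2], pts[0], pts[1])
after = side_of_line(warped[2], warped[0], warped[1])
```

Because such sign relations survive the transform while mismatched points tend to violate them, comparing them across two images can separate inliers from outliers.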
Image Registration Techniques: A Survey
Image Registration is the process of aligning two or more images of the same
scene with reference to a particular image. The images are captured from
various sensors, at different times, and from multiple viewpoints. Image
registration is therefore important for obtaining a better picture of how a
scene or object changes over a considerable period of time, and it finds
application in the medical sciences, remote sensing, and computer vision. This
paper presents a detailed review of several approaches, classified by
methodology, along with their contributions and drawbacks. The main steps of an
image registration procedure are also discussed, and different performance
measures that determine registration quality and accuracy are presented. The
scope for future research is outlined as well.
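As an illustration of the transform-estimation step shared by most registration pipelines surveyed here, the following sketch fits a 2D affine transform to point correspondences by least squares (synthetic points; illustrative only, not any specific surveyed method).

```python
import numpy as np

def estimate_affine(src, dst):
    # Least-squares fit of dst ≈ src @ A.T + t from 2D point correspondences
    M = np.hstack([src, np.ones((len(src), 1))])   # rows [x, y, 1]
    params, *_ = np.linalg.lstsq(M, dst, rcond=None)
    return params[:2].T, params[2]                 # A (2x2), t (2,)

src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
A_true = np.array([[0.9, -0.2], [0.3, 1.1]])
t_true = np.array([5.0, -3.0])
dst = src @ A_true.T + t_true                      # noise-free correspondences

A_est, t_est = estimate_affine(src, dst)           # recovers A_true, t_true
```

In practice the correspondences are noisy and contaminated by outliers, which is why robust estimators (e.g. RANSAC) usually wrap this least-squares core.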
Feature-based groupwise registration of historical aerial images to present-day ortho-photo maps
In this paper, we address the registration of historical WWII images to
present-day ortho-photo maps for the purpose of geolocalization. Due to the
challenging nature of this problem, we propose to register the images jointly
as a group rather than in a step-by-step manner. To this end, we exploit Hough
Voting spaces as pairwise registration estimators and show how they can be
integrated into a probabilistic groupwise registration framework that can be
efficiently optimized. The feature-based nature of our registration framework
allows images with a priori unknown translational and rotational relations to
be registered, and a final geometrically guided matching step also enables it
to handle scale changes of up to 30% in our test data. The superiority of the
proposed method over existing pairwise and groupwise registration methods is
demonstrated on eight highly challenging sets of historical images with
corresponding ortho-photo maps. Comment: Under review at Elsevier Pattern Recognition.
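The idea of pairwise registration by voting can be sketched for the pure-translation case: every putative feature match casts a vote for a shift, and the accumulator peak survives a large fraction of outliers (synthetic data; a deliberate simplification of the paper's Hough Voting spaces, which also cover rotation).

```python
import numpy as np

rng = np.random.default_rng(0)
true_shift = np.array([12, -7])
pts_a = rng.uniform(0, 100, (60, 2))
pts_b = pts_a + true_shift
pts_b[:20] = rng.uniform(-50, 150, (20, 2))        # 20 outlier matches

# Each putative match votes for a translation; the accumulator peak wins.
votes = np.round(pts_b - pts_a).astype(int)
shifts, counts = np.unique(votes, axis=0, return_counts=True)
est = shifts[counts.argmax()]                      # robust despite outliers
```

Because the 40 correct matches all vote for the same bin while the 20 wrong matches scatter, the peak identifies the true shift without any explicit outlier rejection.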
An investigation towards wavelet based optimization of automatic image registration techniques
Image registration is the process of transforming different sets of data into
one coordinate system and is required for various remote sensing applications
like change detection, image fusion, and other related areas. The effect of
increased relief displacement, requirement of more control points, and
increased data volume are the challenges associated with the registration of
high resolution image data. The objective of this research work is to study the
most efficient techniques and to investigate the extent of improvement
achievable by enhancing them with the wavelet transform. The SIFT feature-based
method uses eigenvalues to extract thousands of keypoints based on
scale-invariant features; these feature points, when further enhanced by the
wavelet transform, yield the best results.
Sub-Pixel Registration of Wavelet-Encoded Images
Sub-pixel registration is a crucial step for applications such as
super-resolution in remote sensing, motion compensation in magnetic resonance
imaging, and non-destructive testing in manufacturing, to name a few. Recently,
these technologies have been trending towards wavelet encoded imaging and
sparse/compressive sensing. The former plays a crucial role in reducing imaging
artifacts, while the latter significantly increases the acquisition speed. In
view of these new emerging needs for applications of wavelet encoded imaging,
we propose a sub-pixel registration method that can achieve direct wavelet
domain registration from a sparse set of coefficients. We make the following
contributions: (i) We devise a method of decoupling scale, rotation, and
translation parameters in the Haar wavelet domain, (ii) We derive explicit
mathematical expressions that define in-band sub-pixel registration in terms of
wavelet coefficients, (iii) Using the derived expressions, we propose an
approach to achieve in-band sub-pixel registration, avoiding back-and-forth
transformations. (iv) Our solution remains highly accurate even when a sparse
set of coefficients is used, owing to the localization of signals in a sparse
set of wavelet coefficients. We demonstrate the accuracy of our method and show
that it outperforms the state of the art on simulated and real data, even when
the data is sparse.
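A single level of the 2D Haar transform, the domain in which the proposed registration operates, can be written directly in numpy (a textbook sketch of the decomposition itself, not the paper's in-band registration).

```python
import numpy as np

def haar2d(img):
    # One level of the 2D Haar transform: approximation (LL) + 3 detail bands
    a = (img[0::2] + img[1::2]) / 2        # average of row pairs
    d = (img[0::2] - img[1::2]) / 2        # difference of row pairs
    LL = (a[:, 0::2] + a[:, 1::2]) / 2     # low-low: coarse approximation
    LH = (a[:, 0::2] - a[:, 1::2]) / 2     # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2     # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2     # diagonal detail
    return LL, LH, HL, HH

img = np.arange(16, dtype=float).reshape(4, 4)
LL, LH, HL, HH = haar2d(img)               # each band is half-resolution
```

The LL band is a half-resolution version of the image, which is why a sparse set of these coefficients still localizes the signal well enough for registration.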
Improving Co-registration for Sentinel-1 SAR and Sentinel-2 Optical images
Co-registering the Sentinel-1 SAR and Sentinel-2 optical data of European
Space Agency (ESA) is of great importance for many remote sensing applications.
However, we find that there are evident misregistration shifts between the
Sentinel-1 SAR and Sentinel-2 optical images that are directly downloaded from
the official website. To address that, this paper presents a fast and effective
registration method for the two types of images. In the proposed method, a
block-based scheme is first designed to extract evenly distributed interest
points. Then the correspondences are detected by using the similarity of
structural features between the SAR and optical images, where the
three-dimensional (3D) phase correlation (PC) is used as the similarity measure for
accelerating image matching. Finally, the obtained correspondences are employed
to measure the misregistration shifts between the images. Moreover, to
eliminate the misregistration, we use some representative geometric
transformation models such as polynomial models, projective models, and
rational function models for the co-registration of the two types of images,
and compare and analyze their registration accuracy under different numbers of
control points and different terrains. Six pairs of the Sentinel-1 SAR L1 and
Sentinel-2 optical L1C images covering three different terrains are tested in
our experiments. Experimental results show that the proposed method can achieve
precise correspondences between the images, and that the 3rd-order polynomial
model achieves the most satisfactory registration results. Its registration
accuracy (in 10 m pixels) is less than 1.0 pixel in flat areas, about 1.5
pixels in hilly areas, and between 1.7 and 2.3 pixels in mountainous areas,
which significantly improves the co-registration accuracy of the Sentinel-1 SAR
and Sentinel-2 optical images.
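Classical 2D phase correlation, the building block behind the 3D PC similarity measure mentioned above, can be sketched as follows (synthetic images; the integer, circular-shift case only).

```python
import numpy as np

def phase_correlation(a, b):
    # Normalized cross-power spectrum; its inverse FFT peaks at the shift s
    # such that b == np.roll(a, s) (integer, circular shifts only)
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    F /= np.abs(F) + 1e-12                 # keep only the phase
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    h, w = a.shape
    # Unwrap the circular peak location into a signed shift
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

rng = np.random.default_rng(1)
a = rng.standard_normal((64, 64))
b = np.roll(a, (5, -3), axis=(0, 1))
shift = phase_correlation(a, b)            # recovers (5, -3)
```

Because only the spectral phase is kept, the measure is largely insensitive to the radiometric differences between SAR and optical imagery, which is what makes PC attractive for multimodal matching.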
Machine Learning Techniques and Applications For Ground-based Image Analysis
Ground-based whole sky cameras have opened up new opportunities for
monitoring the earth's atmosphere. These cameras are an important complement to
satellite images by providing geoscientists with cheaper, faster, and more
localized data. The images captured by whole sky imagers can have high spatial
and temporal resolution, which is an important pre-requisite for applications
such as solar energy modeling, cloud attenuation analysis, local weather
prediction, etc.
Extracting valuable information from the huge amount of image data by
detecting and analyzing the various entities in these images is challenging.
However, powerful machine learning techniques have become available to aid with
the image analysis. This article provides a detailed walk-through of recent
developments in these techniques and their applications in ground-based
imaging. We aim to bridge the gap between computer vision and remote sensing
with the help of illustrative examples. We demonstrate the advantages of using
machine learning techniques in ground-based image analysis via three primary
applications -- segmentation, classification, and denoising.
Optimizing Auto-correlation for Fast Target Search in Large Search Space
In remote sensing, image blurring is induced by many sources, such as
atmospheric scattering, optical aberration, and spatial and temporal sensor
integration. The natural blurring can be exploited to speed up target search by
fast template matching. In this paper, we synthetically induce additional
non-uniform blurring to further increase the speed of the matching process. To
avoid loss of accuracy, the amount of synthetic blurring is varied spatially
over the image according to the underlying content. We extend a transitive
algorithm for fast template matching by incorporating controlled image blur. To
this end we propose an Efficient Group Size (EGS) algorithm which minimizes the
number of similarity computations for a particular search image. A larger
efficient group size guarantees fewer computations and more speedup. The EGS
algorithm is used as a component in our proposed Optimizing auto-correlation
(OptA) algorithm. In OptA a search image is iteratively non-uniformly blurred
while ensuring no accuracy degradation at any image location. In each
iteration, the efficient group size and the overall number of computations are
estimated using the proposed EGS algorithm. The OptA algorithm stops when the number of
computations cannot be further decreased without accuracy degradation. The
proposed algorithm is compared with six existing state-of-the-art
exhaustive-accuracy techniques using the correlation coefficient as the
similarity measure. Experiments on satellite and aerial image datasets
demonstrate the effectiveness of the proposed algorithm.
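The correlation coefficient used as the similarity measure is zero-mean normalized cross-correlation; an exhaustive (unaccelerated) template search with it can be sketched as follows (synthetic data; this is the baseline that OptA speeds up, not OptA itself).

```python
import numpy as np

def ncc(patch, tmpl):
    # Zero-mean normalized cross-correlation between a patch and a template
    p, t = patch - patch.mean(), tmpl - tmpl.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return (p * t).sum() / denom if denom > 0 else 0.0

rng = np.random.default_rng(2)
img = rng.uniform(0, 1, (40, 40))
tmpl = img[10:18, 22:30].copy()            # template cut from the image
h, w = tmpl.shape

# Exhaustive search: score every candidate location
scores = np.array([[ncc(img[i:i + h, j:j + w], tmpl)
                    for j in range(img.shape[1] - w + 1)]
                   for i in range(img.shape[0] - h + 1)])
loc = np.unravel_index(scores.argmax(), scores.shape)   # best match location
```

The cost of scoring every location is what motivates grouping candidates and blurring the search image: fewer similarity computations for the same best-match result.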
Complete Scene Reconstruction by Merging Images and Laser Scans
Image based modeling and laser scanning are two commonly used approaches in
large-scale architectural scene reconstruction nowadays. In order to generate a
complete scene reconstruction, an effective way is to completely cover the
scene using ground and aerial images, supplemented by laser scanning on certain
regions with low texture and complicated structure. Thus, the key issue is to
accurately calibrate cameras and register laser scans in a unified framework.
To this end, we propose a three-step pipeline for complete scene
reconstruction by merging images and laser scans. First, images are captured
around the architecture in a multi-view, multi-scale manner and fed into a
structure-from-motion (SfM) pipeline to generate SfM points. Then, based on the
SfM result, the laser scanning locations are automatically planned by
considering textural richness, structural complexity of the scene and spatial
layout of the laser scans. Finally, the images and laser scans are accurately
merged in a coarse-to-fine manner. Experimental evaluations on two ancient
Chinese architecture datasets demonstrate the effectiveness of our proposed
complete scene reconstruction pipeline. Comment: This manuscript has been accepted by IEEE TCSVT.
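Merging laser scans with the image-based reconstruction requires estimating a rigid transform between corresponding 3D points; a standard way to do this is the SVD-based Kabsch alignment, shown here on synthetic points (an illustrative building block, not the authors' coarse-to-fine pipeline).

```python
import numpy as np

def kabsch(P, Q):
    # Rigid alignment (rotation R, translation t) so that R @ P_i + t ≈ Q_i
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)              # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

rng = np.random.default_rng(3)
P = rng.standard_normal((50, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true                  # noise-free correspondences

R_est, t_est = kabsch(P, Q)                # recovers R_true, t_true
```

In a coarse-to-fine merge, such a closed-form alignment typically provides the coarse initialization that iterative refinement (e.g. ICP) then improves.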
Leveraging Photogrammetric Mesh Models for Aerial-Ground Feature Point Matching Toward Integrated 3D Reconstruction
Integration of aerial and ground images has proved to be an efficient
approach to enhancing surface reconstruction in urban environments. However,
as the first step, the feature point matching between aerial and ground images
is remarkably difficult, due to the large differences in viewpoint and
illumination conditions. Previous studies based on geometry-aware image
rectification have alleviated this problem, but the performance and convenience
of this strategy are limited by several flaws, e.g., a quadratic number of
image pairs, segregated extraction of descriptors, and occlusions. To address these problems,
we propose a novel approach: leveraging photogrammetric mesh models for
aerial-ground image matching. The methods of this proposed approach have linear
time complexity with regard to the number of images, can explicitly handle low
overlap using multi-view images and can be directly injected into off-the-shelf
structure-from-motion (SfM) and multi-view stereo (MVS) solutions. First,
aerial and ground images are reconstructed separately and initially
co-registered through weak georeferencing data. Second, aerial models are
rendered to the initial ground views, in which the color, depth and normal
images are obtained. Then, the synthesized color images and the corresponding
ground images are matched by comparing the descriptors, filtered by local
geometrical information, and then propagated to the aerial views using depth
images and patch-based matching. Experimental evaluations using various
datasets confirm the superior performance of the proposed methods in
aerial-ground image matching. In addition, incorporation of the existing SfM
and MVS solutions into these methods enables more complete and accurate models
to be directly obtained. Comment: Accepted for publication in the ISPRS Journal
of Photogrammetry and Remote Sensing.
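The propagation of matches to aerial views via depth images rests on pinhole back-projection and reprojection; a minimal sketch with made-up intrinsics and pose follows (illustrative only; the names and numbers are assumptions, not the paper's setup).

```python
import numpy as np

# Assumed pinhole intrinsics: focal length 800 px, principal point (320, 240)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def backproject(uv, depth, K):
    # Lift a pixel to a 3D point in camera coordinates using its depth value
    ray = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    return ray * depth

def project(X, K, R, t):
    # Project a 3D point into another view with pose (R, t)
    x = K @ (R @ X + t)
    return x[:2] / x[2]

# A match found in a rendered ground view at pixel (400, 300), depth 12 m,
# propagated into a second view offset by 0.5 m along the x-axis
X = backproject((400, 300), 12.0, K)
uv_aerial = project(X, K, np.eye(3), np.array([0.5, 0.0, 0.0]))
```

This is how a single depth image lets one match be transferred across views without re-running descriptor matching in every image pair.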