Non-convex regularization in remote sensing
In this paper, we study the effect of different regularizers and their
implications in high dimensional image classification and sparse linear
unmixing. Although kernelization or sparse methods are globally accepted
solutions for processing data in high dimensions, we present here a study on
the impact of the form of regularization used and its parametrization. We
consider regularization via the traditional squared (ℓ2) and sparsity-promoting (ℓ1)
norms, as well as more unconventional nonconvex regularizers (the ℓp quasi-norm and
the Log Sum Penalty). We compare their properties and advantages on several
classification and linear unmixing tasks and provide advice on the choice of the
best regularizer for the problem at hand. Finally, we also provide a fully
functional toolbox for the community. Comment: 11 pages, 11 figures
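As a toy illustration of the four penalties the abstract compares (the function names and default parameters below are invented for illustration and are not the toolbox's API), each regularizer can be evaluated on a coefficient vector as:

```python
import numpy as np

def l2_squared(x):
    # classical ridge penalty: sum of squared coefficients (convex, smooth)
    return np.sum(x ** 2)

def l1(x):
    # sparsity-promoting lasso penalty: sum of absolute values (convex)
    return np.sum(np.abs(x))

def lp(x, p=0.5):
    # nonconvex lp quasi-norm with 0 < p < 1; promotes sparsity more aggressively
    return np.sum(np.abs(x) ** p)

def log_sum_penalty(x, theta=1.0):
    # nonconvex Log Sum Penalty: sum_i log(1 + |x_i| / theta)
    return np.sum(np.log1p(np.abs(x) / theta))

w = np.array([0.0, 1.0, -2.0])
print(l2_squared(w), l1(w), lp(w), log_sum_penalty(w))
```

Note how the nonconvex penalties grow sub-linearly in the magnitude of each coefficient, which is what makes them penalize large coefficients less than ℓ1 while still driving small ones to zero.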
Recent Advances in Image Restoration with Applications to Real World Problems
In the past few decades, imaging hardware has improved tremendously in terms of resolution, enabling the widespread use of images in many diverse applications on Earth and in planetary missions. However, practical issues associated with image acquisition still affect image quality. Issues such as blurring, measurement noise, mosaicing artifacts, and low spatial or spectral resolution can seriously affect the accuracy of the aforementioned applications. This book intends to provide the reader with a glimpse of the latest developments and recent advances in image restoration, including image super-resolution, image fusion to enhance spatial, spectral, and temporal resolutions, and the generation of synthetic images using deep learning techniques. Some practical applications are also included.
A New Variational Approach Based on Proximal Deep Injection and Gradient Intensity Similarity for Spatio-Spectral Image Fusion
Pansharpening is a widely debated spatio-spectral fusion problem. It refers to the fusion of a high spatial resolution panchromatic image with a lower spatial but higher spectral resolution multispectral image in order to obtain an image with high resolution in both domains. In this article, we propose a novel variational optimization-based (VO) approach that addresses this issue by incorporating the outcome of a deep convolutional neural network (DCNN). This solution takes advantage of both paradigms. On the one hand, higher performance can be expected by introducing machine learning (ML) methods, based on the training-by-examples philosophy, into VO approaches. On the other hand, combining VO techniques with DCNNs can aid the generalization ability of the latter. In particular, we formulate a proximal deep injection term that evaluates the distance between the DCNN outcome and the desired high spatial resolution multispectral image. This represents the regularization term of our VO model. Furthermore, a new data-fitting term measuring the spatial fidelity is proposed. Finally, the proposed convex VO problem is efficiently solved by exploiting the framework of the alternating direction method of multipliers (ADMM), thus guaranteeing the convergence of the algorithm. Extensive experiments on both simulated and real datasets demonstrate that the proposed approach can outperform state-of-the-art spatio-spectral fusion methods, even showing significant generalization ability. Please find the project page at https://liangjiandeng.github.io/Projects_Res/DMPIF_2020jstars.html
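The paper's ADMM solver is specific to its pansharpening model; as a generic, hedged sketch of how ADMM splits an objective into simple alternating sub-steps, consider a one-dimensional ℓ1-regularized denoising toy (the function names and parameter values below are invented for illustration):

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1_denoise(a, lam=0.5, rho=1.0, n_iter=100):
    # Minimize 0.5*||x - a||^2 + lam*||z||_1 subject to x = z,
    # using scaled-dual ADMM: x-update, z-update, dual ascent.
    x = np.zeros_like(a); z = np.zeros_like(a); u = np.zeros_like(a)
    for _ in range(n_iter):
        x = (a + rho * (z - u)) / (1.0 + rho)  # quadratic x-update, closed form
        z = soft_threshold(x + u, lam / rho)   # proximal step for the l1 term
        u = u + x - z                          # scaled dual variable update
    return z

a = np.array([3.0, 0.2, -1.0])
print(admm_l1_denoise(a))  # converges to soft_threshold(a, lam) = [2.5, 0.0, -0.5]
```

Each sub-problem here has a closed-form solution, which is the property ADMM exploits in the variational fusion models above as well.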
Fusing Multiple Multiband Images
We consider the problem of fusing an arbitrary number of multiband, i.e.,
panchromatic, multispectral, or hyperspectral, images belonging to the same
scene. We use the well-known forward observation and linear mixture models with
Gaussian perturbations to formulate the maximum-likelihood estimator of the
endmember abundance matrix of the fused image. We calculate the Fisher
information matrix for this estimator and examine the conditions for the
uniqueness of the estimator. We use a vector total-variation penalty term
together with nonnegativity and sum-to-one constraints on the endmember
abundances to regularize the derived maximum-likelihood estimation problem. The
regularization facilitates exploiting the prior knowledge that natural images
are mostly composed of piecewise smooth regions with limited abrupt changes,
i.e., edges, as well as coping with potential ill-posedness of the fusion
problem. We solve the resultant convex optimization problem using the
alternating direction method of multipliers. We utilize the circular
convolution theorem in conjunction with the fast Fourier transform to alleviate
the computational complexity of the proposed algorithm. Experiments with
multiband images constructed from real hyperspectral datasets reveal the
superior performance of the proposed algorithm in comparison with the
state-of-the-art algorithms, which need to be used in tandem to fuse more than
two multiband images.
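The circular convolution theorem invoked above (pointwise multiplication in the Fourier domain equals circular convolution in the spatial domain) can be sketched in a few lines; this is a toy one-dimensional illustration, not the paper's algorithm:

```python
import numpy as np

def circ_conv_fft(x, h):
    # circular convolution via the convolution theorem:
    # FFT both signals, multiply pointwise, inverse FFT
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

def circ_conv_direct(x, h):
    # O(n^2) definition of circular convolution, for comparison
    n = len(x)
    return np.array([sum(x[j] * h[(i - j) % n] for j in range(n))
                     for i in range(n)])

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, 0.0, 0.0, 1.0])
print(circ_conv_fft(x, h))  # matches the direct computation: [3. 5. 7. 5.]
```

The FFT route costs O(n log n) instead of O(n^2), which is the complexity saving the abstract refers to.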
Deep Hyperspectral and Multispectral Image Fusion with Inter-image Variability
Hyperspectral and multispectral image fusion allows us to overcome the
hardware limitations of hyperspectral imaging systems inherent to their lower
spatial resolution. Nevertheless, existing algorithms usually fail to consider
realistic image acquisition conditions. This paper presents a general imaging
model that considers inter-image variability of data from heterogeneous sources
and flexible image priors. The fusion problem is stated as an optimization
problem in the maximum a posteriori framework. We introduce an original image
fusion method that, on the one hand, solves the optimization problem accounting
for inter-image variability with an iteratively reweighted scheme and, on the
other hand, that leverages light-weight CNN-based networks to learn realistic
image priors from data. In addition, we propose a zero-shot strategy to
directly learn the image-specific prior of the latent images in an unsupervised
manner. The performance of the algorithm is illustrated with real data subject
to inter-image variability. Comment: IEEE Trans. Geosci. Remote Sens., to be
published. Manuscript submitted August 23, 2022; revised Dec. 15, 2022, and
Mar. 13, 2023; and accepted Apr. 07, 2023.
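The iteratively reweighted scheme mentioned above can be illustrated on a toy one-dimensional robust-fitting problem; the sketch below is a generic iteratively reweighted least squares (IRLS) loop, not the paper's method, and the function name and data are invented for illustration:

```python
import numpy as np

def irls_l1_location(a, n_iter=200, eps=1e-8):
    # Minimize sum_i |x - a_i| by IRLS: each iteration solves a weighted
    # least-squares problem with weights w_i = 1 / |x - a_i| (clipped by eps),
    # so large residuals are progressively down-weighted.
    x = np.mean(a)  # least-squares initialization
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(x - a), eps)
        x = np.sum(w * a) / np.sum(w)  # weighted least-squares update
    return x

a = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
print(irls_l1_location(a))  # near the median 3.0, robust to the outlier 100
```

The reweighting turns a non-smooth ℓ1-type objective into a sequence of easy quadratic problems, the same mechanism used to handle inter-image variability terms in the fusion model.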
Graph-based Data Modeling and Analysis for Data Fusion in Remote Sensing
Hyperspectral imaging provides increased sensitivity and discrimination over traditional imaging methods by combining standard digital imaging with spectroscopic methods. For each individual pixel in a hyperspectral image (HSI), a continuous spectrum is sampled as the spectral reflectance/radiance signature to facilitate identification of ground cover and surface materials. This abundant spectral knowledge allows all available information in the data to be mined. These qualities give hyperspectral imaging wide applications such as mineral exploration, agriculture monitoring, and ecological surveillance. Processing massive high-dimensional HSI datasets is a challenge, since many data processing techniques have a computational complexity that grows exponentially with the dimension. Moreover, an HSI dataset may contain a limited number of degrees of freedom due to the high correlations between data points and among the spectra. On the other hand, merely taking advantage of the sampled spectrum of an individual HSI data point may produce inaccurate results due to the mixed nature of raw HSI data, such as mixed pixels and optical interference.
Fusion strategies are widely adopted in data processing to achieve better performance, especially in classification and clustering. There are three main types of fusion strategies: low-level data fusion, intermediate-level feature fusion, and high-level decision fusion. Low-level data fusion combines multi-source data that are expected to be complementary or cooperative. Intermediate-level feature fusion aims at the selection and combination of features to remove redundant information. Decision-level fusion exploits a set of classifiers to provide more accurate results. These fusion strategies have wide applications, including HSI data processing. With the fast development of multiple remote sensing modalities, e.g., Very High Resolution (VHR) optical sensors and LiDAR, fusion of multi-source data can in principle produce more detailed information than any single source. In addition, beyond the abundant spectral information contained in HSI data, features such as texture and shape may be employed to represent data points from a spatial perspective. Feature fusion also includes the strategy of removing redundant and noisy features from the dataset.
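High-level decision fusion as described above can be sketched as a majority vote across classifier outputs; the class labels and classifier predictions below are invented for illustration:

```python
from collections import Counter

def decision_fusion(predictions):
    # High-level decision fusion by majority vote.
    # predictions: list of per-classifier label lists, one label per sample.
    n_samples = len(predictions[0])
    fused = []
    for i in range(n_samples):
        votes = Counter(clf[i] for clf in predictions)
        fused.append(votes.most_common(1)[0][0])  # most frequent label wins
    return fused

clf_a = ["water", "soil", "veg"]
clf_b = ["water", "veg",  "veg"]
clf_c = ["soil",  "soil", "veg"]
print(decision_fusion([clf_a, clf_b, clf_c]))  # -> ['water', 'soil', 'veg']
```

Low-level and intermediate-level fusion would instead operate before classification, on the raw multi-source data or on extracted feature vectors respectively.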
One of the major problems in machine learning and pattern recognition is developing appropriate representations for complex nonlinear data. In HSI processing, a particular data point is usually described as a vector whose coordinates correspond to the intensities measured in the spectral bands. This vector representation permits the application of linear and nonlinear transformations from linear algebra to find an alternative representation of the data. However, HSI data are multi-dimensional in nature, and the vector representation may lose contextual correlations. Tensor representation provides a more sophisticated modeling technique and a higher-order generalization of linear subspace analysis.
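A mode-n unfolding, the basic operation behind the tensor representation mentioned above, can be sketched as follows (a generic illustration with a tiny synthetic cube, not the thesis's implementation):

```python
import numpy as np

def unfold(tensor, mode):
    # Mode-n unfolding: bring the chosen mode to the front and flatten the
    # rest, so the mode-n fibers become the columns of a matrix.
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# toy HSI cube: 2 rows x 3 columns x 4 spectral bands
cube = np.arange(24).reshape(2, 3, 4)
M = unfold(cube, 2)
print(M.shape)  # (4, 6): one row per band, one column per pixel spectrum
```

Unfolding along the spectral mode recovers exactly the pixel-as-vector view, while the other modes expose the spatial correlations that the plain vector representation discards.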
In graph theory, data points can be generalized as nodes, with connectivities measured from the proximity of a local neighborhood. The graph-based framework efficiently characterizes the relationships among the data and allows convenient mathematical manipulation in many applications, such as data clustering, feature extraction, feature selection, and data alignment. This thesis explores graph-based approaches to multi-source feature and data fusion in remote sensing. We mainly investigate the fusion of spatial, spectral, and LiDAR information with linear and multilinear algebra under a graph-based framework for data clustering and classification problems.
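A minimal sketch of the graph construction described above, assuming a plain Euclidean k-nearest-neighbour proximity rule (an illustrative choice, not necessarily the thesis's exact measure):

```python
import numpy as np

def knn_graph_laplacian(X, k=1):
    # Build a symmetric k-nearest-neighbour adjacency matrix W and the
    # unnormalized graph Laplacian L = D - W from data points (rows of X).
    n = X.shape[0]
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d[i])[1:k + 1]  # skip position 0 (the point itself)
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                # symmetrize the directed kNN edges
    L = np.diag(W.sum(axis=1)) - W        # degree matrix minus adjacency
    return W, L

# two well-separated pairs of points -> two connected components
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
W, L = knn_graph_laplacian(X, k=1)
print(L.sum())  # every Laplacian row sums to zero
```

Spectral clustering, feature extraction, and the fusion methods mentioned above all operate on eigenvectors or quadratic forms of such a Laplacian.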
Variational Image Segmentation with Constraints
The research of Huizhu Pan addresses the problem of image segmentation with constraints through designing and solving various variational models. A novel constraint term is designed for the use of landmarks in image segmentation: two region-based segmentation models are proposed in which the segmentation contour passes through landmark points. A more stable and memory-efficient solution to the self-repelling snakes model, a variational model with a topology-preservation constraint, is also designed.