Compressive Source Separation: Theory and Methods for Hyperspectral Imaging
With the proliferation of high-resolution data acquisition systems and the
global push to lower energy consumption, the development of efficient sensing
techniques has become critical. Recently, Compressed Sampling (CS) techniques,
which exploit the sparsity of signals, have made it possible to reconstruct
signals and images with fewer measurements than the traditional Nyquist
sensing approach. However, multichannel signals like hyperspectral images
(HSI) have additional structure, such as inter-channel correlations, that is
not taken into account in the classical CS scheme. In this paper we exploit
the linear mixture of sources model, i.e., the assumption that the
multichannel signal is composed of a linear combination of sources, each of
them having its own spectral signature, and propose new sampling schemes
exploiting this model to considerably decrease the number of measurements
needed for the acquisition and source separation. Moreover, we give theoretical
lower bounds on the number of measurements required to perform reconstruction
of both the multichannel signal and its sources. We also propose optimization
algorithms and report extensive experiments on our target application, HSI,
showing that our approach recovers HSI with far fewer measurements and less
computational effort than traditional CS approaches.

Comment: 32 pages
A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community
In recent years, deep learning (DL), a re-branding of neural networks (NNs),
has risen to the top in numerous areas, namely computer vision (CV), speech
recognition, natural language processing, etc. Whereas remote sensing (RS)
possesses a number of unique challenges, primarily related to sensors and
applications, inevitably RS draws from many of the same theories as CV; e.g.,
statistics, fusion, and machine learning, to name a few. This means that the RS
community should be aware of, if not at the leading edge of, advancements
like DL. Herein, we provide the most comprehensive survey of state-of-the-art
RS DL research. We also review recent new developments in the DL field that can
be used in DL for RS. Namely, we focus on theories, tools and challenges for
the RS community. Specifically, we focus on unsolved challenges and
opportunities as they relate to (i) inadequate data sets, (ii)
human-understandable solutions for modelling physical phenomena, (iii) Big
Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and
learning algorithms for spectral, spatial and temporal data, (vi) transfer
learning, (vii) an improved theoretical understanding of DL systems, (viii)
high barriers to entry, and (ix) training and optimizing the DL.

Comment: 64 pages, 411 references. To appear in Journal of Applied Remote
Sensing
Source Modulated Multiplexed Hyperspectral Imaging: Theory, Hardware and Application
The design, analysis and application of a multiplexing hyperspectral imager are presented.
The hyperspectral imager consists of a broadband digital light projector that uses a digital
micromirror array as the optical engine to project light patterns onto a sample object. A
single point spectrometer measures light that is reflected from the sample. Multiplexing
patterns encode the spectral response from the sample, where each spectrum taken is the
sum of a set of spectral responses from a number of pixels. Decoding in software recovers
the spectral response of each pixel. A technique, which we call complement encoding, is
introduced for the removal of background light effects. Complement encoding requires
the use of multiplexing matrices with positive and negative entries.
The theory of multiplexing using the Hadamard matrices is developed. Results from
prior art are incorporated into a single notational system under which the different
Hadamard matrices are compared with each other and with acquisition of data without
multiplexing (pointwise acquisition). The link between Hadamard matrices and strongly
regular graphs is extended to incorporate all three types of Hadamard matrices. The effect
of the number of measurements used in compressed sensing on measurement precision is
derived by inference using results concerning the eigenvalues of large random matrices.
The literature shows that more measurements increase the accuracy of reconstruction. In
contrast, we find that more measurements reduce precision, so there is a tradeoff between
precision and accuracy. The effect of error in the reference on the Wilcoxon statistic is
derived. Reference error reduces the estimate of the Wilcoxon; however, given an estimate
of the Wilcoxon and the proportion of error in the reference, we show that the Wilcoxon
without error can be estimated.
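The Hadamard-multiplexing advantage discussed above can be sketched under a purely additive white-noise model (an assumption; the thesis also treats Poisson noise). The splitting of H into nonnegative complement patterns below is our stand-in for the complement encoding described earlier, and all sizes and noise levels are illustrative.

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix; n must be a power of 2."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(1)
n, sigma, trials = 16, 0.05, 300
x = rng.random(n)                      # "true" spectrum over n pixels
H = hadamard(n)
Hp, Hm = np.clip(H, 0, None), np.clip(-H, 0, None)  # H = Hp - Hm, nonnegative

err_point = err_mux = 0.0
for _ in range(trials):
    # Pointwise acquisition: one noisy reading per pixel
    y_point = x + rng.normal(0, sigma, n)
    # Complement encoding: measure the two nonnegative pattern sets
    # separately, then subtract (cancels background in the additive model)
    y_mux = (Hp @ x + rng.normal(0, sigma, n)) - (Hm @ x + rng.normal(0, sigma, n))
    x_hat = H.T @ y_mux / n            # decode via orthogonality: H.T @ H = n*I
    err_point += np.mean((y_point - x) ** 2)
    err_mux += np.mean((x_hat - x) ** 2)

snr_boost = np.sqrt(err_point / err_mux)
```

For n = 16 under additive noise the expected boost is sqrt(n/2) ≈ 2.8; the boosts measured in the thesis depend on the pattern size and on the mix of Poisson and additive noise.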
Imaging of simple objects and signal to noise ratio (SNR) experiments are used to
test the hyperspectral imager. The simple objects allow us to see that the imager produces
sensible spectra. The experiments examine both the SNR itself and the SNR boost,
that is, the ratio of the SNR from multiplexing to the SNR from pointwise acquisition. The
SNR boost varies dramatically across the spectral domain, from 3 to the theoretical maximum
of 16. The range of boost values is due to the ratio of Poisson to additive noise
variance changing over the spectral domain, an effect caused by the light bulb output
and detector sensitivity not being flat over the spectral domain. It is shown that the SNR
boost is least where the SNR is high and greatest where the SNR is least, so the boost
is provided where it is needed most. The varying SNR boost is interpreted as a preferential
boost, which is useful when the dominant noise source is indeterminate or varying.
Compressed sensing precision is compared with the accuracy in reconstruction and with
the precision in Hadamard multiplexing. A tradeoff is observed between accuracy and
precision as the number of measurements increases. Generally Hadamard multiplexing is
found to be superior to compressed sensing, but compressed sensing is considered suitable
when shortened data acquisition time is important and poorer data quality is acceptable.
To further show the use of the hyperspectral imager, volumetric mapping and analysis
of beef m. longissimus dorsi are performed. Hyperspectral images are taken of successive
slices down the length of the muscle. Classification of the spectra according to visible
content as lean or nonlean is trialled, resulting in a Wilcoxon value greater than 0.95,
indicating very strong classification power. Analysis of the variation in the spectra down
the length of the muscles is performed using variography. The variation in spectra of a
muscle is small but increases with distance, and there is a periodic effect possibly due to
water seepage from where connective tissue is removed from the meat while cutting from
the carcass. The spectra are compared to parameters concerning the rate and value of
meat bloom (change of colour post slicing), pH and tenderometry reading (shear force).
Mixed results for prediction of blooming parameters are obtained; pH shows strong correlation (R² = 0.797) with the spectral band 598-949 nm despite the narrow range of
pH readings obtained. A similarly narrow range of tenderometry readings resulted in no
useful correlation with the spectra.
Overall, spatially multiplexed imaging with DMA-based light modulation is successful.
The theoretical analysis of multiplexing gives a general description of the system
performance, particularly for multiplexing with the Hadamard matrices. Experiments
show that the Hadamard multiplexing technique improves the SNR of spectra taken over
pointwise imaging. Aspects of the theoretical analysis are demonstrated. Hyperspectral
images are acquired and analysed, demonstrating that the spectra obtained are sensible
and useful
An Integrative Remote Sensing Application of Stacked Autoencoder for Atmospheric Correction and Cyanobacteria Estimation Using Hyperspectral Imagery
Hyperspectral image sensing can be used to effectively detect the distribution of harmful cyanobacteria. To accomplish this, physical- and/or model-based simulations have been conducted to perform an atmospheric correction (AC) and an estimation of pigments, including phycocyanin (PC) and chlorophyll-a (Chl-a), in cyanobacteria. However, such simulations were undesirable in certain cases, due to the difficulty of representing dynamically changing aerosol and water vapor in the atmosphere and the optical complexity of inland water. Thus, this study was focused on the development of a deep neural network model for AC and cyanobacteria estimation, without considering the physical formulation. The stacked autoencoder (SAE) network was adopted for the feature extraction and dimensionality reduction of hyperspectral imagery. The artificial neural network (ANN) and support vector regression (SVR) were sequentially applied to achieve AC and estimate cyanobacteria concentrations (i.e., SAE-ANN and SAE-SVR). Further, the ANN and SVR models without SAE were compared with SAE-ANN and SAE-SVR models for the performance evaluations. In terms of AC performance, both SAE-ANN and SAE-SVR displayed reasonable accuracy with the Nash–Sutcliffe efficiency (NSE) > 0.7. For PC and Chl-a estimation, the SAE-ANN model showed the best performance, by yielding NSE values > 0.79 and > 0.77, respectively. SAE, with fine tuning operators, improved the accuracy of the original ANN and SVR estimations, in terms of both AC and cyanobacteria estimation. This is primarily attributed to the high-level feature extraction of SAE, which can represent the spatial features of cyanobacteria. Therefore, this study demonstrated that the deep neural network has a strong potential to realize an integrative remote sensing application
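A drastically simplified sketch of the feature-extraction-then-regression idea follows: a single autoencoder layer plus a ridge regression head on synthetic spectra. The shapes, learning rate, and data are invented for illustration and this is not the authors' SAE-ANN/SAE-SVR pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
B, h, N = 30, 5, 400            # spectral bands, latent features, samples

# Synthetic "reflectance" spectra driven by a few latent factors, plus a
# target (stand-in for a pigment concentration) depending on those factors
Z = rng.random((N, 3))
X = np.tanh(Z @ rng.random((3, B))) + 0.01 * rng.normal(size=(N, B))
y = Z @ np.array([1.0, -0.5, 2.0])

# --- autoencoder layer: X -> tanh(X W1) -> linear reconstruction ---
W1 = 0.1 * rng.normal(size=(B, h))
W2 = 0.1 * rng.normal(size=(h, B))
lr, loss0 = 0.05, None
for step in range(500):
    Hc = np.tanh(X @ W1)                # encoder
    Xr = Hc @ W2                        # linear decoder
    R = Xr - X
    loss = np.mean(R ** 2)
    if loss0 is None:
        loss0 = loss
    # gradients of the summed squared error (constant factors absorbed in lr)
    dW2 = Hc.T @ R
    dW1 = X.T @ ((R @ W2.T) * (1 - Hc ** 2))   # tanh' = 1 - tanh^2
    W1 -= lr * dW1 / N
    W2 -= lr * dW2 / N

# --- regression head on the learned features (closed-form ridge) ---
F = np.tanh(X @ W1)
A = np.hstack([F, np.ones((N, 1))])     # add a bias column
w = np.linalg.solve(A.T @ A + 1e-3 * np.eye(h + 1), A.T @ y)
y_hat = A @ w
```

Greedily stacking further layers (each trained on the previous layer's codes) and fine-tuning the whole network end-to-end yields the stacked autoencoder of the paper.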
The Data Big Bang and the Expanding Digital Universe: High-Dimensional, Complex and Massive Data Sets in an Inflationary Epoch
Recent and forthcoming advances in instrumentation, and giant new surveys,
are creating astronomical data sets that are not amenable to the methods of
analysis familiar to astronomers. Traditional methods are often inadequate not
merely because of the size in bytes of the data sets, but also because of the
complexity of modern data sets. Mathematical limitations of familiar algorithms
and techniques in dealing with such data sets create a critical need for new
paradigms for the representation, analysis and scientific visualization (as
opposed to illustrative visualization) of heterogeneous, multiresolution data
across application domains. Some of the problems presented by the new data sets
have been addressed by other disciplines such as applied mathematics,
statistics and machine learning and have been utilized by other sciences such
as space-based geosciences. Unfortunately, valuable results pertaining to these
problems are mostly to be found only in publications outside of astronomy. Here
we offer brief overviews of a number of concepts, techniques and developments,
some "old" and some new. These are generally unknown to most of the
astronomical community, but are vital to the analysis and visualization of
complex datasets and images. In order for astronomers to take advantage of the
richness and complexity of the new era of data, and to be able to identify,
adopt, and apply new solutions, the astronomical community needs a certain
degree of awareness and understanding of the new concepts. One of the goals of
this paper is to help bridge the gap between applied mathematics, artificial
intelligence and computer science on the one side and astronomy on the other.

Comment: 24 pages, 8 Figures, 1 Table. Accepted for publication: "Advances in
Astronomy", special issue "Robotic Astronomy"
Learnable Reconstruction Methods from RGB Images to Hyperspectral Imaging: A Survey
Hyperspectral imaging enables versatile applications thanks to its ability to
capture abundant spatial and spectral information, which is crucial for
identifying substances. However, the devices for acquiring hyperspectral images
are expensive and complicated. Therefore, many alternative spectral imaging
methods have been proposed that directly reconstruct the hyperspectral
information from lower-cost, more readily available RGB images. We present a
thorough investigation of these state-of-the-art spectral reconstruction
methods. A systematic study and comparison of more than 25
methods has revealed that most of the data-driven deep learning methods are
superior to prior-based methods in terms of reconstruction accuracy and quality
despite lower speeds. This comprehensive review can serve as a fruitful
reference source for peer researchers, thus further inspiring future
development directions in related domains
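The simplest "learnable reconstruction" in this family is a linear (ridge) mapping from RGB values to spectral bands; the deep methods surveyed replace this linear map with a neural network. The synthetic spectra and the invented camera response below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, bands = 500, 100, 31

# Synthetic smooth spectra (random mixtures of three basis curves) and an
# invented camera response that projects each spectrum down to RGB
grid = np.linspace(0, 1, bands)
basis = np.array([grid ** p for p in range(3)]).T     # (31, 3)
S_train = rng.random((n_train, 3)) @ basis.T          # (n, 31) spectra
S_test = rng.random((n_test, 3)) @ basis.T
R = rng.random((bands, 3))                            # camera response (assumed)
RGB_train, RGB_test = S_train @ R, S_test @ R

# Ridge-regression mapping RGB -> spectrum: the simplest learnable baseline
lam = 1e-6
W = np.linalg.solve(RGB_train.T @ RGB_train + lam * np.eye(3),
                    RGB_train.T @ S_train)            # (3, 31)
S_hat = RGB_test @ W
rmse = np.sqrt(np.mean((S_hat - S_test) ** 2))
```

Because these synthetic spectra live in a 3-dimensional subspace, the linear map recovers them almost exactly; real spectra are far higher-dimensional, which is where learned priors and deep networks earn the advantage the survey reports.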
Simplified Energy Landscape for Modularity Using Total Variation
Networks capture pairwise interactions between entities and are frequently
used in applications such as social networks, food networks, and protein
interaction networks, to name a few. Communities, cohesive groups of nodes,
often form in these applications, and identifying them gives insight into the
overall organization of the network. One common quality function used to
identify community structure is modularity. In Hu et al. [SIAM J. App. Math.,
73(6), 2013], it was shown that modularity optimization is equivalent to
minimizing a particular nonconvex total variation (TV) based functional over a
discrete domain. They solve this problem, assuming the number of communities is
known, using a Merriman, Bence, Osher (MBO) scheme.
We show that modularity optimization is equivalent to minimizing a convex
TV-based functional over a discrete domain, again, assuming the number of
communities is known. Furthermore, we show that modularity has no convex
relaxation satisfying certain natural conditions. We therefore find a
manageable non-convex approximation using a Ginzburg–Landau functional, which
provably converges to the correct energy in the limit of a certain parameter.
We then derive an MBO algorithm with fewer hand-tuned parameters than in Hu et
al. and which is 7 times faster at solving the associated diffusion equation
due to the fact that the underlying discretization is unconditionally stable.
Our numerical tests include a hyperspectral video whose associated graph has
2.9x10^7 edges, which is roughly 37 times larger than was handled in the paper
of Hu et al.

Comment: 25 pages, 3 figures, 3 tables, submitted to SIAM J. App. Math.
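For readers unfamiliar with the objective being optimized: modularity compares within-community edge weight against a degree-based null model. A minimal NumPy version on a toy graph (our illustration, not the paper's MBO/TV solver) is:

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity: Q = (1/2m) * sum_ij (A_ij - k_i k_j / 2m) delta(c_i, c_j)."""
    k = A.sum(axis=1)                       # node degrees
    two_m = k.sum()                         # 2 * number of edges (unweighted case)
    same = labels[:, None] == labels[None, :]
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

# Two 4-cliques joined by a single bridge edge
n = 8
A = np.zeros((n, n))
A[:4, :4] = 1
A[4:, 4:] = 1
np.fill_diagonal(A, 0)
A[3, 4] = A[4, 3] = 1

good = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # split at the bridge
bad = np.array([0, 0, 1, 1, 0, 0, 1, 1])    # split across both cliques
```

The paper's contribution is showing that maximizing this Q is equivalent to minimizing a total-variation-based functional, and then solving that minimization efficiently with an MBO scheme.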