High-quality Image Restoration from Partial Mixed Adaptive-Random Measurements
A novel framework to construct an efficient sensing (measurement) matrix,
called mixed adaptive-random (MAR) matrix, is introduced for directly acquiring
a compressed image representation. The mixed sampling (sensing) procedure
hybridizes adaptive edge measurements extracted from a low-resolution image
with uniform random measurements predefined for the high-resolution image to be
recovered. The mixed sensing matrix seamlessly captures important information
of an image while approximately satisfying the restricted isometry
property. To recover the high-resolution image from MAR measurements, the total
variation algorithm based on the compressive sensing theory is employed for
solving the Lagrangian regularization problem. Both peak signal-to-noise ratio
and structural similarity results demonstrate that the MAR sensing framework achieves
much better recovery performance than the completely random sensing one. The
work is particularly helpful for high-performance and low-cost data
acquisition.
Comment: 16 pages, 8 figures
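The measurement scheme described above can be illustrated with a minimal sketch. The construction below is an assumption, not the authors' exact method: adaptive rows simply select pixels flagged as edges in a low-resolution preview, while the remaining rows are i.i.d. Gaussian (a standard choice satisfying the restricted isometry property with high probability); the function name `mixed_sensing_matrix` is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixed_sensing_matrix(n_pixels, n_random, edge_idx):
    """Sketch of a mixed adaptive-random sensing matrix:
    adaptive rows pick out pixels flagged as edges in a
    low-resolution preview; random rows are i.i.d. Gaussian."""
    adaptive = np.zeros((len(edge_idx), n_pixels))
    adaptive[np.arange(len(edge_idx)), edge_idx] = 1.0  # one measurement per edge pixel
    random = rng.normal(size=(n_random, n_pixels)) / np.sqrt(n_random)
    return np.vstack([adaptive, random])

# Toy 8x8 image flattened to 64 pixels, 10 assumed "edge" locations, 20 random rows.
edge_idx = np.array([3, 5, 12, 20, 27, 33, 40, 47, 55, 60])
Phi = mixed_sensing_matrix(64, 20, edge_idx)
x = rng.normal(size=64)
y = Phi @ x               # compressed measurements: 30 numbers instead of 64
print(Phi.shape, y.shape)  # (30, 64) (30,)
```

In a full pipeline, `y` would then be fed to a total-variation solver to recover the image, as the abstract describes.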
Adaptive Markov random fields for joint unmixing and segmentation of hyperspectral images
Linear spectral unmixing is a challenging problem in hyperspectral imaging that consists of decomposing an observed pixel into a linear combination of pure spectra (or endmembers) with their corresponding proportions (or abundances). Endmember extraction algorithms can be employed for recovering the spectral signatures while abundances are estimated using an inversion step. Recent works have shown that exploiting spatial dependencies between image pixels can improve spectral unmixing. Markov random fields (MRF) are classically used to model these spatial correlations and partition the image into multiple classes with homogeneous abundances. This paper proposes to define the MRF sites using similarity regions. These regions are built using a self-complementary area filter that stems from the morphological theory. This kind of filter divides the original image into flat zones where the underlying pixels have the same spectral values. Once the MRF has been clearly established, a hierarchical Bayesian algorithm is proposed to estimate the abundances, the class labels, the noise variance, and the corresponding hyperparameters. A hybrid Gibbs sampler is constructed to generate samples according to the corresponding posterior distribution of the unknown parameters and hyperparameters. Simulations conducted on synthetic and real AVIRIS data demonstrate the good performance of the algorithm.
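The "inversion step" mentioned above, estimating abundances once endmembers are known, is commonly solved by non-negative least squares. This is a generic sketch, not the paper's hierarchical Bayesian algorithm; the endmember matrix `E` and band values are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical 4-band sensor with 2 endmembers (columns of E).
E = np.array([[1.0, 0.1],
              [0.8, 0.3],
              [0.2, 0.9],
              [0.1, 1.0]])

true_a = np.array([0.7, 0.3])  # abundances, non-negative and summing to one
pixel = E @ true_a             # noiseless observed spectrum

# Inversion step: non-negative least squares recovers the abundances.
a_hat, residual = nnls(E, pixel)
print(np.round(a_hat, 3))  # ≈ [0.7 0.3]
```

In practice the sum-to-one constraint is often enforced as well (fully constrained least squares), and the paper goes further by estimating abundances jointly with class labels and noise variance in a Bayesian framework.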
Evaluating the Differences of Gridding Techniques for Digital Elevation Models Generation and Their Influence on the Modeling of Stony Debris Flows Routing: A Case Study From Rovina di Cancia Basin (North-Eastern Italian Alps)
Debris flows are among the most hazardous phenomena in mountain areas. To cope
with debris flow hazard, it is common to delineate the risk-prone areas through
routing models. The most important input to debris flow routing models is the
topographic data, usually in the form of Digital Elevation Models (DEMs). The quality
of DEMs depends on the accuracy, density, and spatial distribution of the sampled
points; on the characteristics of the surface; and on the applied gridding methodology.
Therefore, the choice of the interpolation method affects the realistic representation
of the channel and fan morphology, and thus potentially the debris flow routing
modeling outcomes. In this paper, we initially investigate the performance of common
interpolation methods (i.e., linear triangulation, natural neighbor, nearest neighbor,
Inverse Distance to a Power, ANUDEM, Radial Basis Functions, and ordinary kriging)
in building DEMs with the complex topography of a debris flow channel located
in the Venetian Dolomites (North-eastern Italian Alps), by using small footprint full-
waveform Light Detection And Ranging (LiDAR) data. The investigation is carried
out through a combination of statistical analysis of vertical accuracy, algorithm
robustness, and spatial clustering of vertical errors, and multi-criteria shape reliability
assessment. After that, we examine the influence of the tested interpolation algorithms
on the performance of a Geographic Information System (GIS)-based cell model for
simulating stony debris flow routing. In detail, we investigate both the correlation
between the DEM height uncertainty resulting from the gridding procedure and
the uncertainty in the corresponding simulated erosion/deposition depths, and the effect of
interpolation algorithms on simulated areas, erosion and deposition volumes, solid-liquid
discharges, and channel morphology after the event. The comparison among the tested
interpolation methods highlights that the ANUDEM and ordinary kriging algorithms
are not suitable for building DEMs with complex topography. Conversely, the linear
triangulation, the natural neighbor algorithm, and the thin-plate spline plus tension and completely regularized spline functions ensure the best trade-off between accuracy
and shape reliability. Nevertheless, the evaluation of the effects of gridding techniques on
debris flow routing modeling reveals that the choice of the interpolation algorithm does
not significantly affect the model outcomes.
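Among the gridding methods compared above, Inverse Distance to a Power is the simplest to state. The following is a minimal sketch of that interpolator (not the paper's implementation, and real DEM workflows limit the search to nearby points rather than using all samples):

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse Distance to a Power: each grid node is a weighted
    mean of the sampled heights, with weights 1 / distance**power."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)          # avoid division by zero at sample points
    w = 1.0 / d**power
    return (w @ z_known) / w.sum(axis=1)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([10.0, 20.0, 30.0, 40.0])
q = np.array([[0.5, 0.5]])            # grid node equidistant from all four samples
print(idw(pts, z, q))                 # [25.] — equal weights give the plain mean
```

This also makes visible why the choice of `power` and of the sample neighborhood shapes the resulting surface, which is the kind of sensitivity the paper evaluates.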
Object Classification in Astronomical Multi-Color Surveys
We present a photometric method for identifying stars, galaxies and quasars
in multi-color surveys, which uses a library of >65000 color templates. The
method aims to extract the information content of object colors in a
statistically correct way and performs a classification as well as a redshift
estimation for galaxies and quasars in a unified approach. For the redshift
estimation, we use an advanced version of the MEV estimator which determines
the redshift error from the redshift dependent probability density function.
The method was originally developed for the CADIS survey, where we checked
its performance by spectroscopy. The method provides high reliability (6 errors
among 151 objects with R<24), especially for quasar selection, and redshifts
accurate within sigma ~ 0.03 for galaxies and sigma ~ 0.1 for quasars.
We compare a few model surveys using the same telescope time but different
sets of broad-band and medium-band filters. Their performance is investigated
by Monte-Carlo simulations as well as by analytic evaluation in terms of
classification and redshift estimation. In practice, medium-band surveys show
superior performance. Finally, we discuss the relevance of color calibration
and derive important conclusions for the issues of library design and choice of
filters. The calibration accuracy poses strong constraints on an accurate
classification, and is most critical for surveys with few, broad and deeply
exposed filters, but less severe for many, narrow and less deep filters.
Comment: 21 pages including 10 figures. Accepted for publication in Astronomy & Astrophysics
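The core of a template-library classifier like the one described above can be sketched as a chi-square match of measured colors against each class template. This is a simplified stand-in, not the paper's statistical method (which also marginalizes over redshift); the three-color library values are purely illustrative.

```python
import numpy as np

def classify(colors, sigma, library):
    """Return the class whose template colors minimize chi^2
    against the measured colors; library maps class -> template."""
    best, best_chi2 = None, np.inf
    for cls, tmpl in library.items():
        chi2 = np.sum(((colors - tmpl) / sigma) ** 2)
        if chi2 < best_chi2:
            best, best_chi2 = cls, chi2
    return best, best_chi2

# Toy three-color library (hypothetical values, not real templates).
library = {"star":   np.array([0.5, 0.3, 0.1]),
           "galaxy": np.array([0.9, 0.7, 0.5]),
           "quasar": np.array([0.2, 0.0, -0.3])}
obs = np.array([0.85, 0.72, 0.48])
sig = np.array([0.05, 0.05, 0.05])
print(classify(obs, sig, library)[0])  # galaxy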
A Bayesian approach to star-galaxy classification
Star-galaxy classification is one of the most fundamental data-processing
tasks in survey astronomy, and a critical starting point for the scientific
exploitation of survey data. For bright sources this classification can be done
with almost complete reliability, but for the numerous sources close to a
survey's detection limit each image encodes only limited morphological
information. In this regime, from which many of the new scientific discoveries
are likely to come, it is vital to utilise all the available information about
a source, both from multiple measurements and also prior knowledge about the
star and galaxy populations. It is also more useful and realistic to provide
classification probabilities than decisive classifications. All these
desiderata can be met by adopting a Bayesian approach to star-galaxy
classification, and we develop a very general formalism for doing so. An
immediate implication of applying Bayes's theorem to this problem is that it is
formally impossible to combine morphological measurements in different bands
without using colour information as well; however we develop several
approximations that disregard colour information as much as possible. The
resultant scheme is applied to data from the UKIRT Infrared Deep Sky Survey
(UKIDSS), and tested by comparing the results to deep Sloan Digital Sky Survey
(SDSS) Stripe 82 measurements of the same sources. The Bayesian classification
probabilities obtained from the UKIDSS data agree well with the deep SDSS
classifications both overall (a mismatch rate of 0.022, compared to 0.044 for
the UKIDSS pipeline classifier) and close to the UKIDSS detection limit (a
mismatch rate of 0.068 compared to 0.075 for the UKIDSS pipeline classifier).
The Bayesian formalism developed here can be applied to improve the reliability
of any star-galaxy classification schemes based on the measured values of
morphology statistics alone.
Comment: Accepted 22 November 2010, 19 pages, 17 figures
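The central step, converting class likelihoods into a classification probability via Bayes's theorem with population priors, can be written down directly. The numbers below are illustrative, not taken from the paper:

```python
def p_star(like_star, like_gal, prior_star):
    """Posterior probability that a source is a star, from the two
    class likelihoods and the prior star fraction (Bayes's theorem)."""
    prior_gal = 1.0 - prior_star
    num = like_star * prior_star
    return num / (num + like_gal * prior_gal)

# A faint source whose morphology mildly favours "galaxy": near the
# detection limit the population prior dominates the answer.
print(round(p_star(like_star=0.4, like_gal=0.6, prior_star=0.2), 3))  # 0.143
```

This is why the paper argues for reporting probabilities rather than hard classifications: for sources near the detection limit the posterior rarely saturates at 0 or 1.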
New insight on galaxy structure from GALPHAT I. Motivation, methodology, and benchmarks for Sersic models
We introduce a new galaxy image decomposition tool, GALPHAT (GALaxy
PHotometric ATtributes), to provide full posterior probability distributions
and reliable confidence intervals for all model parameters. GALPHAT is designed
to yield high-speed and accurate likelihood computation, using grid
interpolation and Fourier rotation. We benchmark this approach using an
ensemble of simulated Sersic model galaxies over a wide range of observational
conditions: the signal-to-noise ratio S/N, the ratio of galaxy size to the PSF
and the image size, and errors in the assumed PSF; and a range of structural
parameters: the half-light radius and the Sersic index. We
characterise the strength of parameter covariance in the Sersic model, which
increases with S/N and the Sersic index, and the results strongly motivate the need for the
full posterior probability distribution in galaxy morphology analyses and later
inferences.
The test results for simulated galaxies successfully demonstrate that, with a
careful choice of Markov chain Monte Carlo algorithms and fast model image
generation, GALPHAT is a powerful analysis tool for reliably inferring
morphological parameters from a large ensemble of galaxies over a wide range of
different observational conditions. (abridged)
Comment: Submitted to MNRAS. The submitted version with high-resolution figures can be downloaded from http://www.astro.umass.edu/~iyoon/GALPHAT/galphat1.pd
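The Sersic model that GALPHAT fits has a simple closed form; the sketch below uses the standard approximation b_n ≈ 2n − 1/3 (this approximation is my assumption, the paper may use a more precise expansion), which makes r_e the half-light radius:

```python
import numpy as np

def sersic(r, I_e, r_e, n):
    """Sersic surface-brightness profile I(r) = I_e * exp(-b_n * ((r/r_e)**(1/n) - 1));
    b_n ~ 2n - 1/3 pins I(r_e) = I_e, making r_e the half-light radius."""
    b_n = 2.0 * n - 1.0 / 3.0
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

# At r = r_e the profile equals I_e by construction, for any index n.
print(sersic(np.array([1.0]), I_e=100.0, r_e=1.0, n=4.0))  # [100.]
```

The parameter covariance the abstract highlights is visible here: increasing n steepens the inner profile while flattening the outer wings, so n, r_e and I_e trade off against one another at finite S/N.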