Object Classification in Astronomical Multi-Color Surveys
We present a photometric method for identifying stars, galaxies and quasars
in multi-color surveys, based on a library of >65000 color templates. The
method aims to extract the information content of object colors in a
statistically correct way, and performs classification as well as redshift
estimation for galaxies and quasars in a unified approach. For the redshift
estimation, we use an advanced version of the Minimum Error Variance (MEV)
estimator, which derives the redshift error from the redshift-dependent
probability density function. The method was originally developed for the
CADIS survey, where we checked its performance against spectroscopy. It
provides high reliability (6 errors among 151 objects with R < 24),
especially for quasar selection, and redshifts accurate to sigma ~ 0.03 for
galaxies and sigma ~ 0.1 for quasars.
We compare several model surveys that use the same telescope time but
different sets of broad-band and medium-band filters. Their performance in
classification and redshift estimation is assessed both by Monte Carlo
simulations and by analytic evaluation. In practice, medium-band surveys show
superior performance. Finally, we discuss the relevance of color calibration
and draw conclusions for library design and the choice of filters.
Calibration accuracy places strong constraints on classification: the
requirements are most critical for surveys with few, broad and deeply exposed
filters, and less severe for surveys with many, narrow and less deeply
exposed filters.
Comment: 21 pages including 10 figures. Accepted for publication in Astronomy
& Astrophysics
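
Schematically, such a template classifier reduces to a chi-square match of the
observed colors against the library, with class probabilities obtained by
marginalizing the template likelihood and the redshift error read off the
width of the resulting redshift PDF. The following is a minimal numpy sketch
under assumed conventions (Gaussian color errors, a flat template prior); all
names are hypothetical, and this is not the paper's actual estimator.

```python
# Minimal sketch of template-based classification with a redshift PDF.
# Hypothetical interface; the paper's MEV estimator is more elaborate.
import numpy as np

def classify(colors, errors, lib_colors, lib_class, lib_z, z_grid):
    """colors, errors: (n_bands,) measured colors and their 1-sigma errors.
    lib_colors: (n_templates, n_bands) library colors.
    lib_class:  (n_templates,) labels, e.g. 'star', 'galaxy', 'quasar'.
    lib_z:      (n_templates,) template redshifts (NaN for stars).
    z_grid:     bin edges for the redshift PDF."""
    # Gaussian likelihood of the observed colors under each template.
    chi2 = np.sum(((colors - lib_colors) / errors) ** 2, axis=1)
    like = np.exp(-0.5 * (chi2 - chi2.min()))

    # Class probabilities: marginalize the likelihood over each class.
    classes = np.unique(lib_class)
    p_class = np.array([like[lib_class == c].sum() for c in classes])
    p_class /= p_class.sum()
    best = classes[np.argmax(p_class)]

    # Redshift PDF for the winning class, binned onto z_grid.
    sel = (lib_class == best) & np.isfinite(lib_z)
    p_z, _ = np.histogram(lib_z[sel], bins=z_grid, weights=like[sel])
    if p_z.sum() == 0:                      # e.g. best class is 'star'
        return best, p_class, None, None
    p_z = p_z / p_z.sum()
    zc = 0.5 * (z_grid[:-1] + z_grid[1:])   # bin centers

    # MEV-style point estimate and error: mean and width of the PDF.
    z_mean = np.sum(zc * p_z)
    z_err = np.sqrt(np.sum((zc - z_mean) ** 2 * p_z))
    return best, p_class, z_mean, z_err
```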
DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs
In this work we address the task of semantic image segmentation with Deep
Learning and make three main contributions that are experimentally shown to
have substantial practical merit. First, we highlight convolution with
upsampled filters, or 'atrous convolution', as a powerful tool in dense
prediction tasks. Atrous convolution allows us to explicitly control the
resolution at which feature responses are computed within Deep Convolutional
Neural Networks. It also allows us to effectively enlarge the field of view of
filters to incorporate larger context without increasing the number of
parameters or the amount of computation. Second, we propose atrous spatial
pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP
probes an incoming convolutional feature layer with filters at multiple
sampling rates and effective fields of view, thus capturing objects as well as
image context at multiple scales. Third, we improve the localization of object
boundaries by combining methods from DCNNs and probabilistic graphical models.
The commonly deployed combination of max-pooling and downsampling in DCNNs
achieves invariance but takes a toll on localization accuracy. We overcome this
by combining the responses at the final DCNN layer with a fully connected
Conditional Random Field (CRF), which is shown both qualitatively and
quantitatively to improve localization performance. Our proposed "DeepLab"
system sets a new state of the art on the PASCAL VOC-2012 semantic image
segmentation task, reaching 79.7% mIOU on the test set, and advances the
results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and
Cityscapes. All of our code is made publicly available online.
Comment: Accepted by TPAMI
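
To make the first two ingredients concrete, here is a minimal PyTorch sketch
of atrous convolution and ASPP. It is not the authors' released code, and the
dilation rates (6, 12, 18, 24) follow the commonly cited DeepLab-v2
configuration rather than anything stated in the abstract.

```python
# Sketch of atrous convolution and ASPP, in PyTorch (illustrative only).
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous Spatial Pyramid Pooling: parallel 3x3 convolutions with
    different dilation (sampling) rates, whose outputs are fused so the
    head sees the input at several effective fields of view."""

    def __init__(self, in_channels, num_classes, rates=(6, 12, 18, 24)):
        super().__init__()
        # Each branch: atrous conv -> per-class score map. padding=rate
        # keeps the spatial size unchanged for a 3x3 kernel.
        self.branches = nn.ModuleList(
            nn.Conv2d(in_channels, num_classes, kernel_size=3,
                      padding=r, dilation=r)
            for r in rates
        )

    def forward(self, x):
        # Sum the per-rate score maps (the fusion used in DeepLab-v2).
        return sum(branch(x) for branch in self.branches)

# Example: dense per-pixel class scores over a backbone feature map.
features = torch.randn(1, 2048, 65, 65)   # e.g. a ResNet conv5 output
logits = ASPP(2048, num_classes=21)(features)
print(logits.shape)                        # torch.Size([1, 21, 65, 65])
```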
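
The CRF refinement step can be sketched with the pydensecrf bindings to
Krähenbühl and Koltun's fully connected CRF, on which DeepLab's
post-processing builds; the pairwise parameters below are illustrative
defaults, not the paper's tuned values.

```python
# Sketch of fully connected CRF refinement of DCNN scores (illustrative).
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(image, probs, n_iters=10):
    """image: (H, W, 3) uint8 RGB; probs: (n_labels, H, W) DCNN softmax."""
    n_labels, h, w = probs.shape
    d = dcrf.DenseCRF2D(w, h, n_labels)

    # Unary energy from the DCNN's per-pixel class probabilities.
    d.setUnaryEnergy(unary_from_softmax(probs.astype(np.float32)))

    # Smoothness kernel (positions only) and appearance kernel
    # (positions + RGB), the two potentials of the fully connected CRF.
    d.addPairwiseGaussian(sxy=3, compat=3)
    d.addPairwiseBilateral(sxy=80, srgb=13,
                           rgbim=np.ascontiguousarray(image), compat=10)

    # Mean-field inference; take the MAP label per pixel.
    q = np.array(d.inference(n_iters))
    return q.argmax(axis=0).reshape(h, w)
```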