Wide Field Imaging. I. Applications of Neural Networks to object detection and star/galaxy classification
[Abridged] Astronomical wide-field imaging performed with new large-format CCD
detectors poses data-reduction problems of unprecedented scale, which are
difficult to deal with using traditional interactive tools. We present here NExt
(Neural Extractor): a new neural network (NN) based package capable of detecting
objects and of performing both deblending and star/galaxy classification
automatically. Traditionally, objects in astronomical images are first
discriminated from the noisy background by searching for sets of connected
pixels with brightnesses above a given threshold, and are then classified
as stars or galaxies through diagnostic diagrams whose variables are chosen
according to the astronomer's taste and experience. In the extraction step,
assuming that images are well sampled, NExt requires only the simplest a priori
definition of "what an object is" (i.e., it keeps all structures composed of
more than one pixel) and performs the detection via an unsupervised NN,
approaching detection as a clustering problem, which has been thoroughly studied
in the artificial intelligence literature. In order to obtain an objective and
reliable classification, instead of using an arbitrarily defined set of
features, we use a NN to select the most significant features among the large
number of measured ones, and then use the selected features to perform the
classification task. In order to optimise the performance of the system, we
implemented and tested several different NN models. A comparison of NExt's
performance with that of the best detection and classification package
known to the authors (SExtractor) shows that NExt is at least as effective as
the best traditional packages.
Comment: MNRAS, in press. Paper with higher-resolution images is available at
http://www.na.astro.it/~andreon/listapub.htm
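To make the classical detection step concrete (the threshold-plus-connectivity approach the abstract contrasts with NExt's clustering-based one), here is a minimal, hypothetical sketch; the function name and the choice of 4-connectivity are illustrative assumptions, not taken from the paper:

```python
def detect_objects(image, threshold):
    """Return lists of (row, col) pixels for each connected bright region.

    Illustrative sketch: pixels above `threshold` are grouped by flood fill
    with 4-connectivity; single-pixel structures are dropped, mirroring the
    "more than one pixel" a priori definition mentioned in the abstract.
    """
    rows, cols = len(image), len(image[0])
    seen = set()
    objects = []
    for r in range(rows):
        for c in range(cols):
            if (r, c) in seen or image[r][c] <= threshold:
                continue
            # flood fill from this seed pixel
            stack, component = [(r, c)], []
            seen.add((r, c))
            while stack:
                y, x = stack.pop()
                component.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and (ny, nx) not in seen
                            and image[ny][nx] > threshold):
                        seen.add((ny, nx))
                        stack.append((ny, nx))
            if len(component) > 1:  # keep only multi-pixel structures
                objects.append(component)
    return objects
```

In a real pipeline the threshold is usually set relative to the estimated background noise rather than as an absolute pixel value.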
Neural Nets and Star/Galaxy Separation in Wide Field Astronomical Images
One of the most relevant problems in the extraction of scientifically useful
information from wide field astronomical images (both photographic plates and
CCD frames) is the recognition of the objects against a noisy background and
their classification in unresolved (star-like) and resolved (galaxies) sources.
In this paper we present a neural-network-based method capable of performing
both tasks, and discuss in detail its object-detection performance in a
representative celestial field. The performance of our method is compared to
that of other methodologies often used within the astronomical community.
Comment: 6 pages, to appear in the proceedings of IJCNN 99, IEEE Press, 199
Single-epoch supernova classification with deep convolutional neural networks
Supernovae Type-Ia (SNeIa) play a significant role in exploring the history
of the expansion of the Universe, since they are the best-known standard
candles with which we can accurately measure the distance to the objects.
Finding large samples of SNeIa and investigating their detailed characteristics
have become an important issue in cosmology and astronomy. Existing methods
rely on a photometric approach that first measures the luminance of supernova
candidates precisely and then fits the results to a parametric function of
temporal changes in luminance. However, this inevitably requires multi-epoch
observations and complex luminance measurements. In this work, we present a
novel method for classifying SNeIa simply from single-epoch observation images
without any complex measurements, by effectively integrating the
state-of-the-art computer vision methodology into the standard photometric
approach. Our method first builds a convolutional neural network for estimating
the luminance of supernovae from telescope images, and then constructs another
neural network for the classification, where the estimated luminance and
observation dates are used as features for classification. Both of the neural
networks are integrated into a single deep neural network to classify SNeIa
directly from observation images. Experimental results show the effectiveness
of the proposed method and reveal classification performance comparable to
that of existing photometric methods with multi-epoch observations.
Comment: 7 pages, published as a workshop paper in ICDCS2017, in June 201
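The two-stage idea above can be sketched as a toy stand-in (not the authors' networks): the convolutional luminance estimator is replaced by a crude background-subtracted flux sum, the classification network by a logistic model, and all parameters (`w_lum`, `w_day`, `bias`) are invented for illustration:

```python
import math

def estimate_luminance(cutout):
    """Stand-in for the luminance-estimation CNN: total flux above the
    minimum pixel, used here as a crude background level."""
    background = min(min(row) for row in cutout)
    return sum(p - background for row in cutout for p in row)

def classify_snia(cutout, obs_day, w_lum=0.01, w_day=-0.05, bias=-1.0):
    """Stand-in for the classification network: a logistic model on the
    (estimated luminance, observation date) features. All weights are
    illustrative placeholders."""
    score = w_lum * estimate_luminance(cutout) + w_day * obs_day + bias
    return 1.0 / (1.0 + math.exp(-score))  # pseudo-probability of SNIa
```

The point of the paper is that both stages are differentiable networks trained jointly end-to-end, which this sketch only mimics structurally.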
Photometric redshift estimation based on data mining with PhotoRApToR
Photometric redshifts (photo-z) are crucial to the scientific exploitation of
modern panchromatic digital surveys. In this paper we present PhotoRApToR
(Photometric Research Application To Redshift): a Java/C++ based desktop
application capable of solving non-linear regression and multi-variate
classification problems, specialized in particular for photo-z estimation. It
embeds a machine learning algorithm, namely a multilayer neural network trained
by the Quasi Newton learning rule, and special tools dedicated to pre- and
post-processing of data. PhotoRApToR has been successfully tested on several
scientific cases. The application is available for free download from the DAME
Program web site.
Comment: To appear on Experimental Astronomy, Springer, 20 pages, 15 figure
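As a rough illustration of the regression core, a one-hidden-layer MLP forward pass mapping magnitudes to a photo-z estimate might look like the sketch below; the weights are placeholders (in MLPQNA they would be fitted by a quasi-Newton optimiser on a spectroscopic training sample):

```python
import math

def mlp_photoz(mags, w1, b1, w2, b2):
    """Forward pass of a one-hidden-layer MLP: magnitudes -> photo-z.

    mags: input magnitudes; w1/b1: hidden-layer weights and biases
    (one row per hidden unit); w2/b2: output-layer weights and bias.
    All values here are illustrative, not the tool's trained weights.
    """
    hidden = [math.tanh(sum(w * m for w, m in zip(row, mags)) + b)
              for row, b in zip(w1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2
```

Training would minimise the squared error between these outputs and spectroscopic redshifts over the whole training set.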
New approaches to object classification in synoptic sky surveys
Digital synoptic sky surveys pose several new object classification challenges. In surveys where real-time detection and classification of transient events is a science driver, there is a need for an effective elimination of instrument-related artifacts which can masquerade as transient sources in the detection pipeline, e.g., unremoved large cosmic rays, saturation trails, reflections, crosstalk artifacts, etc. We have implemented such an Artifact Filter, using a supervised neural network,
for the real-time processing pipeline in the Palomar-Quest (PQ) survey. After the training phase, for each object it takes as input a set of measured morphological parameters and returns the probability of it being a real object. Despite the relatively low number of training cases for many kinds of artifacts, the overall artifact classification rate is around 90%, with no genuine transients misclassified during our real-time scans. Another question is how to assign an optimal star-galaxy
classification in a multi-pass survey, where seeing and other conditions change between different epochs, potentially producing inconsistent classifications for the same object. We have implemented a star/galaxy multipass classifier that makes use of external and a priori knowledge to find the optimal classification from the individually derived ones. Both these techniques can be applied to other, similar surveys and data sets.
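One possible (hypothetical) shape for such a multi-pass fusion step, weighting each epoch's label by its confidence and by the seeing quality, is sketched below; the weighting scheme is an illustrative assumption, not the paper's actual rule:

```python
def combine_classifications(epochs):
    """Fuse per-epoch star/galaxy labels into one classification.

    epochs: list of (label, confidence, seeing_arcsec). Better seeing
    (smaller value) gets more weight, a simple way to fold in external
    knowledge about the observing conditions at each epoch.
    """
    votes = {"star": 0.0, "galaxy": 0.0}
    for label, confidence, seeing in epochs:
        votes[label] += confidence / seeing
    return max(votes, key=votes.get)
```

A confidently classified epoch taken in good seeing can thus outvote several poor-seeing epochs that disagree with it.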
CMU DeepLens: Deep Learning For Automatic Image-based Galaxy-Galaxy Strong Lens Finding
Galaxy-scale strong gravitational lensing is not only a valuable probe of the
dark matter distribution of massive galaxies, but can also provide valuable
cosmological constraints, either by studying the population of strong lenses or
by measuring time delays in lensed quasars. Due to the rarity of galaxy-scale
strongly lensed systems, fast and reliable automated lens finding methods will
be essential in the era of large surveys such as LSST, Euclid, and WFIRST. To
tackle this challenge, we introduce CMU DeepLens, a new fully automated
galaxy-galaxy lens finding method based on Deep Learning. This supervised
machine learning approach does not require any tuning after the training step
which only requires realistic image simulations of strongly lensed systems. We
train and validate our model on a set of 20,000 LSST-like mock observations
including a range of lensed systems of various sizes and signal-to-noise ratios
(S/N). We find on our simulated data set that for a rejection rate of
non-lenses of 99%, a completeness of 90% can be achieved for lenses with
Einstein radii larger than 1.4" and S/N larger than 20 on individual -band
LSST exposures. Finally, we emphasize the importance of realistically complex
simulations for training such machine learning methods by demonstrating that
the performance of models of significantly different complexities cannot be
distinguished on simpler simulations. We make our code publicly available at
https://github.com/McWilliamsCenter/CMUDeepLens
Comment: 12 pages, 9 figures, submitted to MNRA
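As an illustration of the kind of building block such a lens finder stacks many times, here is a toy single-layer "CNN" scoring function in pure Python; it is not CMU DeepLens's architecture (a deep residual network), just a minimal sketch of convolution, ReLU, global pooling, and a logistic read-out:

```python
import math

def conv2d_valid(image, kernel):
    """Valid-mode 2D convolution (cross-correlation, deep-learning style)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(kernel[a][b] * image[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def lens_score(image, kernel, weight=1.0, bias=0.0):
    """Toy one-layer scorer: convolution, ReLU, global average pooling,
    then a logistic read-out giving a pseudo-probability of a lens.
    Kernel and read-out parameters are illustrative placeholders."""
    fmap = conv2d_valid(image, kernel)
    relu = [[max(0.0, v) for v in row] for row in fmap]
    pooled = sum(map(sum, relu)) / (len(relu) * len(relu[0]))
    return 1.0 / (1.0 + math.exp(-(weight * pooled + bias)))
```

A real lens finder learns many such kernels per layer and stacks dozens of layers, which is what lets it pick out faint arc-like morphologies.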
Photometric redshift estimation via deep learning
The need to analyze the available large synoptic multi-band surveys drives
the development of new data-analysis methods. Photometric redshift estimation
is one field of application where such new methods have improved the results
substantially. Up to now, the vast majority of applied redshift estimation
methods have utilized photometric features. We aim to develop a method to
derive probabilistic photometric redshifts directly from multi-band imaging
data, rendering pre-classification of objects and feature extraction obsolete.
A modified version of a deep convolutional network was combined with a mixture
density network. The estimates are expressed as Gaussian mixture models
representing the probability density functions (PDFs) in the redshift space. In
addition to the traditional scores, the continuous ranked probability score
(CRPS) and the probability integral transform (PIT) were applied as performance
criteria. We have adopted a feature based random forest and a plain mixture
density network to compare performances on experiments with data from SDSS
(DR9). We show that the proposed method is able to predict redshift PDFs
independently from the type of source, for example galaxies, quasars or stars.
Thereby the prediction performance is better than both presented reference
methods and is comparable to results from the literature. The presented method
is extremely general and allows us to solve any kind of probabilistic
regression problem based on imaging data, for example estimating the
metallicity or star formation rate of galaxies. This kind of methodology is
tremendously important for the next generation of surveys.
Comment: 16 pages, 12 figures, 6 tables. Accepted for publication on A&
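The Gaussian-mixture output described above is easy to sketch: a predicted redshift PDF is a weighted sum of Gaussians, and the PIT is the mixture CDF evaluated at the true redshift. A minimal illustration (not the paper's code):

```python
import math

def gmm_pdf(z, components):
    """Gaussian-mixture redshift PDF.
    components: list of (weight, mean, sigma), with weights summing to 1."""
    return sum(w / (s * math.sqrt(2.0 * math.pi))
               * math.exp(-0.5 * ((z - m) / s) ** 2)
               for w, m, s in components)

def pit(z_true, components):
    """Probability integral transform: the mixture CDF at the true redshift.
    For well-calibrated PDFs, PIT values over a test set are uniform."""
    return sum(w * 0.5 * (1.0 + math.erf((z_true - m) / (s * math.sqrt(2.0))))
               for w, m, s in components)
```

The mixture density network's job is to emit the `(weight, mean, sigma)` triplets for each input image; CRPS and the PIT histogram then quantify how sharp and how well calibrated those PDFs are.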
Astrophysical Data Analytics based on Neural Gas Models, using the Classification of Globular Clusters as Playground
In Astrophysics, the identification of candidate Globular Clusters through
deep, wide-field, single band HST images, is a typical data analytics problem,
where methods based on Machine Learning have shown high efficiency and
reliability, demonstrating the capability to improve on traditional
approaches. Here we experimented with some variants of the well-known Neural
Gas model, exploring both supervised and unsupervised paradigms of Machine
Learning, on the classification of Globular Clusters extracted from the
NGC1399 HST data. The main focus of this work was to use a well-tested
playground to scientifically validate such models for further extended
experiments in astrophysics, using other standard Machine Learning methods
(for instance Random Forest and Multi Layer Perceptron neural networks) for a
comparison of performance in terms of purity and completeness.
Comment: Proceedings of the XIX International Conference "Data Analytics and
Management in Data Intensive Domains" (DAMDID/RCDL 2017), Moscow, Russia,
October 10-13, 2017, 8 pages, 4 figure
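For reference, the unsupervised Neural Gas update that these variants build on moves every prototype toward each sample, with a step size that decays exponentially with the prototype's distance rank; a minimal sketch (parameter values are illustrative):

```python
import math

def neural_gas_step(prototypes, x, eps=0.5, lam=1.0):
    """One Neural Gas update for a single input sample x.

    Prototypes are ranked by squared distance to x; the k-th closest
    moves toward x with step eps * exp(-k / lam). Over many samples,
    with decaying eps and lam, the prototypes settle on the data clusters.
    """
    order = sorted(range(len(prototypes)),
                   key=lambda k: sum((p - xi) ** 2
                                     for p, xi in zip(prototypes[k], x)))
    for rank, k in enumerate(order):
        h = eps * math.exp(-rank / lam)
        prototypes[k] = [p + h * (xi - p) for p, xi in zip(prototypes[k], x)]
    return prototypes
```

Unlike k-means, every prototype is updated on every sample, which makes the method much less sensitive to initialisation.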
PhotoRaptor - Photometric Research Application To Redshifts
Due to the necessity to evaluate photo-z for a variety of huge sky survey
data sets, it seemed important to provide the astronomical community with an
instrument able to fill this gap. Besides the problem of moving massive data
sets over the network, another critical point is that a great part of
astronomical data is stored in private archives that are not fully accessible
on line. So, in order to evaluate photo-z, a desktop application is needed
that can be downloaded and used by everyone locally, i.e. on one's own personal
computer or, more generally, within the local intranet hosted by a data center.
The name chosen for the application is PhotoRApToR, i.e. Photometric Research
Application To Redshift (Cavuoti et al. 2015, 2014; Brescia 2014b). It embeds a
machine learning algorithm and special tools dedicated to pre- and
post-processing of data. The ML model is the MLPQNA (Multi Layer Perceptron
trained by the Quasi Newton Algorithm), which has proven particularly
powerful for photo-z calculation on the basis of a spectroscopic sample
(Cavuoti et al. 2012; Brescia et al. 2013, 2014a; Biviano et al. 2013).
The PhotoRApToR program package is available, for different platforms, at the
official website (http://dame.dsf.unina.it/dame_photoz.html#photoraptor).
Comment: User Manual of the PhotoRaptor tool, 54 pages. arXiv admin note:
substantial text overlap with arXiv:1501.0650
Stellar classification from single-band imaging using machine learning
Information on the spectral types of stars is of great interest in view of
the exploitation of space-based imaging surveys. In this article, we
investigate the classification of stars into spectral types using only the
shape of their diffraction pattern in a single broad-band image. We propose a
supervised machine learning approach to this endeavour, based on principal
component analysis (PCA) for dimensionality reduction, followed by artificial
neural networks (ANNs) estimating the spectral type. Our analysis is performed
with image simulations mimicking the Hubble Space Telescope (HST) Advanced
Camera for Surveys (ACS) in the F606W and F814W bands, as well as the Euclid
VIS imager. We first demonstrate this classification in a simple context,
assuming perfect knowledge of the point spread function (PSF) model and the
possibility of accurately generating mock training data for the machine
learning. We then analyse its performance in a fully data-driven situation, in
which the training would be performed with a limited subset of bright stars
from a survey, and an unknown PSF with spatial variations across the detector.
We use simulations of main-sequence stars with flat distributions in spectral
type and in signal-to-noise ratio, and classify these stars into 13 spectral
subclasses, from O5 to M5. Under these conditions, the algorithm achieves a
high success rate both for Euclid and HST images, with typical errors of half a
spectral class. Although more detailed simulations would be needed to assess
the performance of the algorithm on a specific survey, this shows that stellar
classification from single-band images is indeed possible.
Comment: 10 pages, 9 figures, 2 tables, accepted in A&
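The PCA stage of such a pipeline can be sketched with plain power iteration on the sample covariance; this toy extracts only the first principal component of flattened star cutouts and is an illustration, not the paper's implementation:

```python
import math

def first_principal_component(data, iters=200):
    """Top PCA direction of a list of equal-length feature vectors
    (e.g. flattened star cutouts), via power iteration.

    Applies the covariance C = X^T X / n as X^T (X v) at each step,
    never forming C explicitly, then renormalises.
    """
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centred = [[row[j] - means[j] for j in range(d)] for row in data]
    v = [1.0] * d
    for _ in range(iters):
        xv = [sum(r[j] * v[j] for j in range(d)) for r in centred]
        w = [sum(centred[i][j] * xv[i] for i in range(n)) / n
             for j in range(d)]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    return v
```

In the described pipeline, each cutout's projections onto the leading components would then be the low-dimensional features fed to the ANN that estimates the spectral subclass.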