Identifying Galaxy Mergers in Observations and Simulations with Deep Learning
Mergers are an important aspect of galaxy formation and evolution. We aim to
test whether deep learning techniques can be used to reproduce the visual
classification of observations and the physical classification of simulations,
and to highlight any differences between these two classifications. With one of the
main difficulties of merger studies being the lack of a truth sample, we can
use our method to test biases in visually identified merger catalogues. A
convolutional neural network architecture was developed and trained in two
ways: one with observations from SDSS and one with simulated galaxies from
EAGLE, processed to mimic the SDSS observations. The SDSS images were also
classified by the simulation trained network and the EAGLE images classified by
the observation trained network. The observationally trained network achieves
an accuracy of 91.5% on the visually classified SDSS images, while the
simulation trained network achieves 65.2% on the physically classified EAGLE images.
Classifying the SDSS images with the simulation trained network was less
successful, only achieving an accuracy of 64.6%, while classifying the EAGLE
images with the observation network was very poor, achieving an accuracy of
only 53.0% with preferential assignment to the non-merger classification. This
suggests that most of the simulated mergers do not have conspicuous merger
features and that visually identified merger catalogues from observations are
incomplete and biased towards certain merger types. The networks trained and
tested with the same data perform the best, with observations performing better
than simulations, a result of the observational sample being biased towards
conspicuous mergers. Classifying SDSS observations with the simulation trained
network has proven to work, providing tantalizing prospects for using
simulation trained networks for galaxy identification in large surveys.
Comment: Submitted to A&A, revised after first referee report. 20 pages, 22 figures, 14 tables, 1 appendix
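
As a rough illustration of the setup described above, the sketch below builds a small binary merger/non-merger CNN in TensorFlow/Keras. The input size (64x64 cutouts in three bands), the layer sizes and the training hyperparameters are illustrative assumptions, not the architecture used in the paper; random arrays stand in for the SDSS cutouts and EAGLE mock images.

```python
# Minimal sketch of a binary merger / non-merger CNN classifier.
# Architecture, input shape and hyperparameters are assumptions for illustration.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_merger_cnn(input_shape=(64, 64, 3)):
    """Small convolutional network for two-class galaxy image classification."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # output: P(merger)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Placeholder data standing in for SDSS cutouts or EAGLE mock images.
x_train = np.random.rand(32, 64, 64, 3).astype("float32")
y_train = np.random.randint(0, 2, size=(32,))

model = build_merger_cnn()
model.fit(x_train, y_train, epochs=1, batch_size=8, verbose=0)
```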
Galaxy Zoo: Reproducing Galaxy Morphologies Via Machine Learning
We present morphological classifications obtained using machine learning for
objects in SDSS DR6 that have been classified by Galaxy Zoo into three classes,
namely early types, spirals and point sources/artifacts. An artificial neural
network is trained on a subset of objects classified by the human eye and we
test whether the machine learning algorithm can reproduce the human
classifications for the rest of the sample. We find that the success of the
neural network in matching the human classifications depends crucially on the
set of input parameters chosen for the machine-learning algorithm. The colours
and the parameters associated with profile fitting do a reasonable job of separating the
objects into the three classes. However, these results are considerably improved
when adding adaptive shape parameters as well as concentration and texture. The
adaptive moments, concentration and texture parameters alone cannot distinguish
between early type galaxies and the point sources/artifacts. Using a set of
twelve parameters, the neural network is able to reproduce the human
classifications to better than 90% for all three morphological classes. We find
that using a training set that is incomplete in magnitude does not degrade our
results given our particular choice of the input parameters to the network. We
conclude that it is promising to use machine-learning algorithms to perform
morphological classification for the next generation of wide-field imaging
surveys and that the Galaxy Zoo catalogue provides an invaluable training set
for such purposes.
Comment: 13 pages, 5 figures, 10 tables. Accepted for publication in MNRAS. Revised to match accepted version
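
As a loose illustration of this approach, the sketch below trains a small multilayer perceptron on a table of twelve input parameters with three output classes, using scikit-learn. The feature count matches the abstract, but the network size, preprocessing and synthetic data are assumptions rather than the paper's actual configuration.

```python
# Hedged sketch: a small neural network mapping ~12 photometric/morphological
# parameters (colours, profile-fit parameters, adaptive moments, concentration,
# texture) to three morphological classes. All values below are placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder table: rows = galaxies, columns = 12 input parameters.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))
y = rng.integers(0, 3, size=500)  # 0 = early type, 1 = spiral, 2 = point source/artifact

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
)
clf.fit(X, y)
print("training-set accuracy:", clf.score(X, y))
```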
Decision Tree Classifiers for Star/Galaxy Separation
We study the star/galaxy classification efficiency of 13 different decision
tree algorithms applied to photometric objects in the Sloan Digital Sky Survey
Data Release Seven (SDSS DR7). Each algorithm is defined by a set of parameters
which, when varied, produce different final classification trees. We
extensively explore the parameter space of each algorithm, using the set of
SDSS objects with spectroscopic data as the training set. The
efficiency of star-galaxy separation is measured using the completeness
function. We find that the Functional Tree algorithm (FT) yields the best
results as measured by the mean completeness in two magnitude intervals. We compare the performance of the
tree generated with the optimal FT configuration to the classifications
provided by the SDSS parametric classifier, 2DPHOT and Ball et al. (2006). We
find that our FT classifier is comparable to or better in completeness over the
full magnitude range, with much lower contamination than all but
the Ball et al. classifier. At the faintest magnitudes, our classifier
is the only one able to maintain high completeness (80%) while still
achieving low contamination. Finally, we apply our FT classifier
to separate stars from galaxies in the full set of SDSS
photometric objects over the magnitude range considered.
Comment: Submitted to A
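
The completeness and contamination bookkeeping mentioned above can be made concrete with a short sketch. Here scikit-learn's DecisionTreeClassifier stands in for the Functional Tree (FT) algorithm used in the paper, and the photometric features and labels are synthetic placeholders, not SDSS data.

```python
# Sketch of star/galaxy separation with a decision tree plus the
# completeness/contamination metrics. The tree settings and the features
# (magnitudes, colours, concentration) are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))      # placeholder photometric parameters
y = rng.integers(0, 2, size=1000)   # 1 = galaxy, 0 = star

tree = DecisionTreeClassifier(max_depth=6, min_samples_leaf=20, random_state=1)
tree.fit(X, y)
pred = tree.predict(X)

# Completeness: fraction of true galaxies recovered.
# Contamination: fraction of objects classified as galaxies that are stars.
galaxies = (y == 1)
completeness = np.sum(pred[galaxies] == 1) / np.sum(galaxies)
contamination = np.sum((pred == 1) & (y == 0)) / max(np.sum(pred == 1), 1)
print(f"completeness={completeness:.2f}, contamination={contamination:.2f}")
```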
Galaxy classification: deep learning on the OTELO and COSMOS databases
Context. The accurate classification of hundreds of thousands of galaxies
observed in modern deep surveys is imperative if we want to understand the
universe and its evolution. Aims. Here, we report the use of machine learning
techniques to classify early- and late-type galaxies in the OTELO and COSMOS
databases using optical and infrared photometry and available shape parameters:
either the Sersic index or the concentration index. Methods. We used three
classification methods for the OTELO database: 1) u-r color separation, 2)
linear discriminant analysis using u-r and a shape parameter classification,
and 3) a deep neural network using the r magnitude, several colors, and a shape
parameter. We analyzed the performance of each method by sample bootstrapping
and tested the performance of our neural network architecture using COSMOS
data. Results. The accuracy achieved by the deep neural network is greater than
that of the other classification methods, and it can also operate with missing
data. Our neural network architecture is able to classify both OTELO and COSMOS
datasets regardless of small differences in the photometric bands used in each
catalog. Conclusions. In this study we show that the use of deep neural
networks is a robust method to mine the cataloged data.
Comment: 20 pages, 10 tables, 14 figures, Astronomy and Astrophysics (in press)
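
A minimal sketch of the third method, a dense network fed the r magnitude, several colours and a shape parameter, is given below. Median imputation is used here so the classifier can still operate when photometry is missing; the paper's actual treatment of missing data, feature set and architecture are not reproduced, and these choices are assumptions.

```python
# Hedged sketch: dense classifier for early- vs late-type galaxies that
# tolerates missing input values via median imputation (an assumed strategy).
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 6))            # placeholder: r mag, colours, Sersic index
X[rng.random(X.shape) < 0.1] = np.nan    # simulate missing photometric bands
y = rng.integers(0, 2, size=400)         # 0 = early type, 1 = late type

clf = make_pipeline(
    SimpleImputer(strategy="median"),
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=2),
)
clf.fit(X, y)
print("accuracy on training data:", clf.score(X, y))
```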
First results from the LUCID-Timepix spacecraft payload onboard the TechDemoSat-1 satellite in Low Earth Orbit
The Langton Ultimate Cosmic ray Intensity Detector (LUCID) is a payload
onboard the satellite TechDemoSat-1, used to study the radiation environment in
Low Earth Orbit (635 km). LUCID operated from 2014 to 2017, collecting
over 2.1 million frames of radiation data from its five Timepix detectors on
board. LUCID is one of the first uses of the Timepix detector technology in
open space, with the data providing useful insight into the performance of this
technology in new environments. It provides high-sensitivity imaging
measurements of the mixed radiation field, with a wide dynamic range in terms
of spectral response, particle type and direction. The data has been analysed
using computing resources provided by GridPP, with a new machine learning
algorithm that uses the TensorFlow framework. This algorithm provides a new
approach to processing Medipix data, using a training set of human-labelled
tracks and achieving greater particle classification accuracy than other
algorithms. For managing the LUCID data, we have developed an online platform
called Timepix Analysis Platform at School (TAPAS). This provides a swift and
simple way for users to analyse data that they collect using Timepix detectors
from both LUCID and other experiments. We also present some possible future
uses of the LUCID data and Medipix detectors in space.
Comment: Accepted for publication in Advances in Space Research
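
To make the track-classification idea concrete, here is a hedged TensorFlow/Keras sketch that trains a small CNN on labelled pixel clusters. The 16x16 crop size, the three assumed particle categories and the architecture are illustrative assumptions, not the algorithm actually deployed for the LUCID data.

```python
# Hedged sketch: CNN classifying cropped Timepix pixel clusters into a few
# assumed particle categories, trained on human-labelled examples.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3  # assumed particle categories (e.g. electron, proton, alpha)

model = models.Sequential([
    layers.Input(shape=(16, 16, 1)),  # cropped cluster from a Timepix frame
    layers.Conv2D(16, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder stand-ins for labelled cluster images.
clusters = np.random.rand(64, 16, 16, 1).astype("float32")
labels = np.random.randint(0, NUM_CLASSES, size=(64,))
model.fit(clusters, labels, epochs=1, batch_size=16, verbose=0)
```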