Galaxy classification: deep learning on the OTELO and COSMOS databases
Context. The accurate classification of hundreds of thousands of galaxies
observed in modern deep surveys is imperative if we want to understand the
universe and its evolution. Aims. Here, we report the use of machine learning
techniques to classify early- and late-type galaxies in the OTELO and COSMOS
databases using optical and infrared photometry and available shape parameters:
either the Sersic index or the concentration index. Methods. We used three
classification methods for the OTELO database: 1) u-r color separation, 2)
linear discriminant analysis using u-r and a shape parameter,
and 3) a deep neural network using the r magnitude, several colors, and a shape
parameter. We analyzed the performance of each method by sample bootstrapping
and tested the performance of our neural network architecture using COSMOS
data. Results. The accuracy achieved by the deep neural network is greater than
that of the other classification methods, and it can also operate with missing
data. Our neural network architecture is able to classify both OTELO and COSMOS
datasets regardless of small differences in the photometric bands used in each
catalog. Conclusions. In this study we show that the use of deep neural
networks is a robust method to mine the cataloged data.
Comment: 20 pages, 10 tables, 14 figures, Astronomy and Astrophysics (in press)
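As an illustration of the second method described in the abstract, the early/late-type separation via linear discriminant analysis on a color and a shape parameter can be sketched as follows. The data here are synthetic Gaussians standing in for catalog values (real features would come from the OTELO/COSMOS photometry), and the cluster centers are assumptions chosen only to reflect the well-known trend that early-type galaxies are redder and have higher Sersic indices.

```python
# Sketch of LDA classification on u-r color plus a shape parameter
# (here the Sersic index). All feature values are synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n = 500

# Early-type galaxies: redder (higher u-r) and higher Sersic index.
early = np.column_stack([rng.normal(2.5, 0.3, n),   # u-r color
                         rng.normal(4.0, 0.8, n)])  # Sersic index
# Late-type galaxies: bluer, near-exponential profiles (index ~ 1).
late = np.column_stack([rng.normal(1.3, 0.3, n),
                        rng.normal(1.0, 0.4, n)])

X = np.vstack([early, late])
y = np.array([0] * n + [1] * n)  # 0 = early-type, 1 = late-type

lda = LinearDiscriminantAnalysis().fit(X, y)
print(f"training accuracy: {lda.score(X, y):.2f}")
```

On such well-separated synthetic clusters the linear boundary recovers nearly all labels; the paper's point is that a deep network outperforms this baseline on real, noisier catalog data and tolerates missing entries.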
Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid
Deep neural networks have been widely adopted in recent years, exhibiting
impressive performances in several application domains. It has however been
shown that they can be fooled by adversarial examples, i.e., images altered by
a barely-perceivable adversarial noise, carefully crafted to mislead
classification. In this work, we aim to evaluate the extent to which
robot-vision systems embodying deep-learning algorithms are vulnerable to
adversarial examples, and propose a computationally efficient countermeasure to
mitigate this threat, based on rejecting classification of anomalous inputs. We
then provide a clearer understanding of the safety properties of deep networks
through an intuitive empirical analysis, showing that the mapping learned by
such networks essentially violates the smoothness assumption of learning
algorithms. We finally discuss the main limitations of this work, including the
creation of real-world adversarial examples, and sketch promising research
directions.
Comment: Accepted for publication at the ICCV 2017 Workshop on Vision in Practice on Autonomous Robots (ViPAR)
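The two ideas in this abstract, a gradient-based adversarial perturbation and a countermeasure that rejects anomalous inputs, can be sketched on a toy model. The classifier below is an assumed two-feature logistic model (not the iCub vision pipeline), the perturbation is FGSM-style, and the rejection rule simply thresholds the predicted confidence; weights, inputs, and thresholds are all illustrative choices.

```python
# Toy sketch: FGSM-style adversarial perturbation against a logistic
# classifier, plus confidence-based rejection of anomalous inputs.
import numpy as np

w = np.array([3.0, -1.0])  # assumed weights of a "trained" model
b = 0.1

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm(x, y, eps):
    # For logistic loss, the input gradient is (p - y) * w;
    # FGSM steps in the sign of that gradient to increase the loss.
    p = predict_proba(x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

def classify_with_rejection(x, threshold=0.9):
    p = predict_proba(x)
    conf = max(p, 1.0 - p)
    if conf < threshold:
        return "reject"  # low-confidence / anomalous input
    return 1 if p >= 0.5 else 0

x = np.array([1.0, 0.2])       # clean input, true label 1
x_adv = fgsm(x, y=1, eps=0.6)  # adversarially perturbed copy

print(classify_with_rejection(x))      # -> 1 (confident, accepted)
print(classify_with_rejection(x_adv))  # -> "reject"
```

The perturbation drags the input toward the decision boundary, so its confidence drops below the threshold and the sample is rejected rather than misclassified, which is the spirit of the countermeasure the abstract proposes.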