Exploring the spectroscopic diversity of type Ia supernovae with DRACULA: a machine learning approach
The existence of multiple subclasses of type Ia supernovae (SNeIa) has been
the subject of great debate in the last decade. One major challenge inevitably
met when trying to infer the existence of one or more subclasses is the
time-consuming and subjective process of subclass definition. In this work, we
show how machine learning tools facilitate the identification of SNeIa subtypes
through the establishment of a hierarchical group structure in the continuous
space of spectral diversity formed by these objects. Using Deep Learning, we
were able to perform this identification in a 4-dimensional feature space
(+1 for time evolution), whereas standard Principal Component Analysis barely
achieves similar results using 15 principal components. This is evidence that
the progenitor system and the explosion mechanism can be described by a small
number of initial physical parameters. As a proof of concept, we show that our
results are in close agreement with a previously suggested classification
scheme and that our proposed method can grasp the main spectral features behind
the definition of such subtypes. This confirms line velocity as a first-order
effect in the determination of SNIa subtypes,
followed by 91bg-like events. Given the expected data deluge in the forthcoming
years, our proposed approach is essential to allow a quick and statistically
coherent identification of SNeIa subtypes (and outliers). All tools used in
this work were made publicly available in the Python package Dimensionality
Reduction And Clustering for Unsupervised Learning in Astronomy (DRACULA) and
can be found within COINtoolbox (https://github.com/COINtoolbox/DRACULA).

Comment: 16 pages, 12 figures, accepted for publication in MNRAS
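The pipeline this abstract describes, dimensionality reduction followed by hierarchical clustering in the reduced feature space, can be sketched with off-the-shelf tools. The synthetic "spectra" and all parameters below are illustrative stand-ins, not the paper's data or the DRACULA API:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

# Synthetic stand-in for a spectral matrix: 200 "spectra" x 1000 wavelength bins
rng = np.random.default_rng(0)
spectra = rng.normal(size=(200, 1000))
spectra[:100] += np.sin(np.linspace(0, 10, 1000))  # imprint a crude "subtype" signature

# Step 1: dimensionality reduction (plain PCA here; the paper's point is that a
# deep-learning embedding needs far fewer components than PCA's ~15)
features = PCA(n_components=15).fit_transform(spectra)

# Step 2: hierarchical clustering in the reduced feature space
labels = AgglomerativeClustering(n_clusters=2).fit_predict(features)
# labels now assigns one candidate-subtype label per input spectrum
```

Swapping the PCA step for an autoencoder embedding, while keeping the same clustering step, is the comparison the abstract reports.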
Neural Collaborative Subspace Clustering
We introduce the Neural Collaborative Subspace Clustering, a neural model
that discovers clusters of data points drawn from a union of low-dimensional
subspaces. In contrast to previous attempts, our model runs without the aid of
spectral clustering. This makes our algorithm one of the few that can
gracefully scale to large datasets. At its heart, our neural model benefits
from a classifier which determines whether a pair of points lies on the same
subspace or not. Essential to our model is the construction of two affinity
matrices, one from the classifier and the other from a notion of subspace
self-expressiveness, to supervise training in a collaborative scheme. We
thoroughly assess and contrast the performance of our model against various
state-of-the-art clustering algorithms including deep subspace-based ones.

Comment: Accepted to ICML 2019
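The notion of subspace self-expressiveness mentioned above (each point is reconstructed as a combination of other points, and large coefficients link points on the same subspace) can be illustrated with a ridge-regularised least-squares variant. This is a deliberate simplification: the paper's neural model and its classifier-based affinity are not reproduced here.

```python
import numpy as np

def self_expressive_affinity(X, lam=0.1):
    """Least-squares self-expressiveness: solve X ~= X @ C with a ridge
    penalty, zero the diagonal so no point explains itself, and symmetrise
    |C| into a pairwise affinity matrix. X has one data point per column."""
    n = X.shape[1]
    gram = X.T @ X
    C = np.linalg.solve(gram + lam * np.eye(n), gram)  # (G + lam*I)^-1 G
    np.fill_diagonal(C, 0.0)              # a point must not explain itself
    return 0.5 * (np.abs(C) + np.abs(C).T)

# Points drawn from two 1-D subspaces (orthogonal lines) in R^3
rng = np.random.default_rng(1)
d1, d2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
X = np.column_stack([d1 * t for t in rng.normal(size=5)] +
                    [d2 * t for t in rng.normal(size=5)])
A = self_expressive_affinity(X)
# Within-subspace affinities are nonzero; cross-subspace entries vanish,
# so the affinity matrix is block-diagonal up to numerical precision.
```

In the full method, such an affinity would supervise the classifier (and vice versa) in the collaborative scheme the abstract describes, rather than being fed to spectral clustering.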