Learning Generative Models across Incomparable Spaces
Generative Adversarial Networks have shown remarkable success in learning a
distribution that faithfully recovers a reference distribution in its entirety.
However, in some cases, we may want to only learn some aspects (e.g., cluster
or manifold structure), while modifying others (e.g., style, orientation or
dimension). In this work, we propose an approach to learn generative models
across such incomparable spaces, and demonstrate how to steer the learned
distribution towards target properties. A key component of our model is the
Gromov-Wasserstein distance, a notion of discrepancy that compares
distributions relationally rather than absolutely. While this framework
subsumes current generative models in identically reproducing distributions,
its inherent flexibility allows application to tasks in manifold learning,
relational learning and cross-domain learning.
Comment: International Conference on Machine Learning (ICML)
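The Gromov-Wasserstein idea of comparing distributions "relationally" can be made concrete: instead of comparing points across spaces directly, one compares the pairwise distance matrices within each space. The sketch below evaluates the GW objective for a *given* coupling T (the abstract's model optimizes over couplings; here the coupling and the toy point clouds are illustrative assumptions):

```python
import math

def pairwise_dists(points):
    """Euclidean distance matrix of a point cloud (list of coordinate tuples)."""
    n = len(points)
    return [[math.dist(points[i], points[j]) for j in range(n)] for i in range(n)]

def gw_objective(C1, C2, T):
    """Gromov-Wasserstein discrepancy of coupling T between two metric spaces
    with distance matrices C1 (n x n) and C2 (m x m):
        sum_{i,j,k,l} (C1[i][k] - C2[j][l])**2 * T[i][j] * T[k][l]
    It compares distances-to-distances, so the spaces may have different
    dimensions ("incomparable spaces")."""
    n, m = len(C1), len(C2)
    return sum((C1[i][k] - C2[j][l]) ** 2 * T[i][j] * T[k][l]
               for i in range(n) for j in range(m)
               for k in range(n) for l in range(m))

# A 2-D triangle and the same triangle embedded in 3-D: under the identity
# coupling their relational structure matches, so the objective is zero.
A = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0)]
B = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 0.0, 0.0)]
T_id = [[1/3 if i == j else 0.0 for j in range(3)] for i in range(3)]
```

Note that the dimensions of the two spaces differ (2 vs. 3), which is exactly the setting an absolute metric such as the ordinary Wasserstein distance cannot handle.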
Radar-based Feature Design and Multiclass Classification for Road User Recognition
The classification of individual traffic participants is a complex task,
especially in challenging scenarios with multiple road users or under bad
weather conditions. Radar sensors provide a way of measuring such scenes that
is orthogonal to well-established camera systems. To obtain accurate
classification results, 50 different features are extracted from the
measurement data and evaluated for their performance. From these features a
suitable subset is chosen and passed to random forest and long short-term
memory (LSTM) classifiers to obtain class predictions for the radar input.
Moreover, it is shown why data imbalance is an inherent problem in automotive
radar classification when the dataset is not sufficiently large. To overcome
this issue, classifier binarization is used among other techniques in order to
better account for underrepresented classes. A new method to couple the
resulting probabilities is proposed and shown to compare favorably with
existing approaches.
Final results show substantial improvements when compared to ordinary
multiclass classification.
Comment: 8 pages, 6 figures
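Classifier binarization turns a K-class problem into pairwise (one-vs-one) binary problems, whose probability outputs must then be coupled back into a single class distribution. The paper proposes a new coupling method; the sketch below shows only a simple voting-style baseline (averaging each class's pairwise wins), with a hypothetical dictionary format for the pairwise outputs:

```python
def couple_pairwise(r, n_classes):
    """Combine pairwise probabilities r[(i, j)] = P(class i | i-vs-j classifier)
    into one distribution over all classes by summing each class's pairwise
    wins and normalizing. This is a simple baseline coupling, not the new
    method proposed in the paper."""
    scores = [0.0] * n_classes
    for (i, j), p in r.items():
        scores[i] += p          # class i's share of the i-vs-j decision
        scores[j] += 1.0 - p    # class j gets the complement
    total = sum(scores)
    return [s / total for s in scores]

# Three classes -> three pairwise classifiers (0-vs-1, 0-vs-2, 1-vs-2).
pairwise = {(0, 1): 0.9, (0, 2): 0.8, (1, 2): 0.6}
```

Class 0 wins both of its pairwise contests decisively, so it receives the highest coupled probability; underrepresented classes benefit because each binary problem can be balanced individually.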
On Machine-Learned Classification of Variable Stars with Sparse and Noisy Time-Series Data
With the coming data deluge from synoptic surveys, there is a growing need
for frameworks that can quickly and automatically produce calibrated
classification probabilities for newly-observed variables based on a small
number of time-series measurements. In this paper, we introduce a methodology
for variable-star classification, drawing from modern machine-learning
techniques. We describe how to homogenize the information gleaned from light
curves by selection and computation of real-numbered metrics ("features"),
detail methods to robustly estimate periodic light-curve features, introduce
tree-ensemble methods for accurate variable star classification, and show how
to rigorously evaluate the classification results using cross validation. On a
25-class data set of 1542 well-studied variable stars, we achieve a 22.8%
overall classification error using the random forest classifier; this
represents a 24% improvement over the best previous classifier on these data.
This methodology is effective for identifying samples of specific science
classes: for pulsational variables used in Milky Way tomography we obtain a
discovery efficiency of 98.2% and for eclipsing systems we find an efficiency
of 99.1%, both at 95% purity. We show that the random forest (RF) classifier is
superior to other machine-learned methods in terms of accuracy, speed, and
relative immunity to features with no useful class information; the RF
classifier can also be used to estimate the importance of each feature in
classification. Additionally, we present the first astronomical use of
hierarchical classification methods to incorporate a known class taxonomy in
the classifier, which further reduces the catastrophic error rate to 7.8%.
Excluding low-amplitude sources, our overall error rate improves to 14%, with a
catastrophic error rate of 3.5%.
Comment: 23 pages, 9 figures
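The evaluation protocol described above, holding out each fold once and averaging the error, can be sketched compactly. The classifier below is a 1-nearest-neighbor stand-in for the paper's random forest (chosen only to keep the sketch dependency-free); the CV mechanics are the point:

```python
import math

def one_nn_predict(train, query):
    """Label of the nearest training example by Euclidean distance.
    Each training example is a (features, label) pair."""
    return min(train, key=lambda ex: math.dist(ex[0], query))[1]

def kfold_error(data, k=5):
    """Estimate classification error by k-fold cross validation: each fold
    is held out once as a test set while the remaining folds train the
    classifier, so every example is scored exactly once."""
    folds = [data[i::k] for i in range(k)]      # interleaved folds
    errors = total = 0
    for i, test_fold in enumerate(folds):
        train = [ex for j, f in enumerate(folds) if j != i for ex in f]
        for features, label in test_fold:
            errors += one_nn_predict(train, features) != label
            total += 1
    return errors / total

# Synthetic two-class light-curve "feature" vectors, well separated.
data = ([((float(x), 0.0), "pulsator") for x in range(10)] +
        [((float(x) + 100.0, 0.0), "eclipsing") for x in range(10)])
```

On real variable-star features the folds should additionally be stratified by class, since rare classes may otherwise vanish from a training split.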
Multi-TGDR: a regularization method for multi-class classification in microarray experiments
Background
With microarray technology becoming mature and popular, the selection and use
of a small number of relevant genes for accurate classification of samples is a
a topic of intense interest in biostatistics and bioinformatics. However, most
of the developed algorithms cannot handle multiple classes, which is arguably
a common application. Here, we propose an extension to an existing
regularization algorithm called Threshold Gradient Descent Regularization
(TGDR) to specifically tackle multi-class classification of microarray data.
When several microarray experiments address the same or similar objectives,
one option is to use the meta-analysis version of TGDR (Meta-TGDR), which
treats the classification task as a combination of classifiers with the same
structure/model while allowing the parameters to vary across studies.
However, the original Meta-TGDR extension did not provide a way to make
predictions on independent samples. Here, we propose an explicit method to
estimate the overall coefficients of the biomarkers selected by Meta-TGDR. This
extension permits broader applicability and allows a comparison between the
predictive performance of Meta-TGDR and TGDR using an independent testing set.
Results
Using real-world applications, we demonstrate that the proposed multi-TGDR
framework works well and that the number of selected genes is smaller than the
sum over all individual binary TGDRs. Additionally, Meta-TGDR and TGDR on the
batch-effect-adjusted pooled data provide approximately the same results. By
adding a bagging procedure in each application, stability and good predictive
performance are ensured.
Conclusions
Compared with Meta-TGDR, TGDR is less computationally intensive and does not
require samples from all classes in each study. On the adjusted data, its
predictive performance is approximately the same as Meta-TGDR's. Thus, it is
highly recommended.
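The core TGDR mechanism is simple: run gradient descent, but at each step update only the coefficients whose gradient magnitude is within a factor tau of the largest, which drives most coefficients to stay exactly zero (gene selection). A minimal sketch for the binary logistic case follows; the multi-class extension in the paper replaces the logistic loss with a multinomial one, and all data here is a toy assumption:

```python
import math

def tgdr_fit(X, y, tau=0.8, step=0.1, iters=200):
    """Threshold Gradient Descent Regularization for binary logistic
    regression. tau in [0, 1] controls sparsity: tau = 0 is plain gradient
    descent; tau near 1 updates only the steepest coordinate per step."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        # Gradient of the average negative log-likelihood.
        grad = [0.0] * p
        for xi, yi in zip(X, y):
            pred = 1.0 / (1.0 + math.exp(-sum(b * x for b, x in zip(beta, xi))))
            for j in range(p):
                grad[j] += (pred - yi) * xi[j] / n
        gmax = max(abs(g) for g in grad)
        if gmax == 0.0:
            break
        for j in range(p):
            if abs(grad[j]) >= tau * gmax:   # threshold gate: update only
                beta[j] -= step * grad[j]    # near-maximal coordinates
    return beta

# Toy data: feature 0 is informative, feature 1 is noise.
X = [(1.0, 0.3), (2.0, -0.1), (-1.0, 0.2), (-2.0, -0.3)]
y = [1, 1, 0, 0]
```

Because the noise feature's gradient never comes close to the informative feature's, the threshold gate leaves its coefficient near zero, mimicking the gene selection behavior described in the abstract.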