39,523 research outputs found
A random matrix analysis and improvement of semi-supervised learning for large dimensional data
This article provides an original understanding of the behavior of a class of
graph-oriented semi-supervised learning algorithms in the limit of large and
numerous data. It is demonstrated that the intuition at the root of these
methods collapses in this limit and that, as a result, most of them become
inconsistent. Corrective measures and a new data-driven parametrization scheme
are proposed, along with a theoretical analysis of the asymptotic performance
of the resulting approach. A surprisingly close agreement between theoretical
performance on Gaussian mixture models and empirical performance on real
datasets is also illustrated throughout the article, suggesting the relevance
of the proposed analysis for practical data. As a result, significant
performance gains are observed on practical data classification using the
proposed parametrization.
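The class of algorithms the abstract refers to can be illustrated with a minimal Zhou-style label-propagation sketch on a two-component Gaussian mixture. This is a generic baseline, not the paper's corrected parametrization; the function name, kernel bandwidth `sigma`, and damping factor `alpha` are illustrative choices.

```python
import numpy as np

def propagate_labels(X, y, labeled_idx, sigma=1.0, alpha=0.99, n_iter=200):
    """Graph-based semi-supervised classification: build a Gaussian-kernel
    graph and iterate F <- alpha * S @ F + (1 - alpha) * Y."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))       # Gaussian affinity matrix
    np.fill_diagonal(W, 0.0)
    D = W.sum(1)
    S = W / np.sqrt(np.outer(D, D))            # symmetric normalization
    classes = np.unique(y[labeled_idx])
    Y = np.zeros((n, classes.size))            # one-hot labels, zero if unlabeled
    Y[labeled_idx, np.searchsorted(classes, y[labeled_idx])] = 1.0
    F = Y.copy()
    for _ in range(n_iter):                    # damped propagation to convergence
        F = alpha * S @ F + (1.0 - alpha) * Y
    return classes[F.argmax(1)]

# Two well-separated Gaussian clusters, only 10 of 200 points labeled.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 1.0, (100, 2)), rng.normal(2.0, 1.0, (100, 2))])
y = np.repeat([0, 1], 100)
labeled_idx = np.r_[0:5, 100:105]
pred = propagate_labels(X, y, labeled_idx)
acc = (pred == y).mean()
```

On this easy Gaussian mixture the propagation recovers almost all labels from the 10 seeds; the paper's point is that as the dimension and sample size both grow, the intuition behind this update breaks down without correction.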
A Graph-Based Semi-Supervised k Nearest-Neighbor Method for Nonlinear Manifold Distributed Data Classification
k Nearest Neighbors (kNN) is one of the most widely used supervised
learning algorithms to classify Gaussian distributed data, but it does not
achieve good results when it is applied to nonlinear manifold distributed data,
especially when a very limited amount of labeled samples are available. In this
paper, we propose a new graph-based kNN algorithm which can effectively
handle both Gaussian distributed data and nonlinear manifold distributed data.
To achieve this goal, we first propose a constrained Tired Random Walk (TRW) by
constructing an -level nearest-neighbor strengthened tree over the graph,
and then compute a TRW matrix for similarity measurement purposes. After this,
the k nearest neighbors are identified according to the TRW matrix, and the
class label of a query point is determined by the sum of the TRW weights of
its k nearest neighbors. To deal with online situations, we also propose a new
algorithm to handle sequential samples based on local neighborhood
reconstruction. Comparison experiments are conducted on both synthetic data
sets and real-world data sets to demonstrate the validity of the proposed new
kNN algorithm and its improvement over other versions of the kNN algorithm.
Given the widespread appearance of manifold structures in real-world problems
and the popularity of the traditional kNN algorithm, the proposed manifold
version of kNN shows promising potential for classifying manifold-distributed
data.
Comment: 32 pages, 12 figures, 7 tables
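A geometrically damped ("tired") random walk can be accumulated in closed form as (I - αP)⁻¹, where P is the transition matrix of an affinity graph. The sketch below uses that closed form plus a plain Gaussian-kernel graph; the paper's constrained tree construction over the graph is not reproduced, and the function names, `sigma`, `alpha`, and the two-moons toy data are illustrative assumptions.

```python
import numpy as np

def trw_matrix(X, sigma=0.3, alpha=0.95):
    # Transition matrix P of a Gaussian-kernel graph; the tired walk
    # accumulates damped steps sum_t (alpha * P)^t = (I - alpha * P)^{-1}.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    P = W / W.sum(1, keepdims=True)
    return np.linalg.inv(np.eye(len(X)) - alpha * P)

def trw_knn_predict(T, y, labeled_idx, query_idx, k=5):
    # A query's class is the one with the largest sum of TRW weights
    # among its k most TRW-similar labeled points.
    preds = []
    for q in query_idx:
        top = labeled_idx[np.argsort(T[q, labeled_idx])[-k:]]
        scores = {c: T[q, top[y[top] == c]].sum()
                  for c in np.unique(y[labeled_idx])}
        preds.append(max(scores, key=scores.get))
    return np.array(preds)

# Two noisy interleaved half-moons: a simple nonlinear manifold dataset.
rng = np.random.default_rng(1)
t = rng.uniform(0.0, np.pi, 100)
upper = np.c_[np.cos(t), np.sin(t)] + rng.normal(0.0, 0.1, (100, 2))
lower = np.c_[1.0 - np.cos(t), 0.5 - np.sin(t)] + rng.normal(0.0, 0.1, (100, 2))
X = np.vstack([upper, lower])
y = np.repeat([0, 1], 100)
labeled_idx = np.r_[0:5, 100:105]                 # 10 labeled points in total
query_idx = np.setdiff1d(np.arange(200), labeled_idx)
T = trw_matrix(X)
pred = trw_knn_predict(T, y, labeled_idx, query_idx)
acc = (pred == y[query_idx]).mean()
```

Because the walk is damped rather than absorbed, similarity mass flows along the manifold, which is what lets TRW-weighted kNN succeed where Euclidean kNN fails on curved class boundaries.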
A systematic comparison of supervised classifiers
Pattern recognition techniques have been employed in a myriad of industrial,
medical, commercial and academic applications. To tackle such a diversity of
data, many techniques have been devised. However, despite the long tradition of
pattern recognition research, there is no technique that yields the best
classification in all scenarios. Therefore, the consideration of as many as
possible techniques presents itself as a fundamental practice in applications
aiming at high accuracy. Typical works comparing methods either emphasize the
performance of a given algorithm in validation tests or systematically compare
various algorithms, assuming that the practical use of these methods is done by
experts. On many occasions, however, researchers have to deal with their
practical classification tasks without in-depth knowledge of the
mechanisms underlying the parameters. Actually, the adequate choice of
classifiers and parameters alike in such practical circumstances constitutes a
long-standing problem and is the subject of the current paper. We carried out a
study on the performance of nine well-known classifiers implemented by the Weka
framework and compared the dependence of their accuracy on their
parameter configurations. The analysis of performance with default parameters
revealed that the k-nearest neighbors method exceeds by a large margin the
other methods when high dimensional datasets are considered. When other
configurations of parameters were allowed, we found that it is possible to
improve the quality of SVM by more than 20% even if parameters are set
randomly. Taken together, the investigation conducted in this paper suggests
that, apart from the SVM implementation, Weka's default configuration of
parameters provides performance close to that achieved with the optimal
configuration.
Semi-supervised Learning for Photometric Supernova Classification
We present a semi-supervised method for photometric supernova typing. Our
approach is to first use the nonlinear dimension reduction technique diffusion
map to detect structure in a database of supernova light curves and
subsequently employ random forest classification on a spectroscopically
confirmed training set to learn a model that can predict the type of each newly
observed supernova. We demonstrate that this is an effective method for
supernova typing. As supernova numbers increase, our semi-supervised method
efficiently utilizes this information to improve classification, a property not
enjoyed by template based methods. Applied to supernova data simulated by
Kessler et al. (2010b) to mimic those of the Dark Energy Survey, our methods
achieve (cross-validated) 95% Type Ia purity and 87% Type Ia efficiency on the
spectroscopic sample, but only 50% Type Ia purity and 50% efficiency on the
photometric sample due to their spectroscopic follow-up strategy. To improve
the performance on the photometric sample, we search for better spectroscopic
follow-up procedures by studying the sensitivity of our machine learned
supernova classification on the specific strategy used to obtain training sets.
With a fixed amount of spectroscopic follow-up time, we find that deeper
magnitude-limited spectroscopic surveys are better for producing training sets.
For supernova Ia (II-P) typing, we obtain a 44% (1%) increase in purity to 72%
(87%) and 30% (162%) increase in efficiency to 65% (84%) of the sample using a
25th (24.5th) magnitude-limited survey instead of the shallower spectroscopic
sample used in the original simulations. When redshift information is
available, we incorporate it into our analysis using a novel method of altering
the diffusion map representation of the supernovae. Incorporating host
redshifts leads to a 5% improvement in Type Ia purity and 13% improvement in
Type Ia efficiency.
Comment: 16 pages, 11 figures, accepted for publication in MNRAS
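The two-stage pipeline (nonlinear dimension reduction, then random forest classification) can be sketched as below. This is a bare-bones diffusion map, not the paper's tuned implementation; the kernel scale `eps`, the number of components, and the Gaussian toy features standing in for supernova light curves are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def diffusion_map(X, eps=5.0, n_components=3):
    # Gaussian kernel -> row-stochastic Markov matrix -> embed each point
    # with the leading nontrivial eigenvectors, scaled by their eigenvalues.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    P = K / K.sum(1, keepdims=True)
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    idx = order[1:n_components + 1]     # skip the trivial constant eigenvector
    return vecs.real[:, idx] * vals.real[idx]

# Toy stand-in for light-curve features: two classes of 5-d Gaussian points.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, (80, 5)), rng.normal(3.0, 1.0, (80, 5))])
y = np.repeat([0, 1], 80)

emb = diffusion_map(X)                   # stage 1: nonlinear embedding
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(emb, y)
acc = (clf.predict(emb) == y).mean()     # resubstitution accuracy on toy data
```

The semi-supervised leverage comes from stage 1: the diffusion map is computed on all objects, labeled or not, so growing photometric samples sharpen the embedding even before any new spectroscopic labels arrive.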