Theoretical analyses of cross-validation error and voting in instance-based learning
This paper begins with a general theory of error in cross-validation testing of algorithms
for supervised learning from examples. It is assumed that the examples are described by
attribute-value pairs, where the values are symbolic. Cross-validation requires a set of
training examples and a set of testing examples. The value of the attribute that is to be
predicted is known to the learner in the training set, but unknown in the testing set. The
theory demonstrates that cross-validation error has two components: error on the training
set (inaccuracy) and sensitivity to noise (instability).
This general theory is then applied to voting in instance-based learning. Given an
example in the testing set, a typical instance-based learning algorithm predicts the designated
attribute by voting among the k nearest neighbors (the k most similar examples) to
the testing example in the training set. Voting is intended to increase the stability (resistance
to noise) of instance-based learning, but a theoretical analysis shows that there are
circumstances in which voting can be destabilizing. The theory suggests ways to minimize
cross-validation error, by ensuring that voting is stable and does not adversely affect
accuracy.
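The voting scheme described above is easy to state concretely. The following is a minimal sketch, assuming symbolic attributes compared with a simple match-count (Hamming) similarity and an unweighted majority vote; the function names and toy data are illustrative, not taken from the paper.

```python
from collections import Counter

def hamming_similarity(a, b):
    """Count matching symbolic attribute values between two examples."""
    return sum(1 for x, y in zip(a, b) if x == y)

def knn_vote(train, labels, query, k=3):
    """Predict the designated attribute of `query` by majority vote
    among its k most similar training examples."""
    ranked = sorted(range(len(train)),
                    key=lambda i: hamming_similarity(train[i], query),
                    reverse=True)
    votes = Counter(labels[i] for i in ranked[:k])
    return votes.most_common(1)[0][0]

# Toy symbolic data: two attributes (colour, shape), one class label.
train = [("red", "round"), ("red", "square"), ("blue", "round"), ("blue", "square")]
labels = ["fruit", "block", "fruit", "block"]
print(knn_vote(train, labels, ("red", "round"), k=3))  # -> "fruit"
```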
Probabilistic Inference from Arbitrary Uncertainty using Mixtures of Factorized Generalized Gaussians
This paper presents a general and efficient framework for probabilistic
inference and learning from arbitrary uncertain information. It exploits the
calculation properties of finite mixture models, conjugate families and
factorization. Both the joint probability density of the variables and the
likelihood function of the (objective or subjective) observation are
approximated by a special mixture model, in such a way that any desired
conditional distribution can be directly obtained without numerical
integration. We have developed an extended version of the expectation
maximization (EM) algorithm to estimate the parameters of mixture models from
uncertain training examples (indirect observations). As a consequence, any
piece of exact or uncertain information about both input and output values is
consistently handled in the inference and learning stages. This ability,
extremely useful in certain situations, is not found in most alternative
methods. The proposed framework is formally justified from standard
probabilistic principles and illustrative examples are provided in the fields
of nonparametric pattern classification, nonlinear regression and pattern
completion. Finally, experiments on a real application and comparative results
over standard databases provide empirical evidence of the utility of the method
in a wide range of applications.
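As a rough illustration of why factorized mixtures make conditional inference integration-free, the sketch below fits an ordinary diagonal-covariance Gaussian mixture with scikit-learn's standard EM (plain rather than generalized Gaussians, and not the paper's extended EM for uncertain observations) and reads off E[y | x] in closed form by reweighting the component means.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.stats import norm

# Toy data: x is the input, y = sin(x) + noise is the output.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=500)
y = np.sin(x) + 0.1 * rng.normal(size=500)
data = np.column_stack([x, y])

# Diagonal covariances => each component factorizes over x and y.
gmm = GaussianMixture(n_components=8, covariance_type="diag",
                      random_state=0).fit(data)

def conditional_mean(x0):
    """E[y | x = x0] in closed form: responsibilities reweight the
    per-component means of y, with no numerical integration."""
    mu_x, mu_y = gmm.means_[:, 0], gmm.means_[:, 1]
    sd_x = np.sqrt(gmm.covariances_[:, 0])
    w = gmm.weights_ * norm.pdf(x0, loc=mu_x, scale=sd_x)
    w /= w.sum()
    return np.sum(w * mu_y)

print(conditional_mean(1.0))  # should be close to sin(1.0) ≈ 0.84
```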
On the usage of the probability integral transform to reduce the complexity of multi-way fuzzy decision trees in Big Data classification problems
We present a new distributed fuzzy partitioning method to reduce the
complexity of multi-way fuzzy decision trees in Big Data classification
problems. The proposed algorithm builds a fixed number of fuzzy sets for all
variables and adjusts their shape and position to the real distribution of
training data. A two-step process is applied: 1) transformation of the
original distribution into a standard uniform distribution by means of the
probability integral transform. Since the original distribution is generally
unknown, the cumulative distribution function is approximated by computing the
q-quantiles of the training set; 2) construction of a Ruspini strong fuzzy
partition in the transformed attribute space using a fixed number of equally
distributed triangular membership functions. Despite the aforementioned
transformation, the definition of every fuzzy set in the original space can be
recovered by applying the inverse cumulative distribution function (also known
as quantile function). The experimental results reveal that the proposed
methodology allows the state-of-the-art multi-way fuzzy decision tree (FMDT)
induction algorithm to maintain classification accuracy with up to 6 million
fewer leaves.
Comment: Appeared in 2018 IEEE International Congress on Big Data (BigData Congress).
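The two-step construction can be sketched for a single numeric attribute as follows, assuming a plain in-memory implementation rather than the paper's distributed one; the quantile count, number of fuzzy sets, and toy data are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
values = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)  # skewed attribute

# Step 1: probability integral transform via an empirical CDF
# approximated with q-quantiles of the training data.
q = 100
quantiles = np.quantile(values, np.linspace(0.0, 1.0, q + 1))
def to_uniform(v):
    """Approximate F(v) by interpolating the empirical quantiles."""
    return np.interp(v, quantiles, np.linspace(0.0, 1.0, q + 1))

# Step 2: Ruspini strong partition with equally spaced triangular
# membership functions in the transformed [0, 1] space.
n_sets = 5
cores_u = np.linspace(0.0, 1.0, n_sets)  # triangle peaks in [0, 1]
def memberships(v):
    u = to_uniform(v)
    width = 1.0 / (n_sets - 1)
    return np.clip(1.0 - np.abs(u - cores_u) / width, 0.0, 1.0)  # sums to 1

# The fuzzy-set cores in the original attribute space are recovered with
# the quantile function (inverse CDF), here the empirical quantiles.
cores_original = np.quantile(values, cores_u)

print(memberships(np.median(values)))  # middle fuzzy set dominates
print(cores_original)
```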