Evolving artificial datasets to improve interpretable classifiers
Differential Evolution can be used to construct effective and compact artificial training datasets for machine learning algorithms. In this paper, a series of comparative experiments is performed in which two simple interpretable supervised classifiers (specifically, Naive Bayes and linear Support Vector Machines) are trained (i) directly on "real" data, as would normally be the case, and (ii) indirectly, using special artificial datasets derived from real data via evolutionary optimization. The results across several challenging test problems show that supervised classifiers trained indirectly using our novel evolution-based approach produce models with superior predictive classification performance. Besides presenting the accuracy of the learned models, we also analyze the sensitivity of our artificial data optimization process to Differential Evolution's parameters, and then we examine the statistical characteristics of the evolved artificial data.
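The indirect-training idea above can be sketched in a few lines. This is a minimal illustration, not the paper's method: the toy dataset (two moons), the artificial-set size (two points per class), and the use of SciPy's `differential_evolution` as the optimizer are all assumptions made for the example.

```python
# Sketch: evolve a tiny artificial training set whose fitness is the
# accuracy of a Naive Bayes model trained on it, evaluated on real data.
# Dataset, set size, and optimizer settings are illustrative assumptions.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_moons
from sklearn.naive_bayes import GaussianNB

X_real, y_real = make_moons(n_samples=200, noise=0.25, random_state=0)

K = 2                                # artificial points per class
y_art = np.repeat([0, 1], K)         # labels of the artificial set are fixed

def fitness(flat):
    # Decode the flat genome into 2-D artificial points and train indirectly.
    X_art = flat.reshape(2 * K, 2)
    model = GaussianNB().fit(X_art, y_art)
    return -model.score(X_real, y_real)   # DE minimises, so negate accuracy

bounds = [(-2.5, 2.5)] * (2 * K * 2)      # one (min, max) pair per coordinate
result = differential_evolution(fitness, bounds, seed=0, maxiter=40,
                                popsize=15, tol=1e-6)
acc = -result.fun
print(f"accuracy of indirectly trained classifier: {acc:.3f}")
```

The fitness here scores on the real training data itself; a faithful reproduction of the paper's protocol would hold out a separate evaluation set.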
Combining Static and Dynamic Features for Multivariate Sequence Classification
Model precision in a classification task is highly dependent on the feature
space that is used to train the model. Moreover, whether the features are
sequential or static will dictate which classification method can be applied as
most of the machine learning algorithms are designed to deal with either one or
another type of data. In real-life scenarios, however, it is often the case
that both static and dynamic features are present, or can be extracted from the
data. In this work, we demonstrate how generative models such as Hidden Markov
Models (HMM) and Long Short-Term Memory (LSTM) artificial neural networks can
be used to extract temporal information from the dynamic data. We explore how
the extracted information can be combined with the static features in order to
improve the classification performance. We evaluate the existing techniques and
suggest a hybrid approach, which outperforms other methods on several public
datasets.
Comment: Presented at IEEE DSAA 201
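The hybrid idea can be sketched as follows: summarise each sequence with its log-likelihoods under simple per-class generative temporal models, then concatenate those scores with the static features and train a single classifier on the joint representation. As an assumption for brevity, a pooled AR(1) Gaussian model stands in for the paper's HMM/LSTM extractors; the synthetic data and all parameters are illustrative.

```python
# Sketch: per-class generative temporal models (AR(1) Gaussian as a
# lightweight stand-in for HMM/LSTM) provide dynamic log-likelihood
# features that are concatenated with static features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_sequence(label, T=40):
    # Class 0: slow mean-reverting dynamics; class 1: faster dynamics.
    phi = 0.9 if label == 0 else 0.3
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = phi * x[t - 1] + rng.normal(scale=0.5)
    return x

labels = rng.integers(0, 2, size=300)
seqs = [make_sequence(c) for c in labels]
static = rng.normal(size=(300, 3)) + labels[:, None] * 0.5  # static features

def fit_ar1(class_seqs):
    # Pooled least-squares estimate of AR(1) coefficient and noise sd.
    prev = np.concatenate([s[:-1] for s in class_seqs])
    curr = np.concatenate([s[1:] for s in class_seqs])
    phi = prev @ curr / (prev @ prev)
    sd = np.std(curr - phi * prev)
    return phi, sd

def loglik(seq, phi, sd):
    resid = seq[1:] - phi * seq[:-1]
    return -0.5 * np.sum((resid / sd) ** 2) - len(resid) * np.log(sd)

models = [fit_ar1([s for s, c in zip(seqs, labels) if c == k]) for k in (0, 1)]
dyn = np.array([[loglik(s, *m) for m in models] for s in seqs])

X = np.hstack([static, dyn])          # joint static + dynamic representation
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(f"train accuracy: {clf.score(X, labels):.3f}")
```

Replacing `fit_ar1`/`loglik` with a fitted HMM (or LSTM hidden states) recovers the setting the abstract describes.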
A Nonparametric Ensemble Binary Classifier and its Statistical Properties
In this work, we propose an ensemble of classification trees (CT) and
artificial neural networks (ANN). Several statistical properties including
universal consistency and upper bound of an important parameter of the proposed
classifier are shown. Numerical evidence is also provided using various real
life data sets to assess the performance of the model. Our proposed
nonparametric ensemble classifier doesn't suffer from the `curse of
dimensionality' and can be used in a wide variety of feature selection cum
classification problems. The proposed model compares favorably with many
other state-of-the-art models used in similar situations.
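The kind of ensemble described can be sketched with off-the-shelf components. The soft-voting combination, the hyperparameters, and the synthetic dataset below are all assumptions for illustration, not the paper's exact construction.

```python
# Sketch: ensemble of a classification tree and a small neural network,
# combined by soft voting over predicted class probabilities.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
        ("ann", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                              random_state=0)),
    ],
    voting="soft",                    # average the class probabilities
)
ensemble.fit(X_tr, y_tr)
print(f"test accuracy: {ensemble.score(X_te, y_te):.3f}")
```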
Nature-Inspired Learning Models
Intelligent learning mechanisms found in the natural world remain unsurpassed in their learning performance and in their efficiency at dealing with uncertain information arriving in a variety of forms, yet they face a continuous challenge
from human-driven artificial intelligence methods. This work intends to demonstrate how phenomena observed in the physical world can be used directly to guide artificial learning models. Inspiration for the new
learning methods has been found in the mechanics of physical fields at both the micro and macro scale.
Exploiting the analogies between data and particles subjected to gravity, electrostatic, and gas-particle fields, new algorithms have been developed and applied to classification and clustering, while the properties of the
field are further reused in regression and in the visualisation of classification and classifier fusion. The paper covers extensive pictorial examples and visual interpretations of the presented techniques, along with testing over
well-known real and artificial datasets, compared where possible to traditional methods.
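The data-particle analogy can be made concrete with a minimal sketch: each training point acts as a unit mass, and a query is assigned to the class exerting the strongest gravity-like pull. The inverse-square law and the softening constant `eps` (which avoids singularities at the data points) are assumptions for this illustration, not the paper's exact field formulation.

```python
# Sketch: gravity-field classifier -- class score is the summed
# inverse-square attraction of that class's training points.
import numpy as np
from sklearn.datasets import make_blobs

def field_classify(X_train, y_train, X_query, eps=1e-3):
    classes = np.unique(y_train)
    preds = []
    for q in X_query:
        d2 = np.sum((X_train - q) ** 2, axis=1) + eps  # softened distances
        pull = 1.0 / d2                                # inverse-square pull
        scores = [pull[y_train == c].sum() for c in classes]
        preds.append(classes[np.argmax(scores)])
    return np.array(preds)

X, y = make_blobs(n_samples=150, centers=3, random_state=0)
pred = field_classify(X, y, X)
print(f"training accuracy: {(pred == y).mean():.3f}")
```

The same field values could be reused for clustering (merging points along the field gradient) or regression, in the spirit of the abstract.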
Flexible Mixture Modeling with the Polynomial Gaussian Cluster-Weighted Model
In the mixture modeling frame, this paper presents the polynomial Gaussian
cluster-weighted model (CWM). It extends the linear Gaussian CWM, for bivariate
data, in a twofold way. Firstly, it allows for possible nonlinear dependencies
in the mixture components by considering a polynomial regression. Secondly, it
is not restricted to model-based clustering only, being contextualized in the
more general model-based classification framework.
Maximum likelihood parameter estimates are derived using the EM algorithm and
model selection is carried out using the Bayesian information criterion (BIC)
and the integrated completed likelihood (ICL). The paper also investigates the
conditions under which the posterior probabilities of component-membership from
a polynomial Gaussian CWM coincide with those of other well-established
mixture models related to it. Compared with these models, the polynomial
Gaussian CWM has been shown to give excellent clustering and classification
results when applied to the artificial and real data considered in the paper.
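The EM scheme for such a model can be sketched compactly: each component g models X with a Gaussian and Y | X with a polynomial Gaussian regression, responsibilities come from the joint density f(x) f(y | x), and the M-step reduces to weighted moment and weighted least-squares updates. Everything below (two components, quadratic polynomial, synthetic data, iteration count) is an illustrative assumption, not the paper's experimental setup.

```python
# Sketch: EM for a two-component polynomial Gaussian cluster-weighted
# model on bivariate data, with a BIC computed at convergence.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic bivariate data drawn from two quadratic regimes.
n = 200
x = np.concatenate([rng.normal(-1, 0.5, n), rng.normal(1.5, 0.5, n)])
z = np.repeat([0, 1], n)
y = np.where(z == 0, 1 + 0.5 * x**2, -1 - 0.3 * x**2) + rng.normal(0, 0.2, 2 * n)

def norm_logpdf(v, mu, var):
    return -0.5 * (np.log(2 * np.pi * var) + (v - mu) ** 2 / var)

D = np.vander(x, 3, increasing=True)       # design matrix: 1, x, x^2
G = 2
pi = np.full(G, 0.5)                       # mixing weights
mu, tau2 = np.array([-1.0, 1.0]), np.ones(G)   # marginal X parameters
beta = rng.normal(size=(G, 3))             # polynomial coefficients
s2 = np.ones(G)                            # regression noise variances

for _ in range(60):
    # E-step: responsibilities from the joint density f(x) * f(y | x).
    logc = np.stack([np.log(pi[g])
                     + norm_logpdf(x, mu[g], tau2[g])
                     + norm_logpdf(y, D @ beta[g], s2[g])
                     for g in range(G)], axis=1)
    r = np.exp(logc - logc.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    # M-step: weighted moments and weighted polynomial least squares.
    for g in range(G):
        w = r[:, g]
        pi[g] = w.mean()
        mu[g] = w @ x / w.sum()
        tau2[g] = w @ (x - mu[g]) ** 2 / w.sum()
        beta[g] = np.linalg.solve(D.T @ (D * w[:, None]), D.T @ (w * y))
        s2[g] = w @ (y - D @ beta[g]) ** 2 / w.sum()

hard = r.argmax(axis=1)                    # MAP component membership
acc = max((hard == z).mean(), (hard != z).mean())  # up to label swap

# Log-likelihood and BIC for this fitted model.
m = logc.max(axis=1, keepdims=True)
ll = float(np.sum(m[:, 0] + np.log(np.exp(logc - m).sum(axis=1))))
k_params = (G - 1) + G * (2 + 3 + 1)       # weights + (mu, tau2) + beta + s2
bic = -2 * ll + k_params * np.log(len(x))
print(f"clustering agreement: {acc:.3f}, BIC: {bic:.1f}")
```

Refitting for several G and polynomial degrees and keeping the lowest BIC mirrors the model-selection step described in the abstract.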