RMSE-ELM: Recursive Model based Selective Ensemble of Extreme Learning Machines for Robustness Improvement
Extreme learning machine (ELM) as an emerging branch of shallow networks has
shown its excellent generalization and fast learning speed. However, for
blended data, the robustness of ELM is weak because the weights and biases of
its hidden nodes are set randomly. Moreover, noisy data exert a negative
effect. To solve this problem, a new framework called RMSE-ELM is proposed in
this paper. It is a two-layer recursive model. In the first layer, the
framework trains many ELMs in different groups concurrently, then employs
selective ensemble to pick out an optimal set of ELMs in each group; these sets
are merged into a large group of ELMs called the candidate pool. In the second
layer, selective ensemble is applied recursively to the candidate pool to acquire the
final ensemble. In the experiments, we apply UCI blended datasets to confirm
the robustness of our new approach in two key aspects (mean square error and
standard deviation). The space complexity of our method increases to some
degree, but the results show that RMSE-ELM significantly improves
robustness at only a slight cost in computational time compared with representative
methods (ELM, OP-ELM, GASEN-ELM, GASEN-BP and E-GASEN). It is a promising
framework for solving the robustness issue of ELM for high-dimensional blended data in
the future.
Comment: Accepted for publication in Mathematical Problems in Engineering, 09/22/201
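The two-layer scheme described above can be illustrated with a minimal sketch. The ELM training rule (random hidden weights, closed-form output weights via the pseudoinverse) is standard; the selective-ensemble step here is a simple greedy forward selection on a validation set, used as a hedged stand-in for the paper's method, and the data are synthetic:

```python
import numpy as np

def train_elm(X, y, n_hidden, rng):
    """One ELM: random hidden layer, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ y  # closed-form solve, no backprop
    return W, b, beta

def predict_elm(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

def selective_ensemble(models, X_val, y_val):
    """Greedy forward selection: keep a model only if it lowers validation MSE."""
    order = sorted(models,
                   key=lambda m: np.mean((predict_elm(m, X_val) - y_val) ** 2))
    chosen, best_mse = [], np.inf
    for m in order:
        trial = chosen + [m]
        pred = np.mean([predict_elm(t, X_val) for t in trial], axis=0)
        mse = np.mean((pred - y_val) ** 2)
        if mse < best_mse:
            chosen, best_mse = trial, mse
    return chosen

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)  # noisy synthetic target
X_tr, y_tr, X_val, y_val = X[:150], y[:150], X[150:], y[150:]

# First layer: train a pool of ELMs; second layer: selective ensemble over the pool.
pool = [train_elm(X_tr, y_tr, 30, rng) for _ in range(10)]
ensemble = selective_ensemble(pool, X_val, y_val)
```

By construction the selected ensemble's validation MSE is never worse than the best single ELM in the pool, which is the robustness argument in miniature.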
Myths and Legends of the Baldwin Effect
This position paper argues that the Baldwin effect is widely
misunderstood by the evolutionary computation community. The
misunderstandings appear to fall into two general categories.
Firstly, it is commonly believed that the Baldwin effect is
concerned with the synergy that results when there is an evolving
population of learning individuals. This is only half of the story.
The full story is more complicated and more interesting. The Baldwin
effect is concerned with the costs and benefits of lifetime
learning by individuals in an evolving population. Several
researchers have focussed exclusively on the benefits, but there
is much to be gained from attention to the costs. This paper explains
the two sides of the story and enumerates ten of the costs and
benefits of lifetime learning by individuals in an evolving population.
Secondly, there is a cluster of misunderstandings about the relationship
between the Baldwin effect and Lamarckian inheritance of acquired
characteristics. The Baldwin effect is not Lamarckian. A Lamarckian
algorithm is not better for most evolutionary computing problems than
a Baldwinian algorithm. Finally, Lamarckian inheritance is not a
better model of memetic (cultural) evolution than the Baldwin effect.
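The algorithmic distinction between the two can be made concrete. In this hypothetical toy model (all names and parameters are illustrative assumptions, not from the paper), a genome is a real vector, lifetime learning is local hill-climbing toward a fixed target, and the single flag controls inheritance: Baldwinian evolution scores the *learned* phenotype but passes on the unlearned genome, while Lamarckian evolution writes the learned phenotype back into the genome:

```python
import numpy as np

rng = np.random.default_rng(1)
TARGET = np.ones(8)  # hypothetical fitness optimum

def learn(genome, steps=5, rate=0.2):
    """Lifetime learning: move the phenotype part-way toward the target."""
    phenotype = genome.copy()
    for _ in range(steps):
        phenotype += rate * (TARGET - phenotype)
    return phenotype

def evolve(lamarckian, generations=30, pop_size=20):
    pop = [rng.normal(size=8) for _ in range(pop_size)]
    for _ in range(generations):
        phenos = [learn(g) for g in pop]
        # Fitness is evaluated on the LEARNED phenotype in both regimes.
        fit = [-np.sum((p - TARGET) ** 2) for p in phenos]
        best = np.argsort(fit)[-pop_size // 2:]
        # Baldwinian: offspring inherit the unlearned genome.
        # Lamarckian: the learned phenotype is written back into the genome.
        parents = [phenos[i] if lamarckian else pop[i] for i in best]
        pop = [p + 0.05 * rng.normal(size=8) for p in parents for _ in (0, 1)]
    return pop
```

In the Baldwinian regime, learning smooths the fitness landscape without touching heredity, which is exactly the coupling of costs and benefits the paper argues is so often overlooked.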
Predicting Genetic Regulatory Response Using Classification
We present a novel classification-based method for learning to predict gene
regulatory response. Our approach is motivated by the hypothesis that in simple
organisms such as Saccharomyces cerevisiae, we can learn a decision rule for
predicting whether a gene is up- or down-regulated in a particular experiment
based on (1) the presence of binding site subsequences (``motifs'') in the
gene's regulatory region and (2) the expression levels of regulators such as
transcription factors in the experiment (``parents''). Thus our learning task
integrates two qualitatively different data sources: genome-wide cDNA
microarray data across multiple perturbation and mutant experiments along with
motif profile data from regulatory sequences. We convert the regression task of
predicting real-valued gene expression measurement to a classification task of
predicting +1 and -1 labels, corresponding to up- and down-regulation beyond
the levels of biological and measurement noise in microarray measurements. The
learning algorithm employed is boosting with a margin-based generalization of
decision trees, alternating decision trees. This large-margin classifier is
sufficiently flexible to allow complex logical functions, yet sufficiently
simple to give insight into the combinatorial mechanisms of gene regulation. We
observe encouraging prediction accuracy on experiments based on the Gasch S.
cerevisiae dataset, and we show that we can accurately predict up- and
down-regulation on held-out experiments. Our method thus provides predictive
hypotheses, suggests biological experiments, and provides interpretable insight
into the structure of genetic regulatory networks.
Comment: 8 pages, 4 figures, presented at Twelfth International Conference on
Intelligent Systems for Molecular Biology (ISMB 2004), supplemental website:
http://www.cs.columbia.edu/compbio/geneclas
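The pipeline above can be sketched end to end: threshold real-valued expression into ±1 labels beyond a noise band, build features from motif presence plus regulator expression, and boost. This sketch substitutes plain decision stumps in a minimal AdaBoost for the paper's alternating decision trees, and the data are synthetic, so it illustrates the setup rather than reproducing the method:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in data: binary motif presence plus regulator expression.
n_genes, n_motifs, n_regs = 300, 6, 4
motifs = rng.integers(0, 2, size=(n_genes, n_motifs)).astype(float)
regs = rng.normal(size=(n_genes, n_regs))
X_all = np.hstack([motifs, regs])
expr = motifs[:, 0] * regs[:, 0] - motifs[:, 1] * regs[:, 1] \
       + 0.1 * rng.normal(size=n_genes)

# Regression -> classification: keep only genes regulated beyond the noise band.
threshold = 0.3
keep = np.abs(expr) > threshold
X, y = X_all[keep], np.sign(expr[keep])

def fit_stump(X, y, w):
    """Best single-feature threshold classifier under sample weights w."""
    best = (np.inf, 0, 0.0, 1)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = np.where(X[:, j] > t, s, -s)
                err = np.sum(w[pred != y])
                if err < best[0]:
                    best = (err, j, t, s)
    return best

def adaboost(X, y, rounds=10):
    w = np.full(len(y), 1 / len(y))
    stumps = []
    for _ in range(rounds):
        err, j, t, s = fit_stump(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(X[:, j] > t, s, -s)
        w = w * np.exp(-alpha * y * pred)  # up-weight misclassified genes
        w /= w.sum()
        stumps.append((alpha, j, t, s))
    return stumps

def predict(stumps, X):
    score = sum(a * np.where(X[:, j] > t, s, -s) for a, j, t, s in stumps)
    return np.sign(score)

model = adaboost(X, y)
acc = np.mean(predict(model, X) == y)
```

The margin here (the weighted vote `score`) plays the same interpretive role the paper assigns to the alternating decision tree: which motif/regulator tests fire, and how strongly, is readable from the stumps.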
An update on statistical boosting in biomedicine
Statistical boosting algorithms have triggered a lot of research during the
last decade. They combine a powerful machine-learning approach with classical
statistical modelling, offering various practical advantages like automated
variable selection and implicit regularization of effect estimates. They are
extremely flexible, as the underlying base-learners (regression functions
defining the type of effect for the explanatory variables) can be combined with
any kind of loss function (target function to be optimized, defining the type
of regression setting). In this review article, we highlight the most recent
methodological developments on statistical boosting regarding variable
selection, functional regression and advanced time-to-event modelling.
Additionally, we provide a short overview on relevant applications of
statistical boosting in biomedicine.
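The interplay of base-learners, loss function, and implicit regularization described above can be seen in the simplest case: component-wise L2 boosting with univariate linear base-learners, where each step fits every candidate variable to the current residuals, updates only the best one by a small step length, and thereby performs automatic variable selection. The data and step settings below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 10
X = rng.normal(size=(n, p))
# Only variables 0 and 3 are informative; the rest are noise.
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.2 * rng.normal(size=n)

def componentwise_l2_boost(X, y, steps=100, nu=0.1):
    """L2 boosting with linear base-learners; one variable updated per step."""
    coef = np.zeros(X.shape[1])
    offset = y.mean()
    resid = y - offset
    for _ in range(steps):
        # Fit each univariate least-squares base-learner to the residuals.
        betas = X.T @ resid / np.sum(X ** 2, axis=0)
        sse = [np.sum((resid - b * X[:, j]) ** 2) for j, b in enumerate(betas)]
        j = int(np.argmin(sse))
        coef[j] += nu * betas[j]          # small step = implicit regularization
        resid -= nu * betas[j] * X[:, j]
    return offset, coef

offset, coef = componentwise_l2_boost(X, y)
selected = set(np.nonzero(np.abs(coef) > 0.05)[0])
```

Stopping the loop early shrinks the coefficients toward zero, which is the "implicit regularization of effect estimates" the review refers to; swapping the squared-error fit for another loss changes the regression setting without touching the selection mechanism.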
Mean-Field Theory of Meta-Learning
We discuss here the mean-field theory for a cellular automata model of
meta-learning. The meta-learning is the process of combining outcomes of
individual learning procedures in order to determine the final decision with
higher accuracy than any single learning method. Our method is constructed from
an ensemble of interacting learning agents that acquire and process incoming
information using various types, or different versions, of machine learning
algorithms. The abstract learning space, where all agents are located, is
constructed here using a fully connected model that couples all agents with
random strength values. The cellular automata network simulates the higher-level
integration of information acquired from the independent learning trials.
The final classification of incoming input data is therefore defined as the
stationary state of the meta-learning system under a simple majority rule, yet
minority clusters that share the opposite classification outcome can be
observed in the system. Therefore, the probability of selecting the proper class
for a given input can be estimated even without prior knowledge of its
affiliation. Fuzzy logic can easily be introduced into the system, even if the
learning agents are built from simple binary classifiers, by calculating the
percentage of agreeing agents.
Comment: 23 page
- …