Ensemble learning of linear perceptron; Online learning theory
Within the framework of on-line learning, we study the generalization error of an ensemble learning machine that learns from a linear teacher perceptron. The generalization error achieved by an ensemble of linear perceptrons with homogeneous or inhomogeneous initial weight vectors is calculated exactly in the thermodynamic limit of a large number of input elements and shows rich behavior. Our main findings are as follows. For learning with homogeneous initial weight vectors, the generalization error of an infinite number of linear student perceptrons is only half that of a single linear perceptron, and for a finite number K of linear perceptrons it converges to the infinite-ensemble value as O(1/K). For learning with inhomogeneous initial weight vectors, it is advantageous to take a weighted average over the outputs of the linear perceptrons, and we give the conditions under which the optimal weights are constant during the learning process. The optimal weights depend only on the correlations of the initial weight vectors.
Comment: 14 pages, 3 figures, submitted to Physical Review
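To make the setting concrete, here is a minimal Python/NumPy sketch of K linear students learning a linear teacher on-line and being combined by a weighted average. The function names, the learning rate eta, and the plain gradient-descent update are illustrative assumptions, not the paper's exact protocol.

    import numpy as np

    # Sketch: K linear students learning a linear teacher B on-line.
    # eta and the gradient-descent update are assumptions for illustration.
    def train_students(B, K=5, eta=0.1, steps=10_000, seed=0):
        rng = np.random.default_rng(seed)
        N = B.size
        J = rng.standard_normal((K, N))              # inhomogeneous initial weights
        for _ in range(steps):
            x = rng.standard_normal(N) / np.sqrt(N)  # random input example
            t = B @ x                                # linear teacher's output
            u = J @ x                                # each student's output, shape (K,)
            J += eta * np.outer(t - u, x)            # per-student gradient step
        return J

    def ensemble_output(J, x, C=None):
        # Weighted average over student outputs; uniform weights by default.
        u = J @ x
        C = np.full(u.size, 1.0 / u.size) if C is None else np.asarray(C)
        return C @ u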
Negative Correlation Learning for Customer Churn Prediction: A Comparison Study
Recently, telecommunication companies have been paying more attention to the problem of identifying customer churn behavior. In business, it is well known among service providers that attracting new customers is much more expensive than retaining existing ones. Therefore, accurate models that can predict customer churn effectively support customer retention campaigns and help maximize profit. In this paper we use an ensemble of multilayer perceptrons (MLPs) trained with negative correlation learning (NCL) to predict customer churn in a telecommunication company. Experimental results confirm that the NCL-based MLP ensemble achieves better generalization performance (a higher churn-detection rate) than an MLP ensemble trained without NCL (a flat ensemble) and other common data mining techniques used for churn analysis.
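The core of NCL is a per-member penalty that rewards disagreement with the ensemble mean. A minimal sketch of the per-member NCL loss follows; the penalty strength lam (the usual lambda hyperparameter) and the function name are ours, not from the paper.

    import numpy as np

    def ncl_member_loss(outputs, target, lam=0.5):
        # outputs: predictions of the K ensemble members for one example, shape (K,)
        fbar = outputs.mean()                   # ensemble (mean) prediction
        mse = 0.5 * (outputs - target) ** 2     # individual squared error
        # NCL penalty p_i = (f_i - fbar) * sum_{j != i} (f_j - fbar)
        #                 = -(f_i - fbar)^2, pushing members away from the mean
        penalty = -(outputs - fbar) ** 2
        return mse + lam * penalty              # loss for each member, shape (K,)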
Statistical Mechanics of Nonlinear On-line Learning for Ensemble Teachers
We analyze the generalization performance of a student in a model composed of nonlinear perceptrons: a true teacher, ensemble teachers, and the student. We calculate the generalization error of the student analytically or numerically using statistical mechanics in the framework of on-line learning. We treat two well-known learning rules: Hebbian learning and perceptron learning. As a result, it is proven that the nonlinear model shows qualitatively different behavior from the linear model. Moreover, it is clarified that Hebbian learning and perceptron learning show qualitatively different behaviors from each other. In Hebbian learning, we can obtain the solutions analytically. In this case, the generalization error decreases monotonically, and its steady value is independent of the learning rate. The larger the number of teachers and the more variety the ensemble teachers have, the smaller the generalization error. In perceptron learning, we have to obtain the solutions numerically. In this case, the dynamical behavior of the generalization error is non-monotonic. The smaller the learning rate, the larger the number of teachers, and the more variety the ensemble teachers have, the smaller the minimum value of the generalization error.
Comment: 13 pages, 9 figures
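For reference, the two update rules in a minimal Python sketch, using the usual on-line learning notation (J student weights, x an input example); the 1/N scaling follows the statistical-mechanics convention, and the function names are ours.

    import numpy as np

    def hebbian_update(J, x, teacher_label, eta):
        # Hebbian rule: always move toward the teacher's label.
        return J + (eta / J.size) * teacher_label * x

    def perceptron_update(J, x, teacher_label, eta):
        # Perceptron rule: update only when student and teacher disagree.
        if np.sign(J @ x) != teacher_label:
            J = J + (eta / J.size) * teacher_label * x
        return J

    # teacher_label would be np.sign(B @ x) for a nonlinear (sign) teacher B.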
Statistical Mechanics of Time Domain Ensemble Learning
Conventional ensemble learning combines students in the space domain. In this paper, by contrast, we combine students in the time domain and call this time domain ensemble learning. We analyze the generalization performance of time domain ensemble learning in the framework of on-line learning using a statistical mechanical method. We treat a model in which both the teacher and the student are linear perceptrons with noise. Time domain ensemble learning proves twice as effective as conventional space domain ensemble learning.
Comment: 10 pages, 10 figures
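A minimal sketch of the idea, assuming it amounts to averaging the outputs of one student's weight snapshots taken at different time steps (our reading of "combining students in the time domain"; the names and the uniform weighting are illustrative).

    import numpy as np

    def time_domain_ensemble_output(snapshots, x, weights=None):
        # snapshots: list of the SAME student's weight vectors saved at
        # different times, rather than K separately trained students.
        outs = np.array([J @ x for J in snapshots])
        if weights is None:
            weights = np.full(outs.size, 1.0 / outs.size)  # uniform average
        return weights @ outs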
Optimizing 0/1 Loss for Perceptrons by Random Coordinate Descent
The 0/1 loss is an important cost function for perceptrons. Nevertheless, it cannot be easily minimized by most existing perceptron learning algorithms. In this paper, we propose a family of random coordinate descent algorithms to directly minimize the 0/1 loss for perceptrons, and prove their convergence. Our algorithms are computationally efficient and usually achieve lower 0/1 loss than other algorithms. Such advantages make them favorable for nonseparable real-world problems. Experiments show that our algorithms are especially useful for ensemble learning, and can achieve the lowest test error on many complex data sets when coupled with AdaBoost.
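A minimal sketch of the approach, assuming plain random coordinate selection with a crude randomized stand-in for the line search (the paper's exact search along each direction is assumed to be more refined).

    import numpy as np

    def zero_one_loss(w, X, y):
        # Fraction of misclassified points; y has entries in {-1, +1}.
        return np.mean(np.sign(X @ w) != y)

    def random_coordinate_descent(X, y, epochs=200, trials=10, seed=0):
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = rng.standard_normal(d)
        best = zero_one_loss(w, X, y)
        for _ in range(epochs):
            i = rng.integers(d)                      # pick a random coordinate
            for delta in rng.standard_normal(trials):
                cand = w.copy()
                cand[i] += delta                     # step along that coordinate
                loss = zero_one_loss(cand, X, y)
                if loss < best:                      # keep strict improvements only
                    w, best = cand, loss
        return w, best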
Analysis of ensemble learning using simple perceptrons based on online learning theory
Ensemble learning of nonlinear perceptrons, which determine their outputs by sign functions, is discussed within the framework of on-line learning and statistical mechanics. One purpose of statistical learning theory is to obtain the generalization error theoretically. This paper shows that the ensemble generalization error can be calculated using two order parameters: the similarity between the teacher and a student, and the similarity among students. The differential equations that describe the dynamical behavior of these order parameters are derived for general learning rules. The concrete forms of these differential equations are derived analytically for three well-known rules: Hebbian learning, perceptron learning, and AdaTron learning. The ensemble generalization errors of these three rules are calculated from the solutions of their differential equations. As a result, the three rules show different characteristics in their affinity for ensemble learning, that is, in "maintaining variety among students." Results show that AdaTron learning is superior to the other two rules with respect to that affinity.
Comment: 30 pages, 17 figures
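In the notation standard for this line of work (a sketch, assuming B is the teacher's weight vector and J_k the k-th student's), the two order parameters and the well-known single-student generalization error for sign perceptrons read:

    R_k = \frac{\mathbf{B} \cdot \mathbf{J}_k}{\lVert \mathbf{B} \rVert \, \lVert \mathbf{J}_k \rVert},
    \qquad
    q_{kk'} = \frac{\mathbf{J}_k \cdot \mathbf{J}_{k'}}{\lVert \mathbf{J}_k \rVert \, \lVert \mathbf{J}_{k'} \rVert},
    \qquad
    \epsilon_g^{\mathrm{single}} = \frac{1}{\pi} \arccos R_k .

The ensemble generalization error is then a function of both R and q, which is why maintaining variety among students (small q) matters.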
Reverse Engineering Gene Networks with ANN: Variability in Network Inference Algorithms
Motivation: Reconstructing the topology of a gene regulatory network is one of the key tasks in systems biology. Despite the wide variety of proposed methods, very little work has been dedicated to assessing their stability properties. Here we present a methodical comparison of the performance of a novel method (RegnANN) for gene network inference based on multilayer perceptrons with three reference algorithms (ARACNE, CLR, KELLER), focusing our analysis on the prediction variability induced by both the intrinsic structure of the network and the available data.
Results: The extensive evaluation on both synthetic data and a selection of gene modules of "Escherichia coli" indicates that all the algorithms suffer from instability and variability issues with regard to reconstructing the topology of the network. This instability makes it objectively very hard to establish which method performs best. Nevertheless, RegnANN shows MCC scores that compare very favorably with all the other inference methods tested.
Availability: The software for the RegnANN inference algorithm is distributed under GPL3 and is available at the corresponding author's home page (http://mpba.fbk.eu/grimaldi/regnann-supmat).
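Since the comparison is scored with MCC, here is a minimal sketch of how a predicted adjacency matrix might be scored against the true one; the flattening into binary edge labels and the sklearn call are our assumptions about the evaluation, not the paper's exact protocol.

    import numpy as np
    from sklearn.metrics import matthews_corrcoef

    def network_mcc(true_adj, pred_adj):
        # Score a predicted gene-network topology against the true one by
        # flattening both adjacency matrices into binary edge labels.
        return matthews_corrcoef(np.asarray(true_adj).ravel(),
                                 np.asarray(pred_adj).ravel())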