Restricted Boltzmann machine to determine the input weights for extreme learning machines
The Extreme Learning Machine (ELM) is a single-hidden-layer feedforward
neural network (SLFN) learning algorithm that can learn effectively and
quickly. The ELM training phase assigns the input weights and biases randomly
and does not change them during the whole process. Although the network works
well, the random weights in the input layer can make the algorithm less
effective and impact its performance. Therefore, we propose a new approach to
determine the input weights and biases for the ELM using the restricted
Boltzmann machine (RBM), which we call RBM-ELM. We compare our new approach
with a well-known approach to improving the ELM and a state-of-the-art
algorithm for selecting the weights of the ELM. The results show that the
RBM-ELM outperforms both methodologies and achieves better performance than
the ELM.
Comment: 14 pages, 7 figures and 5 tables
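The base ELM training the abstract describes can be sketched in a few lines: input weights and biases are drawn at random and frozen, and only the output weights are solved analytically via the Moore-Penrose pseudoinverse. This is a minimal illustrative sketch of standard ELM (not the RBM-ELM initialization, which would replace the random draw of `W` and `b` with RBM-trained values); all function names and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, n_hidden=50):
    """Train a basic ELM: random fixed input weights, analytic output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (never updated)
    b = rng.normal(size=n_hidden)                # random hidden biases (never updated)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))       # sigmoid hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                 # Moore-Penrose solve for output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy regression problem: 20 samples, 3 features, 2 output targets
X = rng.normal(size=(20, 3))
Y = rng.normal(size=(20, 2))
W, b, beta = elm_fit(X, Y)
Y_hat = elm_predict(X, W, b, beta)
```

With more hidden units than training samples, the least-squares fit on the training set is essentially exact; the quality of generalization is what the random (or RBM-chosen) input weights influence.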
Evolutionary Cost-sensitive Extreme Learning Machine
Conventional extreme learning machines solve for the Moore-Penrose generalized
inverse of the hidden-layer activation matrix and analytically determine the
output weights to achieve generalization performance, assuming the same loss
for different types of misclassification. This assumption may not hold in
cost-sensitive recognition tasks, such as a face-recognition-based access
control system, where misclassifying a stranger as a family member may result
in a more serious disaster than misclassifying a family member as a stranger.
Although recent cost-sensitive learning can reduce the total loss given a cost
matrix that quantifies how severe one type of mistake is relative to another,
in many realistic cases the cost matrix is unknown to users. Motivated by
these concerns, this paper proposes an evolutionary cost-sensitive extreme
learning machine (ECSELM), with the following merits: 1) to the best of our
knowledge, it is the first proposal of ELM in the evolutionary cost-sensitive
classification scenario; 2) it addresses the open issue of how to define the
cost matrix in cost-sensitive learning tasks; 3) an evolutionary backtracking
search algorithm is introduced for adaptive cost-matrix optimization.
Experiments on a variety of cost-sensitive tasks demonstrate the effectiveness
of the proposed approaches, with improvements of about 5%~10%.
Comment: This paper has been accepted for publication in IEEE Transactions on
Neural Networks and Learning Systems
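To make the role of the cost matrix concrete, here is an illustrative sketch of a cost-weighted ELM output solve: samples whose true class is expensive to misclassify get a larger weight in the least-squares fit. This is a generic weighted-least-squares sketch under assumed cost values, not the authors' ECSELM; in particular, the evolutionary backtracking search that adapts the cost matrix is not reproduced here, and the cost matrix below is a hypothetical hand-picked one.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-class cost matrix: cost[i, j] = penalty for predicting class j
# when the true class is i. Misclassifying class 0 (e.g. a stranger admitted
# as a family member) is assumed 10x worse than the reverse mistake.
cost = np.array([[0.0, 10.0],
                 [1.0,  0.0]])

def weighted_elm_fit(X, Y_onehot, y_true, cost, n_hidden=40):
    """ELM with a cost-weighted least-squares output solve (illustrative)."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random fixed input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    # Per-sample weight: total misclassification cost for that sample's class.
    w = cost.sum(axis=1)[y_true]
    sw = np.sqrt(w)[:, None]
    # Weighted pseudoinverse solve: minimizes sum_i w_i * ||H_i beta - Y_i||^2
    beta = np.linalg.pinv(sw * H) @ (sw * Y_onehot)
    return W, b, beta

# Toy data: 30 samples, 2 features, 2 classes
X = rng.normal(size=(30, 2))
y = rng.integers(0, 2, size=30)
Y = np.eye(2)[y]
W, b, beta = weighted_elm_fit(X, Y, y, cost)
pred = np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
```

The open issue the paper targets is visible here: the behavior of the classifier hinges on the numbers in `cost`, which in practice are unknown; ECSELM searches for them with an evolutionary backtracking algorithm instead of fixing them by hand.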