
    Comparing error minimized extreme learning machines and support vector sequential feed-forward neural networks

    Recently, error-minimized extreme learning machines (EM-ELMs) have been proposed as a simple and efficient approach to building single-hidden-layer feed-forward networks (SLFNs) sequentially. They add random hidden nodes one by one (or group by group) and update the output weights incrementally to minimize the sum-of-squares error on the training set. Very similar methods that also construct SLFNs sequentially had been reported earlier, the main difference being that their hidden-layer weights are a subset of the data rather than random. By analogy with the concept of support vectors originating in support vector machines (SVMs), these approaches can be referred to as support vector sequential feed-forward neural networks (SV-SFNNs), and they are a particular case of the Sequential Approximation with Optimal Coefficients and Interacting Frequencies (SAOCIF) method. In this paper, it is first shown that EM-ELMs can also be cast as a particular case of SAOCIF. In particular, EM-ELMs can easily be extended to test a number of random candidates at each step and select the best of them, as SAOCIF does. Moreover, it is demonstrated that the cost of computing the optimal output-layer weights in the originally proposed EM-ELMs can be reduced by replacing it with the computation used in SAOCIF. Second, we present the results of an experimental study on 10 benchmark classification and 10 benchmark regression data sets, comparing EM-ELMs and SV-SFNNs under the same conditions for both models. Although the two models have the same (efficient) computational cost, SV-SFNNs showed a statistically significant improvement in generalization performance over EM-ELMs on 12 of the 20 benchmark problems.
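
    As a rough illustration of the sequential construction that EM-ELMs and SAOCIF share, the sketch below grows an SLFN one random hidden node at a time and refits the output weights by least squares; setting `candidates` above 1 mirrors the SAOCIF-style selection among random candidates mentioned in the abstract. All names are illustrative, and note that the published EM-ELM updates the output weights incrementally rather than refitting from scratch as done here.

```python
import numpy as np

def em_elm_sketch(X, T, max_nodes=50, tol=1e-3, candidates=1, seed=0):
    """Grow an SLFN one random hidden node at a time (EM-ELM style)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    H = np.empty((n, 0))                      # hidden-layer output matrix
    nodes, beta, err = [], None, np.inf
    while len(nodes) < max_nodes and err > tol:
        best = None
        for _ in range(candidates):           # candidates > 1: SAOCIF-style selection
            w, b = rng.standard_normal(d), rng.standard_normal()
            h = np.tanh(X @ w + b)            # candidate random hidden node
            Hc = np.column_stack([H, h])
            bc, *_ = np.linalg.lstsq(Hc, T, rcond=None)
            e = np.sum((Hc @ bc - T) ** 2)    # training sum-of-squares error
            if best is None or e < best[0]:
                best = (e, (w, b), Hc, bc)
        err, wb, H, beta = best
        nodes.append(wb)                      # keep the winning node's weights
    return nodes, beta
```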

    Convolutional auto-encoded extreme learning machine for incremental learning of heterogeneous images

    In real-world scenarios, continually updating a system's learned knowledge becomes more critical as data arrives ever faster and in vast volumes. Moreover, the learning process becomes complex when the feature set varies due to the addition or deletion of classes. In such cases, the generated model should still learn effectively. Incremental learning refers to learning from data that arrives continuously over time; it requires continuous model adaptation with limited memory resources and without sacrificing model accuracy. In this paper, we propose a straightforward knowledge-transfer algorithm, the convolutional auto-encoded extreme learning machine (CAE-ELM), implemented through the incremental learning methodology for supervised classification with an extreme learning machine (ELM). Incremental learning is achieved by training an individual model for each set of homogeneous data and transferring knowledge among the models, without sacrificing accuracy and with minimal memory resources. In CAE-ELM, a convolutional neural network (CNN) extracts the features, a stacked autoencoder (SAE) reduces their dimensionality, and an ELM learns and classifies the images. The proposed algorithm is implemented and evaluated on several standard datasets: MNIST, ORL, JAFFE, FERET and Caltech. The results support the effectiveness of the proposed algorithm.
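
    A minimal sketch of the final stage of such a pipeline is given below, assuming the CNN and SAE stages have already produced compact feature vectors (those stages are not implemented here). It shows the defining ELM pattern the abstract relies on: random, untuned hidden weights and analytically computed output weights.

```python
import numpy as np

def elm_train(features, labels, n_hidden=256, seed=0):
    """ELM classifier stage: random hidden weights, analytic output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((features.shape[1], n_hidden))  # random, untuned
    b = rng.standard_normal(n_hidden)
    H = np.tanh(features @ W + b)             # hidden-layer activations
    T = np.eye(labels.max() + 1)[labels]      # one-hot class targets
    beta = np.linalg.pinv(H) @ T              # least-squares output weights
    return W, b, beta

def elm_predict(features, W, b, beta):
    return np.argmax(np.tanh(features @ W + b) @ beta, axis=1)
```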

    Enforcement of the principal component analysis - extreme learning machine algorithm by linear discriminant analysis

    In the majority of traditional extreme learning machine (ELM) approaches, the parameters of the basis functions are randomly generated and do not need to be tuned, while the weights connecting the hidden layer to the output layer are analytically estimated. Determining the optimal number of basis functions to include in the hidden layer is still an open problem. Cross-validation and heuristic approaches (constructive and destructive) are some of the methodologies used to perform this task. Recently, a deterministic algorithm based on principal component analysis (PCA) and ELM has been proposed that sets the number of basis functions to the number of principal components needed to explain 90% of the variance in the data. In this work, the PCA part of the PCA–ELM algorithm is combined with linear discriminant analysis (LDA) as a hybrid means of pruning the hidden nodes. This is justified by the fact that the LDA approach outperforms the PCA one on a set of problems. The resulting LDA–PCA–ELM algorithm is shown to be on average better than its PCA–ELM and LDA–ELM counterparts. Moreover, the classification performance and the number of basis functions selected by the algorithm on a set of benchmark problems are compared and validated in the experimental section using nonparametric tests against a set of existing ELM techniques.
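
    The PCA sizing rule the abstract builds on is simple enough to show directly; the sketch below counts the principal components needed to reach a given explained-variance threshold (90% in the paper) and uses that count as the hidden-layer size. The LDA refinement is not reproduced, and the function name is illustrative.

```python
import numpy as np

def pca_node_count(X, explained=0.90):
    """Number of principal components explaining `explained` of the variance."""
    Xc = X - X.mean(axis=0)                              # center the data
    var = np.linalg.svd(Xc, compute_uv=False) ** 2       # component variances, descending
    ratio = np.cumsum(var) / var.sum()                   # cumulative explained-variance ratio
    return int(np.searchsorted(ratio, explained) + 1)    # smallest count reaching threshold
```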

µG2-ELM: an upgraded implementation of µG-ELM

    µG-ELM is a multiobjective evolutionary algorithm that searches for the most accurate (in terms of MSE) and most compact artificial neural network using the ELM methodology. In this work we present µG2-ELM, an upgraded version of the µG-ELM previously presented by the authors. The upgrade rests on three key elements: a specifically designed approach for initializing the weights of the initial artificial neural networks, the introduction of a re-sowing process when selecting the population to be evolved, and a change in the process used to modify the weights of the artificial neural networks. To test our proposal we consider several state-of-the-art extreme learning machine (ELM) algorithms and compare against them on a wide and well-known set of continuous, regression, and classification problems. The experiments show that µG2-ELM performs better overall than both its predecessor and the other competitors, suggesting that the combination of evolutionary algorithms with the ELM methodology is a promising subject of study, since together they allow the design of better training algorithms for artificial neural networks.
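
    To make the underlying idea concrete, here is a deliberately simplified, single-objective toy version of evolving ELM hidden-layer weights: candidates are scored by the training MSE of the analytic ELM fit, and survivors are mutated. The actual µG2-ELM is multiobjective (accuracy and compactness) and adds specialized initialization and re-sowing, none of which is reproduced in this sketch; all names are illustrative.

```python
import numpy as np

def evolve_hidden_weights(X, T, n_hidden=20, pop=10, gens=50, seed=0):
    """Evolve ELM hidden-layer weights by Gaussian mutation and truncation selection."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]

    def mse(W, b):                            # fitness: training MSE of the ELM fit
        H = np.tanh(X @ W + b)
        beta = np.linalg.pinv(H) @ T          # analytic ELM output weights
        return np.mean((H @ beta - T) ** 2)

    population = [(rng.standard_normal((d, n_hidden)), rng.standard_normal(n_hidden))
                  for _ in range(pop)]
    for _ in range(gens):
        parents = sorted(population, key=lambda wb: mse(*wb))[:pop // 2]
        children = [(W + 0.1 * rng.standard_normal(W.shape),
                     b + 0.1 * rng.standard_normal(b.shape))  # Gaussian mutation
                    for W, b in parents]
        population = parents + children
    return min(population, key=lambda wb: mse(*wb))
```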