
    On building ensembles of stacked denoising auto-encoding classifiers and their further improvement

    To aggregate diverse learners and to train deep architectures are the two principal avenues towards increasing the expressive capabilities of neural networks; therefore, their combinations merit attention. In this contribution, we study how to apply some conventional diversity methods (bagging and label switching) to a general deep machine, the stacked denoising auto-encoding classifier, in order to solve a number of appropriately selected image recognition problems. The main conclusion of our work is that binarizing multi-class problems is the key to obtaining benefit from those diversity methods. Additionally, we check that adding other kinds of performance improvement procedures, such as pre-emphasizing training samples and elastic distortion mechanisms, further increases the quality of the results. In particular, an appropriate combination of all the above methods leads us to reach a new absolute record in classifying MNIST handwritten digits. These facts reveal that there are clear opportunities for designing more powerful classifiers by combining different improvement techniques. (C) 2017 Elsevier B.V. All rights reserved. This work has been partly supported by research grants CASI-CAM-CM (S2013/ICE-2845, Madrid Community) and Macro-ADOBE (TEC2015-67719, MINECO-FEDER EU), as well as by the research network DAMA (TIN2015-70308-REDT, MINECO).
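The two ingredients the abstract highlights (bagging for diversity, binarizing the multi-class problem via one-vs-rest) can be sketched together in a few lines. This is only an illustrative toy, not the paper's deep setup: the stacked denoising auto-encoder is replaced by a trivial nearest-mean base learner, and the 2-D Gaussian data is a stand-in for the image benchmarks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-class data (the paper uses image sets; this is a stand-in).
X = np.vstack([rng.normal(c, 0.5, size=(60, 2)) for c in ([0, 0], [3, 0], [0, 3])])
y = np.repeat([0, 1, 2], 60)

def fit_nearest_mean(Xb, yb):
    # Trivial stand-in base learner: one mean per binary label.
    return {c: Xb[yb == c].mean(axis=0) for c in (0, 1)}

def score(model, X):
    # Confidence for the positive class: closer to mean 1 than to mean 0.
    return (np.linalg.norm(X - model[0], axis=1)
            - np.linalg.norm(X - model[1], axis=1))

def bagged_ovr_predict(X_train, y_train, X_test, n_bags=7):
    classes = np.unique(y_train)
    votes = np.zeros((len(X_test), len(classes)))
    for k, c in enumerate(classes):
        y_bin = (y_train == c).astype(int)          # binarize: class c vs. rest
        for _ in range(n_bags):                     # bagging: bootstrap resamples
            idx = rng.integers(0, len(X_train), len(X_train))
            m = fit_nearest_mean(X_train[idx], y_bin[idx])
            votes[:, k] += score(m, X_test)
    return classes[votes.argmax(axis=1)]

pred = bagged_ovr_predict(X, y, X)
acc = (pred == y).mean()
print(acc)
```

Each of the three binary sub-problems gets its own bagged ensemble, and the class with the largest accumulated confidence wins; on this well-separated toy data the training accuracy is essentially perfect.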

    An improved multiple classifier combination scheme for pattern classification

    Combining multiple classifiers is considered a new direction in pattern recognition for improving classification performance. The main problem of multiple classifier combination is that there is no standard guideline for constructing an accurate and diverse classifier ensemble. This is due to the difficulty in identifying the number of homogeneous classifiers and how to combine the classifier outputs. The most commonly used ensemble method is the random strategy, while the majority voting technique is used as the combiner. However, the random strategy cannot determine the number of classifiers, and the majority voting technique does not consider the strength of each classifier, thus resulting in low classification accuracy. In this study, an improved multiple classifier combination scheme is proposed. The ant system (AS) algorithm is used to partition the feature set into feature subsets, which determine the number of classifiers. A compactness measure is introduced as a parameter in constructing an accurate and diverse classifier ensemble. A weighted voting technique is used to combine the classifier outputs by considering the strength of the classifiers prior to voting. Experiments were performed using four base classifiers, namely Nearest Mean Classifier (NMC), Naive Bayes Classifier (NBC), k-Nearest Neighbour (k-NN) and Linear Discriminant Analysis (LDA), on benchmark datasets to test the credibility of the proposed multiple classifier combination scheme. The average classification accuracies of the homogeneous NMC, NBC, k-NN and LDA ensembles are 97.91%, 98.06%, 98.09% and 98.12% respectively. These accuracies are higher than those obtained through other approaches to multiple classifier combination. The proposed scheme will help in developing other multiple classifier combinations for pattern recognition and classification.
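The weighted voting combiner the abstract contrasts with plain majority voting is easy to sketch. The weights here are simply assumed validation accuracies (in the paper the classifiers come from AS-partitioned feature subsets); note how the strong first classifier overrules the two weaker ones on the third sample, where an unweighted majority vote would decide differently.

```python
import numpy as np

# Hypothetical classifier "strength" weights, e.g. validation accuracies.
weights = np.array([0.95, 0.40, 0.50])

# Each row: one classifier's predicted labels for three test samples.
preds = np.array([
    [0, 1, 1],   # strong classifier
    [0, 1, 0],   # weak classifier
    [1, 1, 0],   # weak classifier
])

def weighted_vote(preds, weights, n_classes=2):
    # Sum each classifier's weight into the bin of its predicted class.
    n_samples = preds.shape[1]
    tally = np.zeros((n_samples, n_classes))
    for p, w in zip(preds, weights):
        tally[np.arange(n_samples), p] += w
    return tally.argmax(axis=1)

result = weighted_vote(preds, weights)
print(result)  # → [0 1 1]
```

On sample 3 the weighted tally is 0.95 for class 1 versus 0.40 + 0.50 = 0.90 for class 0, so the strong classifier's prediction wins; plain majority voting would have returned class 0.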

    Surrogate modeling approximation using a mixture of experts based on EM joint estimation

    An automatic method to combine several local surrogate models is presented. This method is intended to build accurate and smooth approximations of discontinuous functions that are to be used in structural optimization problems. It strongly relies on the Expectation-Maximization (EM) algorithm for Gaussian mixture models (GMM). For regression, the inputs are clustered together with their output values by means of parameter estimation of the joint distribution. A local expert is then built (linear, quadratic, artificial neural network, moving least squares) on each cluster. Lastly, the local experts are combined using the Gaussian mixture model parameters found by the EM algorithm to obtain a global model. This method is tested on both mathematical test cases and an engineering optimization problem from aeronautics, and is found to improve the accuracy of the approximation.
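The combination step can be sketched as follows: fit one local linear expert per cluster by weighted least squares, then blend the experts with the mixture responsibilities as gating weights, which smooths over the discontinuity. This is a hand-rolled toy: the gating parameters are fixed by hand rather than estimated by EM on the joint (x, y) distribution as in the paper.

```python
import numpy as np

# Discontinuous 1-D target: two regimes with different linear behaviour.
x = np.linspace(-2, 2, 200)
y = np.where(x < 0, 1.0 + 0.5 * x, -1.0 + 2.0 * x)

# Assumed gating parameters (the paper obtains these via EM on the
# joint Gaussian mixture; here they are simply fixed).
mus, sigmas, pis = np.array([-1.0, 1.0]), np.array([0.7, 0.7]), np.array([0.5, 0.5])

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Responsibilities (gating weights) from the mixture parameters.
resp = np.stack([p * gauss(x, m, s) for p, m, s in zip(pis, mus, sigmas)])
resp /= resp.sum(axis=0)

# One linear expert per cluster, fit by weighted least squares
# (sqrt-weighted rows give the responsibility-weighted fit).
A = np.stack([np.ones_like(x), x], axis=1)
experts = []
for r in resp:
    sw = np.sqrt(r)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    experts.append(coef)

# Global model: responsibility-weighted blend of the local experts.
y_hat = sum(r * (c[0] + c[1] * x) for r, c in zip(resp, experts))
mae = np.abs(y_hat - y).mean()
print(mae)
```

Away from the discontinuity each expert dominates and is nearly exact; near x = 0 the responsibilities interpolate smoothly between the two regimes, trading a small local error for a smooth global model, which is what gradient-based structural optimizers need.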

    Ensemble of Single‐Layered Complex‐Valued Neural Networks for Classification Tasks

    This paper presents ensemble approaches for single-layered complex-valued neural networks (CVNNs) to solve real-valued classification problems. Each component CVNN of an ensemble uses a recently proposed activation function for its complex-valued neurons (CVNs). A gradient-descent based learning algorithm was used to train the component CVNNs. We applied two ensemble methods, negative correlation learning and bagging, to create the ensembles. Experimental results on a number of real-world benchmark problems showed a substantial performance improvement of the ensembles over the individual single-layered CVNN classifiers. Furthermore, the generalization performances were nearly equivalent to those obtained by ensembles of real-valued multilayer neural networks.
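Of the two ensemble methods mentioned, negative correlation learning is the less standard one: each member is trained on its own error plus a penalty that pushes it away from the ensemble mean, decorrelating member errors. A minimal sketch, with plain linear regressors standing in for the paper's CVNNs and the classic Liu-Yao penalty gradient:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy regression data (stand-in for the paper's classification benchmarks).
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

M, lam, lr = 4, 0.5, 0.05
Ws = rng.normal(size=(M, 3))        # one linear member per ensemble slot

for _ in range(500):
    preds = X @ Ws.T                # (n_samples, M) member outputs
    f_bar = preds.mean(axis=1)      # ensemble output
    for i in range(M):
        err = preds[:, i] - y
        # NCL penalty gradient: -(f_i - f_bar) pushes member i away from
        # the ensemble mean, decorrelating member errors (lam = 0 recovers
        # independent training; lam must stay below 1 for stability).
        pen = -(preds[:, i] - f_bar)
        grad = X.T @ (err + lam * pen) / len(y)
        Ws[i] -= lr * grad

f_bar = (X @ Ws.T).mean(axis=1)
mse = np.mean((f_bar - y) ** 2)
print(mse)
```

At the fixed point each member satisfies (f_i - y) = lam (f_i - f_bar); averaging over members makes the right-hand side vanish, so the ensemble mean itself converges to the target even though individual members deliberately disagree.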