
    Cooperative coevolution of artificial neural network ensembles for pattern classification

    This paper presents a cooperative coevolutionary approach for designing neural network ensembles. Cooperative coevolution is a recent paradigm in evolutionary computation that allows the effective modeling of cooperative environments. Although, in theory, a single neural network with enough hidden neurons can solve any problem, in practice it is often too hard to construct an appropriate network for many real-world problems. For such problems, neural network ensembles are a successful alternative. Nevertheless, designing a neural network ensemble is a complex task. In this paper, we propose a general framework for designing neural network ensembles by means of cooperative coevolution. The proposed model has two main objectives: first, to improve the combination of the trained individual networks; second, to evolve those networks cooperatively, encouraging collaboration among them instead of training each network separately. To favor cooperation, each network is evaluated throughout the evolutionary process using a multiobjective method: for each network, several objectives are defined that consider not only its performance on the given problem but also its cooperation with the rest of the networks. In addition, a population of ensembles is evolved, improving the combination of networks and yielding subsets of networks that form ensembles which perform better than the combination of all the evolved networks. The proposed model is applied to ten real-world classification problems of very different natures, drawn from the UCI machine learning repository and the Proben1 benchmark set. On all of them, the model achieves lower generalization error than standard ensembles while also producing smaller ensembles.
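
    As a rough illustration of the two-population scheme described in this abstract, the sketch below shows one way the idea could look in code. It is a simplified mock-up rather than the authors' algorithm: a population of small one-hidden-layer networks is scored with a multiobjective fitness (its own accuracy plus a "cooperation" term measuring how much the ensembles containing it degrade when it is removed), while a second population of ensembles, each a subset of those networks, is evolved in parallel. The toy data, population sizes, and helper names (`make_net`, `ensemble_accuracy`, `mutate`) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_net(n_in, n_hidden, n_out):
    # random one-hidden-layer network encoded as a dict of weight matrices
    return {"W1": rng.normal(0.0, 1.0, (n_in, n_hidden)),
            "W2": rng.normal(0.0, 1.0, (n_hidden, n_out))}

def predict(net, X):
    return np.tanh(X @ net["W1"]) @ net["W2"]

def accuracy(net, X, y):
    return np.mean(np.argmax(predict(net, X), axis=1) == y)

def ensemble_accuracy(members, X, y):
    votes = sum(predict(m, X) for m in members)        # simple output averaging
    return np.mean(np.argmax(votes, axis=1) == y)

def mutate(net, sigma=0.1):
    return {k: v + rng.normal(0.0, sigma, v.shape) for k, v in net.items()}

# toy two-class problem standing in for a UCI data set
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

POP_NETS, POP_ENS, ENS_SIZE, GENERATIONS = 20, 10, 5, 30
nets = [make_net(4, 5, 2) for _ in range(POP_NETS)]
ensembles = [rng.choice(POP_NETS, ENS_SIZE, replace=False) for _ in range(POP_ENS)]

for _ in range(GENERATIONS):
    # multiobjective score of each network: its own accuracy plus its
    # "cooperation" value, i.e. how much the ensembles it belongs to lose
    # when it is removed
    ens_scores = [ensemble_accuracy([nets[i] for i in idx], X, y) for idx in ensembles]
    coop = np.zeros(POP_NETS)
    for idx, full_score in zip(ensembles, ens_scores):
        for i in idx:
            rest = [nets[j] for j in idx if j != i]
            coop[i] += full_score - ensemble_accuracy(rest, X, y)
    fitness = np.array([accuracy(n, X, y) for n in nets]) + coop

    # evolve the network population: keep the better half, refill with mutants
    order = np.argsort(fitness)[::-1]
    survivors = [nets[i] for i in order[:POP_NETS // 2]]
    nets = survivors + [mutate(survivors[rng.integers(len(survivors))])
                        for _ in range(POP_NETS - len(survivors))]

    # evolve the ensemble population: keep the better half, refill with random subsets
    keep = np.argsort(ens_scores)[::-1][:POP_ENS // 2]
    ensembles = [ensembles[i] for i in keep]
    ensembles += [rng.choice(POP_NETS, ENS_SIZE, replace=False)
                  for _ in range(POP_ENS - len(ensembles))]

best = max(ensembles, key=lambda idx: ensemble_accuracy([nets[i] for i in idx], X, y))
print("best evolved ensemble accuracy:", ensemble_accuracy([nets[i] for i in best], X, y))
```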

    Cascaded face detection using neural network ensembles

    We propose a fast face detector based on an efficient architecture, a hierarchical cascade of neural network ensembles, with which we achieve enhanced detection accuracy and efficiency. First, we propose a way to form a neural network ensemble from a number of neural network classifiers, each of which is specialized in a subregion of the face-pattern space. These classifiers complement each other and together perform the detection task. Experimental results show that the proposed neural network ensembles significantly improve detection accuracy compared with traditional neural-network-based techniques. Second, to reduce the total computation cost of face detection, we organize the neural network ensembles in a pruning cascade. In this way, the simpler and more efficient ensembles used at the earlier stages of the cascade reject the majority of non-face patterns in the image background, thereby significantly improving overall detection efficiency while maintaining detection accuracy. An important advantage of the new architecture is its homogeneous structure, which makes it suitable for very efficient implementation on programmable devices. Our approach achieves one of the best detection accuracies in the literature with significantly reduced training and detection cost.
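
    As a rough sketch of the pruning cascade described above (a simplified assumption-laden example, not the paper's detector), the code below trains three ensembles of small scikit-learn MLPs of increasing size and rejects a candidate window as soon as a stage's averaged face score falls below that stage's threshold, so most non-face windows never reach the more expensive later stages. The toy data, stage sizes, and `thresholds` values are assumed purely for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# toy stand-in for flattened image windows: label 1 = "face", 0 = "non-face"
X = rng.normal(size=(600, 64))
y = (X[:, :8].sum(axis=1) > 0).astype(int)

def train_stage(n_members, n_hidden):
    """Train one ensemble stage; each member sees its own bootstrap sample."""
    members = []
    for k in range(n_members):
        idx = rng.integers(0, len(X), len(X))
        members.append(MLPClassifier(hidden_layer_sizes=(n_hidden,),
                                     max_iter=1000, random_state=k).fit(X[idx], y[idx]))
    return members

# earlier stages are smaller and cheaper, later stages larger and more accurate
stages = [train_stage(2, 4), train_stage(3, 8), train_stage(5, 16)]
thresholds = [0.3, 0.4, 0.5]            # per-stage rejection thresholds (assumed)

def detect(window):
    """Return True only if the window survives every stage of the cascade."""
    for members, thr in zip(stages, thresholds):
        score = np.mean([m.predict_proba(window.reshape(1, -1))[0, 1] for m in members])
        if score < thr:                 # cheap early rejection of non-face windows
            return False
    return True

accepted = sum(detect(w) for w in X[:50])
print(f"{accepted}/50 candidate windows accepted by the cascade")
```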

    A generative spike train model with time-structured higher order correlations

    Emerging technologies are revealing the spiking activity of ever larger neural ensembles. Frequently, this spiking is far from independent, with correlations in the spike times of different cells. Understanding how such correlations impact the dynamics and function of neural ensembles remains an important open problem. Here we describe a new generative model for correlated spike trains that can exhibit many of the features observed in data. Extending prior work in mathematical finance, this generalized thinning and shift (GTaS) model creates marginally Poisson spike trains with diverse temporal correlation structures. We give several examples that highlight the model's flexibility and utility; for instance, we use it to examine how a neural network responds to highly structured patterns of inputs. We then show that the GTaS model is analytically tractable and derive cumulant densities of all orders in terms of the model parameters. The GTaS framework can therefore be an important tool in the experimental and theoretical exploration of neural dynamics.
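
    The thinning-and-shift construction lends itself to a short generative sketch. The code below is a simplified illustration under assumed parameters (a single mother Poisson process, a fixed probability table over neuron subsets, and Gaussian shifts), not the paper's full GTaS parameterization: each mother spike is copied to a randomly chosen subset of neurons and each copy is jittered in time, so the marginal trains remain Poisson while the shared, shifted copies create temporally structured higher-order correlations.

```python
import numpy as np

rng = np.random.default_rng(1)

def gtas_spike_trains(rate, T, subsets, probs, shift_sds, n_neurons):
    """Thinning-and-shift: each mother spike is marked with a subset of neurons,
    copied to those neurons, and each copy is shifted by a per-neuron jitter."""
    n_mother = rng.poisson(rate * T)
    mother = rng.uniform(0.0, T, n_mother)            # mother Poisson process
    trains = [[] for _ in range(n_neurons)]
    for t in mother:
        subset = subsets[rng.choice(len(subsets), p=probs)]
        for i in subset:
            trains[i].append(t + rng.normal(0.0, shift_sds[i]))
    return [np.sort(tr) for tr in trains]

# three neurons: a mother spike goes to neuron 0 alone, to {1, 2}, or to all three,
# so pairwise and triplet correlations appear with subset-dependent time structure
subsets = [(0,), (1, 2), (0, 1, 2)]
probs = [0.5, 0.3, 0.2]
trains = gtas_spike_trains(rate=20.0, T=10.0, subsets=subsets, probs=probs,
                           shift_sds=[0.001, 0.005, 0.020], n_neurons=3)
print("spike counts per neuron:", [len(tr) for tr in trains])
```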

    Dynamic-based damage identification using neural network ensembles and damage index method

    This paper presents a vibration-based damage identification method that uses a "damage fingerprint" of a structure, in combination with Principal Component Analysis (PCA) and neural network techniques, to identify defects. The Damage Index (DI) method is used to extract unique damage patterns from a damaged beam structure, with the undamaged structure as the baseline. PCA is applied to reduce the effect of measurement noise and to optimize neural network training. The PCA-compressed DI values are then used as inputs to a hierarchy of neural network ensembles that estimate the locations and severities of various damage cases. The developed method is verified on a laboratory structure and in numerical simulations in which measurement noise is accounted for by adding different levels of white Gaussian noise. The damage identification results obtained from the neural network ensembles show that the presented method overcomes problems inherent in the conventional DI method, and that issues associated with field testing conditions are handled successfully in both the numerical and the experimental studies. Moreover, the neural network ensemble produces results that are more accurate than any of the individual neural networks.
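
    The pipeline described above (damage indices, PCA compression, then neural network ensembles estimating location and severity) can be sketched as follows. This is an illustrative mock-up rather than the authors' implementation: synthetic damage-index vectors stand in for the beam measurements, and a flat, averaged set of MLP regressors replaces the hierarchical ensembles; the data generator, noise level, and network sizes are all assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# synthetic damage cases: each sample is a vector of damage-index values along the
# beam, peaked at the damage location and scaled by severity, plus measurement noise
n_samples, n_elements = 300, 40
locations = rng.uniform(0.0, 1.0, n_samples)
severities = rng.uniform(0.1, 1.0, n_samples)
x_axis = np.linspace(0.0, 1.0, n_elements)
DI = severities[:, None] * np.exp(-((x_axis - locations[:, None]) / 0.05) ** 2)
DI += rng.normal(0.0, 0.05, DI.shape)                  # white Gaussian noise
targets = np.column_stack([locations, severities])     # [location, severity]

pca = PCA(n_components=8).fit(DI)                      # compress the DI patterns
Z = pca.transform(DI)

# ensemble of MLP regressors trained on bootstrap samples; predictions are averaged
ensemble = []
for seed in range(5):
    idx = rng.integers(0, n_samples, n_samples)
    ensemble.append(MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                                 random_state=seed).fit(Z[idx], targets[idx]))

pred = np.mean([m.predict(Z[:5]) for m in ensemble], axis=0)
print("predicted [location, severity]:\n", np.round(pred, 3))
print("true      [location, severity]:\n", np.round(targets[:5], 3))
```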

    TreeGrad: Transferring Tree Ensembles to Neural Networks

    Gradient Boosting Decision Trees (GBDTs) are popular machine learning algorithms, with dedicated implementations such as LightGBM as well as implementations in popular machine learning toolkits such as Scikit-Learn. Many implementations can only produce trees in an offline, greedy manner. We explore ways to convert existing GBDT implementations into known neural network architectures with minimal performance loss, so that decision splits can be updated in an online manner, and we provide extensions that allow split points to be altered as a neural architecture search problem. We provide learning bounds for our neural network.
    Comment: Technical report on an implementation of the Deep Neural Decision Forests algorithm, to accompany the implementation at https://github.com/chappers/TreeGrad. Update: please cite as: Siu, C. (2019). "Transferring Tree Ensembles to Neural Networks". International Conference on Neural Information Processing. Springer, 2019. arXiv admin note: text overlap with arXiv:1909.1179
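
    The core conversion step, re-expressing a tree's hard decision splits as differentiable gates so that split parameters could be updated by gradients, can be sketched as follows. The code is a simplified illustration in the spirit of soft decision trees (as in Deep Neural Decision Forests), not the TreeGrad implementation itself: a trained scikit-learn tree is evaluated with sigmoid routing, and the temperature `BETA` is an assumed parameter; driving it up recovers the original hard tree.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = np.where(X[:, 0] > 0.2, 1.0, -1.0) + 0.1 * rng.normal(size=500)

tree = DecisionTreeRegressor(max_depth=2).fit(X, y)
t = tree.tree_     # low-level arrays: children_left/right, feature, threshold, value

BETA = 25.0        # sharpness of the soft splits; large values recover the hard tree

def soft_predict(x, node=0, prob=1.0):
    """Route x down every path with sigmoid gates and mix the leaf values;
    the thresholds (and BETA) are now continuous parameters a gradient could update."""
    if t.children_left[node] == -1:                   # leaf: contribute its value
        return prob * t.value[node][0, 0]
    gate = 1.0 / (1.0 + np.exp(-BETA * (t.threshold[node] - x[t.feature[node]])))
    return (soft_predict(x, t.children_left[node], prob * gate) +
            soft_predict(x, t.children_right[node], prob * (1.0 - gate)))

hard = tree.predict(X[:5])
soft = np.array([soft_predict(x) for x in X[:5]])
print("hard tree predictions:", np.round(hard, 3))
print("soft tree predictions:", np.round(soft, 3))    # close to hard for large BETA
```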