17 research outputs found

    Fast Neural Network Ensemble Learning via Negative-Correlation Data Correction

    This letter proposes a new negative correlation (NC) learning method that is easy to implement and has two advantages: 1) it requires much less communication overhead than the standard NC method, and 2) it is applicable to ensembles of heterogeneous networks. © 2005 IEEE
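
    For context, standard NC learning trains each ensemble member on its own error plus a penalty that makes member errors negatively correlated across the ensemble. Below is a minimal sketch of that standard penalty, not the communication-efficient variant proposed in this letter; the function name nc_loss and the penalty weight lmbda are illustrative assumptions.

    # Minimal sketch of the standard negative correlation (NC) penalty for a
    # regression ensemble; illustrative only, not the method proposed in the letter.
    import numpy as np

    def nc_loss(member_outputs, target, lmbda=0.5):
        f = np.asarray(member_outputs, dtype=float)   # member predictions, shape (n_members,)
        f_ens = f.mean()                              # simple-average ensemble output
        losses = []
        for i in range(len(f)):
            # Penalty: correlate member i's deviation with the other members' deviations.
            penalty = (f[i] - f_ens) * (np.sum(f - f_ens) - (f[i] - f_ens))
            losses.append(0.5 * (f[i] - target) ** 2 + lmbda * penalty)
        return losses

    print(nc_loss([0.9, 1.2, 0.7], target=1.0))       # e.g. three members, one scalar target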

    Identification of structurally conserved residues of proteins in absence of structural homologs using neural network ensemble

    Motivation: Various bioinformatics and machine learning techniques have so far been applied to the identification of sequence- and functionally conserved residues in proteins. Although a few computational methods are available for predicting structurally conserved residues from protein structure, almost all of them require homologous structural information and structure-based alignments, which remain a bottleneck in protein structure comparison studies. In this work, we developed a neural network approach for identifying structurally important residues from a single protein structure, without using homologous structural information or structural alignments.

    Deep Prediction of Investor Interest: A Supervised Clustering Approach

    We propose a novel deep learning architecture suitable for predicting investor interest in a given asset over a given timeframe. This architecture performs investor clustering and modelling simultaneously. We first verify its superior performance on a simulated scenario inspired by real data, and then apply it to a large proprietary database from BNP Paribas Corporate and Institutional Banking.

    A dynamic ensemble learning algorithm for neural networks


    Towards Better Accuracy-efficiency Trade-offs: Divide and Co-training

    The width of a neural network matters, since increasing the width necessarily increases model capacity. However, the performance of a network does not improve linearly with width and soon saturates. In this case, we argue that increasing the number of networks (ensemble) can achieve better accuracy-efficiency trade-offs than purely increasing the width. To demonstrate this, one large network is divided into several small ones with respect to its parameters and regularization components. Each of these small networks has a fraction of the original one's parameters. We then train these small networks together and make them see different views of the same data to increase their diversity. During this co-training process, the networks can also learn from each other. As a result, the small networks can achieve better ensemble performance than the large one with few or no extra parameters or FLOPs, and they can also achieve faster inference by running concurrently on different devices. We validate this argument with 8 different neural architectures on common benchmarks through extensive experiments. The code is available at https://github.com/mzhaoshuai/Divide-and-Co-training
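
    A minimal sketch of the co-training idea described above, assuming a PyTorch-style setup: two small networks receive differently augmented views of the same batch and, in addition to the task loss, softly match each other's predictions. Names such as view_a, view_b, and mutual_weight are illustrative assumptions; see the authors' repository for the actual implementation.

    # Illustrative co-training of two small networks on two views of the same data,
    # with a symmetric soft-distillation term so the networks learn from each other.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    net_a = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, 10))
    net_b = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, 10))
    opt = torch.optim.SGD(list(net_a.parameters()) + list(net_b.parameters()), lr=0.1)

    def co_training_step(view_a, view_b, labels, mutual_weight=1.0):
        logits_a, logits_b = net_a(view_a), net_b(view_b)
        task_loss = F.cross_entropy(logits_a, labels) + F.cross_entropy(logits_b, labels)
        # Symmetric KL between softened predictions: each network mimics the other.
        mutual = (F.kl_div(F.log_softmax(logits_a, dim=1), F.softmax(logits_b, dim=1).detach(),
                           reduction="batchmean")
                  + F.kl_div(F.log_softmax(logits_b, dim=1), F.softmax(logits_a, dim=1).detach(),
                             reduction="batchmean"))
        loss = task_loss + mutual_weight * mutual
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    x = torch.randn(8, 3, 32, 32)                     # stand-in batch
    y = torch.randint(0, 10, (8,))
    co_training_step(x, x.flip(-1), y)                # second view: horizontally flipped copy

    At inference, the ensemble prediction would simply average the members' outputs, and the members can run concurrently on different devices.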

    A comparative study of surrogate musculoskeletal models using various neural network configurations

    Thesis (M.S.)--School of Computing and Engineering, University of Missouri--Kansas City, 2013. Thesis advisor: Reza R. Derakhshani. Title from PDF of title page, viewed on August 13, 2013. Includes vita and bibliographic references (pages 85-88). Contents: Introduction -- Methods -- Results -- Conclusion -- Future work -- Appendix A. Various linear and non-linear modeling techniques -- Appendix B. Error analysis.

    The central idea in musculoskeletal modeling is to predict body-level information (e.g. muscle forces) as well as tissue-level information (stress, strain, etc.). To analyze such models efficiently, surrogate models have been introduced that concurrently predict body-level and tissue-level information using multi-body and finite-element analysis, respectively. However, this kind of surrogate model is not an optimal solution, because the finite-element models it relies on are computation-intensive and require complex meshing, especially during real-time movement simulations. An alternative surrogate modeling method is to use artificial neural networks in place of finite-element models. The ultimate objective of this research is to predict the tissue-level stresses experienced by cartilage and ligaments during movement and to achieve concurrent simulation of muscle force and tissue stress using various surrogate neural network models, with stresses obtained from finite-element models providing the frame of reference. Over the last decade, neural networks have been successfully implemented in several biomechanical modeling applications. Their ability to learn from examples, simple implementation, and fast simulation times make them versatile and robust compared to other techniques. The neural network models are trained with reaction forces from multi-body models as inputs and stresses from finite-element models at the elements of interest as targets. Several configurations of static and dynamic neural networks were modeled, and accuracies close to 93% were achieved, with the correlation coefficient as the chosen measure of goodness. Using neural networks, simulation time was reduced by a factor of nearly 40,000 compared to the finite-element models. This study also confirms the theoretical expectation that special network configurations (average committee, stacked generalization, and negative correlation learning) provide considerably better results than the individual networks themselves.
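
    A minimal sketch of the committee-average surrogate idea described above, assuming scikit-learn and synthetic data standing in for the thesis's multi-body reaction forces and finite-element stresses; all names and sizes are illustrative.

    # Committee-average surrogate: several small MLPs map multi-body reaction forces
    # to a tissue-level stress; synthetic data replaces the real multi-body/FE models.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    reaction_forces = rng.normal(size=(500, 6))       # e.g. three joint forces + three moments
    element_stress = (reaction_forces @ rng.normal(size=(6, 1))
                      + 0.1 * rng.normal(size=(500, 1))).ravel()

    committee = [MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=s)
                 .fit(reaction_forces[:400], element_stress[:400]) for s in range(5)]

    # Committee prediction = simple average of the member predictions on held-out data.
    preds = np.mean([m.predict(reaction_forces[400:]) for m in committee], axis=0)
    print(np.corrcoef(preds, element_stress[400:])[0, 1])   # correlation coefficient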

    An optical character recognition application on mobile devices using artificial neural networks with the negative correlation learning algorithm

    The full text has been made openly accessible pursuant to the "Law on Amendments to the Higher Education Law and Certain Laws and Decree Laws" published in the Official Gazette No. 30352 dated 06.03.2018, and the "Directive on the Collection, Organization, and Opening to Access of Graduate Theses in Electronic Format" dated 18.06.2018.