11,708 research outputs found

    CMOS circuit implementations for neuron models

    The mathematical neuron models used as basic cells in popular neural network architectures and algorithms (NNA) are discussed. The most popular neuron models (without training) are considered, focusing on the hardware implementation of neuron models used in NNA and in the emulation of biological systems. Mathematical descriptions and block-diagram representations are utilized in an implementation-independent approach. Nonoscillatory and oscillatory models are discussed.
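    A common nonoscillatory neuron model of the kind surveyed here is the leaky integrate-and-fire neuron. The sketch below is illustrative only (the abstract does not name specific models, and all parameter names and values are assumptions); it simulates the membrane equation with a simple Euler step and a threshold-and-reset rule.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=0.02, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron (illustrative sketch).

    Returns the membrane-potential trace and the spike indices.
    """
    v = v_rest
    trace, spikes = [], []
    for i, current in enumerate(input_current):
        # Euler step of  tau * dv/dt = -(v - v_rest) + I
        v += dt * (-(v - v_rest) + current) / tau
        if v >= v_thresh:
            spikes.append(i)   # record a spike ...
            v = v_reset        # ... and reset the membrane potential
        trace.append(v)
    return np.array(trace), spikes

# A constant super-threshold input makes the neuron fire repeatedly
trace, spikes = lif_neuron(np.full(1000, 1.5))
```

    A hardware (CMOS) realization would replace the Euler update with a capacitor integrating a current and a comparator implementing the threshold; the software model above is the behavioral reference such circuits are checked against.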

    SEVEN: Deep Semi-supervised Verification Networks

    Verification determines whether two samples belong to the same class or not, and has important applications such as face and fingerprint verification, where thousands or millions of categories are present but each category has scarce labeled examples, presenting two major challenges for existing deep learning models. We propose a deep semi-supervised model named SEmi-supervised VErification Network (SEVEN) to address these challenges. The model consists of two complementary components. The generative component addresses the lack of supervision within each category by learning general salient structures from a large amount of data across categories. The discriminative component exploits the learned general features to mitigate the lack of supervision within categories, and also directs the generative component to find more informative structures of the whole data manifold. The two components are tied together in SEVEN, allowing end-to-end training of both. Extensive experiments on four verification tasks demonstrate that SEVEN significantly outperforms other state-of-the-art deep semi-supervised techniques when labeled data are in short supply. Furthermore, SEVEN is competitive with fully supervised baselines trained with a larger amount of labeled data, which indicates the importance of the generative component in SEVEN.
    Comment: 7 pages, 2 figures; accepted to the 2017 International Joint Conference on Artificial Intelligence (IJCAI-17)
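    The discriminative side of a verification network is typically trained with a pairwise loss that pulls same-class embeddings together and pushes different-class embeddings apart. The minimal contrastive-loss sketch below shows only that discriminative term (SEVEN's full objective also includes a generative reconstruction term, which is omitted here; the function and parameter names are illustrative, not the paper's code).

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, same, margin=1.0):
    """Pairwise verification loss over two batches of embeddings.

    same[i] is 1 if pair i shares a class, else 0. Matching pairs are
    pulled together; mismatched pairs are pushed at least `margin` apart.
    """
    d = np.linalg.norm(emb_a - emb_b, axis=1)            # Euclidean distances
    pos = same * d ** 2                                   # penalize distant matches
    neg = (1 - same) * np.maximum(margin - d, 0.0) ** 2   # penalize close mismatches
    return float(np.mean(pos + neg))

a = np.ones((2, 3))
zero_loss = contrastive_loss(a, a, np.ones(2))        # identical matching pairs
full_penalty = contrastive_loss(a, a, np.zeros(2))    # identical mismatched pairs
```

    At test time, verification then reduces to thresholding the embedding distance `d`: below the threshold, the two samples are declared the same class.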

    Application of Neural Networks to House Pricing and Bond Rating

    Feed-forward neural networks receive growing attention as a data-modelling tool in economic classification problems. It is well known that controlling the design of a neural network can be cumbersome. Inaccuracies may lead to a manifold of problems in the application, such as higher errors due to local optima, overfitting and ill-conditioning of the network, especially when the number of observations is small. In this paper we provide a method to overcome these difficulties by regulating the flexibility of the network and by rendering measures for validating the final network. In particular, a method is proposed to equilibrate the number of hidden neurons and the value of the weight-decay parameter based on 5- and 10-fold cross-validation. In the validation process the performance of the neural network is compared with a linear model with the same input variables. The degree of monotonicity with respect to each explanatory variable is calculated by numerical differentiation, and the outcomes of this analysis are compared to what is expected from economic theory. Furthermore, we propose a scheme for the application of monotonic neural networks to problems where monotonicity with respect to the explanatory variables is known a priori. The methods are illustrated in two case studies: predicting the price of housing in the Boston metropolitan area and the classification of bond ratings.
    Keywords: classification; error estimation; monotonicity; finance; neural-network models
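    The monotonicity check described here — numerically differentiating the fitted model with respect to each explanatory variable and comparing the sign against economic expectations — can be sketched as follows. This is an assumed implementation using central finite differences, with a toy closed-form "model" standing in for a trained network (all names are illustrative, not the paper's code).

```python
import numpy as np

def monotonicity_signs(model, X, feature, eps=1e-4):
    """Estimate sign(d model / d feature) at each sample in X
    via a central finite difference."""
    X_plus, X_minus = X.copy(), X.copy()
    X_plus[:, feature] += eps
    X_minus[:, feature] -= eps
    grad = (model(X_plus) - model(X_minus)) / (2.0 * eps)
    return np.sign(grad)

# Toy fitted model: output rises with column 0, falls with column 1,
# mimicking e.g. house price vs. rooms and vs. crime rate.
model = lambda X: 3.0 * X[:, 0] - 2.0 * X[:, 1] ** 3
X = np.random.default_rng(0).uniform(-1.0, 1.0, size=(50, 2))

signs_0 = monotonicity_signs(model, X, 0)  # expected: all +1 (increasing)
signs_1 = monotonicity_signs(model, X, 1)  # expected: all -1 (decreasing)
```

    If the estimated signs disagree with the a-priori economic direction on a nontrivial share of samples, that is a warning flag for the fitted network, and motivates the monotonic-network scheme the abstract proposes.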