
    Adaptive Regularization in Neural Network Modeling

    In this paper we address the important problem of optimizing regularization parameters in neural network modeling. The suggested optimization scheme is an extended version of the recently presented algorithm [24]. The idea is to minimize an empirical estimate -- like the cross-validation estimate -- of the generalization error with respect to the regularization parameters. This is done by employing a simple iterative gradient descent scheme with virtually no additional programming overhead compared to standard training. Experiments with feed-forward neural network models for time series prediction and classification tasks showed the viability and robustness of the algorithm. Moreover, we provide some simple theoretical examples to illustrate the potential and limitations of the proposed regularization framework. 1 Introduction Neural networks are flexible tools for time series processing and pattern recognition. By increasing the number of hidden neurons in a 2-layer architec..
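    The scheme described above can be illustrated with a toy sketch. A ridge-regression model stands in for the paper's neural networks (its closed-form solution keeps the example short), and a finite-difference gradient stands in for the analytic one the paper derives; all data, names, and constants below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a linear target with noise, split into training and validation sets.
X_tr, X_val = rng.normal(size=(40, 5)), rng.normal(size=(40, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y_tr = X_tr @ w_true + rng.normal(scale=0.5, size=40)
y_val = X_val @ w_true + rng.normal(scale=0.5, size=40)

def train(lam):
    """Ridge solution for a given regularization parameter lambda."""
    d = X_tr.shape[1]
    return np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ y_tr)

def val_error(lam):
    """Empirical estimate of the generalization error (validation MSE)."""
    w = train(lam)
    return np.mean((X_val @ w - y_val) ** 2)

# Iterative gradient descent on log(lambda), which keeps lambda positive.
# The gradient here is a central finite difference for brevity.
log_lam, lr, eps = np.log(10.0), 0.5, 1e-4
for _ in range(50):
    g = (val_error(np.exp(log_lam + eps)) - val_error(np.exp(log_lam - eps))) / (2 * eps)
    log_lam -= lr * g * np.exp(log_lam)   # chain rule: dE/dlog(lam) = lam * dE/dlam
```

    The key point of the paper survives the simplification: the regularization parameter is treated as just another quantity optimized by gradient descent, reusing the machinery already in place for training.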

    Sonar discrimination of cylinders from different angles using neural networks

    This paper describes an underwater object discrimination system applied to recognize cylinders of various compositions from different angles. The system is based on a new combination of simulated dolphin clicks, simulated auditory filters, and artificial neural networks. The model demonstrates its potential on real data collected from four different cylinders in an environment where the angles were controlled in order to evaluate the model's capability to recognize cylinders independently of angle. 1. INTRODUCTION Dolphins possess an excellent sonar system for solving underwater target discrimination and recognition tasks in shallow water (see e.g., [2]). This has inspired research into new sonar systems based on biological knowledge, i.e., modeling the dolphin's discrimination capabilities (see e.g., [4] and [5]). The fact that the inner ear of the dolphin has many similarities with the human inner ear makes it tempting to use knowledge from simulations of the human auditory system when t..

    Revisiting Boltzmann learning: parameter estimation in Markov random fields

    This contribution concerns a generalization of the Boltzmann Machine that allows us to use the learning rule for a much wider class of maximum likelihood and maximum a posteriori problems, including both supervised and unsupervised learning. Furthermore, the approach allows us to discuss regularization and generalization in the context of Boltzmann Machines. We provide an illustrative example concerning parameter estimation in an inhomogeneous Markov Field.
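    The Boltzmann learning rule at the heart of this line of work matches model correlations to data correlations. A minimal sketch, assuming a tiny fully visible Boltzmann machine whose expectations can be computed exactly by enumerating all states (the coupling matrix, sizes, and learning rate below are illustrative, not the paper's):

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 4  # four binary +/-1 units, all visible

# "True" symmetric couplings used to generate the target distribution.
W_true = rng.normal(scale=0.5, size=(n, n))
W_true = (W_true + W_true.T) / 2
np.fill_diagonal(W_true, 0.0)

states = np.array(list(itertools.product([-1, 1], repeat=n)), dtype=float)

def probs(W):
    """Exact Boltzmann distribution p(s) proportional to exp(0.5 * s^T W s)."""
    e = 0.5 * np.einsum('bi,ij,bj->b', states, W, states)
    p = np.exp(e - e.max())
    return p / p.sum()

def correlations(p):
    """Pairwise expectations <s_i s_j> under distribution p."""
    return np.einsum('b,bi,bj->ij', p, states, states)

p_data = probs(W_true)          # "clamped" statistics come from the data
W = np.zeros((n, n))
for _ in range(500):
    # Boltzmann learning rule: data correlations minus model correlations.
    grad = correlations(p_data) - correlations(probs(W))
    np.fill_diagonal(grad, 0.0)
    W += 0.2 * grad
```

    At the fixed point the model's pairwise correlations equal those of the data, which is exactly the maximum likelihood condition for this exponential family; for hidden units or large fields the exact enumeration would be replaced by sampling.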

    Adaptive Regularization of Neural Classifiers

    In this paper we present a regularization scheme which iteratively adapts the regularization parameters by minimizing the validation error. It is suggested to use the adaptive regularization scheme in conjunction with Optimal Brain Damage pruning to optimize the architecture and to avoid overfitting. Furthermore, we propose an improved neural classification architecture eliminating an inherent redundancy in the widely used SoftMax classification network. Numerical results demonstrate the viability of the method. INTRODUCTION Neural networks are flexible tools for pattern recognition, and by expanding the network architecture any relevant target function can be approximated [6]. In this contribution we present an improved version of the neural classifier architecture based on a feed-forward net with SoftMax [2] normalization presented in [7], [8], avoiding an inherently redundant parameterization. The outputs of the network estimate the class conditional posterior probabilities and the n..
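    The SoftMax redundancy mentioned above is easy to demonstrate: adding a constant to all K logits leaves the class probabilities unchanged, so one output unit carries no information. A common remedy, shown here as an illustrative reparametrization (not necessarily the paper's exact construction), is to let the network produce only K-1 logits and pin the K-th to zero:

```python
import numpy as np

def softmax_full(z):
    """Standard SoftMax over K logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

def softmax_reduced(z_head):
    """Redundancy-free variant: the network outputs only K-1 logits;
    the K-th logit is fixed at zero."""
    z = np.append(z_head, 0.0)
    e = np.exp(z - z.max())
    return e / e.sum()

z = np.array([2.0, 1.0, -0.5])
p1 = softmax_full(z)
p2 = softmax_full(z + 7.3)          # shift invariance: same probabilities
p3 = softmax_reduced(z[:2] - z[2])  # K-1 logits measured relative to the last class
```

    Here p1, p2, and p3 are all the same distribution, which is why the K-logit parameterization is redundant and the (K-1)-logit one is not.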

    Design of Robust Neural Network Classifiers

    This paper addresses a new framework for designing robust neural network classifiers. The network is optimized using the maximum a posteriori technique, i.e., the cost function is the sum of the log-likelihood and a regularization term (prior). In order to perform robust classification, we present a modified likelihood function which incorporates the potential risk of outliers in the data. This leads to the introduction of a new parameter, the outlier probability. Designing the neural classifier involves optimization of the network weights as well as the outlier probability and regularization parameters. We suggest adapting the outlier probability and regularization parameters by minimizing the error on a validation set, and a simple gradient descent scheme is derived. In addition, the framework allows for constructing a simple outlier detector. Experiments with artificial data demonstrate the potential of the suggested framework. 1. INTRODUCTION Neural networks are flexible tools for pattern rec..
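    One plausible reading of the modified likelihood is a mixture model: each observed label is correct with probability 1 - eps and drawn uniformly at random (an outlier) with probability eps. A minimal sketch under that assumption (the data and function names are illustrative, not the paper's):

```python
import numpy as np

def robust_nll(p_net, targets, eps, K):
    """Negative log-likelihood under an outlier mixture: the label is the
    network's class with prob. 1-eps, or uniform over K classes with prob. eps."""
    p = (1.0 - eps) * p_net[np.arange(len(targets)), targets] + eps / K
    return -np.mean(np.log(p))

# Toy example: a confident 2-class network and one mislabeled ("outlier") point.
p_net = np.array([[0.90, 0.10],
                  [0.80, 0.20],
                  [0.95, 0.05]])
targets = np.array([0, 0, 1])   # the last label disagrees with the network
nll_standard = robust_nll(p_net, targets, eps=0.0, K=2)  # dominated by the outlier
nll_robust = robust_nll(p_net, targets, eps=0.1, K=2)    # outlier term is floored
```

    The floor eps/K bounds the per-example loss, so a single mislabeled point can no longer dominate training; comparing eps/K against the mixture probability of each example also yields the simple outlier detector the abstract mentions.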

    Fisheries Acoustics and Acoustic Target Classification - Report from the COGMAR/CRIMAC Workshop on Machine Learning and Fisheries Acoustics

    Source at https://www.hi.no/hi/nettrapporter/rapport-fra-havforskningen-en-2021-25. This report documents a workshop organised by the COGMAR and CRIMAC projects. The objective of the workshop was twofold. The first objective was to give an overview of the status of ongoing work using machine learning for Acoustic Target Classification (ATC); machine learning methods, and in particular deep learning models, are currently being used across a range of different fields, including ATC. The second objective was to familiarise participants with a machine learning background with fisheries acoustics, and to discuss a way forward towards a standard framework for sharing data and code. This includes data standards, standard processing steps, and algorithms for efficient access to data for machine learning frameworks. The results from the discussion contribute to the process in ICES for developing a community standard for fisheries acoustics data.

    Regularized Parameter Estimation in an Inhomogeneous Cellular Network

    The Maximum A Posteriori (MAP) approach has found ample use in signal processing [12, 4]. When applied to image data it leads to algorithms that map well onto networks of locally connected, simple processing elements, i.e., cellular neural networks [2]. Furthermore, we find that the MAP approach allows, in a very convenient way, for the specification of qualitative and flexible priors. Using adaptation schemes, the parametrized priors may be trained to optimal performance in a particular environment, see e.g., [3, 6]. In most applications so far, the MAP approach has been used to derive homogeneous Markov Field models. In many image processing applications the visual field is inhomogeneous, e.g., in sonar or radar imagery, where the two axes implement deflection and range respectively. In this note we formulate simple inhomogeneous MAP models, hence involving space-variant parametrizations of the prior. We use the Boltzmann Machine learning rule [8] for parame..
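    A space-variant prior can be sketched in one dimension: the MAP objective is a data-fit (likelihood) term plus a smoothness (prior) term whose weight differs from site to site. This is a minimal illustrative example, not the paper's 2-D Markov Field model, and all constants below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy 1-D "image" row; the goal is a MAP estimate of the clean signal.
x_true = np.sin(np.linspace(0, 3 * np.pi, 50))
y = x_true + rng.normal(scale=0.3, size=50)

# Inhomogeneous prior: a different smoothness weight per site,
# mimicking e.g. range-dependent statistics in sonar imagery.
beta = np.full(50, 2.0)
beta[25:] = 20.0   # smooth the second half much more strongly

def neg_log_posterior(x):
    likelihood = 0.5 * np.sum((x - y) ** 2)            # Gaussian noise model
    prior = 0.5 * np.sum(beta[1:] * np.diff(x) ** 2)   # space-variant smoothness
    return likelihood + prior

# Gradient descent on the MAP objective, starting from the noisy data.
x = y.copy()
for _ in range(500):
    g = x - y                      # gradient of the likelihood term
    d = beta[1:] * np.diff(x)      # gradient contributions of the prior term
    g[1:] += d
    g[:-1] -= d
    x -= 0.01 * g
```

    Making beta a trainable parameter of the prior, rather than a fixed constant, is the step where an adaptation scheme such as the Boltzmann learning rule comes in.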