
    A novel approach to error function minimization for feedforward neural networks

    Feedforward neural networks with error backpropagation (FFBP) are widely applied to pattern recognition. One general problem encountered with this type of neural network is the uncertainty of whether the minimization procedure has converged to a global minimum of the cost function. To overcome this problem, a novel approach to minimizing the error function is presented. It allows the approach to the global minimum to be monitored and, as an outcome, removes several ambiguities related to the choice of free parameters of the minimization procedure. Comment: 11 pages, LaTeX, 3 figures appended as uuencoded file
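
    To make the setting concrete, the following is a minimal sketch of the standard FFBP baseline the abstract refers to: gradient descent on a mean-squared error function, with the cost monitored each epoch. It does not reproduce the paper's novel minimization scheme; the toy data, network size and learning rate are illustrative assumptions.

    ```python
    # Standard feedforward net with error backpropagation (FFBP), monitoring
    # the MSE cost each epoch. Illustrative baseline only.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))                  # toy input patterns
    y = (X.sum(axis=1, keepdims=True) > 0) * 1.0   # toy binary targets

    W1 = rng.normal(scale=0.5, size=(4, 8))        # input -> hidden weights
    W2 = rng.normal(scale=0.5, size=(8, 1))        # hidden -> output weights
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    lr = 0.5

    for epoch in range(500):
        h = sigmoid(X @ W1)                        # hidden activations
        out = sigmoid(h @ W2)                      # network output
        err = out - y
        cost = 0.5 * np.mean(err ** 2)             # error function being minimized

        # backpropagate the error signal and take a gradient step
        d_out = err * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out / len(X)
        W1 -= lr * X.T @ d_h / len(X)

        if epoch % 100 == 0:
            print(f"epoch {epoch:4d}  cost {cost:.4f}")   # monitor convergence
    ```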

    Random Neural Networks and Optimisation

    In this thesis we introduce new models and learning algorithms for the Random Neural Network (RNN), and we develop RNN-based and other approaches for the solution of emergency management optimisation problems. With respect to RNN developments, two novel supervised learning algorithms are proposed. The first is a gradient descent algorithm for an RNN extension model that we have introduced, the RNN with synchronised interactions (RNNSI), which was inspired by the synchronised firing activity observed in brain neural circuits. The second algorithm is based on modelling the signal-flow equations in the RNN as a nonnegative least squares (NNLS) problem; the NNLS problem is solved using a limited-memory quasi-Newton algorithm specifically designed for the RNN case. Regarding the investigation of emergency management optimisation problems, we examine combinatorial assignment problems that require fast, distributed and close-to-optimal solutions under information uncertainty. We consider three different problems with the above characteristics: the assignment of emergency units to incidents with injured civilians (AEUI), the assignment of assets to tasks under execution uncertainty (ATAU), and the deployment of a robotic network to establish communication with trapped civilians (DRNCTC). AEUI is solved by training an RNN tool with instances of the optimisation problem and then using the trained RNN for decision making; training is achieved using the developed learning algorithms. For the solution of the ATAU problem, we introduce two different approaches: the first is based on mapping parameters of the optimisation problem to RNN parameters, and the second on solving a sequence of minimum cost flow problems on appropriately constructed networks with estimated arc costs. For the exact solution of the DRNCTC problem, we develop a mixed-integer linear programming formulation based on network flows. Finally, we design and implement distributed heuristic algorithms for the deployment of robots when the civilian locations are known or uncertain.
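
    As a rough illustration of the NNLS/quasi-Newton ingredient mentioned above, the sketch below solves a generic nonnegative least-squares problem with SciPy's limited-memory quasi-Newton solver (L-BFGS-B with box constraints). The RNN-specific signal-flow formulation and the thesis's tailored algorithm are not reproduced; the matrix A and vector b are arbitrary stand-ins.

    ```python
    # Generic NNLS: minimise 0.5 * ||A x - b||^2 subject to x >= 0,
    # using a limited-memory quasi-Newton method with box constraints.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    A = rng.normal(size=(50, 10))          # stand-in design matrix
    b = A @ np.abs(rng.normal(size=10))    # ensures a nonnegative solution exists

    def objective(x):
        r = A @ x - b
        return 0.5 * r @ r, A.T @ r        # objective value and its gradient

    res = minimize(objective, x0=np.zeros(10), jac=True,
                   method="L-BFGS-B", bounds=[(0, None)] * 10)
    print("solution nonnegative:", bool(np.all(res.x >= -1e-9)),
          "residual:", res.fun)
    ```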

    Cost Functions and Model Combination for VaR-based Asset Allocation using Neural Networks

    We introduce an asset-allocation framework based on the active control of the value-at-risk of the portfolio. Within this framework, we compare two paradigms for making the allocation using neural networks. The first uses the network to make a forecast of asset behavior, in conjunction with a traditional mean-variance allocator for constructing the portfolio. The second uses the network to make the portfolio allocation decisions directly. We consider a method for performing soft input variable selection, and show its considerable utility. We use model combination (committee) methods to systematize the choice of hyperparameters during training. We show that committees using both paradigms significantly outperform the benchmark market performance. Keywords: value-at-risk, asset allocation, financial performance criterion, model combination, recurrent multilayer neural networks
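
    For orientation only, here is a toy sketch of two ingredients named in the abstract: a historical value-at-risk estimate for a portfolio, and a "committee" that averages the allocations proposed by several models. The return series, member allocations and 95% confidence level are invented for illustration and are unrelated to the paper's experiments.

    ```python
    # Committee averaging of model allocations plus a historical VaR estimate.
    import numpy as np

    rng = np.random.default_rng(2)
    returns = rng.normal(0.0005, 0.01, size=(250, 3))   # daily returns, 3 assets

    # committee: average the allocation vectors proposed by individual models
    member_allocations = [np.array([0.5, 0.3, 0.2]),
                          np.array([0.4, 0.4, 0.2]),
                          np.array([0.6, 0.2, 0.2])]
    weights = np.mean(member_allocations, axis=0)

    portfolio_returns = returns @ weights
    var_95 = -np.percentile(portfolio_returns, 5)        # 1-day 95% historical VaR
    print(f"committee weights {weights}, 1-day 95% VaR {var_95:.4f}")
    ```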

    Theory and modeling of the magnetic field measurement in LISA PathFinder

    The magnetic diagnostics subsystem of the LISA Technology Package (LTP) on board the LISA PathFinder (LPF) spacecraft includes a set of four tri-axial fluxgate magnetometers, intended to measure with high precision the magnetic field at their respective positions. However, their readouts do not provide a direct measurement of the magnetic field at the positions of the test masses, and hence an interpolation method must be designed and implemented to obtain the values of the magnetic field at these positions. Such an interpolation process faces serious difficulties: the size of the interpolation region is excessive for a linear interpolation to be reliable, while, on the other hand, the number of magnetometer channels does not provide sufficient data to go beyond the linear approximation. We describe an alternative method to address this issue by means of neural network algorithms. The key point in this approach is the ability of neural networks to learn from suitable training data representing the behavior of the magnetic field. Despite the relatively large distance between the test masses and the magnetometers, and the insufficient number of data channels, we find that our artificial neural network algorithm is able to reduce the estimation errors of the field and gradient to levels below 10%, a quite satisfactory result. Learning efficiency can best be improved by making use of data obtained in on-ground measurements prior to mission launch, in all relevant satellite locations and in real operating conditions. Reliable information of this kind appears to be essential for a meaningful assessment of magnetic noise in the LTP. Comment: 10 pages, 8 figures, 2 tables, submitted to Physical Review
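
    The interpolation idea can be illustrated with a small regression network that maps the 12 magnetometer channels (four tri-axial sensors) to the field components at a test-mass position. The sketch below uses a synthetic field model and scikit-learn's MLPRegressor purely as stand-ins; it is not the mission data or the authors' network.

    ```python
    # Train a small feedforward network to estimate the field at an unmeasured
    # position from 12 magnetometer channels. Synthetic data, illustrative only.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(3)
    n = 2000
    sources = rng.normal(size=(n, 3))                        # latent field parameters
    readouts = np.tanh(sources @ rng.normal(size=(3, 12)))   # 12 magnetometer channels
    target = sources @ rng.normal(size=(3, 3))               # field at test-mass position

    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    net.fit(readouts[:1500], target[:1500])

    pred = net.predict(readouts[1500:])
    rel_err = np.abs(pred - target[1500:]).mean() / np.abs(target[1500:]).mean()
    print(f"mean relative estimation error: {rel_err:.2%}")
    ```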

    Bayesian neural network learning for repeat purchase modelling in direct marketing.

    We focus on purchase incidence modelling for a European direct mail company. Response models based on statistical and neural network techniques are contrasted. The evidence framework of MacKay is used as an example implementation of Bayesian neural network learning, a method that is fairly robust with respect to problems typically encountered when implementing neural networks. The automatic relevance determination (ARD) method, an integrated feature of this framework, makes it possible to assess the relative importance of the inputs. The basic response models use operationalisations of the traditionally discussed Recency, Frequency and Monetary (RFM) predictor categories. In a second experiment, the RFM response framework is enriched by the inclusion of other (non-RFM) customer profiling predictors. We contribute to the literature by providing experimental evidence that: (1) Bayesian neural networks offer a viable alternative for purchase incidence modelling; (2) a combined use of all three RFM predictor categories is advocated by the ARD method; (3) the inclusion of non-RFM variables significantly augments the predictive power of the constructed RFM classifiers; (4) this rise is mainly attributed to the inclusion of customer/company interaction variables and a variable measuring whether a customer uses the credit facilities of the direct mailing company.
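
    To give a flavour of what "relative importance of the inputs" means in practice, the sketch below scores each input of a trained network by the norm of its first-layer weights. This is only a crude proxy for MacKay's evidence-based ARD, and the RFM-style feature names and synthetic data are illustrative assumptions.

    ```python
    # Crude input-relevance scores from first-layer weight norms
    # (a stand-in for ARD, not MacKay's evidence framework itself).
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(4)
    X = rng.normal(size=(1000, 3))
    # response depends on "recency" and "frequency", not on the "monetary" column
    y = ((-1.5 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * rng.normal(size=1000)) > 0).astype(int)

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
    relevance = np.linalg.norm(clf.coefs_[0], axis=1)   # one score per input
    for name, score in zip(["recency", "frequency", "monetary"], relevance):
        print(f"{name:9s} relevance proxy {score:.2f}")
    ```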

    Online Structured Laplace Approximations For Overcoming Catastrophic Forgetting

    We introduce the Kronecker-factored online Laplace approximation for overcoming catastrophic forgetting in neural networks. The method is grounded in a Bayesian online learning framework, where we recursively approximate the posterior after every task with a Gaussian, leading to a quadratic penalty on changes to the weights. The Laplace approximation requires calculating the Hessian around a mode, which is typically intractable for modern architectures. In order to make our method scalable, we leverage recent block-diagonal Kronecker-factored approximations to the curvature. Our algorithm achieves over 90% test accuracy across a sequence of 50 instantiations of the permuted MNIST dataset, substantially outperforming related methods for overcoming catastrophic forgetting. Comment: 13 pages, 6 figures
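
    The quadratic penalty at the heart of the method can be sketched as follows, using a diagonal precision in place of the Kronecker-factored curvature for brevity; the toy numbers stand in for per-task curvature estimates and are not taken from the paper.

    ```python
    # Bayesian online Laplace idea with a diagonal precision: after each task,
    # the Gaussian posterior's mean becomes the current mode and its precision
    # accumulates the observed curvature; the next task pays a quadratic penalty
    # for moving the weights away from that mean.
    import numpy as np

    def penalised_loss(task_loss, theta, prior_mean, prior_precision):
        """task_loss + 0.5 * (theta - mean)^T diag(precision) (theta - mean)."""
        diff = theta - prior_mean
        return task_loss + 0.5 * np.sum(prior_precision * diff ** 2)

    def online_laplace_update(prior_precision, task_hessian_diag, theta):
        """Recursive Gaussian approximation after finishing a task."""
        return theta.copy(), prior_precision + task_hessian_diag

    # toy usage: two "tasks", each contributing curvature around its mode
    theta = np.zeros(5)
    mean, precision = theta.copy(), np.full(5, 1e-2)     # weak initial prior
    for task in range(2):
        theta = theta + 0.1 * (task + 1)                 # pretend training moved the weights
        mean, precision = online_laplace_update(precision, np.ones(5) * (task + 1), theta)

    theta_candidate = theta + 0.05                       # hypothetical weights on the next task
    print("penalised loss:", penalised_loss(0.3, theta_candidate, mean, precision))
    print("accumulated precision:", precision)
    ```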

    Episodic Learning with Control Lyapunov Functions for Uncertain Robotic Systems

    Many modern nonlinear control methods aim to endow systems with guaranteed properties, such as stability or safety, and have been successfully applied to the domain of robotics. However, model uncertainty remains a persistent challenge, weakening theoretical guarantees and causing implementation failures on physical systems. This paper develops a machine learning framework centered around Control Lyapunov Functions (CLFs) to adapt to parametric uncertainty and unmodeled dynamics in general robotic systems. Our proposed method proceeds by iteratively updating estimates of Lyapunov function derivatives and improving controllers, ultimately yielding a stabilizing quadratic program model-based controller. We validate our approach on a planar Segway simulation, demonstrating substantial performance improvements by iteratively refining a base model-free controller.
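
    The CLF-based quadratic program underlying the approach can be illustrated with the classical pointwise min-norm controller: minimise ||u||^2 subject to L_f V + L_g V u <= -lambda V. With a single constraint the QP has the closed-form solution used below; the scalar dynamics and gains are toy assumptions, not the paper's Segway model.

    ```python
    # Pointwise min-norm CLF controller: smallest-norm input that makes the
    # Lyapunov function decrease at the prescribed rate.
    import numpy as np

    def min_norm_clf_controller(LfV, LgV, V, lam=1.0):
        """Smallest-norm u satisfying LfV + LgV @ u <= -lam * V."""
        a = LfV + lam * V
        b = np.atleast_1d(LgV)
        if a <= 0:                      # constraint already satisfied by u = 0
            return np.zeros_like(b)
        return -a * b / (b @ b)         # closed-form solution with the constraint active

    # toy usage: scalar system x_dot = x + u with V = 0.5 * x^2
    x = 2.0
    V, LfV, LgV = 0.5 * x ** 2, x * x, np.array([x])
    u = min_norm_clf_controller(LfV, LgV, V)
    print("control input:", u, "resulting V_dot:", LfV + LgV @ u)   # V_dot = -lam * V
    ```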