Theoretical Interpretations and Applications of Radial Basis Function Networks
Medical applications have usually treated Radial Basis Function Networks simply as Artificial Neural Networks. However, RBFNs are knowledge-based networks that can be interpreted in several ways: as Artificial Neural Networks, Regularization Networks, Support Vector Machines, Wavelet Networks, Fuzzy Controllers, Kernel Estimators, or Instance-Based Learners. A survey of these interpretations and of their corresponding learning algorithms is provided, together with a brief survey of dynamic learning algorithms. The interpretations of RBFNs can suggest applications that are particularly interesting in medical domains.
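The "kernel estimator" reading of an RBFN mentioned above can be made concrete with a minimal sketch (hypothetical code, not from the paper): a Gaussian RBF network with fixed centers whose output-layer weights are fit by linear least squares. The centers, width, and target function here are illustrative choices.

```python
import numpy as np

def rbf_design(x, centers, width):
    """Gaussian RBF features for 1-D inputs x (one column per center)."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
y = np.sin(x) + 0.05 * rng.standard_normal(x.size)  # noisy target

centers = np.linspace(-3, 3, 10)     # fixed centers (e.g. from clustering)
Phi = rbf_design(x, centers, width=0.8)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # output-layer weights only

y_hat = Phi @ w  # network prediction
```

With the hidden layer fixed, training reduces to a linear problem — one reason RBFNs admit so many interpretations (kernel estimator, regularization network, and so on).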
Learning backward induction: a neural network agent approach
This paper addresses the question of whether neural networks (NNs), a realistic cognitive model of human information processing, can learn to backward induce in a two-stage game with a unique subgame-perfect Nash equilibrium. The NNs were found to predict the Nash equilibrium approximately 70% of the time in new games. Similarly to humans, the neural network agents were also found to suffer from subgame and truncation inconsistency, supporting the contention that they are appropriate models of general learning in humans. The agents were found to behave in a boundedly rational manner as a result of the endogenous emergence of decision heuristics. In particular, a very simple heuristic, socialmax, which chooses the cell with the highest social payoff, explains their behavior approximately 60% of the time, whereas the ownmax heuristic, which simply chooses the cell with the maximum payoff for that agent, fares worse, explaining behavior roughly 38% of the time, albeit still significantly better than chance. These two heuristics were found to be ecologically valid for the backward induction problem, as they predicted the Nash equilibrium in 67% and 50% of the games, respectively. Compared to various standard classification algorithms, the NNs were found to be only slightly more accurate than standard discriminant analysis. However, the latter does not model the dynamic learning process and has an ad hoc postulated functional form. In contrast, a NN agent's behavior evolves with experience and is capable of taking on any functional form according to the universal approximation theorem.
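The two heuristics described above are simple enough to sketch directly. The payoff table below is a made-up example (not from the paper), with each cell holding the agent's own payoff and the other player's payoff; socialmax picks the cell maximizing their sum, ownmax the cell maximizing the agent's own payoff.

```python
import numpy as np

# Hypothetical 2x2 payoff table: own[i, j] is the agent's payoff in
# cell (i, j); other[i, j] is the other player's payoff there.
own   = np.array([[3.0, 1.0],
                  [0.0, 4.0]])
other = np.array([[2.0, 6.0],
                  [1.0, 0.0]])

social = own + other  # joint payoff per cell

# socialmax: cell with the highest social (joint) payoff
socialmax_cell = np.unravel_index(np.argmax(social), social.shape)
# ownmax: cell with the highest payoff for this agent alone
ownmax_cell = np.unravel_index(np.argmax(own), own.shape)

print(socialmax_cell)  # (0, 1): social payoff 1 + 6 = 7 is the largest
print(ownmax_cell)     # (1, 1): own payoff 4 is the largest
```

In this example the two heuristics disagree, which is exactly the situation where their different explanatory power (roughly 60% vs. 38% in the paper) can be distinguished.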
Improved model identification for nonlinear systems using a random subsampling and multifold modelling (RSMM) approach
In nonlinear system identification, the available observed data are conventionally partitioned into two parts: the training data, used for model identification, and the test data, used for model performance testing. This sort of ‘hold-out’ or ‘split-sample’ data partitioning method is convenient, and the associated model identification procedure is in general easy to implement. The resultant model obtained from such a once-partitioned single training dataset, however, may occasionally lack the robustness and generalisation needed to represent future unseen data, because the performance of the identified model may depend heavily on how the data partition is made. To overcome this drawback of the hold-out data partitioning method, this study presents a new random subsampling and multifold modelling (RSMM) approach to produce less biased or, preferably, unbiased models. The basic idea and the associated procedure are as follows. First, generate K training datasets (and also K validation datasets) using a K-fold random subsampling method. Second, detect significant model terms and identify a common model structure that fits all K datasets, using a newly proposed common model selection approach called the multiple orthogonal search algorithm. Finally, estimate and refine the model parameters for the identified common-structured model using a multifold parameter estimation method. The proposed method can produce robust models with better generalisation performance.
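The first RSMM step — generating K training/validation splits by random subsampling — can be sketched as follows (hypothetical code illustrating the general idea, not the authors' implementation; the 80/20 split fraction is an assumption):

```python
import numpy as np

def random_subsample(n, K, train_frac=0.8, seed=0):
    """Generate K random (train_idx, val_idx) splits of n samples.

    Unlike classic K-fold cross-validation, each split is an
    independent random partition, so validation sets may overlap
    across folds.
    """
    rng = np.random.default_rng(seed)
    splits = []
    for _ in range(K):
        perm = rng.permutation(n)          # fresh random ordering
        cut = int(train_frac * n)
        splits.append((perm[:cut], perm[cut:]))
    return splits

splits = random_subsample(n=100, K=5)
for train_idx, val_idx in splits:
    # each split partitions the data: no index appears in both parts
    assert set(train_idx).isdisjoint(val_idx)
```

Model-term selection would then be run jointly over all K training sets to find a single common structure, with the K validation sets guarding against structures that fit only one particular partition.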
Incorporating Second-Order Functional Knowledge for Better Option Pricing
Incorporating prior knowledge of a particular task into the architecture of a learning algorithm can greatly improve generalization performance. We study here a case where we know that the function to be learned is non-decreasing in its two arguments and convex in one of them. For this purpose we propose a class of functions, similar to multi-layer neural networks, that (1) has those properties and (2) is a universal approximator of continuous functions with these and other properties. We apply this new class of functions to the task of modeling the price of call options. Experiments show improvements on regressing the price of call options using the new types of function classes that incorporate the a priori constraints. Keywords: prior knowledge, learning algorithm, universal approximator, call options.
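One standard way to build such constrained function classes — sketched below as a hedged illustration, not the paper's exact construction — is to force weights to be positive via an exponential reparameterization. A positive combination of softplus units with positive input weights is non-decreasing in both arguments; it is in fact convex in both, which is a stronger condition than the paper's class requires (convexity in only one argument).

```python
import numpy as np

def softplus(z):
    """Smooth, convex, increasing approximation of max(0, z)."""
    return np.log1p(np.exp(z))

rng = np.random.default_rng(1)
H = 8                                     # hidden units (arbitrary choice)
a = np.exp(rng.standard_normal((H, 2)))   # positive input weights
b = rng.standard_normal(H)                # unconstrained biases
w = np.exp(rng.standard_normal(H))        # positive output weights

def f(x1, x2):
    # Positive sum of softplus of affine maps with positive slopes:
    # non-decreasing in x1 and x2, convex in x1 (and here in x2 too).
    pre = a[:, 0] * x1 + a[:, 1] * x2 + b
    return float(w @ softplus(pre))

# Sanity check on a 1-D slice: values are non-decreasing and the
# second differences are non-negative (convexity).
xs = np.linspace(-1.0, 1.0, 50)
vals = np.array([f(x, 0.0) for x in xs])
assert np.all(np.diff(vals) >= 0.0)
assert np.all(np.diff(vals, 2) >= -1e-9)
```

In an option-pricing setting the two arguments would typically be quantities like moneyness and time to maturity; during training, gradients flow through the exponential so the positivity constraints are maintained automatically.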
Universal Approximation of Parametric Optimization via Neural Networks with Piecewise Linear Policy Approximation
Parametric optimization solves a family of optimization problems as a function of parameters. It is a critical component in situations where optimal decision making is repeatedly performed for updated parameter values, but computation becomes challenging when complex problems must be solved in real time. In this study, we therefore present theoretical foundations for approximating the optimal policy of a parametric optimization problem with neural networks, and derive conditions under which the Universal Approximation Theorem applies to parametric optimization problems by explicitly constructing a piecewise linear policy approximation. This study fills the gap in formally analyzing the constructed piecewise linear approximation in terms of feasibility and optimality, and shows that neural networks (with ReLU activations) are valid approximators for this approximation in terms of generalization and approximation error. Furthermore, based on these theoretical results, we propose a strategy to improve the feasibility of the approximated solution and discuss training with suboptimal solutions.
Comment: 17 pages, 2 figures, preprint, under review
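The fact underlying the abstract above — that a ReLU network computes a continuous piecewise linear function — can be seen in a tiny example (an illustrative sketch, not the paper's construction): a two-unit ReLU layer exactly reproduces |x|, a simple piecewise linear "policy" of the parameter x.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Two hidden units: one responds to x > 0, the other to x < 0.
W1 = np.array([[1.0], [-1.0]])  # hidden-layer weights
w2 = np.array([1.0, 1.0])       # output-layer weights

def net(x):
    """Computes relu(x) + relu(-x), i.e. exactly |x|."""
    return float(w2 @ relu(W1 @ np.array([x])))

for x in (-2.0, -0.5, 0.0, 1.5):
    assert net(x) == abs(x)  # exact, since only +/- and max are used
```

Deeper or wider ReLU networks compose such pieces into richer piecewise linear maps, which is why they are natural candidates for approximating the piecewise linear optimal policies of parametric linear programs.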