Neural networks in geophysical applications
Neural networks are increasingly popular in geophysics.
As universal approximators, these tools can approximate any continuous function with arbitrary precision, and hence may contribute to solving a variety of geophysical problems. However, many of the methods and techniques recently developed to improve the performance and ease of use of neural networks do not appear to be widely known in the geophysical community, so the power of these tools has not yet been explored to its full extent. In this paper, techniques are described for faster training, better overall performance (i.e., generalization), and the automatic estimation of network size and architecture.
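The universal-approximation property can be made concrete with a small sketch (illustrative only, not from the paper): a single hidden layer of randomly weighted tanh units, with only the linear output layer fitted by least squares, already approximates a smooth target closely. All sizes and scales below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: a smooth function on [-pi, pi].
x = np.linspace(-np.pi, np.pi, 400)[:, None]
y = np.sin(x).ravel()

# One hidden layer of tanh units with random input weights; only the
# linear output layer is fit, by least squares.
n_hidden = 50
W = rng.normal(size=(1, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(x @ W + b)                 # (400, 50) hidden activations
coef, *_ = np.linalg.lstsq(H, y, rcond=None)

max_err = np.max(np.abs(H @ coef - y))
print(f"max abs error: {max_err:.4f}")
```

Adding hidden units drives the error down further, which is the practical content of the universal-approximation claim.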
On Some Integrated Approaches to Inference
We present arguments for the formulation of a unified approach to different
standard continuous inference methods from partial information. It is claimed
that an explicit partition of information into a priori (prior knowledge) and a
posteriori information (data) is an important way of standardizing inference
approaches so that they can be compared on a normative scale, and so that
notions of optimal algorithms become farther-reaching. The inference methods
considered include neural network approaches, information-based complexity, and
Monte Carlo, spline, and regularization methods. The model is an extension of
currently used continuous complexity models, with a class of algorithms in the
form of optimization methods, in which an optimization functional (involving
the data) is minimized. This extends the family of current approaches in
continuous complexity theory, which include the use of interpolatory algorithms
in worst and average case settings.
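One concrete member of the family of algorithms the abstract describes, in which an optimization functional involving the data is minimized, is Tikhonov regularization. The sketch below (dimensions and noise level are illustrative assumptions) minimizes J(x) = ||Ax − b||² + λ||x||², whose minimizer has the closed form x = (AᵀA + λI)⁻¹Aᵀb.

```python
import numpy as np

rng = np.random.default_rng(0)

# A linear inverse problem: recover x_true from noisy data b = A x + noise
# by minimizing the data-dependent functional ||A x - b||^2 + lam ||x||^2.
n, m = 30, 20
A = rng.normal(size=(n, m))
x_true = rng.normal(size=m)
b = A @ x_true + 0.01 * rng.normal(size=n)

lam = 1e-3
# Closed-form minimizer of the Tikhonov functional.
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ b)

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative error: {rel_err:.3f}")
```

Spline smoothing and many regularization methods fit the same template with different penalty terms, which is what allows them to be compared on a common normative scale.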
Estimation of physical variables from multichannel remotely sensed imagery using a neural network: Application to rainfall estimation
Satellite-based remotely sensed data have the potential to provide hydrologically relevant information about spatially and temporally varying physical variables. A methodology for estimating such variables from multichannel remotely sensed data is presented; the approach is based on a modified counterpropagation neural network (MCPN) and is both effective and efficient at building complex nonlinear input-output function mappings from large amounts of data. An application to high-resolution estimation of the spatial and temporal variation of surface rainfall using geostationary satellite infrared and visible imagery is presented. Test results also indicate that spatially and temporally sparse ground-based observations can be assimilated via an adaptive implementation of the MCPN method, thereby allowing on-line improvement of the estimates.
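The core idea of a counterpropagation-style mapping can be sketched as follows (a simplified stand-in, not the paper's MCPN; the data and sizes are synthetic): a competitive layer of prototypes quantizes the multichannel input, and each prototype stores the mean of the target values assigned to it, yielding a piecewise-constant input-output mapping.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for multichannel imagery: 3 "channels" per pixel
# and a nonlinear target playing the role of a rain rate.
X = rng.normal(size=(500, 3))
y = X[:, 0]**2 + 0.5 * X[:, 1]

# Competitive layer: K prototypes initialized from the data.
K = 50
protos = X[rng.choice(len(X), K, replace=False)].copy()

# Assign each sample to its nearest prototype; the "output layer" is a
# lookup table holding the mean target in each Voronoi cell.
d = ((X[:, None, :] - protos[None, :, :])**2).sum(-1)
assign = d.argmin(1)
table = np.array([y[assign == k].mean() if np.any(assign == k) else 0.0
                  for k in range(K)])

def predict(x):
    k = ((protos - x)**2).sum(1).argmin()
    return table[k]

mae = np.mean([abs(predict(x) - t) for x, t in zip(X, y)])
print(f"train MAE: {mae:.2f}")
```

Sparse ground observations can be assimilated in such a scheme by updating the lookup table entries on-line as new target values arrive, which is the spirit of the adaptive implementation the abstract mentions.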
Self-growing neural network architecture using crisp and fuzzy entropy
The paper briefly describes the self-growing neural network algorithm, CID2, which makes decision trees equivalent to hidden layers of a neural network. The algorithm generates a feedforward architecture using crisp and fuzzy entropy measures. The results of a real-life recognition problem of distinguishing defects in a glass ribbon and of a benchmark problem of differentiating two spirals are shown and discussed.
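The two kinds of entropy measure mentioned can be illustrated with standard definitions (a sketch of the usual crisp Shannon entropy and the De Luca-Termini fuzzy entropy; the abstract does not spell out CID2's exact formulas, so these are assumptions):

```python
import numpy as np

def crisp_entropy(counts):
    """Shannon entropy (bits) of a crisp class-count distribution."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

def fuzzy_entropy(mu):
    """De Luca-Termini fuzzy entropy of a vector of membership degrees."""
    mu = np.clip(np.asarray(mu, dtype=float), 1e-12, 1 - 1e-12)
    return float(-(mu * np.log2(mu) + (1 - mu) * np.log2(1 - mu)).sum())

print(crisp_entropy([5, 5]))      # 50/50 class split: maximal, 1.0 bit
print(fuzzy_entropy([0.5, 0.5]))  # maximal ambiguity of two memberships
```

A growing algorithm of this kind adds hidden units where such an impurity measure indicates the current split leaves the classes (or memberships) most mixed.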
Bayesian Deep Net GLM and GLMM
Deep feedforward neural networks (DFNNs) are a powerful tool for functional
approximation. We describe flexible versions of generalized linear and
generalized linear mixed models incorporating basis functions formed by a DFNN.
Neural networks with random effects are not widely considered in the
literature, perhaps because of the computational challenge of
incorporating subject-specific parameters into already complex models.
Efficient computational methods for high-dimensional Bayesian inference are
developed using Gaussian variational approximation, with a parsimonious but
flexible factor parametrization of the covariance matrix. We implement natural
gradient methods for the optimization, exploiting the factor structure of the
variational covariance matrix in computation of the natural gradient. Our
flexible DFNN models and Bayesian inference approach lead to a regression and
classification method that has a high prediction accuracy, and is able to
quantify the prediction uncertainty in a principled and convenient way. We also
describe how to perform variable selection in our deep learning method. The
proposed methods are illustrated in a wide range of simulated and real-data
examples, and the results compare favourably to a state of the art flexible
regression and classification method in the statistical literature, the
Bayesian additive regression trees (BART) method. User-friendly software
packages in Matlab, R and Python implementing the proposed methods are
available at https://github.com/VBayesLab. (35 pages, 7 figures, 10 tables)
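The scalability argument behind the factor parametrization can be sketched in a few lines (an illustrative sketch, not the authors' implementation; all dimensions are made up): with covariance Σ = BBᵀ + diag(d²) for p parameters and k ≪ p factors, storing and sampling from the variational Gaussian costs O(pk) rather than O(p²).

```python
import numpy as np

rng = np.random.default_rng(1)

# Factor parametrization of a Gaussian variational posterior:
#   Sigma = B @ B.T + diag(d**2),  p parameters, k << p factors.
p, k = 1000, 5
B = 0.1 * rng.normal(size=(p, k))
d = 0.5 * np.ones(p)
mu = np.zeros(p)

# Reparametrized draw theta = mu + B z + d * eps: O(p*k) work, never
# forming the p x p covariance -- this is what lets the approximation
# scale to DFNN-sized parameter vectors.
z = rng.normal(size=k)
eps = rng.normal(size=p)
theta = mu + B @ z + d * eps

# The implied marginal variances are diag(B B^T) + d^2, also O(p*k).
marg_var = np.sum(B**2, axis=1) + d**2
print(theta.shape, marg_var.shape)
```

The natural-gradient updates mentioned in the abstract exploit this same factor structure, so that no dense p × p matrix ever needs to be formed or inverted.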
Incorporating Second-Order Functional Knowledge for Better Option Pricing
Incorporating prior knowledge of a particular task into the architecture of a learning algorithm can greatly improve generalization performance. We study here a case where we know that the function to be learned is non-decreasing in its two arguments and convex in one of them. For this purpose we propose a class of functions similar to multi-layer neural networks that (1) has those properties and (2) is a universal approximator of continuous functions with these and other properties. We apply this new class of functions to the task of modeling the price of call options. Experiments show improvements on regressing the price of call options using the new function classes that incorporate the a priori constraints. Keywords: prior knowledge, learning algorithm, universal approximator, call options
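How such shape constraints can be built into a network architecture is illustrated by the simplified construction below (an assumption-laden sketch, not the paper's exact class; this version is in fact convex in both arguments, whereas the paper's class relaxes convexity in the second): positive weights make each unit non-decreasing, and a positive combination of softplus units applied to affine maps is convex.

```python
import numpy as np

def softplus(t):
    # Numerically stable log(1 + exp(t)): convex and non-decreasing.
    return np.log1p(np.exp(-np.abs(t))) + np.maximum(t, 0.0)

rng = np.random.default_rng(0)
H = 8
# Positivity is enforced by exponentiating unconstrained parameters.
w1 = np.exp(rng.normal(size=H))   # positive weights -> monotone in x1
w2 = np.exp(rng.normal(size=H))   # positive weights -> monotone in x2
b  = rng.normal(size=H)
v  = np.exp(rng.normal(size=H))   # positive output weights

def f(x1, x2):
    # A positive combination of softplus(affine) terms is non-decreasing
    # in both inputs and convex in each of them, by construction.
    return float(softplus(x1 * w1 + x2 * w2 + b) @ v)

# Check monotonicity and convexity in x1 along a grid (x2 fixed).
xs = np.linspace(-1.0, 1.0, 5)
vals = np.array([f(x, 0.0) for x in xs])
print(bool(np.all(np.diff(vals) > 0)))   # prints True
```

Because the constraints hold for any parameter values, gradient-based training can proceed on the unconstrained parameters while the fitted function remains a valid (monotone, convex) option-pricing surface throughout.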