Theoretical Properties of Projection Based Multilayer Perceptrons with Functional Inputs
Many real-world data are sampled functions. As shown by Functional Data
Analysis (FDA) methods, spectra, time series, images, gesture recognition data,
etc. can be processed more efficiently if their functional nature is taken into
account during the data analysis process. This is done by extending standard
data analysis methods so that they can apply to functional inputs. A general
way to achieve this goal is to compute projections of the functional data onto
a finite-dimensional subspace of the functional space. The coordinates of the
data on a basis of this subspace provide standard vector representations of
the functions. The obtained vectors can be processed by any standard method. In
our previous work, this general approach was used to define projection-based
Multilayer Perceptrons (MLPs) with functional inputs. In this paper, we study
important theoretical properties of the proposed model. We show in
particular that MLPs with functional inputs are universal approximators: they
can approximate to arbitrary accuracy any continuous mapping from a compact
subspace of a functional space to R. Moreover, we provide a consistency result
showing that any mapping from a functional space to R can be learned from
examples by a projection-based MLP: the generalization mean square error of
the MLP decreases to the smallest achievable mean square error on the data as
the number of examples goes to infinity.
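A minimal sketch of the projection idea, under stated assumptions: the truncated Fourier basis, the least-squares projection, the toy target functional, and the scikit-learn MLP below are illustrative choices, not the paper's construction.

# Sketch: project sampled functions onto a finite-dimensional basis and
# feed the coordinate vectors to a standard MLP. Illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor

def fourier_design(t, n_basis):
    """Evaluate a truncated Fourier basis on the sampling grid t in [0, 1]."""
    cols = [np.ones_like(t)]
    for k in range(1, (n_basis - 1) // 2 + 1):
        cols.append(np.sin(2 * np.pi * k * t))
        cols.append(np.cos(2 * np.pi * k * t))
    return np.column_stack(cols[:n_basis])

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)            # common sampling grid
B = fourier_design(t, n_basis=7)          # 200 x 7 design matrix

# Toy functional inputs: noisy smooth curves; the target is a continuous
# functional of the curve (here its mean squared value), an assumption.
X_funcs = np.array([np.sin(2 * np.pi * a * t) + 0.1 * rng.standard_normal(t.size)
                    for a in rng.uniform(0.5, 3.0, size=500)])
y = (X_funcs ** 2).mean(axis=1)

# Projection: least-squares coordinates of each sampled function on the basis.
coords, *_ = np.linalg.lstsq(B, X_funcs.T, rcond=None)
X = coords.T                              # 500 x 7 vector representations

mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
mlp.fit(X[:400], y[:400])
print("test MSE:", np.mean((mlp.predict(X[400:]) - y[400:]) ** 2))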
Generating functionals for computational intelligence: the Fisher information as an objective function for self-limiting Hebbian learning rules
Generating functionals may guide the evolution of a dynamical system and
constitute a possible route for handling the complexity of neural networks as
relevant for computational intelligence. We propose and explore a new objective
function, which allows one to obtain plasticity rules for the afferent synaptic
weights. The adaptation rules are Hebbian, self-limiting, and result from the
minimization of the Fisher information with respect to the synaptic flux. We
perform a series of simulations examining the behavior of the new learning
rules in various circumstances. The vector of synaptic weights aligns with the
principal direction of input activities, whenever one is present. A linear
discrimination is performed when there are two or more principal directions;
directions having bimodal firing-rate distributions, being characterized by a
negative excess kurtosis, are preferred. We find robust performance, and full
homeostatic adaptation of the synaptic weights results as a by-product of the
synaptic flux minimization. This self-limiting behavior allows for stable
online learning for arbitrary durations. The neuron acquires new information
when the statistics of the input activities change at a certain point of the
simulation, while showing a distinct resilience against unlearning previously
acquired knowledge. Learning is fast when starting from randomly drawn synaptic
weights and substantially slower when the synaptic weights are already fully
adapted.
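The Fisher-information-derived rule itself is not reproduced in this abstract; as an illustration of self-limiting Hebbian learning, the sketch below substitutes Oja's rule, which likewise aligns the weight vector with the principal direction of the input activities while keeping its norm bounded as a homeostatic by-product.

# Sketch: Oja's rule as a stand-in for a self-limiting Hebbian update.
# This is NOT the paper's Fisher-information rule.
import numpy as np

rng = np.random.default_rng(1)

# Inputs with one dominant (principal) direction.
n_inputs, n_steps, eta = 10, 20000, 0.01
principal = rng.standard_normal(n_inputs)
principal /= np.linalg.norm(principal)

w = 0.1 * rng.standard_normal(n_inputs)
for _ in range(n_steps):
    x = 3.0 * rng.standard_normal() * principal + 0.3 * rng.standard_normal(n_inputs)
    y = w @ x                              # linear rate neuron
    w += eta * y * (x - y * w)             # Hebbian term minus self-limiting decay

print("alignment with principal direction:", abs(w @ principal))  # close to 1
print("weight norm (homeostatic):", np.linalg.norm(w))            # close to 1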
Output feedback NN control for two classes of discrete-time systems with unknown control directions in a unified approach
IEEE Transactions on Neural Networks, 19(11):1873-1886. DOI: 10.1109/TNN.2008.2003290
Learning stochastic differential equations using RNN with log signature features
This paper contributes to the challenge of learning a function on streamed
multimodal data through evaluation. The core result of our paper is the
combination of two quite different approaches to this problem. One comes from
the mathematically principled technology of signatures and log-signatures as
representations for streamed data, while the other draws on the techniques of
recurrent neural networks (RNN). The ability of the former to manage high
sample rate streams and the latter to manage large scale nonlinear interactions
allows hybrid algorithms that are easy to code, quicker to train, and of lower
complexity for a given accuracy.
We illustrate the approach by approximating the unknown functional as a
controlled differential equation. Linear functionals on solutions of controlled
differential equations are the natural universal class of functions on data
streams. Following this approach, we propose a hybrid Logsig-RNN algorithm that
learns functionals on streamed data. In tests on various datasets, namely
synthetic data, NTU RGB+D 120 skeletal action data, and Chalearn2013 gesture
data, our algorithm achieves outstanding accuracy with superior efficiency
and robustness.
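A minimal sketch of a Logsig-RNN-style pipeline, assuming the iisignature package for log-signatures and PyTorch for the recurrent network; the window count, truncation depth, and network sizes below are illustrative choices, not the paper's settings.

# Sketch: split a stream into windows, summarize each window by its
# log-signature, then feed the shorter feature sequence to an LSTM.
import numpy as np
import torch
import torch.nn as nn
import iisignature

dim, depth, n_windows = 3, 2, 10
prep = iisignature.prepare(dim, depth)
feat_dim = iisignature.logsiglength(dim, depth)

def stream_to_logsigs(path, n_windows):
    """Split a (length, dim) path into windows; take each log-signature."""
    chunks = np.array_split(path, n_windows)
    return np.stack([iisignature.logsig(c, prep) for c in chunks])

class LogsigRNN(nn.Module):
    def __init__(self, feat_dim, hidden=32, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)
    def forward(self, x):                  # x: (batch, n_windows, feat_dim)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])

# Toy usage: one random stream of 200 samples in 3 dimensions.
path = np.cumsum(np.random.randn(200, dim), axis=0)
feats = torch.tensor(stream_to_logsigs(path, n_windows)[None], dtype=torch.float32)
logits = LogsigRNN(feat_dim)(feats)
print(logits.shape)                        # torch.Size([1, 5])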
Band gap prediction for large organic crystal structures with machine learning
Machine-learning models are capable of capturing the structure-property
relationship from a dataset of computationally demanding ab initio
calculations. Over the past two years, the Organic Materials Database (OMDB)
has hosted a growing number of calculated electronic properties of previously
synthesized organic crystal structures. The complexity of the organic crystals
contained within the OMDB, which have on average 82 atoms per unit cell, makes
this database a challenging platform for machine learning applications. In this
paper, the focus is on predicting the band gap, which represents one of the
basic properties of crystalline materials. To this end, a consistent
dataset of 12,500 crystal structures and their corresponding DFT band gaps is
released, freely available for download at https://omdb.mathub.io/dataset. An
ensemble of two state-of-the-art models reaches a mean absolute error (MAE) of
0.388 eV, which corresponds to a percentage error of 13% for an average band
gap of 3.05 eV. Finally, the trained models are employed to predict the band
gap for 260 092 materials contained within the Crystallography Open Database
(COD) and made available online so that the predictions can be obtained for any
arbitrary crystal structure uploaded by a user.
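The abstract does not name the two models, so the sketch below only illustrates the generic recipe of averaging two regressors' predictions and reporting the MAE and the percentage error relative to the mean target; the models and the synthetic data are placeholders, not the OMDB setup.

# Sketch: two-model ensemble by prediction averaging, with MAE and
# percentage-error reporting. Placeholder models and data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=30, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [GradientBoostingRegressor(random_state=0),
          RandomForestRegressor(random_state=0)]
for m in models:
    m.fit(X_tr, y_tr)

# Ensemble: average the two models' predictions.
pred = np.mean([m.predict(X_te) for m in models], axis=0)
mae = mean_absolute_error(y_te, pred)
print(f"MAE: {mae:.3f}")
# Percentage error in the style of the abstract: MAE over the mean target.
print(f"percentage error: {100 * mae / np.abs(y_te).mean():.1f}%")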