Absolutely No Free Lunches!
This paper is concerned with learners who aim to learn patterns in infinite
binary sequences: shown longer and longer initial segments of a binary
sequence, they either attempt to predict whether the next bit will be a 0 or
a 1, or they issue forecast probabilities for these events. Several
variants of this problem are considered. In each case, a no-free-lunch result
of the following form is established: the problem of learning is a formidably
difficult one, in that no matter what method is pursued, failure is
incomparably more common than success; and difficult choices must be faced in
choosing a method of learning, since no approach dominates all others in its
range of success. In the simplest case, the comparison of the set of situations
in which a method fails and the set of situations in which it succeeds is a
matter of cardinality (countable vs. uncountable); in other cases, it is a
topological matter (meagre vs. co-meagre) or a hybrid computational-topological
matter (effectively meagre vs. effectively co-meagre).
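A hedged sketch (not from the paper) of the diagonalization flavour behind such results: for any deterministic next-bit predictor, one can construct a binary sequence on which the predictor is wrong at every position, so no single method succeeds on all sequences.

```python
# Toy illustration: defeat ANY deterministic next-bit predictor by
# diagonalization.  `predictor` maps an initial segment (tuple of bits)
# to a guess in {0, 1}.

def adversarial_sequence(predictor, length):
    """Build a binary sequence that defeats `predictor` bit by bit."""
    seq = []
    for _ in range(length):
        guess = predictor(tuple(seq))
        seq.append(1 - guess)          # always emit the other bit
    return seq

# Hypothetical example predictor: "repeat the last bit" (guess 0 on the
# empty segment).
def repeat_last(segment):
    return segment[-1] if segment else 0

seq = adversarial_sequence(repeat_last, 10)
# By construction the predictor is wrong on every single bit of `seq`.
errors = sum(repeat_last(tuple(seq[:i])) != seq[i] for i in range(len(seq)))
```

The same construction applies to any computable prediction rule, which is why pointwise success on every sequence is impossible; the paper's results quantify how large the failure set is (cardinality, category, or effective category).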
The evolution of auditory contrast
This paper reconciles the standpoint that language users do not aim at improving their sound systems with the observation that languages seem to improve their sound systems. Computer simulations of inventories of sibilants show that Optimality-Theoretic learners who optimize their perception grammars automatically introduce a so-called prototype effect, i.e. the phenomenon that the learner’s preferred auditory realization of a certain phonological category is more peripheral than the average auditory realization of this category in her language environment. In production, however, this prototype effect is counteracted by an articulatory effect that limits the auditory form to something that is not too difficult to pronounce. If the prototype effect and the articulatory effect are of different sizes, the learner must end up with an auditorily different sound system from that of her language environment. The computer simulations show that, independently of the initial auditory sound system, a stable equilibrium is reached within a small number of generations. In this stable state, the dispersion of the sibilants of the language strikes an optimal balance between articulatory ease and auditory contrast. The important point is that this is derived within a model without any goal-oriented elements such as dispersion constraints.
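A hypothetical toy model (much simpler than the paper's OT simulations, and not taken from it) can illustrate the claimed dynamics: an outward prototype push of fixed size `p` per generation, opposed by an articulatory pull proportional (strength `a`) to peripherality, converges to the same equilibrium regardless of the starting sound system.

```python
# Toy iterated-learning sketch (assumption: the names p, a and the linear
# pull are illustrative, not the paper's model).  Each generation the
# preferred realization x moves outward by p and is pulled back by a*x.

def next_generation(x, p=0.4, a=0.2):
    return x + p - a * x   # prototype push outward, articulatory pull inward

def run(x0, generations=200):
    x = x0
    for _ in range(generations):
        x = next_generation(x)
    return x

# Independently of the initial value, the iteration settles at p / a = 2.0,
# mirroring the paper's finding of a stable equilibrium after few generations.
equilibrium_from_low = run(0.0)
equilibrium_from_high = run(10.0)
```

The fixed point x* = p / a is stable because each step contracts the distance to it by the factor (1 - a), so different initial systems end at the same dispersion.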
Calibration: Respice, Adspice, Prospice
“Those who claim for themselves to judge the truth are bound to possess a criterion of truth.” JEL Codes: C18, C53, D89. Keywords: calibration, prediction.
Enhancing SDO/HMI images using deep learning
The Helioseismic and Magnetic Imager (HMI) provides continuum images and
magnetograms with a cadence better than one per minute. It has been
continuously observing the Sun 24 hours a day for the past 7 years. The obvious
trade-off between full-disk observations and spatial resolution makes HMI
insufficient for analyzing the smallest-scale events in the solar atmosphere. Our aim is
to develop a new method to enhance HMI data, simultaneously deconvolving and
super-resolving images and magnetograms. The resulting images will mimic
observations with a diffraction-limited telescope twice the diameter of HMI.
Our method, which we call Enhance, is based on two deep fully convolutional
neural networks that input patches of HMI observations and output deconvolved
and super-resolved data. The neural networks are trained on synthetic data
obtained from simulations of the emergence of solar active regions. We have
obtained deconvolved and super-resolved HMI images. To solve this ill-defined
problem with infinite solutions we have used a neural network approach to add
prior information from the simulations. We test Enhance against Hinode data
that has been degraded to a 28 cm diameter telescope showing very good
consistency. The code is open source.Comment: 13 pages, 10 figures. Accepted for publication in Astronomy &
Astrophysic
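For context, a classical alternative to the paper's learned approach is iterative deconvolution such as Richardson-Lucy; the sketch below (an assumption for illustration, not the Enhance code) shows the idea in 1-D with numpy, whereas Enhance replaces this kind of scheme with CNNs trained on simulations of emerging active regions.

```python
# Richardson-Lucy deconvolution in 1-D (illustrative baseline, not the
# paper's method).  The PSF plays the role of the telescope's blur.
import numpy as np

def richardson_lucy(blurred, psf, iterations=200):
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(iterations):
        conv = np.convolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)  # guard against /0
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Sharp test signal, blurred with a 3-tap PSF, then deconvolved.
truth = np.zeros(32)
truth[10] = 1.0
truth[20] = 0.5
psf = np.array([0.25, 0.5, 0.25])
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)
# The restored signal is closer to the truth than the blurred input.
```

Such iterative schemes need an explicit PSF and many iterations per image; a trained network amortizes that cost and, as the abstract notes, also injects prior information from the simulations.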
Networks for Nonlinear Diffusion Problems in Imaging
A multitude of imaging and vision tasks have recently seen a major
transformation by deep learning methods, and in particular by the application of
convolutional neural networks. These methods achieve impressive results, even
for applications where it is not apparent that convolutions are suited to
capture the underlying physics.
In this work we develop a network architecture based on nonlinear diffusion
processes, named DiffNet. By design, we obtain a nonlinear network architecture
that is well suited for diffusion related problems in imaging. Furthermore, the
performed updates are explicit, by which we obtain better interpretability and
generalisability compared to classical convolutional neural network
architectures. The performance of DiffNet is tested on the inverse problem of
nonlinear diffusion with the Perona-Malik filter on the STL-10 image dataset.
We obtain results competitive with the established U-Net architecture, with a
fraction of the parameters and necessary training data.
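The explicit updates mentioned in the abstract can be sketched as a standard Perona-Malik step (a minimal numpy illustration of the underlying filter, not the DiffNet code): u is updated by dt times a sum of neighbor differences, each weighted by the edge-stopping function g(d) = 1 / (1 + (d/k)^2).

```python
# One explicit Perona-Malik diffusion step on a 2-D image (illustrative;
# dt and k are assumed values, and this is the classical filter rather
# than the learned DiffNet layers).
import numpy as np

def perona_malik_step(u, dt=0.1, k=0.1):
    def g(d):                       # edge-stopping function
        return 1.0 / (1.0 + (d / k) ** 2)
    up = np.pad(u, 1, mode="edge")  # Neumann (replicate) boundary
    dN = up[:-2, 1:-1] - u          # differences to the four neighbors
    dS = up[2:, 1:-1] - u
    dE = up[1:-1, 2:] - u
    dW = up[1:-1, :-2] - u
    # u <- u + dt * div( g(|grad u|) grad u ), discretized per direction
    return u + dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)

img = np.random.default_rng(0).random((16, 16))
out = perona_malik_step(img)
# Weak gradients are smoothed while strong edges (large d, small g) persist.
```

With dt = 0.1 the explicit scheme is stable (the four weighted coefficients sum to well below 1), and iterating such steps is what makes the updates interpretable compared to generic convolutional layers.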