Deep learning as closure for irreversible processes: A data-driven generalized Langevin equation
The ultimate goal of physics is finding a unique equation capable of
describing the evolution of any observable quantity in a self-consistent way.
Within the field of statistical physics, such an equation is known as the
generalized Langevin equation (GLE). Nevertheless, the formal and exact GLE is
not particularly useful, since it depends on the complete history of the
observable at hand, and on hidden degrees of freedom typically inaccessible
from a theoretical point of view. In this work, we propose the use of deep
neural networks as a new avenue for learning the intricacies of the unknowns
mentioned above. By using machine learning to eliminate the unknowns from GLEs,
our methodology outperforms previous approaches, in which general fitting
functions had to be postulated, in terms of both efficiency and robustness.
Finally, our methodology is tested on several prototypical examples, from
colloidal systems and particle chains immersed in a thermal bath to
climatology and financial models. In all cases, it exhibits excellent
agreement with the actual dynamics of the observables under consideration.
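The closure idea in this abstract can be illustrated with a minimal sketch. All names and the toy AR(2) trajectory below are ours, and a linear least-squares fit stands in for the deep network of the paper; the structure is the same: map a short history window of the observable to its next value (absorbing the GLE's memory integral) and treat the residual as the fluctuating force.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy observable: an AR(2) trajectory standing in for a coarse-grained
# variable whose hidden degrees of freedom we do not resolve.
T, p = 5000, 10                      # trajectory length, history-window length
x = np.zeros(T)
for t in range(2, T):
    x[t] = 1.6 * x[t - 1] - 0.8 * x[t - 2] + 0.1 * rng.standard_normal()

# (history window -> next value) pairs: the learned map plays the role of
# the memory term; the residual plays the role of the unresolved noise.
X = np.stack([x[t - p:t] for t in range(p, T)])   # shape (T - p, p)
y = x[p:T]                                        # shape (T - p,)

# Linear least-squares closure as a stand-in for the deep network: the
# fitted weights act as a discretized memory kernel over the last p steps.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
yhat = X @ w
residual_std = np.std(y - yhat)      # estimate of the noise scale (~0.1 here)
```

Replacing the least-squares fit with a neural network trained on the same (window, next value) pairs gives the nonlinear version of this closure.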
Image formation in synthetic aperture radio telescopes
Next-generation radio telescopes will be much larger and more sensitive,
will have much wider observation bandwidths, and will be capable of pointing
multiple beams simultaneously. Obtaining the sensitivity, resolution, and
dynamic range
supported by the receivers requires the development of new signal processing
techniques for array and atmospheric calibration as well as new imaging
techniques that are both more accurate and computationally efficient since data
volumes will be much larger. This paper provides a tutorial overview of
existing image formation techniques and outlines some of the future directions
needed for information extraction from future radio telescopes. We describe the
imaging process from the measurement equation to deconvolution, both as a
Fourier inversion problem and as an array processing estimation problem. The
latter formulation enables the development of more advanced techniques based on
state-of-the-art array processing. We demonstrate the techniques on simulated
and measured radio telescope data.
Comment: 12 pages
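The Fourier inversion view of image formation can be sketched as follows (the toy sky, grid size, and random uv coverage are our illustrative assumptions, not from the paper): an interferometer samples the sky's Fourier transform at the (u, v) points its baselines cover, and inverting the sampled visibilities yields the "dirty image", i.e. the sky convolved with the point-spread function of the coverage.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sky: two point sources on an N x N grid.
N = 64
sky = np.zeros((N, N))
sky[20, 20] = 1.0
sky[40, 45] = 0.7

# An interferometer samples the sky's Fourier transform (the visibilities)
# only at the (u, v) points its baselines cover; model that with a mask.
visibilities = np.fft.fft2(sky)
uv_mask = rng.random((N, N)) < 0.3           # hypothetical 30% uv coverage

# Fourier inversion of the sampled visibilities gives the "dirty image":
# the true sky convolved with the PSF ("dirty beam") of the uv coverage.
dirty = np.fft.ifft2(visibilities * uv_mask).real
psf = np.fft.ifft2(uv_mask.astype(float)).real
```

Deconvolution (e.g. CLEAN-style algorithms) then removes the sidelobes of `psf` from `dirty`; the array processing formulation instead estimates the sky directly from the covariance of the antenna signals.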
Investigation of iterative image reconstruction in three-dimensional optoacoustic tomography
Iterative image reconstruction algorithms for optoacoustic tomography (OAT),
also known as photoacoustic tomography, have the ability to improve image
quality over analytic algorithms due to their ability to incorporate accurate
models of the imaging physics, instrument response, and measurement noise.
However, to date, there have been few reported attempts to employ advanced
iterative image reconstruction algorithms for improving image quality in
three-dimensional (3D) OAT. In this work, we implement and investigate two
iterative image reconstruction methods for use with a 3D OAT small animal
imager: namely, a penalized least-squares (PLS) method employing a quadratic
smoothness penalty and a PLS method employing a total variation norm penalty.
The reconstruction algorithms employ accurate models of the ultrasonic
transducer impulse responses. Experimental data sets are employed to compare
the performances of the iterative reconstruction algorithms to that of a 3D
filtered backprojection (FBP) algorithm. By use of quantitative measures of
image quality, we demonstrate that the iterative reconstruction algorithms can
mitigate image artifacts and preserve spatial resolution more effectively than
FBP algorithms. These features suggest that the use of advanced image
reconstruction algorithms can improve the effectiveness of 3D OAT while
reducing the amount of data required for biomedical applications.
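The quadratic-smoothness variant of penalized least squares can be sketched in one dimension. The forward operator, sizes, and penalty weight below are toy stand-ins (the paper's operator models the 3D ultrasonic transducer response), but the objective has the same form: a data-fidelity term plus a roughness penalty on finite differences.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D "imaging" problem: H blurs the object, g is the noisy data.
n = 100
f_true = np.zeros(n)
f_true[40:60] = 1.0

# Forward model H: a Toeplitz blur standing in for the system model.
kernel = np.array([0.25, 0.5, 0.25])
H = np.zeros((n, n))
for i in range(n):
    for j, k in enumerate((-1, 0, 1)):
        if 0 <= i + k < n:
            H[i, i + k] = kernel[j]
g = H @ f_true + 0.01 * rng.standard_normal(n)

# Penalized least squares with a quadratic smoothness penalty:
#   min_f ||H f - g||^2 + lam * ||D f||^2,   D = first differences.
# The normal equations are linear, so the solution is a direct solve.
D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]
lam = 0.05
f_pls = np.linalg.solve(H.T @ H + lam * D.T @ D, H.T @ g)
```

Swapping ||D f||^2 for the total variation norm ||D f||_1 gives the second, edge-preserving method of the abstract; that objective is non-smooth and needs an iterative solver rather than a direct solve.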
Learning Representations from EEG with Deep Recurrent-Convolutional Neural Networks
One of the challenges in modeling cognitive events from electroencephalogram
(EEG) data is finding representations that are invariant to inter- and
intra-subject differences, as well as to inherent noise associated with such
data. Herein, we propose a novel approach for learning such representations
from multi-channel EEG time-series, and demonstrate its advantages in the
context of a mental load classification task. First, we transform EEG activities
into a sequence of topology-preserving multi-spectral images, as opposed to
standard EEG analysis techniques that ignore such spatial information. Next, we
train a deep recurrent-convolutional network inspired by state-of-the-art video
classification to learn robust representations from the sequence of images. The
proposed approach is designed to preserve the spatial, spectral, and temporal
structure of EEG, which leads to features that are less sensitive to
variations and distortions within each dimension. Empirical evaluation on the
cognitive load classification task demonstrated significant improvements in
classification accuracy over current state-of-the-art approaches in this field.
Comment: To be published as a conference paper at ICLR 201
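The first step, turning multi-channel EEG into topology-preserving spectral images, can be sketched as follows. The electrode positions, band edges, and inverse-distance interpolation below are our simplifying assumptions (the paper projects real 3D electrode coordinates and uses its own interpolation), but the idea is the same: one image per frequency band whose pixels reflect the spatial layout of the channels.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 2-D scalp positions for 8 electrodes (projected coordinates).
pos = rng.uniform(-1, 1, size=(8, 2))

# Toy multi-channel EEG: 8 channels x 256 samples at a nominal 128 Hz.
fs = 128
x = rng.standard_normal((8, 256))

# Per-channel power in one frequency band (e.g. theta, 4-8 Hz) via the FFT.
freqs = np.fft.rfftfreq(x.shape[1], d=1.0 / fs)
spec = np.abs(np.fft.rfft(x, axis=1)) ** 2
band = (freqs >= 4) & (freqs < 8)
power = spec[:, band].mean(axis=1)                  # one value per channel

# Interpolate the channel powers onto a 32x32 image with inverse-distance
# weights, preserving the spatial layout of the electrodes.
gx, gy = np.meshgrid(np.linspace(-1, 1, 32), np.linspace(-1, 1, 32))
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)           # (1024, 2)
d = np.linalg.norm(grid[:, None, :] - pos[None, :, :], axis=2)
weights = 1.0 / (d + 1e-6) ** 2
img = ((weights @ power) / weights.sum(axis=1)).reshape(32, 32)
```

Stacking the images of several bands gives the color channels of one frame, and sliding time windows give the frame sequence that the recurrent-convolutional network consumes.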
Learning feed-forward one-shot learners
One-shot learning is usually tackled by using generative models or
discriminative embeddings. Discriminative methods based on deep learning, which
are very effective in other learning scenarios, are ill-suited for one-shot
learning as they need large amounts of training data. In this paper, we propose
a method to learn the parameters of a deep model in one shot. We construct the
learner as a second deep network, called a learnet, which predicts the
parameters of a pupil network from a single exemplar. In this manner we obtain
an efficient feed-forward one-shot learner, trained end-to-end by minimizing a
one-shot classification objective in a learning to learn formulation. In order
to make the construction feasible, we propose a number of factorizations of the
parameters of the pupil network. We demonstrate encouraging results by learning
characters from single exemplars in Omniglot, and by tracking visual objects
from a single initial exemplar in the Visual Object Tracking benchmark.
Comment: The first three authors contributed equally and are listed in
alphabetical order.
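The learnet/pupil structure can be sketched in a few lines. Everything here is a drastic simplification of ours: the learnet is an identity map (so the predicted filter is a matched template), whereas the paper trains a deep network with factorized parameter predictions end-to-end on a one-shot classification loss. What the sketch does preserve is the control flow: one feed-forward pass through the learnet configures the pupil, with no gradient steps at test time.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 16                               # toy feature dimension

# "Learnet": predicts the parameters of the pupil network from one exemplar.
# Identity here; a trained deep network in the paper.
W = np.eye(d)

def learnet(z):
    return W @ z                     # pupil parameters, conditioned on z

# Pupil network: scores a query against the exemplar-conditioned filter.
def pupil(x, params):
    return float(x @ params)

# One shot: a single exemplar configures the pupil in one forward pass.
z = rng.standard_normal(d)           # the single exemplar
params = learnet(z)
score_same = pupil(z, params)                        # exemplar vs. itself
score_other = pupil(rng.standard_normal(d), params)  # unrelated query
```

Training would adjust the learnet's weights so that, across many (exemplar, query) pairs, matching queries score higher than non-matching ones; the factorizations mentioned in the abstract keep the number of predicted pupil parameters tractable.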
Texture Synthesis Through Convolutional Neural Networks and Spectrum Constraints
This paper presents a significant improvement for the synthesis of texture
images using convolutional neural networks (CNNs), making use of constraints on
the Fourier spectrum of the results. More precisely, the texture synthesis is
regarded as a constrained optimization problem, with constraints conditioning
both the Fourier spectrum and statistical features learned by CNNs. In contrast
with existing methods, the presented method inherits from previous CNN
approaches the ability to depict local structures and fine scale details, and
at the same time yields coherent large scale structures, even in the case of
quasi-periodic images. This is done at no extra computational cost. Synthesis
experiments on various images show a clear improvement compared to a recent
state-of-the-art method relying on CNN constraints only.
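The spectrum constraint can be illustrated in isolation (the Gram-matrix CNN losses of the full method are omitted, and the toy target and iterate below are ours): enforcing it amounts to a Fourier-domain projection that keeps the current iterate's phase while imposing the target texture's modulus, which is what carries the large-scale, quasi-periodic structure.

```python
import numpy as np

rng = np.random.default_rng(5)

# Target texture (toy: a noisy quasi-periodic stripe pattern) and an iterate.
n = 64
yy, xx = np.mgrid[0:n, 0:n]
target = np.sin(2 * np.pi * xx / 8) + 0.3 * rng.standard_normal((n, n))
x = rng.standard_normal((n, n))      # current synthesis (e.g. a CNN iterate)

# Projection onto the spectrum constraint: keep the iterate's Fourier phase,
# impose the target's Fourier modulus.
X = np.fft.fft2(x)
T = np.fft.fft2(target)
X_proj = np.abs(T) * np.exp(1j * np.angle(X))
x_proj = np.fft.ifft2(X_proj).real   # Hermitian by construction, so real
```

In the full method this projection is alternated with gradient steps on the CNN feature losses, so the result matches both the local statistics and the global spectrum, at no extra computational cost over the CNN-only optimization.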