Neural networks in geophysical applications
Neural networks are increasingly popular in geophysics.
Because they are universal approximators, these tools can approximate any continuous function to arbitrary precision. Hence, they may yield important contributions to solving a wide variety of geophysical problems. However, knowledge of the many methods and techniques recently developed to increase performance and to facilitate the use of neural networks does not seem to be widespread in the geophysical community. Therefore, the power of these tools has not yet been explored to its full extent. In this paper, techniques are described for faster training, better overall performance (i.e., generalization), and the automatic estimation of network size and architecture.
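To make the universal-approximation claim concrete, here is a minimal sketch (not from the paper; the target function, network width, and training settings are illustrative assumptions) of a one-hidden-layer network fitted to a smooth 1-D signal:

```python
# A minimal sketch of the universal-approximation claim: a one-hidden-layer
# network fitted to a smooth 1-D signal. The target function, width, and
# training settings are illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic smooth target on [-3, 3].
x = torch.linspace(-3.0, 3.0, 512).unsqueeze(1)
y = torch.sin(2.0 * x) * torch.exp(-0.2 * x ** 2)

# A single hidden layer already suffices in principle, given enough units.
model = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

print(f"final MSE: {loss.item():.2e}")  # small value = close fit
```

The faster-training, generalization, and architecture-estimation techniques the paper describes would all be refinements layered on top of a basic loop like this one.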
Evolution of Neural Networks for Helicopter Control: Why Modularity Matters
The problem of the automatic development of controllers for vehicles whose exact characteristics are not known is considered in the context of miniature helicopter flocking. A methodology is proposed in which neural-network-based controllers are evolved in a simulation using a dynamic model qualitatively similar to the physical helicopter. Several network architectures and evolutionary sequences are investigated, and two approaches are found that can evolve very competitive controllers. The division of the neural network into modules and of the task into incremental steps seems to be a precondition for success, and we analyse why this might be so.
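As a concrete illustration of the evolutionary approach, here is a minimal sketch. The toy dynamics, fitness function, and evolution-strategy settings below are assumptions, and the single monolithic network deliberately ignores the modularity the paper argues for:

```python
# A minimal neuroevolution sketch: controller parameters are mutated and
# selected on a simulated fitness. The toy "simulator", network shape, and
# selection/mutation settings are illustrative assumptions; the paper's
# helicopter model and modular architectures are not reproduced.
import numpy as np

rng = np.random.default_rng(0)
N_SENSORS, N_HIDDEN, N_ACTUATORS = 4, 8, 2


def unpack(theta):
    """Split a flat genome into the two weight matrices of a tiny MLP."""
    a = N_SENSORS * N_HIDDEN
    w1 = theta[:a].reshape(N_SENSORS, N_HIDDEN)
    w2 = theta[a:].reshape(N_HIDDEN, N_ACTUATORS)
    return w1, w2


def controller(theta, obs):
    w1, w2 = unpack(theta)
    return np.tanh(np.tanh(obs @ w1) @ w2)


def fitness(theta):
    """Placeholder simulation: reward keeping a noisy state near zero."""
    state = rng.normal(size=N_SENSORS)
    total = 0.0
    for _ in range(100):
        action = controller(theta, state)
        state = (0.9 * state + 0.1 * np.concatenate([action, -action])
                 + 0.01 * rng.normal(size=N_SENSORS))
        total -= np.sum(state ** 2)
    return total


dim = N_SENSORS * N_HIDDEN + N_HIDDEN * N_ACTUATORS
pop = rng.normal(scale=0.5, size=(32, dim))
for gen in range(50):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-8:]]             # keep the best 8
    children = (elite[rng.integers(0, 8, size=24)]
                + 0.05 * rng.normal(size=(24, dim)))  # mutate copies
    pop = np.vstack([elite, children])

print("best fitness:", scores.max())
```

In the paper's terms, the decisive step would be splitting `controller` into separate modules evolved against incrementally harder tasks, rather than evolving one monolithic network against the full task at once.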
Deep learning as closure for irreversible processes: A data-driven generalized Langevin equation
The ultimate goal of physics is finding a unique equation capable of
describing the evolution of any observable quantity in a self-consistent way.
Within the field of statistical physics, such an equation is known as the
generalized Langevin equation (GLE). Nevertheless, the formal and exact GLE is
not particularly useful, since it depends on the complete history of the
observable at hand, and on hidden degrees of freedom typically inaccessible
from a theoretical point of view. In this work, we propose the use of deep
neural networks as a new avenue for learning the intricacies of the unknowns
mentioned above. By using machine learning to eliminate the unknowns from GLEs,
our methodology outperforms previous approaches (in terms of efficiency and
robustness) in which general fitting functions were postulated. Finally, our
approach is tested against several prototypical examples, from colloidal systems and
particle chains immersed in a thermal bath to climatology and financial
models. In all cases, our methodology exhibits excellent agreement with the
actual dynamics of the observables under consideration.
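For reference, one standard textbook form of the GLE for an observable A(t) (with the instantaneous drift term omitted) combines a memory kernel K with a fluctuating force that carries the hidden degrees of freedom:

```latex
\dot{A}(t) = -\int_{0}^{t} K(t - s)\, A(s)\, \mathrm{d}s + \eta(t)
```

The closure idea can then be sketched as supervised learning: train a network to map a short history window of A to its next increment, so the kernel and noise statistics never have to be postulated explicitly. Everything below (the toy trajectory, window length, and architecture) is an illustrative assumption, not the paper's setup:

```python
# A minimal sketch of a data-driven GLE closure: an MLP maps a history
# window of the observable to its next increment, replacing an explicit
# memory kernel. Toy data and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
WINDOW = 16

# Toy trajectory: a damped stochastic oscillation stands in for real
# colloidal, climatological, or financial observables.
T = 5000
a = torch.zeros(T)
a[0], a[1] = 1.0, 0.95
for t in range(2, T):
    a[t] = 1.8 * a[t - 1] - 0.85 * a[t - 2] + 0.02 * torch.randn(())

# Supervised pairs: (history window) -> (next increment).
X = torch.stack([a[t - WINDOW:t] for t in range(WINDOW, T)])
y = (a[WINDOW:] - a[WINDOW - 1:-1]).unsqueeze(1)

model = nn.Sequential(nn.Linear(WINDOW, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()

print(f"closure-model MSE: {loss.item():.2e}")
```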
Medical imaging analysis with artificial neural networks
Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and of providing a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, comparisons among many neural network applications are highlighted to provide a global view on computational intelligence with neural networks in medical imaging.
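As a minimal illustration of point (i), the sketch below applies a small, fixed fully convolutional network to a toy segmentation task; the synthetic "scans" and the architecture are assumptions for illustration, not drawn from the surveyed literature:

```python
# A minimal sketch of applying a fixed-architecture network to a
# segmentation-style task. The random stand-in "scans" with a synthetic
# bright "lesion" and the tiny net are illustrative assumptions only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in batch: 8 single-channel 64x64 "scans" with a circular "lesion".
imgs = torch.rand(8, 1, 64, 64)
yy, xx = torch.meshgrid(torch.arange(64), torch.arange(64), indexing="ij")
mask = (((yy - 32) ** 2 + (xx - 32) ** 2) < 10 ** 2).float()
imgs += 0.5 * mask                    # lesion is brighter than background
masks = mask.expand(8, 1, 64, 64)

# Fixed structure and training procedure: a small fully convolutional net
# producing a per-pixel foreground logit.
net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(200):
    opt.zero_grad()
    loss = nn.functional.binary_cross_entropy_with_logits(net(imgs), masks)
    loss.backward()
    opt.step()

pred = (net(imgs).sigmoid() > 0.5).float()   # per-pixel segmentation
print("pixel accuracy:", (pred == masks).float().mean().item())
```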
Resampled Priors for Variational Autoencoders
We propose Learned Accept/Reject Sampling (LARS), a method for constructing
richer priors using rejection sampling with a learned acceptance function. This
work is motivated by recent analyses of the VAE objective, which pointed out
that commonly used simple priors can lead to underfitting. As the distribution
induced by LARS involves an intractable normalizing constant, we show how to
estimate it and its gradients efficiently. We demonstrate that LARS priors
improve VAE performance on several standard datasets both when they are learned
jointly with the rest of the model and when they are fitted to a pretrained
model. Finally, we show that LARS can be combined with existing methods for
defining flexible priors for an additional boost in performance.
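The core construction can be sketched compactly: draw from a simple base prior pi(z), accept with a learned probability a(z), and treat pi(z)a(z)/Z as the new prior, with the intractable Z = E_pi[a(z)] estimated by Monte Carlo. The acceptance network and the estimator below are illustrative assumptions rather than the paper's exact parameterization:

```python
# A minimal sketch of learned accept/reject sampling: a base proposal is
# reshaped by a learned acceptance probability a(z), giving the density
# pi(z) * a(z) / Z. The network and the Monte Carlo estimate of Z are
# illustrative assumptions, not the paper's exact estimators.
import torch
import torch.nn as nn

torch.manual_seed(0)
DIM = 2

# Learned acceptance function a(z) in (0, 1).
accept = nn.Sequential(nn.Linear(DIM, 64), nn.Tanh(),
                       nn.Linear(64, 1), nn.Sigmoid())
base = torch.distributions.Normal(torch.zeros(DIM), torch.ones(DIM))


def log_prob(z, n_mc=1024):
    """log of pi(z)*a(z)/Z, with the intractable normalizer
    Z = E_pi[a(z)] replaced by a Monte Carlo estimate."""
    z_mc = base.sample((n_mc,))
    log_Z = torch.log(accept(z_mc).mean())
    return base.log_prob(z).sum(-1) + torch.log(accept(z)).squeeze(-1) - log_Z


def sample(n):
    """Draw n samples by rejection: propose from the base, keep w.p. a(z)."""
    out = []
    while sum(x.shape[0] for x in out) < n:
        z = base.sample((4 * n,))
        keep = torch.rand(4 * n, 1) < accept(z)
        out.append(z[keep.squeeze(-1)])
    return torch.cat(out)[:n]


z = sample(100)
print(z.shape, log_prob(z[:5]).detach())
```

A practical sampler would also cap the number of proposal rounds (accepting unconditionally after some budget) so that sampling time stays bounded.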