Multiscale Adaptive Representation of Signals: I. The Basic Framework
We introduce a framework for designing multi-scale, adaptive, shift-invariant
frames and bi-frames for representing signals. The new framework, called
AdaFrame, improves over dictionary learning-based techniques in terms of
computational efficiency at inference time. It also improves on classical multi-scale
bases such as wavelet frames in terms of coding efficiency. It provides an
attractive alternative to dictionary learning-based techniques for low level
signal processing tasks, such as compression and denoising, as well as high
level tasks, such as feature extraction for object recognition. Connections
with deep convolutional networks are also discussed. In particular, the
proposed framework reveals a drawback in the commonly used approach for
visualizing the activations of the intermediate layers in convolutional
networks, and suggests a natural alternative.
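To make the classical baseline concrete, the following is a minimal sketch of a one-level shift-invariant (undecimated) Haar wavelet frame with circular boundary handling. This is the kind of multi-scale, shift-invariant representation the abstract compares against, not AdaFrame itself; the function names are illustrative.

```python
import numpy as np

def analyze(x):
    """Split x into lowpass (average) and highpass (detail) bands.

    No downsampling, so the transform commutes with circular shifts.
    """
    xs = np.roll(x, -1)                 # x[n+1], circularly
    return (x + xs) / 2, (x - xs) / 2   # a[n], d[n]

def synthesize(a, d):
    """Adjoint (frame) reconstruction; perfect for the Haar pair.

    Uses a[n] + d[n] = x[n] and a[n-1] - d[n-1] = x[n], averaged.
    """
    return (a + d + np.roll(a, 1) - np.roll(d, 1)) / 2
```

A typical use of such a frame for denoising is to threshold the detail band `d` before calling `synthesize`; the redundancy (two coefficients per sample) is what buys shift invariance over an orthonormal wavelet basis.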
Another Note on Forced Burgers Turbulence
The power law range for the velocity gradient probability density function in
forced Burgers turbulence has been an issue of discussion recently. It is shown
in [chao-dyn/9901006] that the negative exponent in the assumed power law range
has to be strictly larger than 3. Here we give another direct argument for that
result, working with finite viscosity. At the same time we compute the viscous
correction to the power law range. This should answer the questions raised by
Kraichnan in [chao-dyn/9901023] regarding the results of [chao-dyn/9901006].
Comment: RevTeX (6 pages, revised version)
Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations
We propose a new algorithm for solving parabolic partial differential
equations (PDEs) and backward stochastic differential equations (BSDEs) in high
dimension, by making an analogy between the BSDE and reinforcement learning
with the gradient of the solution playing the role of the policy function, and
the loss function given by the error between the prescribed terminal condition
and the solution of the BSDE. The policy function is then approximated by a
neural network, as is done in deep reinforcement learning. Numerical results
using TensorFlow illustrate the efficiency and accuracy of the proposed
algorithms for several 100-dimensional nonlinear PDEs from physics and finance
such as the Allen-Cahn equation, the Hamilton-Jacobi-Bellman equation, and a
nonlinear pricing model for financial derivatives.
Comment: 39 pages, 15 figures
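The structure of the scheme can be sketched on a toy problem. The code below solves the heat equation u_t + (1/2)Δu = 0 with terminal condition g(x) = ||x||², whose exact value at t = 0 is ||x₀||² + dT, via the associated BSDE dY = Z·dW, Y_T = g(X_T). In place of the paper's neural network, the "policy" Z is a deliberately crude linear parametrization Z_n = θ·X_n (which happens to contain the true gradient 2x), trained by SGD on the terminal mismatch; all parameter values are illustrative choices, not the paper's.

```python
import numpy as np

d, T, N = 10, 0.5, 20            # dimension, horizon, time steps
dt = T / N
x0 = np.ones(d)
rng = np.random.default_rng(0)

y0, theta = 0.0, 0.0             # trainable scalars: u(0, x0) and policy slope
lr, batch, iters = 0.05, 512, 1500

for _ in range(iters):
    # Simulate X = x0 + W by Euler steps of Brownian motion.
    dW = rng.normal(scale=np.sqrt(dt), size=(batch, N, d))
    X = x0 + np.cumsum(dW, axis=1)                       # X after each step
    Xprev = np.concatenate([np.broadcast_to(x0, (batch, 1, d)),
                            X[:, :-1]], axis=1)          # X before each step
    # Forward BSDE: Y_T = y0 + sum_n Z_n . dW_n with Z_n = theta * X_n.
    S = np.einsum('bnd,bnd->b', Xprev, dW)
    G = np.sum(X[:, -1] ** 2, axis=1)                    # terminal condition g(X_T)
    resid = y0 + theta * S - G                           # Y_T - g(X_T)
    # SGD on the terminal loss E[(Y_T - g(X_T))^2].
    y0 -= lr * 2 * resid.mean()
    theta -= lr * 2 * (resid * S).mean()

u0_exact = np.dot(x0, x0) + d * T                        # exact u(0, x0)
print(y0, theta, u0_exact)
```

After training, y0 approaches u(0, x₀) and θ approaches 2 (since ∇u = 2x). The deep BSDE method replaces the linear map by a network per time step, but the loss and the forward simulation are exactly as above.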
Efficient iterative method for solving the Dirac-Kohn-Sham density functional theory
We present for the first time an efficient iterative method to directly solve
the four-component Dirac-Kohn-Sham (DKS) density functional theory. Due to the
existence of the negative energy continuum in the DKS operator, the existing
iterative techniques for solving the Kohn-Sham systems cannot be efficiently
applied to solve the DKS systems. The key component of our method is a novel
filtering step (F) which acts as a preconditioner in the framework of the
locally optimal block preconditioned conjugate gradient (LOBPCG) method. The
resulting method, dubbed the LOBPCG-F method, is able to compute the desired
eigenvalues and eigenvectors in the positive energy band without computing any
state in the negative energy band. The LOBPCG-F method introduces mild extra
cost compared to the standard LOBPCG method and can be easily implemented. We
demonstrate our method in the pseudopotential framework with a planewave basis
set which naturally satisfies the kinetic balance prescription. Numerical
results for Pt, Au, TlF, and BiSe indicate that the
LOBPCG-F method is a robust and efficient method for investigating
relativistic effects in systems containing heavy elements.
Comment: 31 pages, 5 figures
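For readers unfamiliar with where a preconditioning (filtering) step enters LOBPCG, here is a generic sketch using SciPy's implementation on a 1-D discrete Laplacian rather than the DKS operator; the preconditioner is a plain sparse LU solve, standing in for the paper's filter F.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg, splu, LinearOperator

n, k = 400, 4                     # matrix size, number of eigenpairs
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format='csc')  # 1-D Laplacian

# Preconditioner M ~ A^{-1}: LOBPCG applies it to the block of residuals
# each iteration; this is the slot the paper's filtering step F occupies.
lu = splu(A)
M = LinearOperator((n, n), matvec=lu.solve)

rng = np.random.default_rng(0)
X = rng.normal(size=(n, k))       # random initial block of k vectors
vals, vecs = lobpcg(A, X, M=M, tol=1e-8, maxiter=200, largest=False)
print(np.sort(vals))
```

The k smallest eigenvalues of this Laplacian are 2 - 2cos(jπ/(n+1)), j = 1..k, which the computed values match; with a good preconditioner LOBPCG reaches them in a handful of block iterations.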
