Lagrangian Descriptors for Stochastic Differential Equations: A Tool for Revealing the Phase Portrait of Stochastic Dynamical Systems
In this paper we introduce a new technique for depicting the phase portrait
of stochastic differential equations. Following previous work for deterministic
systems, we represent the phase space by means of a generalization of the
method of Lagrangian descriptors to stochastic differential equations.
Analogously to the deterministic differential equations setting, the Lagrangian
descriptors graphically provide the distinguished trajectories and hyperbolic
structures arising within the stochastic dynamics, such as random fixed points
and their stable and unstable manifolds. We analyze the sense in which these
structures form barriers to transport in stochastic systems. We apply the
method to several benchmark examples where the deterministic phase space
structures are well-understood. In particular, we apply our method to the noisy
saddle, the stochastically forced Duffing equation, and the stochastic double
gyre model, which is a benchmark for analyzing fluid transport.
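The core computation is simple enough to sketch. The minimal example below is not the authors' code: the noise level, integration horizon, grid, and forward-only integration are illustrative assumptions. It accumulates a discrete Lagrangian descriptor, the summed arc length of an Euler-Maruyama path, for the noisy saddle dx = x dt + sigma dW1, dy = -y dt + sigma dW2; the fixed seed plays the role of evaluating every initial condition on the same noise realization.

```python
import numpy as np

def stochastic_LD(x0, y0, sigma=0.1, tau=2.0, dt=1e-2, seed=0):
    """Sum of |dX| along one forward Euler-Maruyama path from (x0, y0)."""
    rng = np.random.default_rng(seed)   # fixed seed: same noise realization
    x, y, M = x0, y0, 0.0               # is used for every initial condition
    for _ in range(int(tau / dt)):
        dW = rng.normal(0.0, np.sqrt(dt), size=2)
        dx = x * dt + sigma * dW[0]     # unstable direction of the saddle
        dy = -y * dt + sigma * dW[1]    # stable direction of the saddle
        M += np.hypot(dx, dy)           # arc-length increment
        x, y = x + dx, y + dy
    return M

# Ridges and sharp minima of M over a grid of initial conditions trace the
# stable and unstable manifolds of the random saddle point.
xs = np.linspace(-1.0, 1.0, 51)
M = np.array([[stochastic_LD(x, y) for x in xs] for y in xs])
```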
The adaptive patched cubature filter and its implementation
There are numerous contexts where one wishes to describe the state of a
randomly evolving system. Effective solutions combine models that quantify the
underlying uncertainty with available observational data to form scientifically
reasonable estimates for the uncertainty in the system state. Stochastic
differential equations are often used to mathematically model the underlying
system.
The Kusuoka-Lyons-Victoir (KLV) approach is a higher order particle method
for approximating the weak solution of a stochastic differential equation that
uses a weighted set of scenarios to approximate the evolving probability
distribution to a high order of accuracy. The algorithm can be performed by
integrating along a number of carefully selected bounded variation paths. The
iterated application of the KLV method tends to increase the number of
particles. This growth can be controlled and, combined with local dynamic
recombination, which simplifies the support of the discrete measure without
harming the accuracy of the approximation, makes the KLV method well suited
to the filtering problem in contexts where one must maintain an accurate
description of the ever-evolving conditioned measure.
In addition to the alternate application of the KLV method and recombination,
we make use of the smooth nature of the likelihood function and high order
accuracy of the approximations to lead some of the particles immediately to the
next observation time and to build into the algorithm a form of automatic high
order adaptive importance sampling.
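The KLV cubature construction itself is too involved to reproduce here, but the alternating reweight-and-recombine pattern described above can be sketched. In the toy below, the Gaussian likelihood and the support-pruning recombination are illustrative stand-ins; the paper's recombination reduces the support while preserving moments to high order, which this crude version does not.

```python
import numpy as np

def bayes_update(weights, lik):
    """Reweight a particle measure by the observation likelihood."""
    w = weights * lik
    return w / w.sum()

def recombine(particles, weights, n_keep):
    """Crude stand-in for recombination: keep the heaviest particles and
    renormalize. (The paper's recombination instead prunes the support
    while exactly preserving moments up to a chosen degree.)"""
    idx = np.argsort(weights)[-n_keep:]
    w = weights[idx]
    return particles[idx], w / w.sum()

rng = np.random.default_rng(1)
parts = rng.normal(size=512)                     # scenarios for the state
wts = np.full(512, 1.0 / 512)                    # initial uniform weights
obs = 0.3                                        # one new observation
lik = np.exp(-0.5 * (parts - obs) ** 2)          # Gaussian likelihood (assumed)
wts = bayes_update(wts, lik)                     # condition on the observation
parts, wts = recombine(parts, wts, n_keep=128)   # shrink the support again
```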
Noisy independent component analysis of auto-correlated components
We present a new method for the separation of superimposed, independent,
auto-correlated components from noisy multi-channel measurements. The presented
method simultaneously reconstructs and separates the components, taking all
channels into account and thereby increasing the effective signal-to-noise ratio
considerably, allowing separation even in the high-noise regime.
Characteristics of the measurement instruments can be included, allowing for
application in complex measurement situations. Independent posterior samples
can be provided, permitting error estimates on all desired quantities. Because
it is built on the concept of information field theory, the algorithm is not
restricted to any dimensionality of the underlying space or to a particular
discretization scheme thereof.
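To make the setting concrete: if the channel mixing and all covariances were known and Gaussian, the reconstruction would reduce to a joint Wiener filter applied mode by mode. The sketch below illustrates only that reduced problem for two channels and two components; the mixing matrix, power spectra, and noise level are assumptions, and the paper's algorithm goes further by inferring such quantities and providing posterior samples.

```python
import numpy as np

m = 513                                          # number of Fourier modes
k = np.linspace(1e-2, 0.5, m)
P = np.stack([1.0 / (k**2 + 1e-2),               # component 1: red spectrum
              1.0 / (k**4 + 1e-4)])              # component 2: steeper spectrum
A = np.array([[1.0, 0.8], [0.6, 1.0]])           # channel mixing (assumed known)
noise_var = 0.5                                  # per-mode channel noise variance

rng = np.random.default_rng(2)
s = np.sqrt(P) * rng.normal(size=P.shape)        # draw auto-correlated components
d = A @ s + np.sqrt(noise_var) * rng.normal(size=(2, m))   # noisy channel data

# Per-mode joint Wiener filter: s_hat = (A^T N^-1 A + S^-1)^-1 A^T N^-1 d
Ninv = np.eye(2) / noise_var
s_hat = np.empty_like(d)
for i in range(m):
    Sinv = np.diag(1.0 / P[:, i])
    cov = np.linalg.inv(A.T @ Ninv @ A + Sinv)
    s_hat[:, i] = cov @ (A.T @ Ninv @ d[:, i])
```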
A Statistically Principled and Computationally Efficient Approach to Speech Enhancement using Variational Autoencoders
Recent studies have explored the use of deep generative models of speech
spectra based on variational autoencoders (VAEs), combined with unsupervised
noise models, to perform speech enhancement. These studies developed iterative
algorithms involving either Gibbs sampling or gradient descent at each step,
making them computationally expensive. This paper proposes a variational
inference method to iteratively estimate the power spectrogram of the clean
speech. Our main contribution is the analytical derivation of the variational
steps in which the encoder of the pre-learned VAE can be used to estimate the
variational approximation of the true posterior distribution, using the very
same assumption made to train VAEs. Experiments show that the proposed method
produces results on par with the aforementioned iterative methods using
sampling, while decreasing the computational cost by a factor of 36 to reach a
given performance.
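A rough sketch of the resulting enhancement step, not the paper's exact derivation, is the Wiener-like gain below. Here `encode` and `decode` are hypothetical stand-ins for the pre-learned VAE, and `noise_var` for the unsupervised noise model; the toy definitions exist only to make the sketch executable.

```python
import numpy as np

def enhance_frame(x_pow, encode, decode, noise_var):
    """x_pow: noisy power spectrum of one frame, shape (F,).
    encode/decode: hypothetical stand-ins for the pre-learned VAE.
    noise_var: per-bin noise variance from the noise model."""
    mu, logvar = encode(x_pow)          # encoder plays the variational posterior
    s_var = decode(mu)                  # decoder returns the speech variance
                                        # (logvar unused in this point estimate)
    gain = s_var / (s_var + noise_var)  # Wiener-like gain per frequency bin
    return gain * x_pow                 # estimated clean power spectrum

# Toy stand-ins, only so the sketch runs end to end:
F, Z = 257, 16
encode = lambda x: (np.log1p(x[:Z]), np.zeros(Z))
decode = lambda z: np.full(F, np.expm1(z).mean() + 1e-3)
x_pow = np.abs(np.random.default_rng(4).normal(size=F)) ** 2
clean = enhance_frame(x_pow, encode, decode, noise_var=0.1)
```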
Reconstruction with velocities
Reconstruction is becoming a crucial procedure in galaxy clustering analysis for future spectroscopic redshift surveys to obtain sub-percent-level measurements of the baryon acoustic oscillation scale. Most reconstruction algorithms rely on an estimation of the displacement field from the observed galaxy distribution. However, the displacement reconstruction degrades near the survey boundary due to incomplete data, and the boundary effects extend to ∼100 Mpc/h within the interior of the survey volume. We study the possibility of using radial velocities measured from cosmic microwave background observations through the kinematic Sunyaev-Zeldovich effect to improve performance near the boundary. We find that the boundary effect can be reduced to ∼30–40 Mpc/h with the velocity information from the Simons Observatory. This is especially helpful for dense low-redshift surveys, where the volume is relatively small and a large fraction of the total volume is affected by the boundary.
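For context, a standard displacement reconstruction of the kind the velocities are meant to assist can be sketched in a few lines: smooth the overdensity field and invert the linearized continuity equation, Psi(k) = -i k delta(k) / k^2. The grid size, box size, and smoothing scale below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def reconstruct_displacement(delta, box=1000.0, smooth=15.0):
    """Zel'dovich displacement Psi(k) = -i k delta_s(k) / k^2 from an
    overdensity grid delta, with Gaussian smoothing of scale `smooth`
    (same length units as `box`, e.g. Mpc/h)."""
    n = delta.shape[0]
    kf = 2.0 * np.pi * np.fft.fftfreq(n, d=box / n)
    kx, ky, kz = np.meshgrid(kf, kf, kf, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                                  # avoid 0/0 at k = 0
    dk = np.fft.fftn(delta) * np.exp(-0.5 * k2 * smooth**2)
    dk[0, 0, 0] = 0.0                                  # remove the mean mode
    return np.stack([np.real(np.fft.ifftn(-1j * ki * dk / k2))
                     for ki in (kx, ky, kz)])          # shape (3, n, n, n)

# Galaxies would then be shifted by -Psi (interpolated to their positions)
# to sharpen the baryon acoustic feature.
delta = np.random.default_rng(5).normal(size=(64, 64, 64)) * 0.01
psi = reconstruct_displacement(delta)
```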
On Some Integrated Approaches to Inference
We present arguments for the formulation of a unified approach to different
standard continuous inference methods from partial information. It is claimed
that an explicit partition of information into a priori (prior knowledge) and a
posteriori information (data) is an important way of standardizing inference
approaches so that they can be compared on a normative scale, and so that
notions of optimal algorithms become farther-reaching. The inference methods
considered include neural network approaches, information-based complexity, and
Monte Carlo, spline, and regularization methods. The model is an extension of
currently used continuous complexity models, with a class of algorithms in the
form of optimization methods, in which an optimization functional (involving
the data) is minimized. This extends the family of current approaches in
continuous complexity theory, which include the use of interpolatory algorithms
in worst-case and average-case settings.
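As a concrete instance of this optimization-functional view, Tikhonov regularization minimizes a data-misfit term plus a prior (smoothness) term and admits a closed-form solution. The forward operator, data, and regularization weight in the sketch below are illustrative, not taken from the paper.

```python
import numpy as np

def tikhonov(A, y, L, lam):
    """Minimize ||A x - y||^2 + lam * ||L x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ y)

rng = np.random.default_rng(3)
n, m = 50, 30
A = rng.normal(size=(m, n))                    # underdetermined forward model
x_true = np.sin(np.linspace(0.0, 3.0 * np.pi, n))
y = A @ x_true + 0.1 * rng.normal(size=m)      # noisy partial information (data)
D = (np.eye(n) - np.eye(n, k=1))[:-1]          # first-difference smoothness prior
x_hat = tikhonov(A, y, D, lam=1.0)             # spline-like regularized estimate
```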