Studies on Dynamics of Financial Markets and Reacting Flows
One of the central problems in financial market analysis is to understand the nature of the underlying stochastic dynamics. Several intraday behaviors are analyzed to study trading-day ensemble averages of both high-frequency foreign exchange and stock market data. These empirical results indicate that the underlying stochastic processes have nonstationary increments. The three most liquid foreign exchange markets and the five most actively traded stocks each contain several time intervals during the day in which the mean-square fluctuation and the variance of increments can be fit by power-law scaling in time. The fluctuations in return within these intervals follow asymptotic bi-exponential distributions. Based on these empirical results, an intraday stochastic model with a linear variable diffusion coefficient is proposed to approximate the real dynamics of financial markets to lowest order, and to test the effects of the time-averaging techniques typically used in financial time series analysis. The proposed model replicates the major statistical characteristics of empirical financial time series, and only ensemble-averaging techniques correctly recover the underlying dynamics. The model also provides new insight into modeling the dynamics of financial markets at microscopic time scales.
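A minimal sketch of the kind of model described here: an ensemble of paths driven by a diffusion coefficient that grows linearly in |x|. The functional form D(x) = d0·(1 + b·|x|), the parameter values, and all names below are illustrative assumptions, not the calibrated model from the study.

```python
import numpy as np

def simulate_ensemble(n_paths=2000, n_steps=500, dt=1e-3, d0=1.0, b=1.0, seed=0):
    """Euler-Maruyama ensemble for dx = sqrt(D(x)) dW with the linear
    variable diffusion coefficient D(x) = d0 * (1 + b * |x|).
    All parameter values here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_paths)                      # every path starts at the same value
    paths = np.empty((n_steps + 1, n_paths))
    paths[0] = x
    for k in range(n_steps):
        D = d0 * (1.0 + b * np.abs(x))         # diffusion grows linearly in |x|
        x = x + np.sqrt(D * dt) * rng.standard_normal(n_paths)
        paths[k + 1] = x
    return paths

paths = simulate_ensemble()
# Ensemble (not time) average of the mean-square fluctuation at each instant:
msf = (paths ** 2).mean(axis=1)
```

Fitting `msf` against time over sub-intervals is how power-law scaling of the mean-square fluctuation would be probed in this toy setting; a time average along a single path would not recover the same statistic when increments are nonstationary.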
Also discussed are analytical and computational studies of reacting flows. Many dynamical features of the flows can be inferred from modal decompositions and the coupling between modes. Both proper orthogonal (POD) and dynamic mode (DMD) decompositions are conducted on high-frequency, high-resolution empirical data, and their results and strengths are compared and contrasted. In POD the contribution of each mode to the flow is quantified using the latency alone, whereas each DMD mode can be associated with a latency as well as a unique complex growth rate. By comparing DMD spectra from multiple nominally identical experiments, it is possible to identify "reproducible" modes in a flow; a similar differentiation cannot be made using POD. Time-dependent coefficients of DMD modes are complex. Even in noisy experimental data, it is found that the phase of these coefficients (but not their magnitude) exhibits repeatable dynamics. Hence it is suggested that dynamical characterizations of complex flows are best analyzed through the phase dynamics of reproducible DMD modes.
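The complex eigenvalues (growth rates and phases) that distinguish DMD from POD can be illustrated with a minimal sketch of the standard exact-DMD algorithm; the toy data set below is an assumption for illustration, not data from these experiments.

```python
import numpy as np

def dmd(X, Y, r):
    """Exact DMD on snapshot pairs: X holds states at times t_k, Y the states
    one step later. Returns eigenvalues and modes of the best-fit linear map,
    computed through a rank-r truncated SVD of X."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh[:r].conj().T
    Atilde = U.conj().T @ Y @ V @ np.diag(1.0 / s)   # operator projected onto the POD basis
    evals, W = np.linalg.eig(Atilde)
    modes = Y @ V @ np.diag(1.0 / s) @ W             # exact DMD modes
    return evals, modes

# Toy flow: one oscillatory structure => one conjugate eigenvalue pair on the unit circle
n, m = 10, 200
xgrid = np.linspace(0, np.pi, n)
t = np.linspace(0, 8 * np.pi, m)
data = np.outer(np.sin(xgrid), np.cos(t)) + np.outer(np.cos(xgrid), np.sin(t))
X, Y = data[:, :-1], data[:, 1:]
evals, modes = dmd(X, Y, r=2)
# |eval| = 1 for a pure oscillation; angle(eval) gives the frequency per time step
```

Unlike POD singular values, each eigenvalue carries both an amplitude (growth/decay) and a phase (frequency), which is what makes the phase-dynamics analysis described above possible.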
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.

Comment: 232 pages
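The tensor train (TT) decomposition emphasized above can be computed by the standard TT-SVD sweep of truncated SVDs over successive unfoldings; the sketch below is a minimal illustration of that procedure, not any specific library's API.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """TT-SVD: split a d-way tensor into a train of 3-way cores by a sweep of
    truncated SVDs of successive unfoldings (minimal sketch, no error control)."""
    shape = tensor.shape
    cores, r_prev = [], 1
    mat = tensor.reshape(shape[0], -1)
    for k in range(len(shape) - 1):
        U, s, Vh = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))      # 3-way TT core
        mat = (np.diag(s[:r]) @ Vh[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))              # last core
    return cores

def tt_reconstruct(cores):
    """Contract the train of cores back into a full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))

# A sum of two rank-1 tensors has TT-ranks <= 2, so max_rank=2 recovers it exactly:
T = (np.einsum('i,j,k->ijk', [1., 2., 3.], [1., 0., 2., 1.], [2., 1., 3.])
     + np.einsum('i,j,k->ijk', [0., 1., 1.], [1., 1., 0., 0.], [1., 2., 0.]))
cores = tt_svd(T, max_rank=2)
T_hat = tt_reconstruct(cores)
```

The super-compression mentioned in the abstract comes from storing d small 3-way cores instead of one exponentially large d-way array.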
RRCNN: An Enhanced Residual Recursive Convolutional Neural Network for Non-stationary Signal Decomposition
Time-frequency analysis is an important and challenging task in many
applications. Fourier and wavelet analysis are two classic methods that have
achieved remarkable success in many fields. They also exhibit limitations when
applied to nonlinear and non-stationary signals. To address this challenge, a
series of nonlinear and adaptive methods, pioneered by the empirical mode
decomposition method, have been proposed. Their aim is to decompose a
non-stationary signal into quasi-stationary components which reveal better
features in the time-frequency analysis. Recently, inspired by deep learning,
we proposed a novel method called residual recursive convolutional neural
network (RRCNN). Not only can RRCNN achieve more stable decomposition than
existing methods while batch-processing large-scale signals at low
computational cost, but deep learning also provides a unique perspective on
non-stationary signal decomposition. In this study, we aim to further improve
RRCNN with the help of several nimble techniques from deep learning and
optimization to ameliorate the method and overcome some of the limitations of
this technique.

Comment: 8 pages, 4 figures
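The sifting procedure at the heart of empirical mode decomposition, which the RRCNN line of work takes as its starting point, can be sketched in a few lines. This minimal version (fixed iteration count, no boundary handling or stopping criterion; all names are chosen here for illustration) extracts a first intrinsic mode function from a two-tone test signal.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift(x, t, n_iter=10):
    """One EMD sifting pass: repeatedly subtract the mean of cubic-spline
    envelopes through the local maxima and minima. Minimal sketch with a
    fixed iteration count and no boundary handling or stopping criterion."""
    h = x.copy()
    for _ in range(n_iter):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 4 or len(minima) < 4:
            break                                  # too few extrema to build envelopes
        upper = CubicSpline(t[maxima], h[maxima])(t)
        lower = CubicSpline(t[minima], h[minima])(t)
        h = h - 0.5 * (upper + lower)              # remove the local envelope mean
    return h

t = np.linspace(0.0, 1.0, 2000)
x = np.sin(2 * np.pi * 40 * t) + np.sin(2 * np.pi * 4 * t)  # fast + slow tone
imf = sift(x, t)       # first IMF ~ the fast, quasi-stationary oscillation
residue = x - imf      # residue ~ the slow component
```

The learned RRCNN decomposition replaces this iterative, per-signal sifting with a trained network, which is what enables batch processing of large signal collections.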
Consistent Dynamic Mode Decomposition
We propose a new method for computing Dynamic Mode Decomposition (DMD)
evolution matrices, which we use to analyze dynamical systems. Unlike the
majority of existing methods, our approach is based on a variational
formulation consisting of data alignment penalty terms and constitutive
orthogonality constraints. Our method does not make any assumptions on the
structure of the data or their size, and thus it is applicable to a wide range
of problems including non-linear scenarios or extremely small observation sets.
In addition, our technique is robust to noise that is independent of the
dynamics and it does not require input data to be sequential. Our key idea is
to introduce a regularization term for the forward and backward dynamics. The
obtained minimization problem is solved efficiently using the Alternating
Direction Method of Multipliers (ADMM), which requires two Sylvester equation solves per
iteration. Our numerical scheme converges empirically and is similar to a
provably convergent ADMM scheme. We compare our approach to various
state-of-the-art methods on several benchmark dynamical systems.
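The Sylvester-equation solves mentioned above are a standard building block for such ADMM schemes. The sketch below illustrates only that building block: a least-squares subproblem with two matrix couplings whose normal equations form a Sylvester equation, solved in one `scipy.linalg.solve_sylvester` call. The objective and the random matrices are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Subproblems of the form  min_X ||A X - Y||_F^2 + ||X B - C||_F^2  have the
# normal equations (A^T A) X + X (B B^T) = A^T Y + C B^T, a Sylvester equation
# solvable in a single call. The matrices below are random placeholders.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))
B = rng.standard_normal((4, 6))
Y = rng.standard_normal((5, 4))
C = rng.standard_normal((4, 6))

X = solve_sylvester(A.T @ A, B @ B.T, A.T @ Y + C @ B.T)
```

In an ADMM iteration of the kind described, each of the two per-iteration updates would reduce to one such solve, with the penalty and regularization terms absorbed into the coefficient matrices.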