Scaling symmetry, renormalization, and time series modeling
We present and discuss a stochastic model of financial assets dynamics based
on the idea of an inverse renormalization group strategy. With this strategy we
construct the multivariate distributions of elementary returns based on the
scaling with time of the probability density of their aggregates. In its
simplest version the model is the product of an endogenous auto-regressive
component and a random rescaling factor designed to embody also exogenous
influences. Mathematical properties like increments' stationarity and
ergodicity can be proven. Thanks to the relatively low number of parameters,
model calibration can be conveniently based on a method of moments, as
exemplified in the case of historical data of the S&P500 index. The calibrated
model accounts very well for many stylized facts, like volatility clustering,
power law decay of the volatility autocorrelation function, and multiscaling
with time of the aggregated return distribution. In agreement with empirical
evidence in finance, the dynamics is not invariant under time reversal and,
with suitable generalizations, skewness of the return distribution and leverage
effects can be included. The analytical tractability of the model opens
interesting perspectives for applications, for instance in terms of obtaining
closed formulas for derivative pricing. Two further important features are the
possibility of making contact, in certain limits, with auto-regressive models
widely used in finance, and the possibility of partially resolving the
long-memory and short-memory components of the volatility, with consistent
results when applied to historical series.
Comment: Main text (17 pages, 13 figures) plus Supplementary Material (16 pages, 5 figures).
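The core mechanism (an endogenous auto-regressive component multiplied by a random rescaling factor) can be sketched in a few lines. The parameter values and the lognormal choice for the rescaling factor below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not the paper's): AR(1) coefficient and a
# lognormal width for the random rescaling factor.
phi, n = 0.3, 50_000

eps = rng.standard_normal(n)
a = np.empty(n)                      # endogenous auto-regressive component
a[0] = eps[0]
for t in range(1, n):
    a[t] = phi * a[t - 1] + eps[t]

sigma = rng.lognormal(mean=0.0, sigma=0.5, size=n)  # exogenous rescaling factor
r = sigma * a                        # elementary returns

# Method-of-moments flavour: compare the second and fourth moments.
m2, m4 = np.mean(r**2), np.mean(r**4)
kurtosis = m4 / m2**2
print(kurtosis)   # noticeably above 3: heavy tails from the random rescaling
```

Matching such low-order moments to historical data is the spirit of the method-of-moments calibration mentioned in the abstract.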
Deep learning as closure for irreversible processes: A data-driven generalized Langevin equation
The ultimate goal of physics is to find a unique equation capable of
describing the evolution of any observable quantity in a self-consistent way.
Within the field of statistical physics, such an equation is known as the
generalized Langevin equation (GLE). Nevertheless, the formal and exact GLE is
not particularly useful, since it depends on the complete history of the
observable at hand, and on hidden degrees of freedom typically inaccessible
from a theoretical point of view. In this work, we propose the use of deep
neural networks as a new avenue for learning the intricacies of the unknowns
mentioned above. By using machine learning to eliminate the unknowns from GLEs,
our methodology outperforms previous approaches (in terms of efficiency and
robustness) where general fitting functions were postulated. Finally, our work
is tested against several prototypical examples, from colloidal systems and
particle chains immersed in a thermal bath to climatology and financial
models. In all cases, our methodology exhibits excellent agreement with the
actual dynamics of the observables under consideration.
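As a much simpler stand-in for the deep-network closure (a linear least-squares fit rather than a neural network, purely for illustration), the effective drift of a Markovian truncation of the GLE can be recovered directly from a simulated trajectory:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an overdamped Langevin trajectory with known drift coefficient
# gamma, then "learn" gamma back from the data.  This linear closure is our
# toy assumption; the paper uses deep networks for the general case.
dt, n, gamma = 0.01, 100_000, 1.0
x = np.empty(n)
x[0] = 0.0
noise = np.sqrt(2 * dt) * rng.standard_normal(n)
for t in range(1, n):
    x[t] = x[t - 1] - gamma * x[t - 1] * dt + noise[t]

# Least-squares estimate of the drift coefficient from increments.
dx = np.diff(x)
gamma_hat = -np.sum(x[:-1] * dx) / (np.sum(x[:-1] ** 2) * dt)
print(gamma_hat)   # close to the true gamma = 1.0
```

A neural network replaces the single fitted coefficient with a flexible function of the observable's history, which is where the efficiency and robustness gains reported above come from.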
Long-Term Dependence Characteristics of European Stock Indices
In this paper we measure the degrees of persistence of the time series of eight European stock market indices, after their lack of ergodicity and stationarity has been established. Properly identifying the nature of the persistence of financial time series is a crucial step in deciding whether econometric modeling of such series might provide meaningful results. Testing for ergodicity and stationarity must be the first step in deciding whether the assumptions of numerous time series models are met. Our results indicate that ergodicity and stationarity are very difficult to establish in daily observations of these market indices, and thus various time-series models cannot be successfully identified. However, the measured degrees of persistence point to the existence of certain dependencies, most likely of a nonlinear nature, which can perhaps be used in the identification of proper empirical econometric models of the dynamic time paths of the European stock market indices.
The paper computes and analyzes the long-term dependence of the equity index data as measured by global Hurst exponents, which are computed from wavelet multi-resolution analysis. For example, the FTSE turns out to be an ultra-efficient market with abnormally fast mean-reversion, faster than theoretically postulated by a geometric Brownian motion. Various methodologies appear to produce non-unique empirical measurement results, and it is very difficult to reach definite conclusions regarding the presence or absence of long-term dependence phenomena like persistence or anti-persistence based on the global or homogeneous Hurst exponent. More powerful methods, such as the computation of the multifractal spectra of financial time series, may be required. However, the visualization of the wavelet resonance coefficients and their power spectrograms in the form of localized scalograms and average scalegrams forcefully assists with the detection and measurement of several nonlinear types of market price diffusion.
Keywords: Long-Term Dependence, European Stock Indices
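A rough numerical counterpart: the paper estimates global Hurst exponents from wavelet multi-resolution analysis, but a comparable estimate can be obtained with the simpler aggregated-variance method, sketched here on synthetic i.i.d. data (where H should come out near 0.5):

```python
import numpy as np

rng = np.random.default_rng(2)

def hurst_aggvar(x, scales=(2, 4, 8, 16, 32, 64)):
    """Global Hurst exponent via the aggregated-variance method: the
    variance of block means of size m scales as m**(2H - 2).  A simpler
    stand-in for the wavelet estimator used in the paper."""
    logm, logv = [], []
    for m in scales:
        nblocks = len(x) // m
        means = x[: nblocks * m].reshape(nblocks, m).mean(axis=1)
        logm.append(np.log(m))
        logv.append(np.log(np.var(means)))
    slope = np.polyfit(logm, logv, 1)[0]   # slope = 2H - 2
    return slope / 2 + 1

white = rng.standard_normal(200_000)       # i.i.d. returns: H should be ~0.5
print(hurst_aggvar(white))
```

H < 0.5 would indicate anti-persistence (fast mean-reversion, as reported for the FTSE), H > 0.5 persistence.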
Dynamic reconfiguration of human brain networks during learning
Human learning is a complex phenomenon requiring flexibility to adapt
existing brain function and precision in selecting new neurophysiological
activities to drive desired behavior. These two attributes -- flexibility and
selection -- must operate over multiple temporal scales as performance of a
skill changes from being slow and challenging to being fast and automatic. Such
selective adaptability is naturally provided by modular structure, which plays
a critical role in evolution, development, and optimal network function. Using
functional connectivity measurements of brain activity acquired from initial
training through mastery of a simple motor skill, we explore the role of
modularity in human learning by identifying dynamic changes of modular
organization spanning multiple temporal scales. Our results indicate that
flexibility, which we measure by the allegiance of nodes to modules, in one
experimental session predicts the relative amount of learning in a future
session. We also develop a general statistical framework for the identification
of modular architectures in evolving systems, which is broadly applicable to
disciplines where network adaptability is crucial to the understanding of
system performance.
Comment: Main Text: 19 pages, 4 figures; Supplementary Materials: 34 pages, 4 figures, 3 tables.
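The flexibility measure mentioned above is commonly operationalized as the fraction of time windows in which a node changes its module assignment. A minimal sketch on synthetic module labels (placeholders, not real imaging data):

```python
import numpy as np

def flexibility(assignments):
    """Per-node flexibility: the fraction of consecutive time windows in
    which a node switches its module label.
    assignments: (n_windows, n_nodes) integer array of module labels."""
    changes = assignments[1:] != assignments[:-1]
    return changes.mean(axis=0)          # one value per node

# Synthetic assignments: 4 time windows, 4 nodes.
labels = np.array([
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
])
print(flexibility(labels))   # nodes 0 and 3 never switch -> flexibility 0
```

In the study, a subject's mean flexibility in one session is the predictor of learning in the next.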
The human ECG - nonlinear deterministic versus stochastic aspects
We discuss aspects of randomness and of determinism in electrocardiographic
signals. In particular, we take a critical look at attempts to apply methods of
nonlinear time series analysis derived from the theory of deterministic
dynamical systems. We will argue that deterministic chaos is not a likely
explanation for the short-time variability of the inter-beat interval times,
except for certain pathologies. Conversely, densely sampled full ECG recordings
possess properties typical of deterministic signals. In the latter case,
methods of deterministic nonlinear time series analysis can yield new insights.
Comment: 6 pages, 9 PS figures
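The first step of most deterministic nonlinear time-series methods applied to densely sampled ECGs is a time-delay (Takens) embedding; a minimal sketch on a toy periodic signal:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay (Takens) embedding: return the
    (len(x) - (dim - 1) * tau, dim) delay-coordinate matrix."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# A densely sampled periodic toy signal standing in for an ECG trace.
t = np.linspace(0, 20 * np.pi, 5000)
signal = np.sin(t)
emb = delay_embed(signal, dim=3, tau=25)
print(emb.shape)   # (4950, 3)
```

Quantities such as correlation dimension or nearest-neighbor predictability are then computed on the embedded points; for deterministic signals the embedding traces out a low-dimensional structure.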
Untenable nonstationarity: An assessment of the fitness for purpose of trend tests in hydrology
The detection and attribution of long-term patterns in hydrological time series have been important research topics for decades. A significant portion of the literature regards such patterns as "deterministic components" or "trends", even though the complexity of hydrological systems does not allow easy deterministic explanations and attributions. Consequently, trend estimation techniques have been developed to make and justify statements about tendencies in the historical data, which are often used to predict future events. Testing trend hypotheses on observed time series is widespread in the hydro-meteorological literature, mainly due to the interest in detecting consequences of human activities on the hydrological cycle. This analysis usually relies on the application of some null hypothesis significance tests (NHSTs) for slowly-varying and/or abrupt changes, such as Mann-Kendall, Pettitt, or similar, to summary statistics of hydrological time series (e.g., annual averages, maxima, minima, etc.). However, the reliability of this application has seldom been explored in detail. This paper discusses misuse, misinterpretation, and logical flaws of NHST for trends in the analysis of hydrological data from three different points of view: historic-logical, semantic-epistemological, and practical. Based on a review of NHST rationale, and basic statistical definitions of stationarity, nonstationarity, and ergodicity, we show that even if the empirical estimation of trends in hydrological time series is always feasible from a numerical point of view, it is uninformative and does not allow the inference of nonstationarity without assuming a priori additional information on the underlying stochastic process, according to deductive reasoning. This prevents the use of trend NHST outcomes to support nonstationary frequency analysis and modeling.
We also show that the correlation structures characterizing hydrological time series might easily be underestimated, further compromising the attempt to draw conclusions about trends spanning the period of records. Moreover, even though adjusting procedures accounting for correlation have been developed, some of them are insufficient or are applied only to some tests, while some others are theoretically flawed but still widely applied. In particular, using 250 unimpacted stream flow time series across the conterminous United States (CONUS), we show that the test results can dramatically change if the sequences of annual values are reproduced starting from daily stream flow records, whose larger sizes enable a more reliable assessment of the correlation structures.
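For concreteness, here is a bare-bones Mann-Kendall NHST of the kind discussed above, with no adjustment for ties or serial correlation (the absence of such adjustments is precisely one of the pitfalls the paper highlights):

```python
import math
import numpy as np

def mann_kendall(x):
    """Plain Mann-Kendall trend test: S statistic, normal approximation,
    two-sided p-value.  No ties or autocorrelation correction."""
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18          # variance of S, no ties
    z = 0.0 if s == 0 else (s - np.sign(s)) / math.sqrt(var_s)
    p = 1 - math.erf(abs(z) / math.sqrt(2))         # = 2 * (1 - Phi(|z|))
    return z, p

z, p = mann_kendall(np.arange(50, dtype=float))     # strictly increasing series
print(z, p)   # large z, p effectively zero: a "trend" is detected
```

As the paper argues, a small p-value here says nothing by itself about nonstationarity of the underlying process, and serial correlation inflates the rejection rate well beyond the nominal level.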
When is a Time Series I(0)?
This paper surveys the extensive recent literature on the problems of deciding what is meant by an I(0) process, and then deciding how to test for the property. A formidable difficulty exists in the construction of consistent and asymptotically correctly sized tests for the I(0) hypothesis, and this may appear to place a question mark over the validity of a large area of econometric theory and practice. To overcome these difficulties in practical applications, the paper proposes that a slightly different question needs to be posed, relating to the adequacy of approximation to asymptotic inference criteria in finite samples. A simulation-based test, aimed at discriminating between data sets on this basis, is examined in a Monte Carlo experiment.
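The operational content of the I(0) property can be illustrated numerically: for an I(0) process, n times the variance of the sample mean stays bounded as n grows, while for an I(1) process it diverges. A toy Monte Carlo check (not the paper's simulation-based test):

```python
import numpy as np

rng = np.random.default_rng(4)

def scaled_mean_var(simulate, n, reps=2000):
    """Monte Carlo estimate of n * Var(sample mean) for series of length n."""
    means = np.array([simulate(n).mean() for _ in range(reps)])
    return n * means.var()

i0 = lambda n: rng.standard_normal(n)              # white noise: I(0)
i1 = lambda n: np.cumsum(rng.standard_normal(n))   # random walk: I(1)

print(scaled_mean_var(i0, 100), scaled_mean_var(i0, 400))  # both near 1
print(scaled_mean_var(i1, 100), scaled_mean_var(i1, 400))  # grows with n
```

The difficulty the survey documents is that a finite sample from a near-integrated I(0) process can mimic the I(1) pattern above, which is why the paper reframes the question as one of adequacy of asymptotic approximation in finite samples.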
Testing power-law cross-correlations: Rescaled covariance test
We introduce a new test for detection of power-law cross-correlations among a
pair of time series - the rescaled covariance test. The test is based on a
power-law divergence of the covariance of the partial sums of the long-range
cross-correlated processes. Utilizing a heteroskedasticity and auto-correlation
robust estimator of the long-term covariance, we develop a test with desirable
statistical properties which is well able to distinguish between short- and
long-range cross-correlations. Such a test should be used as a starting point in
the analysis of long-range cross-correlations prior to an estimation of
bivariate long-term memory parameters. As an application, we show that the
relationship between volatility and traded volume, and volatility and returns
in the financial markets can be labeled as the one with power-law
cross-correlations.
Comment: 15 pages, 4 figures
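The object the test is built on can be illustrated directly: the covariance of the partial sums of two series, tracked across window sizes. For short-range cross-correlated noise this grows linearly (log-log slope near 1), while power-law cross-correlations would give a larger exponent. The common-factor construction below is an illustrative assumption, not market data:

```python
import numpy as np

rng = np.random.default_rng(5)

def cov_of_partial_sums(n, reps=4000):
    """Monte Carlo estimate of E[S_x(n) * S_y(n)] for two series sharing a
    common white-noise factor (short-range cross-correlation)."""
    common = rng.standard_normal((reps, n))
    x = common + rng.standard_normal((reps, n))
    y = common + rng.standard_normal((reps, n))
    return np.mean(x.sum(axis=1) * y.sum(axis=1))

sizes = np.array([16, 32, 64, 128])
covs = np.array([cov_of_partial_sums(n) for n in sizes])
slope = np.polyfit(np.log(sizes), np.log(covs), 1)[0]
print(slope)   # near 1: short-range cross-correlation
```

The rescaled covariance test formalizes this comparison with a heteroskedasticity- and autocorrelation-robust long-term covariance estimator, as described in the abstract.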
Long Memory Options: Valuation
This paper graphically demonstrates the significant impact of the observed financial market persistence, i.e., long-term memory or dependence, on European option valuation. Many empirical researchers have observed non-Fickian degrees of persistence or long memory in the financial markets, different from the Fickian neutral independence (i.i.d.) of the return innovations assumed by Black-Scholes' geometric Brownian motion. Moreover, Elliott and van der Hoek (2003) have now also provided a theoretical framework for incorporating these findings in the Black-Scholes risk-neutral valuation framework. This paper provides the first graphical demonstration of why and how such long-term memory phenomena change European option values, and thereby provides a basis for informed long-term memory arbitrage. Risk-neutral valuation is equivalent to valuation by real-world probabilities. By using a mono-fractional Brownian motion, it is easy to incorporate the various degrees of persistence into the binomial and Black-Scholes pricing formulas. Long memory options are of considerable importance in corporate remuneration packages, since warrants are written on a company's own shares for long expiration periods. Therefore, we recommend that for a proper valuation of such warrants, the degrees of persistence of the companies' share markets be measured and properly incorporated in the warrant valuation.
Keywords: Options, Long Memory, Persistence, Hurst Exponent, Executive Remuneration
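As a simplified illustration of the mechanism (only the total variance sigma^2 * T is replaced by sigma^2 * T^(2H); this shortcut is our assumption, not the full Elliott-van der Hoek construction), a Hurst exponent can be injected into the Black-Scholes call formula:

```python
import math

def bs_call_fractional(S, K, r, sigma, T, H=0.5):
    """Black-Scholes call price with total volatility sigma * T**H, so that
    H = 0.5 recovers the standard i.i.d. (Fickian) formula."""
    v = sigma * T**H                     # total volatility over [0, T]
    d1 = (math.log(S / K) + r * T) / v + v / 2
    d2 = d1 - v
    N = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

print(bs_call_fractional(100, 100, 0.02, 0.2, 4.0, H=0.5))   # standard BS
print(bs_call_fractional(100, 100, 0.02, 0.2, 4.0, H=0.7))   # persistent market
```

For long expirations (T > 1), persistence (H > 0.5) inflates the effective volatility and hence the option value, which is why the paper stresses the long-dated warrants in executive remuneration packages.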