Power-laws in recurrence networks from dynamical systems
Recurrence networks are a novel tool of nonlinear time series analysis
allowing the characterisation of higher-order geometric properties of complex
dynamical systems based on recurrences in phase space, which are a fundamental
concept in classical mechanics. In this Letter, we demonstrate that recurrence
networks obtained from various deterministic model systems as well as
experimental data naturally display power-law degree distributions with a scaling exponent γ that can be derived exclusively from the systems' invariant densities. For one-dimensional maps, we show analytically that γ is not related to the fractal dimension. For continuous systems, we find two distinct types of behaviour: power-laws with an exponent γ depending on a suitable notion of local dimension, and such with fixed γ.
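The construction underlying these results is the standard ε-recurrence network: two sampled states are linked whenever they lie within distance ε of each other in phase space. A minimal sketch (our illustration, not the Letter's code; the map, ε, and sample size are assumptions) for the fully chaotic logistic map, whose invariant density is singular at the interval endpoints and hence yields strongly inhomogeneous degrees:

```python
# Sketch: eps-recurrence network of the logistic map (illustrative values).
import numpy as np

N = 5000
x = np.empty(N)
x[0] = 0.3
for n in range(N - 1):                      # iterate x_{n+1} = 4 x_n (1 - x_n)
    x[n + 1] = 4.0 * x[n] * (1.0 - x[n])

eps = 0.01
# Link two sampled states if they recur within eps; exclude self-loops.
A = (np.abs(x[:, None] - x[None, :]) < eps) & ~np.eye(N, dtype=bool)
degree = A.sum(axis=1)

# A power-law tail shows up as a straight line in a log-log degree histogram.
counts, edges = np.histogram(degree, bins=50)
```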
Timing of transients: Quantifying reaching times and transient behavior in complex systems
In dynamical systems, one may ask how long it takes for a trajectory to reach the attractor, i.e. how long it spends in the transient phase. Although for a single trajectory the mathematically precise answer may be infinity, it still makes sense to compare different trajectories and quantify which of them approaches the attractor earlier. In this article, we categorize several problems of quantifying such transient times. To treat them, we propose two metrics, area under distance curve and regularized reaching time, that capture two complementary aspects of transient dynamics. The first, area under distance curve, is the distance of the trajectory to the attractor integrated over time. It measures which trajectories are 'reluctant', i.e. stay distant from the attractor for long, or 'eager' to approach it right away. Regularized reaching time, on the other hand, quantifies the additional time (positive or negative) that a trajectory starting at a chosen initial condition needs to approach the attractor as compared to some reference trajectory. A positive or negative value means that it approaches the attractor by this much 'later' or 'earlier' than the reference, respectively. We demonstrate their substantial potential for applications through multiple paradigmatic examples, uncovering new features.
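A minimal sketch of both measures (our illustration; the linear test system, tolerance ε, and reference trajectory are assumptions, not from the article), using dx/dt = -x with attractor x* = 0, where the exact reaching time is indeed infinite but the regularized difference is finite:

```python
# Sketch: area under distance curve and regularized reaching time.
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

def simulate(x0, t_max=30.0):
    sol = solve_ivp(lambda t, x: -x, (0.0, t_max), [x0], max_step=0.01)
    return sol.t, np.abs(sol.y[0])          # time, distance to the attractor

def area_under_distance_curve(t, d):
    return trapezoid(d, t)                  # distance integrated over time

def reaching_time(t, d, eps=1e-3):
    inside = np.nonzero(d < eps)[0]         # first entry into eps-neighbourhood
    return t[inside[0]] if inside.size else np.inf

t_ref, d_ref = simulate(1.0)                # assumed reference trajectory
t_a, d_a = simulate(5.0)
# Positive: this trajectory approaches the attractor later than the reference.
delta_t = reaching_time(t_a, d_a) - reaching_time(t_ref, d_ref)
print(area_under_distance_curve(t_a, d_a), delta_t)
```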
Comparison of correlation analysis techniques for irregularly sampled time series
Geoscientific measurements often provide time series with irregular time sampling, requiring either data reconstruction (interpolation) or sophisticated methods to handle irregular sampling. We compare the linear interpolation technique and different approaches for analyzing the correlation functions and persistence of irregularly sampled time series, such as Lomb-Scargle Fourier transformation and kernel-based methods. In a thorough benchmark test we investigate the performance of these techniques.

All methods have comparable root mean square errors (RMSEs) for low skewness of the inter-observation time distribution. For high skewness, i.e. very irregular data, interpolation bias and RMSE increase strongly. We find a 40 % lower RMSE for the lag-1 autocorrelation function (ACF) for the Gaussian kernel method vs. the linear interpolation scheme in the analysis of highly irregular time series. For the cross correlation function (CCF) the RMSE is then lower by 60 %. The Lomb-Scargle technique gave results comparable to the kernel methods in the univariate case, but poorer results in the bivariate case. In particular, the high-frequency components of the signal, where classical methods show a strong bias in ACF and CCF magnitude, are preserved when using the kernel methods.

We illustrate the performance of the interpolation vs. the Gaussian kernel method by applying both to paleo-data from four locations, reflecting late Holocene Asian monsoon variability as derived from speleothem δ¹⁸O measurements. Cross correlation results are similar for both methods, which we attribute to the long time scales of the common variability. The persistence time (memory) is strongly overestimated when using the standard, interpolation-based approach. Hence, the Gaussian kernel is a reliable and more robust estimator with significant advantages compared to other techniques, and is suitable for large-scale application to paleo-data.
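The kernel-based estimators have a compact form: each product of observations is weighted by how close its time separation is to the requested lag, so no resampling onto a regular grid is needed. A sketch of a Gaussian-kernel CCF estimator (our illustration under assumed conventions; the bandwidth heuristic is an assumption, not the paper's prescription):

```python
# Sketch: Gaussian-kernel CCF estimator for irregularly sampled series.
import numpy as np

def gaussian_kernel_ccf(tx, x, ty, y, lags, h):
    """CCF at the given lags; h is the kernel bandwidth (choosing it as a
    fraction of the mean sampling interval is an assumed heuristic)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    dt = ty[None, :] - tx[:, None]          # all pairwise time separations
    prod = x[:, None] * y[None, :]          # all pairwise products
    ccf = np.empty(len(lags))
    for i, lag in enumerate(lags):
        w = np.exp(-0.5 * ((dt - lag) / h) ** 2)
        ccf[i] = (w * prod).sum() / w.sum()
    return ccf

# The ACF is the special case y = x, ty = tx:
rng = np.random.default_rng(1)
tx = np.sort(rng.uniform(0.0, 100.0, 300))  # irregular observation times
xv = np.sin(0.5 * tx) + 0.3 * rng.standard_normal(300)
acf = gaussian_kernel_ccf(tx, xv, tx, xv, np.arange(0.0, 10.0, 0.5), h=0.25)
```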
Potentials and limits to basin stability estimation
Stability assessment methods for dynamical systems have recently been complemented by basin stability and derived measures, i.e. probabilistic statements of whether systems remain in a basin of attraction given a distribution of perturbations. Their application requires numerical estimation via Monte Carlo sampling and integration of differential equations. Here, we analyse the applicability of basin stability to systems with basin geometries that are challenging for this numerical method, i.e. with fractal basin boundaries and riddled or intermingled basins of attraction. We find that numerical basin stability estimation is still meaningful for fractal boundaries but reaches its limits for riddled basins with holes.
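The estimation procedure itself is simple to sketch: draw perturbed initial conditions from the chosen distribution, integrate the dynamics, and report the fraction of trajectories that relax back to the attractor. A hedged example for a damped driven pendulum (the system, parameter values, and classification criterion are illustrative assumptions, not the paper's case studies):

```python
# Sketch: Monte Carlo basin stability of a damped driven pendulum.
import numpy as np
from scipy.integrate import solve_ivp

ALPHA, K, P = 0.1, 1.0, 0.5                 # damping, coupling, drive (assumed)

def pendulum(t, state):
    phi, omega = state
    return [omega, -ALPHA * omega + P - K * np.sin(phi)]

def returns_to_fixed_point(state0, t_end=200.0, tol=1e-2):
    """Integrate and check whether the trajectory settles near omega = 0,
    i.e. on the stable fixed point rather than the coexisting limit cycle."""
    sol = solve_ivp(pendulum, (0.0, t_end), state0, rtol=1e-8)
    return abs(sol.y[1, -1]) < tol

rng = np.random.default_rng(42)
n = 200                                     # Monte Carlo sample size
samples = np.column_stack([
    rng.uniform(-np.pi, np.pi, n),          # perturbed phases
    rng.uniform(-10.0, 10.0, n),            # perturbed frequencies
])
bs = np.mean([returns_to_fixed_point(s) for s in samples])
print(f"estimated basin stability: {bs:.2f} (n={n})")
```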
Deciphering the imprint of topology on nonlinear dynamical network stability
Acknowledgments: The authors gratefully acknowledge the support of BMBF, CoNDyNet, FK. 03SF0472A. The authors gratefully acknowledge the European Regional Development Fund (ERDF), the German Federal Ministry of Education and Research and the Land Brandenburg for supporting this project by providing resources on the high performance computer system at the Potsdam Institute for Climate Impact Research. The publication of this article was funded by the Open Access Fund of the Leibniz Association. We further thank Peng Ji for helpful discussions regarding the interpretation of the results.
Emergent inequality and business cycles in a simple behavioral macroeconomic model
Standard macroeconomic models assume that households are rational in the sense that they are perfect utility maximizers, and explain economic dynamics in terms of shocks that drive the economy away from the steady state. Here we build on a standard macroeconomic model in which a single rational representative household makes a savings decision of how much to consume or invest. In our model, households are myopic, boundedly rational, heterogeneous agents embedded in a social network. From time to time each household updates its savings rate by copying the savings rate of its neighbor with the highest consumption. If the updating time is short, the economy is stuck in a poverty trap, but for longer updating times economic output approaches its optimal value, and we observe a critical transition to an economy with irregular endogenous oscillations in economic output, resembling a business cycle. In this regime households divide into two groups: poor households with low savings rates and rich households with high savings rates. Thus, inequality and economic dynamics both occur spontaneously as a consequence of imperfect household decision-making. Adding a few “rational” agents with a fixed savings rate equal to the long-term optimum allows us to match business cycle timescales. Our work supports an alternative program of research that substitutes behaviorally grounded decision-making for utility maximization.
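The updating rule is simple enough to sketch. The following toy implementation (our illustration, with an assumed Solow-type production step and illustrative parameters; the authors' model specification may differ in detail) shows households on a small-world network copying the savings rate of their highest-consuming neighbor:

```python
# Sketch: imitation of the highest-consuming neighbor on a social network.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.watts_strogatz_graph(200, k=6, p=0.1)
n = G.number_of_nodes()
s = rng.uniform(0.05, 0.9, n)               # heterogeneous savings rates
k_cap = np.ones(n)                          # capital per household

ALPHA, DELTA, TAU = 0.5, 0.05, 20.0         # elasticity, depreciation, mean updating time

for t in range(2000):
    y = k_cap ** ALPHA                      # production
    c = (1.0 - s) * y                       # consumption
    k_cap += s * y - DELTA * k_cap          # capital accumulation
    # from time to time, a household copies its best-off neighbor
    for i in np.nonzero(rng.random(n) < 1.0 / TAU)[0]:
        best = max(G.neighbors(i), key=lambda j: c[j])
        if c[best] > c[i]:
            s[i] = s[best]
```

With small TAU, imitating high consumption locks in low savings rates, i.e. the poverty trap described above; longer updating times let output approach its optimum.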