5,884 research outputs found

    Untenable nonstationarity: An assessment of the fitness for purpose of trend tests in hydrology

    The detection and attribution of long-term patterns in hydrological time series have been important research topics for decades. A significant portion of the literature regards such patterns as ‘deterministic components’ or ‘trends’ even though the complexity of hydrological systems does not allow easy deterministic explanations and attributions. Consequently, trend estimation techniques have been developed to make and justify statements about tendencies in the historical data, which are often used to predict future events. Testing trend hypotheses on observed time series is widespread in the hydro-meteorological literature, mainly due to the interest in detecting consequences of human activities on the hydrological cycle. This analysis usually relies on the application of null hypothesis significance tests (NHSTs) for slowly-varying and/or abrupt changes, such as Mann-Kendall, Pettitt, or similar, to summary statistics of hydrological time series (e.g., annual averages, maxima, or minima). However, the reliability of this application has seldom been explored in detail. This paper discusses misuse, misinterpretation, and logical flaws of NHST for trends in the analysis of hydrological data from three different points of view: historic-logical, semantic-epistemological, and practical. Based on a review of NHST rationale and basic statistical definitions of stationarity, nonstationarity, and ergodicity, we show that even if the empirical estimation of trends in hydrological time series is always feasible from a numerical point of view, it is uninformative and does not allow the inference of nonstationarity without assuming a priori additional information on the underlying stochastic process, according to deductive reasoning. This prevents the use of trend NHST outcomes to support nonstationary frequency analysis and modeling. We also show that the correlation structures characterizing hydrological time series might easily be underestimated, further compromising the attempt to draw conclusions about trends spanning the period of record. Moreover, even though adjusting procedures accounting for correlation have been developed, some of them are insufficient or are applied only to some tests, while others are theoretically flawed but still widely applied. In particular, using 250 unimpacted stream flow time series across the conterminous United States (CONUS), we show that the test results can change dramatically if the sequences of annual values are reproduced starting from daily stream flow records, whose larger sizes enable a more reliable assessment of the correlation structures.
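
    The abstract's leading example is the Mann-Kendall test applied to annual summary statistics. As a minimal, generic sketch (the standard no-ties formulation with a normal approximation, using numpy/scipy; not the correlation-adjusted procedures discussed in the paper), the code below computes the test and applies it to a trend-free but positively autocorrelated series of the kind for which, as the abstract warns, the i.i.d. null can be misleading.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Classical Mann-Kendall trend test (no ties, normal approximation).

    Returns the S statistic, the standardized Z score, and a two-sided
    p-value computed under the i.i.d. null hypothesis.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S = sum of signs of all pairwise forward differences
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0                   # variance under the null (no ties)
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0   # continuity correction
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, z, p

# Trend-free but positively autocorrelated AR(1) series: across repeated draws,
# the i.i.d. p-value is often spuriously small, which is the paper's warning.
rng = np.random.default_rng(0)
y = np.zeros(60)
for t in range(1, 60):
    y[t] = 0.7 * y[t - 1] + rng.normal()
print(mann_kendall(y))
```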

    Simple wealth distribution model causing inequality-induced crisis without external shocks

    We address the issue of the dynamics of wealth accumulation and economic crisis triggered by extreme inequality, attempting to stick to the most intrinsic assumptions possible. Our general framework is that of pure or modified multiplicative processes, basically geometric Brownian motions. In contrast with the usual approach of injecting into such stochastic agent models either specific, idiosyncratic internal nonlinear interaction patterns or macroscopic disruptive features, we propose a dynamic inequality model where the attainment of a sizable fraction of the total wealth by very few agents induces a crisis regime with strong intermittency, the explicit coupling between the richest and the rest being a mere normalization mechanism, hence with minimal extrinsic assumptions. The model thus harnesses the recognized lack of ergodicity of geometric Brownian motions. It also provides a statistical intuition for the consequences of Thomas Piketty's recent "r > g" (return rate > growth rate) paradigmatic analysis of very-long-term wealth trends. We suggest that the "water-divide" of wealth flow may define effective classes, making an objective entry point to calibrate the model. Consistently, we check that a tax mechanism associated with a relative bias of a few percent on elementary daily transactions is able to slow or stop the build-up of large wealth. When extreme fluctuations are tamed down to a stationary regime with sizable but steadier inequalities, the model should still offer opportunities to study the dynamics of crisis and the inner effective classes induced through external or internal factors.
    Comment: 15 pages, 11 figures. Work initiated from discussion on Aristotle's status revisited by Paul Jorion in the many cases where the law of supply and demand fails. Accepted for publication in Physical Review E on April 19, 201
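
    As an illustration of the non-ergodicity of geometric Brownian motion that the abstract invokes (independent agents only; none of the paper's coupling, crisis, or tax mechanisms), the sketch below simulates a population of multiplicative-wealth agents with arbitrary illustrative parameters: expected wealth grows, yet the typical agent's wealth decays and total wealth concentrates at the top.

```python
import numpy as np

# Illustrative parameters, not taken from the paper
rng = np.random.default_rng(1)
n_agents, n_steps = 10_000, 500
mu, sigma = 0.02, 0.25              # per-step drift and volatility

w = np.ones(n_agents)               # every agent starts with unit wealth
for _ in range(n_steps):
    # independent geometric-Brownian (multiplicative) update for each agent
    w *= np.exp(mu - 0.5 * sigma**2 + sigma * rng.normal(size=n_agents))

top1_share = np.sort(w)[-n_agents // 100:].sum() / w.sum()
print(f"sample mean wealth:  {w.mean():.3g}")      # E[w] = exp(mu*t), but the sample mean
                                                   # is dominated by a handful of agents
print(f"median agent wealth: {np.median(w):.3g}")  # typical agent ~ exp((mu - sigma**2/2)*t), decaying here
print(f"top 1% wealth share: {top1_share:.2%}")
```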

    Compression and Conditional Emulation of Climate Model Output

    Numerical climate model simulations run at high spatial and temporal resolutions generate massive quantities of data. As our computing capabilities continue to increase, storing all of the data is not sustainable, and thus it is important to develop methods for representing the full datasets by smaller compressed versions. We propose a statistical compression and decompression algorithm based on storing a set of summary statistics as well as a statistical model describing the conditional distribution of the full dataset given the summary statistics. The statistical model can be used to generate realizations representing the full dataset, along with characterizations of the uncertainties in the generated data. Thus, the methods are capable of both compression and conditional emulation of the climate models. Considerable attention is paid to accurately modeling the original dataset (one year of daily mean temperature data), particularly with regard to the inherent spatial nonstationarity in global fields, and to determining the statistics to be stored, so that the variation in the original data can be closely captured while allowing for fast decompression and conditional emulation on modest computers.
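
    To make the idea of "store summaries, then emulate conditionally" concrete, here is a deliberately simplified sketch that is not the authors' algorithm (which models spatial nonstationarity in global fields): it compresses a synthetic daily temperature field to a per-location mean, standard deviation, and lag-1 autocorrelation, then draws emulated realizations from independent AR(1) processes matching those summaries.

```python
import numpy as np

def compress(data):
    """Reduce a (days, locations) temperature field to per-location summaries:
    mean, standard deviation, and lag-1 autocorrelation."""
    mean = data.mean(axis=0)
    std = data.std(axis=0)
    anom = data - mean
    rho1 = (anom[1:] * anom[:-1]).sum(axis=0) / (anom**2).sum(axis=0)
    return mean, std, rho1

def emulate(summary, n_days, rng):
    """Draw one realization consistent with the stored summaries,
    treating each location as an independent stationary AR(1) process."""
    mean, std, rho1 = summary
    innov_sd = std * np.sqrt(1 - rho1**2)
    out = np.empty((n_days, mean.size))
    out[0] = mean + std * rng.normal(size=mean.size)
    for t in range(1, n_days):
        out[t] = mean + rho1 * (out[t - 1] - mean) + innov_sd * rng.normal(size=mean.size)
    return out

rng = np.random.default_rng(2)
# Synthetic stand-in for model output: 365 days x 100 locations (kelvin)
truth = 280 + 10 * np.sin(2 * np.pi * np.arange(365) / 365)[:, None] + rng.normal(size=(365, 100))
summary = compress(truth)            # a few numbers per location instead of 365 values
sample = emulate(summary, 365, rng)  # one emulated realization of the "full" dataset
```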

    What Economists can learn from physics and finance

    Some economists (Mirowski, 2002) have asserted that the neoclassical economic model was motivated by Newtonian mechanics. This viewpoint encourages confusion. Theoretical mechanics is firmly grounded in reproducible empirical observations and experiments, and provides a very accurate description of macroscopic motions to within high decimal precision. In stark contrast, neoclassical economics, or ‘rational expectations’ (ratex), is a merely postulated model that cannot be used to describe any real market or economy, even to zeroth order in perturbation theory. In mechanics we study both chaotic and complex dynamics, whereas ratex restricts itself to equilibrium. Wigner (1967) has isolated the reasons for what he called ‘the unreasonable effectiveness of mathematics in physics’. In this article we isolate the reason for what Velupillai (2005), who was motivated by Wigner (1960), has called the ineffectiveness of mathematics in economics. I propose a remedy, namely, that economic theory should strive for the same degree of empirical success in modeling markets and economies as is exhibited by finance theory.
    Keywords: Nonequilibrium; empirically based modelling; stochastic processes; complexity

    What does the yield curve tell us about the Federal Reserve's implicit inflation target?

    This paper studies the time variation of the Federal Reserve’s inflation target between 1960 and 2004 using both macro and yield curve data. I estimate a New Keynesian dynamic stochastic general equilibrium model in which the inflation target follows a random-walk process. I compare estimation results obtained from both macroeconomic and yield curve data with estimates obtained from macro data alone, in order to determine what the yield curve tells us about the inflation target. In the joint estimation, the estimated inflation target is much higher during the mid-1980s than in the corresponding macro estimation. Also, part of the decline in the inflation target during the early-to-mid 1980s seems to be perceived as temporary when private agents have to filter out the random-walk part of the inflation target from the composite inflation target. My findings suggest that financial market participants were skeptical of the Fed’s commitment to low inflation even after the Volcker disinflation period of the early 1980s.
    Keywords: Interest rates; Inflation (Finance)
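
    The filtering problem described in the abstract, in which agents extract the permanent random-walk part of the inflation target from a composite signal, corresponds to a textbook local-level Kalman filter. The sketch below is a generic illustration with made-up noise variances, not the paper's estimated DSGE model.

```python
import numpy as np

def local_level_filter(y, q, r, x0=0.0, p0=1.0):
    """Kalman filter for the local-level model:
        x_t = x_{t-1} + w_t,  w_t ~ N(0, q)   (random-walk inflation target)
        y_t = x_t + v_t,      v_t ~ N(0, r)   (observed composite signal)
    Returns the filtered estimates of the permanent component x_t."""
    x, p = x0, p0
    est = np.empty(len(y))
    for t, obs in enumerate(y):
        p = p + q                      # predict
        k = p / (p + r)                # Kalman gain
        x = x + k * (obs - x)          # update with the new observation
        p = (1 - k) * p
        est[t] = x
    return est

# Simulate a slowly drifting random-walk target obscured by larger transitory shocks
rng = np.random.default_rng(3)
T, q, r = 180, 0.01, 0.25              # illustrative variances, not estimates from the paper
target = np.cumsum(np.sqrt(q) * rng.normal(size=T)) + 4.0
observed = target + np.sqrt(r) * rng.normal(size=T)
filtered = local_level_filter(observed, q, r, x0=4.0)
print(f"RMSE of filtered target: {np.sqrt(np.mean((filtered - target)**2)):.3f}")
```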

    On the Small Sample Properties of Dickey Fuller and Maximum Likelihood Unit Root Tests on Discrete-Sampled Short-Term Interest Rates

    Testing for unit roots in short-term interest rates plays a key role in the empirical modelling of these series. It is widely assumed that the volatility of interest rates follows some time-varying function which depends on the level of the series. This may cause distortions in the performance of conventional tests for unit root nonstationarity, since these are typically derived under the assumption of homoskedasticity. Given the limited attention this issue has received, we conducted an extensive Monte Carlo investigation to assess the performance of the Dickey-Fuller (DF) unit root tests, and examined the effects on the limiting distributions of test procedures (t- and likelihood ratio tests) based on maximum likelihood estimation of models for short-term rates with a linear drift.
    Keywords: Unit root; interest rates; CKLS model
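
    A stripped-down version of the kind of Monte Carlo experiment the abstract describes: simulate a driftless unit-root short rate with CKLS-style level-dependent volatility, apply the constant-only Dickey-Fuller t-test, and record how often the asymptotic 5% critical value (about -2.86) rejects the true null. The parameter values below are arbitrary illustrative choices, not the paper's experimental design.

```python
import numpy as np

def df_tstat(r):
    """Dickey-Fuller t-statistic: regress dr_t on a constant and r_{t-1},
    return the t-ratio on the lagged level."""
    dr, lag = np.diff(r), r[:-1]
    X = np.column_stack([np.ones(lag.size), lag])
    beta, *_ = np.linalg.lstsq(X, dr, rcond=None)
    resid = dr - X @ beta
    s2 = resid @ resid / (dr.size - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

def simulate_rate(n, gamma, sigma, rng, r0=0.05):
    """Driftless unit-root short rate with level-dependent volatility:
    r_t = r_{t-1} + sigma * r_{t-1}**gamma * eps_t (CKLS-style diffusion term)."""
    r = np.empty(n)
    r[0] = r0
    for t in range(1, n):
        r[t] = max(r[t - 1] + sigma * r[t - 1]**gamma * rng.normal(), 1e-4)
    return r

rng = np.random.default_rng(4)
n_rep, n_obs, crit = 2000, 250, -2.86   # ~5% critical value for the constant-only DF test
rejections = sum(df_tstat(simulate_rate(n_obs, gamma=1.5, sigma=0.2, rng=rng)) < crit
                 for _ in range(n_rep))
print(f"empirical rejection rate under the heteroskedastic unit-root null: {rejections / n_rep:.3f}")
```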