
    Recurrent Neural Filters: Learning Independent Bayesian Filtering Steps for Time Series Prediction

    Despite the recent popularity of deep generative state space models, few comparisons have been made between network architectures and the inference steps of the Bayesian filtering framework -- with most models approximating both the state transition and update steps simultaneously with a single recurrent neural network (RNN). In this paper, we introduce the Recurrent Neural Filter (RNF), a novel recurrent autoencoder architecture that learns distinct representations for each Bayesian filtering step, captured by a series of encoders and decoders. Testing this on three real-world time series datasets, we demonstrate that the decoupled representations learnt not only improve the accuracy of one-step-ahead forecasts while providing realistic uncertainty estimates, but also facilitate multistep prediction through the separation of encoder stages.
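The two Bayesian filtering steps that the RNF learns separately have a classical closed form. The sketch below (a scalar linear-Gaussian Kalman filter with illustrative parameters, not the RNF architecture itself) keeps the state-transition (predict) and update steps as distinct functions, mirroring the decoupling the abstract describes:

```python
import numpy as np

# Illustrative scalar linear-Gaussian model: x_t = A x_{t-1} + w,  y_t = H x_t + v.
A, Q, H, R = 0.9, 0.1, 1.0, 0.5

def predict(m, P):
    """State-transition step: propagate the posterior through the dynamics."""
    return A * m, A * P * A + Q

def update(m_pred, P_pred, y):
    """Update step: correct the prediction with the new observation y."""
    S = H * P_pred * H + R        # innovation variance
    K = P_pred * H / S            # Kalman gain
    m = m_pred + K * (y - H * m_pred)
    P = (1.0 - K * H) * P_pred
    return m, P

rng = np.random.default_rng(0)
x, m, P = 0.0, 0.0, 1.0
for _ in range(100):
    x = A * x + rng.normal(scale=Q ** 0.5)   # latent state
    y = H * x + rng.normal(scale=R ** 0.5)   # noisy observation
    m, P = update(*predict(m, P), y)         # the two decoupled steps

print(round(P, 4))   # posterior variance settles at a steady state
```

Because `predict` never touches the observation, it can be iterated on its own for multistep forecasting, which is the property the abstract attributes to the separated encoder stages.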

    Coupling geometry on binary bipartite networks: hypotheses testing on pattern geometry and nestedness

    Upon a matrix representation of a binary bipartite network, via permutation invariance, a coupling geometry is computed to approximate the minimum-energy macrostate of the network's system. Such a macrostate is supposed to constitute the intrinsic structure of the system, so the coupling geometry should be taken as the information content, or even the nonparametric minimum sufficient statistic, of the network data. Pertinent null and alternative hypotheses, such as nestedness, are then to be formulated according to the macrostate; that is, any efficient testing statistic needs to be a function of this coupling geometry. These conceptual architectures and mechanisms are by and large still missing in the community ecology literature, and this absence has rendered misconceptions prevalent in the research area. Here the algorithmically computed coupling geometry is shown to consist of deterministic multiscale block patterns, framed by two marginal ultrametric trees on the row and column axes, with stochastic uniform randomness within each block found on the finest scale. Functionally, a series of increasingly larger ensembles of matrix mimicries is derived by conforming to the multiscale block configurations, where matrix mimicking is subject to the constraints of the row and column sum sequences. Based on such a series of ensembles, a profile of distributions becomes a natural device for checking the validity of testing statistics or structural indexes. An energy-based index is used to test whether network data indeed contain structural geometry. A new block-based nestedness index is also proposed, and its validity is checked and compared with existing ones. A computing paradigm, called Data Mechanics, and its application to one real data network are illustrated throughout the developments and discussions in this paper.
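The constrained "matrix mimicries" can be illustrated with a standard fixed-margin randomization (a common null-model device in this literature, not the paper's Data Mechanics algorithm): repeated 2x2 checkerboard swaps shuffle a binary matrix while preserving its row and column sum sequences exactly:

```python
import numpy as np

def checkerboard_swap_sample(M, n_swaps=2000, seed=0):
    """Randomize a binary matrix while preserving row and column sums
    by flipping 2x2 'checkerboard' submatrices."""
    rng = np.random.default_rng(seed)
    A = M.copy()
    n, m = A.shape
    for _ in range(n_swaps):
        i, j = rng.choice(n, 2, replace=False)
        k, l = rng.choice(m, 2, replace=False)
        sub = A[np.ix_([i, j], [k, l])]
        # A checkerboard ([[1,0],[0,1]] or [[0,1],[1,0]]) can be flipped
        # to its complement without changing any marginal sum.
        if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
            A[np.ix_([i, j], [k, l])] = 1 - sub
    return A

M = (np.random.default_rng(1).random((8, 10)) < 0.4).astype(int)
S = checkerboard_swap_sample(M)
print(np.array_equal(M.sum(0), S.sum(0)), np.array_equal(M.sum(1), S.sum(1)))
```

Repeating this sampler many times yields an ensemble against which an observed structural index, such as a nestedness score, can be referenced.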

    Using data network metrics, graphics, and topology to explore network characteristics

    Yehuda Vardi introduced the term network tomography and was the first to propose and study how statistical inverse methods could be adapted to attack important network problems (Vardi, 1996). More recently, in one of his final papers, Vardi proposed notions of metrics on networks to define and measure distances between a network's links, its paths, and also between different networks (Vardi, 2004). In this paper, we apply Vardi's general approach for network metrics to a real data network, using data obtained from the special data network tools and testing procedures presented here. We illustrate how the metrics help explicate interesting features of the traffic characteristics on the network. We also adapt the metrics in order to condition on traffic passing through a portion of the network, such as a router or pair of routers, and show further how this approach helps to discover and explain interesting network characteristics.
    Comment: Published at http://dx.doi.org/10.1214/074921707000000058 in the IMS Lecture Notes Monograph Series (http://www.imstat.org/publications/lecnotes.htm) by the Institute of Mathematical Statistics (http://www.imstat.org).

    Nonparametric Neural Network Estimation of Lyapunov Exponents and a Direct Test for Chaos

    This paper derives the asymptotic distribution of the nonparametric neural network estimator of the Lyapunov exponent in a noisy system. Positivity of the Lyapunov exponent is an operational definition of chaos. We introduce a statistical framework for testing the chaotic hypothesis based on the estimated Lyapunov exponents and a consistent variance estimator. A simulation study to evaluate small-sample performance is reported. We also apply our procedures to daily stock return data. In most cases, the hypothesis of chaos in the stock return series is rejected at the 1% level, with an exception in some higher-power-transformed absolute returns.
    Keywords: artificial neural networks, nonlinear dynamics, nonlinear time series, nonparametric regression, sieve estimation
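The quantity being estimated can be illustrated on a noise-free toy system (the paper's contribution is the neural-network sieve estimator and its asymptotics for noisy data, which this sketch does not reproduce). For the logistic map at r = 4 the Lyapunov exponent is known analytically to be ln 2 > 0, i.e. the dynamics are chaotic:

```python
import math

def lyapunov_logistic(r=4.0, x0=0.3, n=100_000, burn=1_000):
    """Average log|f'(x_t)| along an orbit of the logistic map f(x) = r x (1 - x)."""
    x = x0
    for _ in range(burn):          # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))   # f'(x) = r (1 - 2x)
        x = r * x * (1 - x)
    return total / n

lam = lyapunov_logistic()
print(lam)   # close to ln 2 ~ 0.693; positivity is the operational definition of chaos
```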

    BUSINESS CYCLE ASYMMETRIES IN STOCK RETURNS: ROBUST EVIDENCE

    In this study we employ augmented and switching time series models to investigate the possible existence of business cycle asymmetries in U.S. stock returns. Our approach is fully parametric, and our testing strategy is robust to any conditional heteroskedasticity and outliers that may be present. We also use in-sample as well as out-of-sample forecasts from artificial neural networks to test for business cycle nonlinearities in U.S. stock returns. Our results based on nonlinear augmented and switching time series models show strong evidence of business cycle asymmetries in the conditional mean dynamics of U.S. stock returns. These results also show that conditional heteroskedasticity is unimportant when testing for asymmetries in the conditional mean. Moreover, the conditional volatility in stock returns is asymmetric and is more pronounced in recessions than in the expansion phase of business cycles. Similarly, the results based on neural network models show statistically significant evidence of business cycle nonlinearities in U.S. stock returns. The magnitude of these nonlinearities is more obvious in the post-World War II era than in the full sample period.
    Keywords: asymmetries; business cycles; conditional heteroskedasticity; long memory; nonlinearities; outliers; excess returns; stable distributions
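The kind of regime-conditional asymmetry being tested can be made concrete with a small simulation (all parameters here are hypothetical, chosen for illustration, not the paper's estimated models): returns are generated from a two-state Markov-switching model whose volatility is higher in the "recession" regime, and the regime-conditional standard deviations are then compared:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 5_000
# Two-state Markov chain for the business-cycle regime (0 = expansion, 1 = recession);
# transition probabilities are illustrative, not estimated from data.
P = np.array([[0.98, 0.02],
              [0.10, 0.90]])
state = np.zeros(T, dtype=int)
for t in range(1, T):
    state[t] = rng.choice(2, p=P[state[t - 1]])

mu = np.array([0.05, -0.10])    # regime means: positive drift in expansions
sigma = np.array([0.8, 1.6])    # higher volatility in recessions (the asymmetry)
returns = mu[state] + sigma[state] * rng.standard_normal(T)

vol_exp = returns[state == 0].std()
vol_rec = returns[state == 1].std()
print(vol_exp < vol_rec)
```

A switching-model test of the kind the abstract describes asks whether such regime-conditional differences in mean and volatility are statistically distinguishable in observed return data.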

    Testing for Non-Linear Dependence in Univariate Time Series: An Empirical Investigation of the Austrian Unemployment Rate

    The modelling of univariate time series is a subject of great importance in a variety of fields, in regional science and economics and beyond. Time series modelling involves three major stages: model identification, model estimation and diagnostic checking. This paper focuses on the model identification stage in general and on the issue of testing for non-linear dependence in particular. If the null hypothesis of independence is rejected, then the alternative hypothesis implies the existence of linear or non-linear dependence. The test of this hypothesis is of crucial importance. If the data are linearly dependent, linear time series models have to be specified (generally within the SARIMA methodology). If the data are non-linearly dependent, then non-linear time series modelling (such as ARCH, GARCH and autoregressive neural network models) must be employed. Several tests have recently been developed for this purpose. In this paper we make a modest attempt to investigate the power of competing tests (the McLeod-Li test, the Hsieh test, the BDS test and Teräsvirta's neural network test) in a real-world application domain, unemployment rate prediction, in order to determine what kinds of non-linear specification they have good power against, and which they do not. The results obtained indicate that all the tests reject the hypothesis of mere linear dependence in our application. But if interest is focused on predicting the conditional mean of the series, the neural network test is the most informative for model identification and its use is therefore highly recommended.
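Of the tests listed, the McLeod-Li test is the simplest to state: it applies a Ljung-Box portmanteau statistic to the squared (demeaned) series, so dependence that is invisible in the levels but present in the squares, as under ARCH effects, drives the statistic up. A minimal sketch, applied to a simulated ARCH(1)-type series with illustrative parameters:

```python
import numpy as np
from scipy import stats

def mcleod_li(x, lags=10):
    """McLeod-Li test: Ljung-Box statistic on the squared demeaned series.
    Under the null of no (non-linear) dependence, Q is approximately
    chi-squared with `lags` degrees of freedom."""
    z = (x - x.mean()) ** 2
    z = z - z.mean()
    n = len(z)
    denom = np.sum(z * z)
    Q = 0.0
    for k in range(1, lags + 1):
        r_k = np.sum(z[k:] * z[:-k]) / denom   # autocorrelation of squares at lag k
        Q += r_k ** 2 / (n - k)
    Q *= n * (n + 2)
    pval = stats.chi2.sf(Q, df=lags)
    return Q, pval

# ARCH(1)-type series: uncorrelated levels, but dependent squares.
rng = np.random.default_rng(0)
n = 2_000
x = np.zeros(n)
for t in range(1, n):
    x[t] = rng.standard_normal() * np.sqrt(0.2 + 0.5 * x[t - 1] ** 2)

Q, p = mcleod_li(x)
print(p < 0.05)   # non-linear dependence detected
```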

    Locality statistics for anomaly detection in time series of graphs

    The ability to detect change-points in a dynamic network or a time series of graphs is an increasingly important task in many applications of the emerging discipline of graph signal processing. This paper formulates change-point detection as a hypothesis testing problem in terms of a generative latent position model, focusing on the special case of the Stochastic Block Model time series. We analyze two classes of scan statistics, based on distinct underlying locality statistics presented in the literature. Our main contribution is the derivation of the limiting distributions and power characteristics of the competing scan statistics. Performance is compared theoretically, on synthetic data, and on the Enron email corpus. We demonstrate that both statistics are admissible in one simple setting, while one of the statistics is inadmissible in a second setting.
    Comment: 15 pages, 6 figures
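The general scan-statistic idea can be sketched with a deliberately simple locality statistic (maximum vertex degree per time step; the paper's locality statistics are more refined): compute the statistic for each graph in the series and flag the time step where it spikes. Here a dense "chatter" subgroup is injected into one step of an otherwise homogeneous random-graph series:

```python
import numpy as np

def max_degree_scan(adj_series):
    """Toy locality scan statistic: per-node degree at each time step,
    maximised over nodes; a spike flags a time point where some vertex's
    local connectivity changed abruptly."""
    return [int(A.sum(axis=1).max()) for A in adj_series]

rng = np.random.default_rng(7)
n, T, anomaly_t = 50, 20, 12

def sample_er(p):
    """Symmetric Erdos-Renyi adjacency matrix with edge probability p."""
    A = (rng.random((n, n)) < p).astype(int)
    A = np.triu(A, 1)
    return A + A.T

series = [sample_er(0.05) for _ in range(T)]
# Inject an anomaly: a densely connected subgroup at one time step,
# analogous to the elevated-communication events scanned for in the paper.
series[anomaly_t][:15, :15] = 1 - np.eye(15, dtype=int)

stats_t = max_degree_scan(series)
print(int(np.argmax(stats_t)))   # the flagged change-point
```

The paper's contribution is precisely the distribution theory this toy omits: limiting null distributions and power comparisons that say when such a spike is statistically significant.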