
    When are Contrarian Profits Due to Stock Market Overreaction?

    The profitability of contrarian investment strategies need not be the result of stock market overreaction. Even if returns on individual securities are temporally independent, portfolio strategies that attempt to exploit return reversals may still earn positive expected profits. This is due to the effects of cross-autocovariances from which contrarian strategies inadvertently benefit. We provide an informal taxonomy of return-generating processes that yield positive (and negative) expected profits under a particular contrarian portfolio strategy, and use this taxonomy to reconcile the empirical findings of weak negative autocorrelation for returns on individual stocks with the strong positive autocorrelation of portfolio returns. We present empirical evidence against overreaction as the primary source of contrarian profits, and show the presence of important lead-lag relations across securities.
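
    A minimal simulation sketch (not the paper's code) of the mechanism described above, assuming the contrarian weights w(i,t) = -(1/N)(r(i,t-1) - rbar(t-1)): each stock's own returns are serially independent, yet a lead-lag cross-autocovariance alone makes the strategy's average profit positive.

        import numpy as np

        rng = np.random.default_rng(0)
        T, a = 200_000, 0.4                       # sample length, lead-lag loading
        eps_A = rng.standard_normal(T)
        eps_B = rng.standard_normal(T)
        r_A = eps_A                               # zero own-autocorrelation
        r_B = a * np.roll(eps_A, 1) + eps_B       # loads on A's lagged shock
        r_B[0] = eps_B[0]
        R = np.column_stack([r_A, r_B])           # T x N return matrix, N = 2

        rbar_lag = R[:-1].mean(axis=1, keepdims=True)
        w = -(R[:-1] - rbar_lag) / R.shape[1]     # contrarian weights built from t-1 returns
        profit = (w * R[1:]).sum(axis=1)          # profit realised at each date t
        print("average contrarian profit:", profit.mean())            # positive
        print("own lag-1 autocorrelations:",
              np.corrcoef(r_A[1:], r_A[:-1])[0, 1],
              np.corrcoef(r_B[1:], r_B[:-1])[0, 1])                   # both near zero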

    Data-Snooping Biases in Tests of Financial Asset Pricing Models

    We investigate the extent to which tests of financial asset pricing models may be biased by using properties of the data to construct the test statistics. Specifically, we focus on tests using returns to portfolios of common stock where portfolios are constructed by sorting on some empirically motivated characteristic of the securities such as market value of equity. We present both analytical calculations and Monte Carlo simulations that show the effects of this type of data-snooping to be substantial. Even when the sorting characteristic is only marginally correlated with individual security statistics, 5 percent tests based on sorted portfolio returns may reject with probability one under the null hypothesis. This bias is shown to worsen as the number of securities increases given a fixed number of portfolios, and as the number of portfolios decreases given a fixed number of securities. We provide an empirical example that illustrates the practical relevance of these biases.
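
    A hedged Monte Carlo sketch of the bias described above (illustrative only, not the paper's design): securities are sorted on a characteristic that is only modestly correlated with their in-sample statistics, yet a nominal 5 percent t-test on an extreme sorted portfolio rejects a true null far more often than 5 percent of the time.

        import numpy as np

        rng = np.random.default_rng(1)
        N, T, n_port, rho, reps = 500, 60, 10, 0.3, 2_000
        rejections = 0
        for _ in range(reps):
            r = rng.standard_normal((T, N))              # null: zero-mean returns
            # characteristic 'snooped' from the same sample: correlation rho with
            # each security's (standardised) sample mean
            snoop = (rho * np.sqrt(T) * r.mean(axis=0)
                     + np.sqrt(1 - rho**2) * rng.standard_normal(N))
            top = r[:, np.argsort(snoop)[-N // n_port:]].mean(axis=1)
            t_stat = top.mean() / (top.std(ddof=1) / np.sqrt(T))
            rejections += int(abs(t_stat) > 1.96)        # nominal 5% two-sided test
        print("empirical size of the nominal 5% test:", rejections / reps)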

    Stock Market Prices Do Not Follow Random Walks: Evidence From a Simple Specification Test

    In this paper, we test the random walk hypothesis for weekly stock market returns by comparing variance estimators derived from data sampled at different frequencies. The random walk model is strongly rejected for the entire sample period (1962-1985) and for all sub-periods for a variety of aggregate returns indexes and size-sorted portfolios. Although the rejections are largely due to the behavior of small stocks, they cannot be ascribed to either the effects of infrequent trading or time-varying volatilities. Moreover, the rejection of the random walk cannot be interpreted as supporting a mean-reverting stationary model of asset prices, but is more consistent with a specific nonstationary alternative hypothesis.
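
    The test compares variance estimators across sampling frequencies. A minimal sketch of the underlying statistic, using the simplest uncorrected form (the paper's statistics add bias corrections and heteroscedasticity-robust standard errors): under a random walk, the variance of q-period returns is q times the variance of one-period returns, so VR(q) should be near one.

        import numpy as np

        def variance_ratio(returns, q):
            r = np.asarray(returns, dtype=float)
            r_q = np.convolve(r, np.ones(q), mode="valid")    # overlapping q-period returns
            return r_q.var(ddof=1) / (q * r.var(ddof=1))

        rng = np.random.default_rng(0)
        print(variance_ratio(rng.standard_normal(10_000), q=4))   # close to 1 under the null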

    An Econometric Analysis of Nonsynchronous Trading

    We develop a stochastic model of nonsynchronous asset prices based on sampling with random censoring. In addition to generalizing existing models of non-trading, our framework allows the explicit calculation of the effects of infrequent trading on the time series properties of asset returns. These are empirically testable implications for the variances, autocorrelations, and cross-autocorrelations of returns to individual stocks as well as to portfolios. We construct estimators to quantify the magnitude of non-trading effects in commonly used stock returns data bases and show the extent to which this phenomenon is responsible for the recent rejections of the random walk hypothesis.
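
    A hedged illustration of the non-trading mechanism (not the paper's estimators): each stock fails to trade with some probability each period, so its observed return is the return accumulated since its last trade; stale prices spread common news across periods and induce positive autocorrelation in observed portfolio returns even though the underlying returns are i.i.d.

        import numpy as np

        rng = np.random.default_rng(0)
        T, N, p = 50_000, 100, 0.5                # p = per-period non-trading probability
        factor = 0.01 * rng.standard_normal(T)    # common news, i.i.d.
        virtual = factor[:, None] + 0.02 * rng.standard_normal((T, N))
        trades = rng.random((T, N)) > p           # True where a stock actually trades

        observed = np.zeros((T, N))
        accrued = np.zeros(N)
        for t in range(T):
            accrued += virtual[t]
            traded = trades[t]
            observed[t, traded] = accrued[traded]  # report the return since the last trade
            accrued[traded] = 0.0                  # non-trading stocks show 0 and keep accruing

        port = observed.mean(axis=1)               # equal-weighted portfolio of observed returns
        print("lag-1 autocorrelation of portfolio returns:",
              np.corrcoef(port[1:], port[:-1])[0, 1])      # positive, from stale prices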

    The Size and Power of the Variance Ratio Test in Finite Samples: A Monte Carlo Investigation

    We examine the finite sample properties of the variance ratio test of the random walk hypothesis via Monte Carlo simulations under two null and three alternative hypotheses. These results are compared to the performance of the Dickey-Fuller t and the Box-Pierce Q statistics. Under the null hypothesis of a random walk with independent and identically distributed Gaussian increments, the empirical size of all three tests is comparable. Under a heteroscedastic random walk null, the variance ratio test is more reliable than either the Dickey-Fuller or Box-Pierce tests. We compute the power of these three tests against three alternatives of recent empirical interest: a stationary AR(1), the sum of this AR(1) and a random walk, and an integrated AR(1). By choosing the sampling frequency appropriately, the variance ratio test is shown to be as powerful as the Dickey-Fuller and Box-Pierce tests against the stationary alternative, and is more powerful than either of the two tests against the two unit-root alternatives.
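
    A hedged sketch in the same spirit, much simpler than the study's design: the empirical size of a nominal 5 percent variance-ratio test under an i.i.d. Gaussian random-walk null, using the uncorrected statistic and the textbook asymptotic variance 2(2q-1)(q-1)/(3q) for sqrt(T)*(VR(q)-1); the printed value should be compared with 0.05.

        import numpy as np

        def variance_ratio(r, q):
            r_q = np.convolve(r, np.ones(q), mode="valid")
            return r_q.var(ddof=1) / (q * r.var(ddof=1))

        rng = np.random.default_rng(0)
        T, q, reps = 512, 4, 2_000
        se = np.sqrt(2 * (2 * q - 1) * (q - 1) / (3 * q * T))    # asymptotic std. error of VR(q)
        z = np.array([(variance_ratio(rng.standard_normal(T), q) - 1) / se
                      for _ in range(reps)])
        print("empirical size of the nominal 5% test:", np.mean(np.abs(z) > 1.96))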

    Bayesian Physics Informed Neural Networks for Data Assimilation and Spatio-Temporal Modelling of Wildfires

    We apply the Physics Informed Neural Network (PINN) to the problem of wildfire fire-front modelling. We use the PINN to solve the level-set equation, which is a partial differential equation that models a fire-front through the zero-level-set of a level-set function. The result is a PINN that simulates a fire-front as it propagates through the spatio-temporal domain. We show that popular optimisation cost functions used in the literature can result in PINNs that fail to maintain temporal continuity in modelled fire-fronts when there are extreme changes in exogenous forcing variables such as wind direction. We thus propose novel additions to the optimisation cost function that improve temporal continuity under these extreme changes. Furthermore, we develop an approach to perform data assimilation within the PINN such that the PINN predictions are drawn towards observations of the fire-front. Finally, we incorporate our novel approaches into a Bayesian PINN (B-PINN) to provide uncertainty quantification in the fire-front predictions. This is significant as the standard solver, the level-set method, does not naturally offer the capability for data assimilation and uncertainty quantification. Our results show that, with our novel approaches, the B-PINN can produce accurate predictions with high-quality uncertainty quantification on real-world data. (Accepted for publication in Spatial Statistics.)
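
    A hedged PyTorch sketch (not the authors' code) of the physics residual such a PINN would penalise for the level-set equation phi_t + s*|grad phi| = 0, where phi(x, y, t) is the level-set function; the spread rate s is taken as an assumed constant here, whereas the paper couples the front to exogenous forcing such as wind.

        import torch

        net = torch.nn.Sequential(                # phi(x, y, t) approximated by an MLP
            torch.nn.Linear(3, 64), torch.nn.Tanh(),
            torch.nn.Linear(64, 64), torch.nn.Tanh(),
            torch.nn.Linear(64, 1))
        s = 0.5                                   # assumed constant spread rate

        def levelset_residual(xyt):
            xyt = xyt.requires_grad_(True)
            phi = net(xyt)
            grads = torch.autograd.grad(phi, xyt, torch.ones_like(phi), create_graph=True)[0]
            phi_x, phi_y, phi_t = grads[:, 0], grads[:, 1], grads[:, 2]
            return phi_t + s * torch.sqrt(phi_x**2 + phi_y**2 + 1e-12)

        collocation = torch.rand(1024, 3)         # (x, y, t) points in the unit cube
        physics_loss = levelset_residual(collocation).pow(2).mean()
        physics_loss.backward()                   # an optimiser step on net's parameters would follow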

    Econometric Models of Limit-Order Executions

    This paper attempts to assess whether money can generate persistent economic fluctuations in dynamic general equilibrium models of the business cycle. We show that a small nominal friction in the goods market can make the response of output to monetary shocks large and persistent if it is amplified by real wage rigidity in the labor market. We also argue that, given the level of real wage rigidity that is observed in developed countries, nominal stickiness might be sufficient for money to produce economic fluctuations as persistent as those observed in the data.

    An Ordered Probit Analysis of Transaction Stock Prices

    We estimate the conditional distribution of trade-to-trade price changes using ordered probit, a statistical model for discrete random variables. Such an approach takes into account the fact that transaction price changes occur in discrete increments, typically eighths of a dollar, and occur at irregularly spaced time intervals. Unlike existing continuous-time/discrete-state models of discrete transaction prices, ordered probit can capture the effects of other economic variables on price changes, such as volume, past price changes, and the time between trades. Using 1988 transactions data for over 100 randomly chosen U.S. stocks, we estimate the ordered probit model via maximum likelihood and use the parameter estimates to measure several transaction-related quantities, such as the price impact of trades of a given size, the tendency towards price reversals from one transaction to the next, and the empirical significance of price discreteness.
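
    A hedged, stripped-down sketch of an ordered-probit likelihood for discretised price moves (down-tick / no change / up-tick) with a single explanatory variable, fitted by maximum likelihood; the model described above uses many more regressors and price-change states.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        n = 5_000
        x = rng.standard_normal(n)                    # e.g. a signed-volume regressor
        latent = 0.8 * x + rng.standard_normal(n)     # unobserved 'virtual' price change
        y = np.digitize(latent, [-0.5, 0.5])          # 0 = down-tick, 1 = flat, 2 = up-tick

        def neg_loglik(params):
            beta, c1, log_gap = params
            cuts = np.array([-np.inf, c1, c1 + np.exp(log_gap), np.inf])  # ordered cutpoints
            upper = norm.cdf(cuts[y + 1] - beta * x)
            lower = norm.cdf(cuts[y] - beta * x)
            return -np.sum(np.log(np.clip(upper - lower, 1e-12, None)))

        fit = minimize(neg_loglik, x0=np.array([0.0, -1.0, 0.0]), method="BFGS")
        beta_hat, c1_hat, log_gap_hat = fit.x
        print("slope:", beta_hat, "cutpoints:", c1_hat, c1_hat + np.exp(log_gap_hat))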

    Detecting modification of biomedical events using a deep parsing approach

    Background: This work describes a system for identifying event mentions in bio-molecular research abstracts that are either speculative (e.g. "analysis of IkappaBalpha phosphorylation", where it is not specified whether phosphorylation did or did not occur) or negated (e.g. "inhibition of IkappaBalpha phosphorylation", where phosphorylation did not occur). The data comes from a standard dataset created for the BioNLP 2009 Shared Task. The system uses a machine-learning approach, where the features used for classification are a combination of shallow features derived from the words of the sentences and more complex features based on the semantic outputs produced by a deep parser. Method: To detect event modification, we use a Maximum Entropy learner with features extracted from the data relative to the trigger words of the events. The shallow features are bag-of-words features based on a small sliding context window of 3-4 tokens on either side of the trigger word. The deep-parser features are derived from parses produced by the English Resource Grammar and the RASP parser. The outputs of these parsers are converted into the Minimal Recursion Semantics formalism, and from this we extract features motivated by linguistics and the data itself. All of these features are combined to create training or test data for the machine learning algorithm. Results: Over the test data, our methods produce approximately a 4% absolute increase in F-score for detection of event modification compared to a baseline based only on the shallow bag-of-words features. Conclusions: Our results indicate that grammar-based techniques can enhance the accuracy of methods for detecting event modification.
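
    A hedged sketch of the shallow baseline described above (toy data, not the BioNLP 2009 corpus, and without the deep-parser features): a maximum-entropy classifier, i.e. multinomial logistic regression, over bag-of-words features drawn from a small window around each event trigger word.

        from sklearn.feature_extraction import DictVectorizer
        from sklearn.linear_model import LogisticRegression

        def window_features(tokens, trigger_idx, width=3):
            lo, hi = max(0, trigger_idx - width), trigger_idx + width + 1
            return {f"bow={tok.lower()}": 1 for tok in tokens[lo:hi]}

        # toy examples: (sentence tokens, index of the trigger word, label)
        examples = [
            ("analysis of IkappaBalpha phosphorylation".split(), 3, "speculation"),
            ("inhibition of IkappaBalpha phosphorylation".split(), 3, "negation"),
            ("we observed IkappaBalpha phosphorylation".split(), 3, "none"),
        ]
        X_dicts = [window_features(toks, i) for toks, i, _ in examples]
        y = [label for _, _, label in examples]

        vec = DictVectorizer()
        clf = LogisticRegression(max_iter=1000)   # MaxEnt == multinomial logistic regression
        clf.fit(vec.fit_transform(X_dicts), y)
        test = window_features("blocking of XYZ phosphorylation".split(), 3)
        print(clf.predict(vec.transform([test])))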