    Information uncertainty and the reaction of stock prices to news

    Recent theoretical papers suggest that high uncertainty about firms’ economic prospects can explain delays in the adjustment of their stock prices to economic news. Using analyst forecast revisions and earnings announcements as proxies for news, we find mixed evidence in support of this hypothesis. We confirm that stocks of firms whose prospects are highly uncertain display a relatively large delayed price reaction (so-called continuation) after the release of news, but we argue that this evidence does not necessarily imply a slower adjustment speed: for these stocks the immediate reaction to news is also relatively strong. The magnitude of the delayed price reaction (the price continuation) depends both on the degree of price sluggishness and on the “scale” of the news hitting the stock. We therefore consider both the delayed and immediate responses, and compute measures of adjustment speed that do not depend on the “scale” of the news. We then compare these measures across portfolios of stocks characterized by different degrees of uncertainty. Our findings indicate that: (i) stock prices characterized by high uncertainty tend to adjust to bad news more sluggishly than those characterized by low uncertainty; (ii) the opposite holds true in the case of good news; (iii) stock prices characterized by high uncertainty tend to adjust to bad news more sluggishly than to good news. Previous empirical literature focuses on price continuation patterns but neglects to control for the “scale” of the news, reaching erroneous conclusions.
    Keywords: stock price continuation, price adjustment speed, news, earnings announcements, analyst forecasts, post-earnings announcement drift, post-analyst forecast revision drift, managers’ incentives
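    As a rough illustration of the “scale”-free logic, the following toy measure divides the immediate price response by the total (immediate plus delayed) response, so that scaling the news scales both components and cancels out. The function name and the numbers are hypothetical; this is a sketch of the idea, not the authors’ estimator.

        def adjustment_speed(immediate, delayed):
            """Share of the total price response to news occurring immediately.
            Values near 1 mean fast adjustment; values near 0 mean sluggish
            prices. Invariant to the "scale" of the news, since scaling the
            news scales both components. Illustrative toy measure only."""
            return immediate / (immediate + delayed)

        # Hypothetical cumulative abnormal returns (in %) around a news release
        print(adjustment_speed(immediate=1.5, delayed=0.5))  # low-uncertainty stock: 0.75
        print(adjustment_speed(immediate=0.8, delayed=1.2))  # high-uncertainty stock: 0.40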

    PARX model for football match predictions

    We propose an innovative approach to modelling and predicting the outcome of football matches, based on the Poisson Autoregression with eXogenous covariates (PARX) model recently proposed by Agosto, Cavaliere, Kristensen and Rahbek (2016). We show that this methodology is particularly suited to modelling the goal distribution of a football team and provides good forecast performance that can be exploited to develop a profitable betting strategy. The betting strategy is based on the idea that the odds proposed by the market do not reflect the true probability of the match outcome, because they may also incorporate betting volumes or strategic price settings designed to exploit bettors’ biases. The out-of-sample performance of the PARX model is better than that of the reference approach of Dixon and Coles (1997). We also evaluate our approach with a simple betting strategy applied to English Premier League data for the 2013/2014 and 2014/2015 seasons. The results show that the return from the betting strategy is larger than 35% in all the cases considered, and may even exceed 100% under an alternative strategy based on a predetermined threshold, which exploits the inefficiency of the betting market.
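    To make the model class concrete, here is a minimal sketch of a PARX(1,1)-style recursion in Python: goal counts are Poisson with an intensity driven by lagged goals, lagged intensity and one exogenous covariate. The parameter values, the covariate, and the estimation details are illustrative assumptions, not the authors’ implementation.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)

        def parx_lambda(params, y, x):
            """Intensity recursion:
            lambda_t = omega + alpha*y_{t-1} + beta*lambda_{t-1} + gamma*x_t."""
            omega, alpha, beta, gamma = params
            lam = np.empty_like(y, dtype=float)
            lam[0] = omega / max(1.0 - alpha - beta, 1e-6)  # crude initialization
            for t in range(1, len(y)):
                lam[t] = omega + alpha * y[t-1] + beta * lam[t-1] + gamma * x[t]
            return lam

        def neg_loglik(params, y, x):
            lam = parx_lambda(params, y, x)
            if np.any(lam <= 0):
                return 1e10
            return -np.sum(y * np.log(lam) - lam)  # Poisson log-likelihood (up to constants)

        # Simulate a toy goal series with one exogenous covariate
        T = 500
        x = rng.normal(size=T).clip(min=0)   # nonnegative covariate keeps lambda > 0
        y = np.zeros(T); lam = np.zeros(T); lam[0] = 1.5
        true = (0.3, 0.2, 0.5, 0.4)
        for t in range(1, T):
            lam[t] = true[0] + true[1]*y[t-1] + true[2]*lam[t-1] + true[3]*x[t]
            y[t] = rng.poisson(lam[t])

        res = minimize(neg_loglik, x0=[0.5, 0.1, 0.4, 0.1], args=(y, x),
                       bounds=[(1e-4, None), (0, 1), (0, 1), (0, None)])
        print("estimated (omega, alpha, beta, gamma):", res.x.round(3))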

    Are Fiscal Multipliers Estimated with Proxy-SVARs Robust?

    How large are government spending and tax multipliers? The fiscal proxy-SVAR literature provides heterogeneous estimates, depending on which proxies - fiscal or non-fiscal - are used to identify fiscal shocks. We reconcile the existing estimates via a flexible vector autoregressive model that achieves identification in the presence of a number of structural shocks larger than the number of available instruments. Our two main findings are as follows. First, the estimate of the tax multiplier is sensitive to the assumption of orthogonality between total factor productivity (the non-fiscal proxy) and tax shocks. If this correlation is assumed to be zero, the tax multiplier is found to be around one. If the correlation is non-zero, as supported by our empirical evidence, we find a tax multiplier three times as large. Second, we find the spending multiplier to be robustly larger than one across models that feature different sets of instruments. Our results are robust to the joint employment of different fiscal and non-fiscal instruments.
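    A minimal numerical sketch of why the orthogonality assumption matters, using a toy static two-shock model: a relevant proxy that is orthogonal to the non-fiscal shock recovers the tax shock’s impact column (up to scale) from the proxy/residual covariance, while a contaminated proxy biases it. The impact matrix, loadings and proxy equations below are invented for illustration and are not the paper’s model or estimates.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 5000

        # Toy structural shocks: a tax shock and a non-fiscal (TFP) shock
        tax, tfp = rng.normal(size=(2, n))

        # Reduced-form residuals mix the structural shocks through impact matrix B
        B = np.array([[1.0, 0.3],
                      [0.6, 1.0]])
        u = B @ np.vstack([tax, tfp])

        # A relevant proxy that is orthogonal to the TFP shock
        z = 0.8 * tax + rng.normal(scale=0.5, size=n)
        cov_uz = u @ z / n
        print("impact column, orthogonal proxy:  ", (cov_uz / cov_uz[0]).round(2))
        print("true impact column:               ", (B[:, 0] / B[0, 0]).round(2))

        # If orthogonality fails (the proxy also loads on TFP), the same formula
        # is biased -- the sensitivity the paper documents for the tax multiplier
        z_bad = 0.8 * tax + 0.3 * tfp + rng.normal(scale=0.5, size=n)
        cov_bad = u @ z_bad / n
        print("impact column, contaminated proxy:", (cov_bad / cov_bad[0]).round(2))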

    R&D Subsidization effect and network centralization. Evidence from an agent-based micro-policy simulation

    This paper presents an agent-based micro-policy simulation model for assessing the effect of public R&D policy when R&D and non-R&D performing companies are located within a network. We first illustrate the behavioural structure and the computational logic of the proposed model; we then provide a simulation experiment in which the total level of R&D activated by a fixed amount of public support is analysed as a function of the companies’ network topology. More specifically, the simulation experiment shows that greater “hubness” of the network is likely to be accompanied by a decreasing median of the system’s aggregate total R&D performance. Since aggregate firm idiosyncratic R&D (i.e., the part of total R&D independent of spillovers) is slightly increasing, we conclude that positive cross-firm spillover effects - for a given amount of support - have a sizeable impact within less centralized networks, where fewer hubs emerge. This may question the common wisdom that larger R&D externality effects are more likely to arise when a few central champions receive support.
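    A toy simulation in the spirit of the experiment, assuming an invented spillover rule with diminishing returns to absorbed neighbour R&D; it uses networkx graph generators and is not the paper’s agent-based model.

        import numpy as np
        import networkx as nx

        def total_rnd(G, subsidy_total=100.0):
            """Toy accounting: every firm's idiosyncratic R&D is an equal
            share of a fixed subsidy; on top of that, each firm absorbs
            spillovers from its direct neighbours with diminishing returns
            (square root). Invented rule, not the paper's behavioural model."""
            n = G.number_of_nodes()
            own = np.full(n, subsidy_total / n)
            A = nx.to_numpy_array(G)
            absorbed = np.sqrt(A @ own)   # concave absorption of neighbours' R&D
            return float(np.sum(own + absorbed))

        n = 100
        hub_net = nx.barabasi_albert_graph(n, 1, seed=2)   # centralized: few large hubs
        flat_net = nx.random_regular_graph(2, n, seed=2)   # decentralized, similar edge count
        print("hub-dominated network, total R&D:", round(total_rnd(hub_net), 1))
        print("decentralized network, total R&D:", round(total_rnd(flat_net), 1))

    With the concave absorption rule, the evenly connected network activates more total R&D from the same subsidy, mirroring the finding that spillovers bite harder in less centralized networks.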

    WebCrow: A web-based system for crossword solving

    Language games represent one of the most fascinating challenges in artificial intelligence research. In this paper we give an overview of WebCrow, a system that tackles crosswords using the Web as a knowledge base. This appears to be a novel approach with respect to the available literature. It is also the first solver for non-English crosswords and has been designed to be potentially multilingual. Although WebCrow has been implemented only in a preliminary version, it already displays very interesting results, reaching the performance of a human beginner: crosswords that are “easy” for expert humans are solved, within competition time limits, with 80% of words and over 90% of letters correct.
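    For flavour, a minimal constraint-satisfaction sketch of the crossword-filling step, with a hypothetical two-slot grid standing in for Web-mined candidate lists; WebCrow’s actual solver is far more sophisticated.

        # Each slot has a ranked candidate list (in WebCrow, mined from the Web);
        # slots that cross must agree on the shared letter. Toy data only.
        slots = {
            "1-across": ["CAT", "CAR", "COW"],
            "1-down":   ["CAB", "TUB", "RIB"],
        }
        # (slot A, index in A, slot B, index in B): letters must match
        crossings = [("1-across", 0, "1-down", 0)]

        def consistent(assignment):
            for s1, i, s2, j in crossings:
                if s1 in assignment and s2 in assignment:
                    if assignment[s1][i] != assignment[s2][j]:
                        return False
            return True

        def solve(assignment, remaining):
            """Depth-first backtracking over candidate words."""
            if not remaining:
                return assignment
            slot, rest = remaining[0], remaining[1:]
            for word in slots[slot]:
                assignment[slot] = word
                if consistent(assignment):
                    result = solve(assignment, rest)
                    if result:
                        return result
                del assignment[slot]
            return None

        print(solve({}, list(slots)))   # -> {'1-across': 'CAT', '1-down': 'CAB'}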

    Bootstrapping DSGE models

    This paper explores the potential of bootstrap methods in the empirical evaluation of dynamic stochastic general equilibrium (DSGE) models and, more generally, of linear rational expectations models featuring unobservable (latent) components. We consider two dimensions. First, we provide mild regularity conditions that suffice for the bootstrap Quasi-Maximum Likelihood (QML) estimator of the structural parameters to mimic the asymptotic distribution of the QML estimator. Consistency of the bootstrap allows us to keep the probability of false rejections of the cross-equation restrictions under control. Second, we show that the realizations of the bootstrap estimator of the structural parameters can be used constructively to build novel, computationally straightforward tests for model misspecification, including the case of weak identification. In particular, we show that under strong identification and bootstrap consistency, a test statistic based on a set of realizations of the bootstrap QML estimator approximates the Gaussian distribution. When the regularity conditions for inference do not hold, as happens, e.g., when (part of) the structural parameters are weakly identified, this result is no longer valid. We can therefore evaluate how close or distant the estimated model is from the case of strong identification. Our Monte Carlo experiments suggest that the bootstrap plays an important role along both dimensions and represents a promising tool for evaluating the cross-equation restrictions and, under certain conditions, the strength of identification. An empirical illustration based on a small-scale DSGE model estimated on U.S. quarterly observations shows the practical usefulness of our approach.
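    A stylized sketch of the second idea, assuming a simple AR(1) stand-in for the DSGE model: bootstrap the (Q)ML estimator, then check whether the bootstrap draws look Gaussian, which the paper associates with strong identification. The model, estimator and normality test below are illustrative assumptions, not the paper’s procedure.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)

        def simulate_ar1(phi, T, rng):
            """Recursive simulation of a zero-mean AR(1) with unit-variance noise."""
            y = np.zeros(T)
            for t in range(1, T):
                y[t] = phi * y[t-1] + rng.normal()
            return y

        def estimate(y):
            """QML/OLS estimator of the AR(1) coefficient."""
            return float(y[1:] @ y[:-1] / (y[:-1] @ y[:-1]))

        T, phi_true = 200, 0.7
        y = simulate_ar1(phi_true, T, rng)
        phi_hat = estimate(y)

        # Parametric bootstrap: re-simulate from the estimated model, re-estimate
        boot = np.array([estimate(simulate_ar1(phi_hat, T, rng)) for _ in range(999)])

        # Under strong identification the bootstrap draws should look Gaussian;
        # a clear rejection of normality flags potential identification problems
        stat, pval = stats.normaltest(boot)
        print(f"phi_hat = {phi_hat:.3f}, normality p-value on bootstrap draws = {pval:.3f}")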

    An identification and testing strategy for proxy-SVARs with weak proxies

    When the proxies (external instruments) used to identify target structural shocks are weak, inference in proxy-SVARs (SVAR-IVs) is nonstandard, and the construction of asymptotically valid confidence sets for the impulse responses of interest requires weak-instrument robust methods. In the presence of multiple target shocks, test inversion techniques require extra restrictions on the proxy-SVAR parameters, other than those implied by the proxies, that may be difficult to interpret and test. We show that frequentist asymptotic inference in these situations can be conducted through Minimum Distance estimation and standard asymptotic methods if the proxy-SVAR can be identified by using 'strong' instruments for the non-target shocks, i.e. the shocks which are not of primary interest in the analysis. The suggested identification strategy hinges on a novel pre-test for the null of instrument relevance based on bootstrap resampling, which is not subject to pre-testing issues in the sense that the validity of post-test asymptotic inferences is not affected by the outcome of the test. The test is robust to conditional heteroskedasticity and/or zero-censored proxies, is computationally straightforward, and is applicable regardless of the number of shocks being instrumented. Illustrative examples show the empirical usefulness of the suggested identification and testing strategy.
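    A toy version of a bootstrap relevance pre-test, assuming i.i.d. data and a scalar residual; the function and data-generating choices are invented for illustration, and the paper’s test is considerably more general (conditional heteroskedasticity, zero-censored proxies, multiple instrumented shocks).

        import numpy as np

        def relevance_pretest(u, z, n_boot=999, seed=0):
            """Toy bootstrap test of proxy relevance, H0: Cov(u, z) = 0.
            Centers u*z to impose the null, then resamples it i.i.d."""
            rng = np.random.default_rng(seed)
            n = len(z)
            stat = abs(np.mean(u * z)) * np.sqrt(n)
            w = u * z - np.mean(u * z)   # null-imposed moment contributions
            null = np.array([abs(np.mean(rng.choice(w, size=n))) * np.sqrt(n)
                             for _ in range(n_boot)])
            return float(np.mean(null >= stat))   # bootstrap p-value

        rng = np.random.default_rng(4)
        n = 500
        shock = rng.normal(size=n)                  # the target structural shock
        u = shock + rng.normal(size=n)              # reduced-form residual loading on it
        z_strong = 0.7 * shock + rng.normal(size=n)
        z_weak = 0.02 * shock + rng.normal(size=n)
        print("p-value, strong proxy:", relevance_pretest(u, z_strong))
        print("p-value, weak proxy:  ", relevance_pretest(u, z_weak))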

    Estimation of Quasi-Rational DSGE Models

    Small-scale dynamic stochastic general equilibrium (DSGE) models have been treated as the benchmark in much of the monetary policy literature, given their ability to explain the impact of monetary policy on output, inflation and financial markets. The empirical failure of New Keynesian models is partially due to the Rational Expectations (RE) paradigm, which entails a tight structure on the dynamics of the system: under this hypothesis, agents are assumed to know the data generating process. In this paper, we propose the econometric analysis of New Keynesian DSGE models under an alternative expectations-generating paradigm that can be regarded as an intermediate position between rational expectations and learning, namely an adapted version of the "Quasi-Rational" Expectations (QRE) hypothesis. Given the agents' statistical model, we build a pseudo-structural form from the baseline system of Euler equations, imposing that the lag length of the reduced form is the same as in the 'best' statistical model.
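    A minimal sketch of the quasi-rational ingredient, assuming agents fit AR models by OLS and pick the 'best' one by BIC before forecasting; the helper names and the toy AR(2) data are hypothetical, not the paper’s pseudo-structural form.

        import numpy as np

        rng = np.random.default_rng(5)

        def fit_ar(y, p):
            """OLS fit of an AR(p); returns coefficients and residual variance."""
            Y = y[p:]
            X = np.column_stack([y[p-k:len(y)-k] for k in range(1, p + 1)])
            coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
            resid = Y - X @ coef
            return coef, float(resid @ resid / len(Y))

        def quasi_rational_forecast(y, max_p=4):
            """Agents select the 'best' statistical model by BIC and forecast
            with it, instead of knowing the true data generating process."""
            n = len(y)
            best_p = min(range(1, max_p + 1),
                         key=lambda p: n * np.log(fit_ar(y, p)[1]) + p * np.log(n))
            coef, _ = fit_ar(y, best_p)
            return best_p, float(coef @ y[-1:-best_p-1:-1])

        # Toy data: an AR(2) the agents do not know is an AR(2)
        y = np.zeros(300)
        for t in range(2, 300):
            y[t] = 0.5 * y[t-1] + 0.3 * y[t-2] + rng.normal()
        p, e_next = quasi_rational_forecast(y)
        print(f"selected AR order: {p}, one-step-ahead expectation: {e_next:.3f}")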