
    A comparative assessment of methodologies used to evaluate competition policy

    Research by academics and competition agencies on evaluating competition policy has grown rapidly during the last two decades. This paper surveys the literature in order to (i) assess the fitness for purpose of the main quantitative methodologies employed, and (ii) identify the main undeveloped areas and unanswered questions for future research. It suggests that policy evaluation is necessarily an imprecise science and that all existing methodologies have strengths and limitations. The areas where further work is most pressing include: understanding why Article 102 cases are only infrequently evaluated; bringing conscious discussion of the counterfactual firmly into the foreground; and widening the definition of policy to include success in deterrence and detection. At the heart of the discussion is the impact of selection bias on most aspects of evaluation. These topics are the focus of ongoing work in the CCP.

    The Profitability of Currency Speculation

    This paper presents the results of a post-sample simulation of a speculative strategy using a portfolio of foreign currency forward contracts. The main new features of the speculative strategy are (a) the use of Kalman filters to update the forecasting equation, (b) the allowance for transactions costs and margin requirements, and (c) the endogenous determination of the leveraging of the portfolio. While the forecasting model tended to overestimate profit and underestimate risk, the strategy was still profitable over a three-year period and it was possible to reject the hypothesis that the sum of profits was zero. Furthermore, the currency portfolio was found to have an extremely low market risk. Combinations of the speculative currency portfolio with traditional portfolios of U.S. equities resulted in considerable improvements in risk-adjusted returns on capital.
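
    The abstract does not reproduce the filtering equations; purely as an illustration of feature (a), the Python sketch below shows a textbook Kalman-filter update for a forecasting regression whose coefficients drift as a random walk. The function name, shapes, and state-noise assumptions are ours, not the paper's.

```python
import numpy as np

def kalman_update(beta, P, x, y, sigma2_eps, Q):
    """One Kalman-filter step for y_t = x_t' beta_t + eps_t with
    random-walk coefficients beta_t = beta_{t-1} + eta_t.
    beta: (k,) state mean, P: (k, k) state covariance, x: (k,) regressors,
    y: scalar observation, sigma2_eps: obs. variance, Q: (k, k) state noise."""
    # Predict: a random walk leaves the mean unchanged and inflates P by Q.
    P_pred = P + Q
    # Forecast the observation and its error variance.
    y_hat = x @ beta
    S = x @ P_pred @ x + sigma2_eps
    # Kalman gain and measurement update.
    K = P_pred @ x / S
    beta_new = beta + K * (y - y_hat)
    P_new = P_pred - np.outer(K, x) @ P_pred
    return beta_new, P_new, y_hat
```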

    Econometric Analysis of the Market Share Attraction Model

    Market share attraction models are useful tools for analyzing competitive structures. The models can be used to infer cross-effects of marketing-mix variables, and own effects can also be adequately estimated while conditioning on competitive reactions. Important features of attraction models are that they impose that market shares sum to unity and that the market shares of individual brands lie between 0 and 1. Next to analyzing competitive structures, attraction models are also often considered for forecasting market shares. The econometric analysis of the market share attraction model has not received much attention, and topics such as specification, diagnostics, estimation, and forecasting have not been thoroughly discussed in the academic marketing literature. In this chapter we go through a range of these topics and, along the way, indicate that there are ample opportunities to improve upon present-day practice. Keywords: model selection; forecasting; market share attraction model; diagnostics; estimation.
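
    For reference, the attraction specification can be written in its common multiplicative (MCI) form; the notation below is generic textbook notation, not necessarily the chapter's own:

```latex
% Brand i's attraction A_{i,t} combines a brand intercept with K
% marketing-mix variables x_{k,i,t}; its market share M_{i,t} is its
% attraction relative to the total over all I brands.
\begin{align}
  A_{i,t} &= \exp(\mu_i + \varepsilon_{i,t}) \prod_{k=1}^{K} x_{k,i,t}^{\beta_{k,i}}, \\
  M_{i,t} &= \frac{A_{i,t}}{\sum_{j=1}^{I} A_{j,t}}.
\end{align}
```

    The normalisation in the second equation is what delivers the two features stressed above: shares lie between 0 and 1 and sum to unity by construction.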

    The miracle of the Septuagint and the promise of data mining in economics

    This paper argues that the sometimes-conflicting results of a modern revisionist literature on data mining in econometrics reflect different approaches to solving the central problem of model uncertainty in a science of non-experimental data. The literature has entered an exciting phase, with theoretical development, methodological reflection, considerable technological strides on the computing front, and interesting empirical applications providing momentum for this branch of econometrics. The organising principle for this discussion of data mining is a philosophical spectrum that sorts the various econometric traditions according to their epistemological assumptions about the underlying data-generating process (DGP), starting with nihilism at one end and reaching claims of encompassing the DGP at the other; call it the DGP-spectrum. In the course of exploring this spectrum the reader will encounter various Bayesian, specific-to-general (S-G), as well as general-to-specific (G-S), methods. To set the stage for this exploration, the paper starts with a description of data mining and its potential risks, and a short section on potential institutional safeguards against these problems. Keywords: data mining; model selection; automated model selection; general-to-specific modelling; extreme bounds analysis; Bayesian model selection.
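
    As a rough illustration of the general-to-specific (G-S) end of this spectrum, the sketch below implements the simplest possible variant: backward elimination from a deliberately general regression. This is our simplification; serious G-S algorithms (e.g. automated model selection tools) add diagnostic tests, encompassing checks, and multiple search paths at every step.

```python
import numpy as np
import statsmodels.api as sm

def general_to_specific(y, X, alpha=0.05):
    """Naive general-to-specific search: start from the full set of
    candidate regressors (X is a pandas DataFrame) and repeatedly drop
    the least significant one until every survivor has p < alpha."""
    cols = list(X.columns)
    while cols:
        fit = sm.OLS(y, sm.add_constant(X[cols])).fit()
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < alpha:
            return fit              # every remaining regressor is significant
        cols.remove(worst)
    # Nothing survived: fall back to a constant-only model.
    return sm.OLS(y, np.ones((len(y), 1))).fit()
```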

    Asset Pricing Theories, Models, and Tests

    An important but still partially unanswered question in the investment field is why different assets earn substantially different returns on average. Financial economists have typically addressed this question in the context of theoretically or empirically motivated asset pricing models. Since many of the proposed “risk” theories are plausible, a common practice in the literature is to take the models to the data and perform “horse races” among competing asset pricing specifications. A “good” asset pricing model should produce small pricing (expected return) errors on a set of test assets and should deliver reasonable estimates of the underlying market and economic risk premia. This chapter provides an up-to-date review of the statistical methods that are typically used to estimate, evaluate, and compare competing asset pricing models. The analysis also highlights several pitfalls in the current econometric practice and offers suggestions for improving empirical tests.
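
    Concretely, the “horse race” logic can be stated in the standard linear beta pricing framework (generic notation, not taken from the chapter):

```latex
% Expected returns on N test assets are spanned by exposures (betas)
% to K priced risk factors carrying premia lambda; alpha_i is the
% pricing error the model leaves on asset i.
\begin{align}
  E[R_i] &= \gamma_0 + \beta_i^{\top}\lambda, \qquad i = 1, \dots, N, \\
  \alpha_i &= E[R_i] - \gamma_0 - \beta_i^{\top}\lambda.
\end{align}
```

    A “good” model in the sense of the abstract keeps the pricing errors small across the test assets while delivering plausible estimates of the premia.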

    Real-time prediction with U.K. monetary aggregates in the presence of model uncertainty

    A popular account for the demise of the U.K.’s monetary targeting regime in the 1980s blames the fluctuating predictive relationships between broad money and inflation and real output growth. Yet ex post policy analysis based on heavily revised data suggests no fluctuations in the predictive content of money. In this paper, we investigate the predictive relationships for inflation and output growth using both real-time and heavily revised data. We consider a large set of recursively estimated vector autoregressive (VAR) and vector error-correction (VECM) models, which differ in terms of lag length and the number of cointegrating relationships. We use Bayesian model averaging (BMA) to demonstrate that real-time monetary policymakers faced considerable model uncertainty. The in-sample predictive content of money fluctuated during the 1980s as a result of data revisions in the presence of model uncertainty. This feature is only apparent with real-time data, as heavily revised data obscure these fluctuations. Out-of-sample predictive evaluations rarely suggest that money matters for either inflation or real output. We conclude that both data revisions and model uncertainty contributed to the demise of the U.K.’s monetary targeting regime.
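
    A minimal sketch of the averaging step, assuming each candidate VAR/VECM has already been estimated: posterior model weights can be approximated from BIC values (the Schwarz approximation to the marginal likelihood, under a flat model prior). The function names and the BIC shortcut are ours, not necessarily the paper's implementation.

```python
import numpy as np

def bma_weights(log_likes, n_params, n_obs):
    """Approximate posterior model probabilities from BIC values, using
    the Schwarz approximation to the marginal likelihood and a flat
    prior across the candidate VAR/VECM specifications."""
    log_likes = np.asarray(log_likes, dtype=float)
    n_params = np.asarray(n_params, dtype=float)
    bic = -2.0 * log_likes + n_params * np.log(n_obs)
    w = np.exp(-0.5 * (bic - bic.min()))   # exp(-BIC/2) is prop. to marginal likelihood
    return w / w.sum()

def bma_forecast(forecasts, weights):
    """Model-averaged point forecast: probability-weighted combination
    of the individual models' forecasts."""
    return np.dot(weights, forecasts)
```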

    Empirical Validation of Agent Based Models: A Critical Survey

    This paper addresses the problem of finding the appropriate method for conducting empirical validation in agent-based (AB) models, which is often regarded as the Achilles’ heel of the AB approach to economic modelling. The paper has two objectives. First, to identify key issues facing AB economists engaged in empirical validation. Second, to critically appraise the extent to which alternative approaches deal with these issues. We identify a first set of issues that are common to both AB and neoclassical modellers and a second set of issues which are specific to AB modellers. This second set of issues is captured in a novel taxonomy, which takes into consideration the nature of the object under study, the goal of the analysis, the nature of the modelling assumptions, and the methodology of the analysis. Having identified the nature and causes of heterogeneity in empirical validation, we examine three important approaches to validation that have been developed in AB economics: indirect calibration, the Werker-Brenner approach, and the history-friendly approach. We also discuss a set of open questions within empirical validation. These include the trade-off between empirical support and tractability of findings, the issue of over-parameterisation, unconditional objects, counterfactuals, and the non-neutrality of data. Keywords: empirical validation; agent-based models; calibration; history-friendly modelling.
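
    As a stylised sketch of the indirect-calibration idea (our simplification, not the survey's code): the AB model is simulated across a parameter grid, and only parameter vectors whose simulated moments reproduce the chosen stylised facts within a tolerance are retained.

```python
import numpy as np

def indirect_calibration(simulate, empirical_moments, param_grid, tol):
    """Keep only parameter vectors whose simulated moments reproduce the
    chosen stylized facts within tolerance. `simulate(theta)` is assumed
    to run the AB model (averaging over Monte Carlo replications) and
    return a moment vector comparable to `empirical_moments`."""
    empirical_moments = np.asarray(empirical_moments)
    accepted = []
    for theta in param_grid:
        sim_moments = np.asarray(simulate(theta))
        if np.all(np.abs(sim_moments - empirical_moments) <= tol):
            accepted.append(theta)
    # The accepted set is the empirically admissible sub-region of the
    # parameter space; further analysis then works within this sub-region.
    return accepted
```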

    Reformulating empirical macro-econometric modelling

    The policy implications of estimated macro-econometric systems depend on the formulations of their equations, the methodology of empirical model selection and evaluation, the techniques of policy analysis, and their forecast performance. Drawing on recent results in the theory of forecasting, we question the role of ‘rational expectations’; criticize a common approach to testing economic theories; show that impulse-response methods of evaluating policy are seriously flawed; and question the mechanistic derivation of forecasts from econometric systems. In their place, we propose that expectations should be treated as instrumental to agents’ decisions; discuss a powerful new approach to the empirical modelling of econometric relationships; offer viable alternatives to studying policy implications; and note modifications to forecasting devices that can enhance their robustness to unanticipated structural breaks. Keywords: economic policy analysis; macro-econometric systems; empirical model selection and evaluation; forecasting; rational expectations; impulse-response analysis; structural breaks.
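
    For context on the impulse-response critique, this is the mechanical computation being questioned, in a minimal VAR(1) sketch (generic notation and function name are ours):

```python
import numpy as np

def var1_impulse_responses(A, B, horizons):
    """Impulse responses of the VAR(1) y_t = A y_{t-1} + B u_t to unit
    structural shocks: IRF(h) = A^h B. Higher-order VARs reduce to this
    case via the companion form. Choosing B (e.g. a Cholesky ordering)
    is the identification step on which such exercises hinge."""
    k = A.shape[0]
    irf = np.empty((horizons, k, k))
    Ah = np.eye(k)
    for h in range(horizons):
        irf[h] = Ah @ B     # effect at horizon h of each shock in u_t
        Ah = Ah @ A
    return irf
```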

    Structural Macro-Econometric Modelling in a Policy Environment

    The paper looks at the development of macroeconometric models over the past sixty years, in particular those that have been used for analysing policy options. We argue that there have been four generations of these models. Each generation has evolved new features, partly drawn from the developing academic literature and partly from perceived weaknesses in the previous generation. Overall, the evolution has been governed by a desire to answer a set of basic questions, and sometimes by what can be achieved using new computational methods. Our account of each generation considers its design, the way in which parameters were quantified, and how the models were evaluated. Keywords: DSGE models; Phillips curve; macroeconometric models; Bayesian estimation.