
    Markov chain approximation in bootstrapping autoregressions

    We propose a bootstrap algorithm for autoregressions based on the approximation of the data-generating process by a finite-state discrete Markov chain. We discover a close connection between the proposed algorithm and existing bootstrap resampling schemes, run a small Monte Carlo experiment, and give an illustrative example.
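    To illustrate the idea, a minimal sketch of a finite-state Markov chain bootstrap for a time series follows. It is illustrative only: the AR(1) example, the quantile binning and the within-bin resampling are assumptions, not the authors' exact algorithm.

```python
# Minimal sketch of a finite-state Markov chain bootstrap for a time series.
# Illustrative only: the AR(1) example, the quantile binning and the
# within-bin resampling are assumptions, not the authors' exact algorithm.
import numpy as np

rng = np.random.default_rng(0)

# Simulated AR(1) data standing in for the observed series
T = 500
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()

# 1. Discretise the sample into k states using empirical quantile bins
k = 10
edges = np.quantile(x, np.linspace(0.0, 1.0, k + 1))
states = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, k - 1)

# 2. Estimate the transition matrix from the observed one-step transitions
P = np.full((k, k), 1e-12)                  # tiny prior mass avoids zero rows
for s, s_next in zip(states[:-1], states[1:]):
    P[s, s_next] += 1.0
P /= P.sum(axis=1, keepdims=True)

# 3. A bootstrap replicate: simulate a state path from P, then draw an
#    observed value from within each visited state's bin
def bootstrap_path(n):
    path = np.empty(n)
    s = states[0]
    for t in range(n):
        s = rng.choice(k, p=P[s])
        path[t] = rng.choice(x[states == s])
    return path

x_star = bootstrap_path(T)                  # one bootstrap series
```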

    Local Sensitivity and Diagnostic Tests

    In this paper we confront sensitivity analysis with diagnostic testing. Every model is misspecified, but a model is useful if the parameters of interest (the focus) are not sensitive to small perturbations in the underlying assumptions. The study of the effect of these violations on the focus is called sensitivity analysis. Diagnostic testing, on the other hand, attempts to find out whether a nuisance parameter is large or small. Both aspects are important, but traditional applied econometrics tends to use only diagnostics and forget about sensitivity analysis. We develop a theory of sensitivity in a maximum likelihood framework, propose a sensitivity test, give conditions under which the diagnostic and sensitivity tests are asymptotically independent, and demonstrate with three core examples that this independence is the rule rather than the exception, thus underlining the importance of sensitivity analysis.
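    To make the distinction concrete, here is one standard way to formalise local sensitivity in a maximum likelihood framework; the notation is an illustrative sketch and need not match the paper's exact definitions. With focus parameter $\beta$, nuisance parameter $\theta$ and hypothesised value $\theta_0$, the sensitivity is

$$
S \;=\; \left. \frac{\partial \hat{\beta}(\theta)}{\partial \theta'} \right|_{\theta = \theta_0},
$$

    where $\hat{\beta}(\theta)$ denotes the ML estimator of the focus with the nuisance parameter held fixed at $\theta$. Sensitivity analysis asks how fast $\hat{\beta}(\theta)$ moves as $\theta$ leaves $\theta_0$; the diagnostic test instead asks whether the data reject $\theta = \theta_0$.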

    Photodynamic Therapy


    Practical use of sensitivity in econometrics with an illustration to forecast combinations

    Sensitivity analysis is important for its own sake and also in combination with diagnostic testing. We consider the question of how to use sensitivity statistics in practice, in particular how to judge whether sensitivity is large or small. For this purpose we distinguish between absolute and relative sensitivity and highlight the context-dependent nature of any sensitivity analysis. Relative sensitivity is then applied in the context of forecast combination, and sensitivity-based weights are introduced. All concepts are illustrated using the European yield curve. In this context it is natural to look at sensitivity to the autocorrelation and normality assumptions. Different forecasting models are combined with equal, fit-based and sensitivity-based weights, and compared with the multivariate and random walk benchmarks. We show that the fit-based weights and the sensitivity-based weights are complementary. For long-term maturities the sensitivity-based weights perform better than the other weights.
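    A hedged sketch of sensitivity-based combination weights follows: forecasts that are less sensitive to a maintained assumption receive more weight. The inverse-sensitivity rule below is an illustrative assumption, not necessarily the paper's exact weighting formula.

```python
# Hedged sketch of sensitivity-based combination weights: forecasts that are
# less sensitive to a maintained assumption receive more weight. The
# inverse-sensitivity rule is illustrative, not the paper's exact formula.
import numpy as np

def sensitivity_weights(sens):
    """sens[i] = relative sensitivity statistic of model i's forecast."""
    inv = 1.0 / (np.abs(np.asarray(sens, dtype=float)) + 1e-12)
    return inv / inv.sum()                  # normalise onto the unit simplex

# Example: three models' yield forecasts combined with sensitivity weights
w = sensitivity_weights([0.8, 0.2, 0.5])    # hypothetical sensitivities
combined = w @ np.array([1.9, 2.1, 2.0])    # combined point forecast
```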

    Forecast combination for discrete choice models: predicting FOMC monetary policy decisions

    This paper provides a methodology for combining forecasts based on several discrete choice models. This is achieved primarily by combining the one-step-ahead probability forecasts associated with each model. The paper applies well-established scoring rules for qualitative response models in the context of forecast combination. Log-scores and quadratic-scores are both used to evaluate the forecasting accuracy of each model and to combine the probability forecasts. In addition to producing point forecasts, the effect of sampling variation is also assessed. This methodology is applied to forecast the US Federal Open Market Committee (FOMC) decisions in changing the federal funds target rate. Several of the economic fundamentals influencing the FOMC decisions are nonstationary over time and are modelled in a similar fashion to Hu and Phillips (2004a, JoE). The empirical results show that combining forecast probabilities using scores mostly outperforms both equal-weight combination and forecasts based on multivariate models.
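    The sketch below shows one standard way to turn log scores and quadratic (Brier) scores for probability forecasts into combination weights; the softmax-style weighting is a common convention assumed here, not necessarily the paper's exact scheme.

```python
# Hedged sketch of score-based combination of probability forecasts from
# several discrete choice models. The softmax-style weighting is one common
# convention, not necessarily the paper's exact scheme.
import numpy as np

def log_score(p, y):
    """Average log probability assigned to the realised categories.
    p: (T, J) forecast probabilities; y: (T,) realised category indices."""
    return np.mean(np.log(p[np.arange(len(y)), y]))

def quadratic_score(p, y):
    """Negative mean Brier score (higher is better)."""
    T, J = p.shape
    onehot = np.eye(J)[y]
    return -np.mean(np.sum((p - onehot) ** 2, axis=1))

def score_weights(scores):
    """Map per-model scores to combination weights on the simplex."""
    s = np.asarray(scores, dtype=float)
    e = np.exp(s - s.max())                 # numerically stabilised
    return e / e.sum()

# Usage: with forecasts p_m of shape (T, J) for each model m,
#   w = score_weights([log_score(p_m, y) for p_m in models])
#   p_comb = sum(w_m * p_m for w_m, p_m in zip(w, models))
```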

    Practical considerations for optimal weights in density forecast combination

    The problem of finding appropriate weights to combine several density forecasts is an important issue currently debated in the forecast combination literature. A recent paper by Hall and Mitchell (IJF, 2007) proposes to combine density forecasts with optimal weights obtained from solving an optimization problem. This paper studies the properties of this optimization problem when the number of forecasting periods is relatively small and finds that it often produces corner solutions by allocating all the weight to one density forecast only. This paper's practical recommendation is to have an additional training sample period for the optimal weights. While reserving a portion of the data for parameter estimation and making pseudo-out-of-sample forecasts are common practices in the empirical literature, employing a separate training sample for the optimal weights is novel, and it is suggested because it decreases the chances of corner solutions. Alternative log-score or quadratic-score weighting schemes do not have this training sample requirement.
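    A minimal sketch of Hall and Mitchell style optimal weights follows: choose simplex weights that maximise the average log score of the combined density forecast. The data shapes and names are illustrative assumptions.

```python
# Minimal sketch of Hall-Mitchell style optimal weights: maximise the average
# log score of the combined density forecast over the unit simplex. Data
# shapes and names are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def optimal_weights(F):
    """F: (T, M) array with F[t, m] = model m's density forecast evaluated
    at the period-t realisation. Returns the optimal simplex weights."""
    T, M = F.shape
    neg_avg_log_score = lambda w: -np.mean(np.log(F @ w + 1e-300))
    res = minimize(
        neg_avg_log_score,
        np.full(M, 1.0 / M),                      # start from equal weights
        method="SLSQP",
        bounds=[(0.0, 1.0)] * M,
        constraints=({"type": "eq", "fun": lambda w: w.sum() - 1.0},),
    )
    return res.x
```

    With a short evaluation window the optimiser frequently returns a corner solution (all weight on a single model), which is the behaviour the paper documents and its training sample recommendation is meant to mitigate.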

    Forecast combination for U.S. recessions with real-time data

    This paper proposes the use of forecast combination to improve predictive accuracy in forecasting the U.S. business cycle index, as published by the Business Cycle Dating Committee of the NBER. It focuses on one-step-ahead out-of-sample monthly forecasts, utilising the well-established coincident indicators and yield curve models and allowing for dynamics and real-time data revisions. The forecast combinations use log-score and quadratic-score based weights, which change over time. This paper finds that forecast accuracy improves when combining the probability forecasts of the coincident indicators model and the yield curve model, compared with each model's own forecasting performance.
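    A hedged sketch of time-varying score-based weights for recession probability forecasts: the weight used at time t depends only on log scores observed up to t-1, mimicking the real-time setting. The rolling window length and the exponential weighting rule are assumptions.

```python
# Hedged sketch of time-varying score-based weights for recession probability
# forecasts: the weight used at time t depends only on log scores observed up
# to t-1, mimicking real-time forecasting. The rolling window length and the
# exponential weighting rule are assumptions.
import numpy as np

def rolling_log_score_weights(p_models, y, window=60):
    """p_models: list of (T,) arrays of P(recession) forecasts;
    y: (T,) 0/1 recession indicator. Returns a (T, M) array of weights."""
    y = np.asarray(y)
    P = np.column_stack(p_models).clip(1e-6, 1 - 1e-6)
    T, M = P.shape
    ls = y[:, None] * np.log(P) + (1 - y)[:, None] * np.log(1 - P)
    W = np.full((T, M), 1.0 / M)            # equal weights until scores accrue
    for t in range(1, T):
        s = ls[max(0, t - window):t].mean(axis=0)
        e = np.exp(s - s.max())
        W[t] = e / e.sum()
    return W
```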

    Survival Analysis for Credit Scoring: Incidence and Latency

    Duration analysis is an analytical tool for time-to-event data that has been borrowed from medicine and engineering and applied by econometricians to typical economic and finance problems. In applications to credit data, the times of pre-determined maturity events have been treated as censoring times for the events with stochastic latency. A methodology, motivated by the cure rate model framework, is developed in this paper to appropriately analyse a set of mutually exclusive terminal events where at least one event may have a pre-determined latency. The methodology is applied to a set of personal loan data provided by one of Australia's largest financial services institutions. This is the first framework to simultaneously model prepayment, write-off and maturity events for loans. Furthermore, within the class of cure rate models it is the first fully parametric multinomial model and the first to accommodate an event with pre-determined latency. A simulation study found that this model performed better than the two most common applications of survival analysis to credit data. In addition, the application to personal loans data reveals that particular explanatory variables can act in different directions on the incidence and latency of an event, and that some variables are statistically significant in explaining only incidence or only latency.
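    A hedged sketch of the kind of mixture (cure rate) structure the abstract describes, in illustrative notation: incidence over the mutually exclusive terminal events follows a multinomial logit, and each event has its own latency distribution,

$$
S(t \mid x) \;=\; \sum_{j} \pi_j(x)\, S_j(t \mid x),
\qquad
\pi_j(x) \;=\; \frac{\exp(x'\gamma_j)}{\sum_{k} \exp(x'\gamma_k)},
$$

    where $\pi_j(x)$ is the incidence probability of terminal event $j$ (e.g. prepayment, write-off or maturity) and $S_j(t \mid x)$ is its latency survival function. An event with pre-determined latency, such as maturity, corresponds to a degenerate $S_j$ concentrated at the contractual maturity date.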

    Combining simple multivariate HAR-like models for portfolio construction

    Forecasts of the covariance matrix of returns are a crucial input into portfolio construction. In recent years, multivariate versions of the Heterogeneous AutoRegressive (HAR) model have been designed to utilise realised measures of the covariance matrix to generate forecasts. This paper shows that combining forecasts from simple HAR-like models provides more stable coefficient estimates and forecasts, and lower portfolio turnover. The economic benefits of the combination approach become crucial when transaction costs are taken into account. The combination approach also provides benefits in the context of direct forecasts of the portfolio weights. Economic benefits are observed at both 1-day and 1-week ahead forecast horizons.
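    For reference, a minimal sketch of the univariate HAR regression (daily, weekly and monthly averages of realised variance) that the paper's multivariate HAR-like models generalise; the construction below is the textbook Corsi-style specification, not the paper's multivariate estimator.

```python
# Minimal sketch of the univariate HAR regression: one-day-ahead realised
# variance on its daily value and weekly (5-day) and monthly (22-day)
# averages. The paper's multivariate HAR-like models generalise this to
# covariance matrices; this version is illustrative only.
import numpy as np

def har_design(rv):
    """Build HAR regressors and target from a (T,) realised-variance array."""
    T = len(rv)
    d = rv[21:T - 1]                                                   # daily
    w = np.array([rv[t - 4:t + 1].mean() for t in range(21, T - 1)])   # weekly
    m = np.array([rv[t - 21:t + 1].mean() for t in range(21, T - 1)])  # monthly
    X = np.column_stack([np.ones_like(d), d, w, m])
    y = rv[22:T]                                    # one-step-ahead target
    return X, y

# OLS fit and a one-step forecast:
#   X, y = har_design(rv)
#   beta, *_ = np.linalg.lstsq(X, y, rcond=None)
#   rv_hat = X[-1] @ beta
```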