
    Markov chain approximation in bootstrapping autoregressions

    We propose a bootstrap algorithm for autoregressions based on approximating the data generating process by a finite-state discrete Markov chain. We establish a close connection between the proposed algorithm and existing bootstrap resampling schemes, run a small Monte Carlo experiment, and give an illustrative example.
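    The general idea can be sketched as follows: discretize the observed series into a finite number of states, estimate the transition matrix of the resulting chain, then resample bootstrap paths from it. This is a minimal illustration with hypothetical choices (AR(1) data, quantile-based states); the paper's actual algorithm may differ in its discretization and resampling details.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulate an AR(1) series as the "observed" data (illustrative assumption).
    n, phi = 500, 0.6
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + rng.standard_normal()

    # 1. Discretize the sample into k states via quantile bins.
    k = 10
    edges = np.quantile(y, np.linspace(0, 1, k + 1))
    states = np.clip(np.searchsorted(edges, y, side="right") - 1, 0, k - 1)
    centers = np.array([y[states == s].mean() for s in range(k)])

    # 2. Estimate the transition matrix of the finite-state chain
    #    (Laplace smoothing avoids zero rows).
    P = np.ones((k, k))
    for s, s_next in zip(states[:-1], states[1:]):
        P[s, s_next] += 1
    P /= P.sum(axis=1, keepdims=True)

    # 3. Resample a bootstrap path from the fitted chain.
    def mc_bootstrap(P, centers, n, rng):
        path = np.empty(n, dtype=int)
        path[0] = rng.integers(len(centers))
        for t in range(1, n):
            path[t] = rng.choice(len(centers), p=P[path[t - 1]])
        return centers[path]

    y_star = mc_bootstrap(P, centers, n, rng)
    ```

    Each bootstrap replicate `y_star` preserves the one-step dependence structure captured by the estimated chain, which is what links this scheme to block-style resampling for time series.
    
    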

    Practical use of sensitivity in econometrics with an illustration to forecast combinations

    Sensitivity analysis is important in its own right and also in combination with diagnostic testing. We consider the question of how to use sensitivity statistics in practice, in particular how to judge whether sensitivity is large or small. For this purpose we distinguish between absolute and relative sensitivity and highlight the context-dependent nature of any sensitivity analysis. Relative sensitivity is then applied in the context of forecast combination, and sensitivity-based weights are introduced. All concepts are illustrated using the European yield curve. In this context it is natural to look at sensitivity to autocorrelation and normality assumptions. Different forecasting models are combined with equal, fit-based and sensitivity-based weights, and compared with multivariate and random walk benchmarks. We show that the fit-based weights and the sensitivity-based weights are complementary. For long-term maturities the sensitivity-based weights perform better than the other weights.

    Forecast combination for discrete choice models: predicting FOMC monetary policy decisions

    This paper provides a methodology for combining forecasts based on several discrete choice models. This is achieved primarily by combining the one-step-ahead probability forecasts associated with each model. The paper applies well-established scoring rules for qualitative response models in the context of forecast combination. Log scores and quadratic scores are both used to evaluate the forecasting accuracy of each model and to combine the probability forecasts. In addition to producing point forecasts, the effect of sampling variation is also assessed. This methodology is applied to forecast the US Federal Open Market Committee (FOMC) decisions in changing the federal funds target rate. Several of the economic fundamentals influencing the FOMC decisions are nonstationary over time and are modelled in a similar fashion to Hu and Phillips (2004a, JoE). The empirical results show that combining forecasted probabilities using scores mostly outperforms both equal-weight combination and forecasts based on multivariate models.
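    A minimal sketch of score-based combination of probability forecasts, using made-up forecasts and outcomes. The softmax-style mapping from scores to weights is one common choice and an assumption here; the paper's exact weighting scheme may differ.

    ```python
    import numpy as np

    # Probability forecasts of a binary event (e.g. a target-rate change)
    # from three hypothetical models over an evaluation window, plus the
    # realized 0/1 outcomes. All numbers are illustrative.
    p = np.array([[0.7, 0.4, 0.9, 0.2],   # model 1
                  [0.6, 0.5, 0.8, 0.3],   # model 2
                  [0.9, 0.1, 0.6, 0.5]])  # model 3
    y = np.array([1, 0, 1, 0])

    # Average log score of each model (larger is better).
    log_score = np.mean(y * np.log(p) + (1 - y) * np.log(1 - p), axis=1)

    # Quadratic (Brier-type) score: negative mean squared probability error.
    quad_score = -np.mean((p - y) ** 2, axis=1)

    # Score-based weights: exponentiate and normalize, so better-scoring
    # models receive larger weight.
    def score_weights(score):
        w = np.exp(score - score.max())
        return w / w.sum()

    w_log = score_weights(log_score)
    p_combined = w_log @ p  # combined one-step-ahead probability forecasts
    ```

    Replacing `log_score` with `quad_score` in the last step gives the quadratic-score variant; equal-weight combination corresponds to `w = np.full(3, 1/3)`.
    
    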

    Practical considerations for optimal weights in density forecast combination

    The problem of finding appropriate weights to combine several density forecasts is an important issue currently debated in the forecast combination literature. Recently, a paper by Hall and Mitchell (IJF, 2007) proposed combining density forecasts with optimal weights obtained from solving an optimization problem. This paper studies the properties of this optimization problem when the number of forecasting periods is relatively small and finds that it often produces corner solutions by allocating all the weight to one density forecast only. This paper's practical recommendation is to use an additional training sample period for the optimal weights. While reserving a portion of the data for parameter estimation and making pseudo-out-of-sample forecasts are common practices in the empirical literature, employing a separate training sample for the optimal weights is novel, and it is suggested because it decreases the chances of corner solutions. Alternative log-score or quadratic-score weighting schemes do not have this training sample requirement.
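    The optimization in question maximizes the average log score of the mixture density over the combination weights. A minimal two-model sketch under assumed Gaussian predictive densities and a deliberately short evaluation window (all choices are illustrative, not the paper's data or setup):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Realized outcomes over a short evaluation window, where corner
    # solutions are most common.
    T = 8
    x = rng.standard_normal(T)

    def normal_pdf(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    # f[i, t]: model i's predictive density evaluated at the realized value.
    f = np.vstack([normal_pdf(x, 0.0, 1.0),    # well-specified model
                   normal_pdf(x, 0.5, 1.5)])   # misspecified model

    # Hall-Mitchell-style objective: average log score of the mixture density.
    def avg_log_score(w):
        return np.mean(np.log(w @ f))

    # Grid search over the two-model simplex: weights (w, 1 - w).
    grid = np.linspace(0, 1, 1001)
    scores = [avg_log_score(np.array([w, 1 - w])) for w in grid]
    w_opt = grid[int(np.argmax(scores))]
    ```

    With a window this short, `w_opt` frequently lands exactly at 0 or 1 — the corner-solution behaviour the paper documents; estimating the weights on a separate training sample mitigates it.
    
    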

    Forecast combination for U.S. recessions with real-time data

    This paper proposes the use of forecast combination to improve predictive accuracy in forecasting the U.S. business cycle index, as published by the Business Cycle Dating Committee of the NBER. It focuses on one-step-ahead out-of-sample monthly forecasts utilising the well-established coincident indicators and yield curve models, allowing for dynamics and real-time data revisions. Forecast combinations use log-score and quadratic-score based weights, which change over time. This paper finds that forecast accuracy improves when combining the probability forecasts of both the coincident indicators model and the yield curve model, compared with each model's own forecasting performance.

    Combining simple multivariate HAR-like models for portfolio construction

    Forecasts of the covariance matrix of returns are a crucial input into portfolio construction. In recent years, multivariate versions of the Heterogeneous AutoRegressive (HAR) model have been designed to utilise realised measures of the covariance matrix to generate forecasts. This paper shows that combining forecasts from simple HAR-like models provides more stable coefficient estimates, more stable forecasts and lower portfolio turnover. The economic benefits of the combination approach become crucial when transaction costs are taken into account. This combination approach also provides benefits in the context of direct forecasts of the portfolio weights. Economic benefits are observed at both 1-day and 1-week ahead forecast horizons.
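    For reference, the univariate HAR regression underlying these models forecasts next-period realized variance from a daily lag, a weekly average and a monthly average of past realized variances. A minimal sketch on simulated data (the multivariate, covariance-matrix versions in the paper generalize this structure):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Simulated daily realized variances (stand-in for realized measures).
    n = 600
    rv = np.abs(rng.standard_normal(n)) + 0.5

    # Trailing averages via cumulative sums.
    def rolling_mean(x, window):
        c = np.cumsum(np.insert(x, 0, 0.0))
        return (c[window:] - c[:-window]) / window

    # HAR regressors, aligned so that row t uses only information up to t-1:
    d = rv[21:-1]                          # yesterday's RV
    w = rolling_mean(rv, 5)[17:-1]         # average over the past week
    m = rolling_mean(rv, 22)[:-1]          # average over the past month
    y = rv[22:]                            # next-day realized variance

    X = np.column_stack([np.ones_like(d), d, w, m])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rv_forecast = X[-1] @ beta             # one-day-ahead forecast
    ```

    "HAR-like" variants drop or alter these horizon components; the paper combines the forecasts of several such simple specifications rather than relying on one richly parameterized model.
    
    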

    Survival Analysis for Credit Scoring: Incidence and Latency

    Duration analysis is an analytical tool for time-to-event data that has been borrowed from medicine and engineering and applied by econometricians to typical economic and finance problems. In applications to credit data, times to pre-determined maturity events have been treated as censored observations of the events with stochastic latency. A methodology, motivated by the cure rate model framework, is developed in this paper to appropriately analyse a set of mutually exclusive terminal events where at least one event may have a pre-determined latency. The methodology is applied to a set of personal loan data provided by one of Australia's largest financial services institutions. This is the first framework to simultaneously model prepayment, write-off and maturity events for loans. Furthermore, in the class of cure rate models it is the first fully parametric multinomial model and the first to accommodate an event with pre-determined latency. The simulation study found this model performed better than the two most common applications of survival analysis to credit data. In addition, the result of the application to personal loans data reveals that particular explanatory variables can act in different directions upon incidence and latency of an event, and that variables exist that may be statistically significant in explaining only incidence or latency.

    Multiple Event Incidence and Duration Analysis for Credit Data Incorporating Non-Stochastic Loan Maturity

    Applications of duration analysis in economics and finance exclusively employ methods for events of stochastic duration. In applications to credit data, previous research incorrectly treats the times to pre-determined maturity events as censored stochastic event times. The medical literature has binary parametric 'cure rate' models that deal with populations that never experience the modelled event. We propose and develop a multinomial parametric incidence and duration model incorporating such populations. In the class of cure rate models, this is the first fully parametric multinomial model and the first framework to accommodate an event with pre-determined duration. The methodology is applied to unsecured personal loan credit data provided by one of Australia's largest financial services organizations. This framework is shown to be more flexible and predictive through a simulation and empirical study that reveals: simulation results of estimated parameters with a large reduction in bias; superior forecasting of duration; explanatory variables that can act in different directions upon incidence and duration; and variables that are statistically significant in explaining only incidence or duration.
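    The data structure the two papers above describe can be illustrated with a simulation: each loan's terminal event is multinomial (incidence), stochastic events have random durations, and the maturity event occurs at a fixed, non-stochastic term. All distributions, probabilities and the 60-month term below are hypothetical, chosen only to show the structure — they are not the papers' estimates.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Simulate loans with three mutually exclusive terminal events.
    n, maturity = 10_000, 60.0  # assumed 60-month contractual term

    # Incidence: multinomial probabilities of prepayment, write-off, maturity.
    probs = np.array([0.45, 0.15, 0.40])
    event = rng.choice(3, size=n, p=probs)  # 0=prepay, 1=write-off, 2=maturity

    # Duration: stochastic (Weibull, illustrative) for prepayment and
    # write-off, but pre-determined for the maturity event.
    time = np.empty(n)
    time[event == 0] = rng.weibull(1.5, (event == 0).sum()) * 30   # prepayments
    time[event == 1] = rng.weibull(1.2, (event == 1).sum()) * 45   # write-offs
    time[event == 2] = maturity                                    # fixed term

    # A stochastic event "scheduled" after the term is never observed: the
    # loan matures first, so it is reclassified as a maturity event rather
    # than treated as a censored stochastic event time.
    late = time > maturity
    event[late], time[late] = 2, maturity
    ```

    Treating the fixed-term maturities as censoring times for the stochastic events, as in earlier credit applications, is exactly what the proposed incidence-and-duration framework avoids.
    
    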

    Global combinations of expert forecasts

    Expert forecast combination -- the aggregation of individual forecasts from multiple subject-matter experts -- is a proven approach to economic forecasting. To date, research in this area has concentrated exclusively on local combination methods, which handle separate but related forecasting tasks in isolation. Yet it has been known for over two decades in the machine learning community that global methods, which exploit task-relatedness, can improve on local methods that ignore it. Motivated by this possibility of improvement, this paper introduces a framework for globally combining expert forecasts. Through our framework, we develop global versions of several existing forecast combinations. To evaluate the efficacy of these new global forecast combinations, we conduct extensive comparisons using synthetic and real data. Our real-data comparisons, which involve expert forecasts of core economic indicators in the Eurozone, are the first empirical evidence that the accuracy of global combinations of expert forecasts can surpass local combinations.

    Generalized Laplacian Regularized Framelet GCNs

    This paper introduces a novel framelet graph approach based on the p-Laplacian GNN. The two proposed models, named p-Laplacian undecimated framelet graph convolution (pL-UFG) and generalized p-Laplacian undecimated framelet graph convolution (pL-fUFG), combine the properties of the p-Laplacian with the expressive power of multi-resolution decomposition of graph signals. The empirical study highlights the strong performance of pL-UFG and pL-fUFG on different graph learning tasks, including node classification and signal denoising.