
    Modelling and Forecasting the Yield Curve under Model Uncertainty

    This paper proposes a procedure to investigate the nature and persistence of the forces governing the yield curve and to use the extracted information for forecasting purposes. The latent factors of a model of the Nelson-Siegel type are directly linked to the maturity of the yields through the explicit description of the cross-sectional dynamics of the interest rates. The intertemporal dynamics of the factors is then modeled as driven by long-run forces giving rise to enduring effects, and by medium- and short-run forces producing transitory effects. These forces are reconstructed in real time with a dynamic filter whose embedded feedback control recursively corrects for model uncertainty, including additive and parameter uncertainty and possible equation misspecifications and approximations. This correction appreciably enhances the robustness of the estimates and the accuracy of the out-of-sample forecasts, at both short and long forecast horizons. JEL Classification: G1, E4, C5. Keywords: Frequency decomposition, Model uncertainty, Monetary policy, Yield curve
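The cross-sectional link between latent factors and maturities in Nelson-Siegel-type models is standard and can be sketched as follows. This is a minimal illustration, not the paper's own filtering procedure: the decay value 0.0609 is the common Diebold-Li default for maturities in months, and the OLS fit stands in for the paper's more elaborate real-time dynamic filter.

```python
import numpy as np

def nelson_siegel_loadings(maturities, lam=0.0609):
    """Nelson-Siegel loadings for the level, slope and curvature factors.

    `lam` governs the exponential decay; 0.0609 is the conventional
    Diebold-Li choice for maturities measured in months (illustrative).
    """
    tau = np.asarray(maturities, dtype=float)
    level = np.ones_like(tau)                          # loads equally at all maturities
    slope = (1 - np.exp(-lam * tau)) / (lam * tau)     # decays from 1 toward 0
    curvature = slope - np.exp(-lam * tau)             # humped in mid-maturities
    return np.column_stack([level, slope, curvature])

def fit_factors(maturities, yields, lam=0.0609):
    """Cross-sectional OLS fit of observed yields onto the loadings,
    returning the (level, slope, curvature) factor estimates."""
    X = nelson_siegel_loadings(maturities, lam)
    beta, *_ = np.linalg.lstsq(X, np.asarray(yields, dtype=float), rcond=None)
    return beta
```

Repeating the cross-sectional fit period by period yields a factor time series whose intertemporal dynamics can then be modeled and forecast, which is the role the paper assigns to its long-, medium- and short-run forces.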

    Sensitivity study of generalised frequency response functions

    The dependence and independence of input signal amplitudes for Generalised Frequency Response Functions (GFRFs) are discussed based on parametric modelling.

    Linear processes in high dimension: phase space and critical properties

    In this work we investigate the generic properties of a stochastic linear model in the regime of high-dimensionality. We consider in particular the Vector AutoRegressive model (VAR) and the multivariate Hawkes process. We analyze both deterministic and random versions of these models, showing the existence of a stable and an unstable phase. We find that along the transition region separating the two regimes, the correlations of the process decay slowly, and we characterize the conditions under which these slow correlations are expected to become power-laws. We check our findings with numerical simulations showing remarkable agreement with our predictions. We finally argue that real systems with a strong degree of self-interaction are naturally characterized by this type of slow relaxation of the correlations. Comment: 40 pages, 5 figures
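The stable/unstable phase structure of a random VAR can be illustrated with a short simulation sketch. This is an assumption-laden toy, not the paper's analysis: the coupling matrix is drawn i.i.d. Gaussian with scale g/sqrt(n), so by the circular law its spectral radius concentrates near g, placing g < 1 in the stable phase and g > 1 in the unstable one.

```python
import numpy as np

def simulate_var1(A, T, sigma=1.0, seed=0):
    """Simulate x_t = A x_{t-1} + noise for a VAR(1) process."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x = np.zeros(n)
    path = np.empty((T, n))
    for t in range(T):
        x = A @ x + sigma * rng.standard_normal(n)
        path[t] = x
    return path

def is_stable(A):
    """A VAR(1) is stationary iff the spectral radius of A is below 1."""
    return float(np.max(np.abs(np.linalg.eigvals(A)))) < 1.0

# Random coupling with overall strength g: entries ~ N(0, g^2 / n).
n, g = 50, 0.5
A = g * np.random.default_rng(0).standard_normal((n, n)) / np.sqrt(n)
```

Sweeping g toward 1 and measuring the autocorrelation decay of the simulated paths is a direct way to probe the slow, near-power-law relaxation the abstract describes at the transition.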

    Testing Full Consumption Insurance in the Frequency Domain

    Full consumption insurance implies that consumers are able to perfectly share risk by equalizing, state by state, their inter-temporal marginal rates of substitution in the presence of idiosyncratic endowment shocks. In this paper I test the implications of full consumption insurance using band spectrum regression methods. I argue that moving to the frequency domain provides a possible solution to many difficulties tied to tests of perfect risk sharing. In particular, it provides a unifying framework to test consumption smoothing, both over time and across states of nature. Full consumption insurance is soundly rejected at business cycle frequencies. Keywords: Consumption insurance; Idiosyncratic risk; Frequency domain
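Band spectrum regression, the tool the abstract relies on, amounts to running OLS after transforming the data to the frequency domain and keeping only the frequencies in a chosen band. A minimal single-regressor sketch, with the band expressed in cycles per period (e.g. roughly 1/32 to 1/6 cycles per quarter for business-cycle frequencies):

```python
import numpy as np

def band_spectrum_ols(y, x, low, high):
    """OLS of y on x restricted to the frequency band [low, high]
    (cycles per period), in the spirit of Engle's band spectrum
    regression; minimal one-regressor sketch."""
    T = len(y)
    freqs = np.fft.rfftfreq(T)            # 0 .. 0.5 cycles per period
    keep = (freqs >= low) & (freqs <= high)
    yf = np.fft.rfft(np.asarray(y, float))[keep]
    xf = np.fft.rfft(np.asarray(x, float))[keep]
    # Complex least squares: beta = Re(sum conj(x) y) / sum |x|^2.
    return np.real(np.vdot(xf, yf)) / np.real(np.vdot(xf, xf))
```

Because the Fourier transform is linear, a relationship that holds at all frequencies yields the same coefficient in every band; rejection in a specific band, as in the abstract, localizes where the insurance hypothesis fails.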

    Data-driven Inverse Optimization with Imperfect Information

    In data-driven inverse optimization an observer aims to learn the preferences of an agent who solves a parametric optimization problem depending on an exogenous signal. Thus, the observer seeks the agent's objective function that best explains a historical sequence of signals and corresponding optimal actions. We focus here on situations where the observer has imperfect information, that is, where the agent's true objective function is not contained in the search space of candidate objectives, where the agent suffers from bounded rationality or implementation errors, or where the observed signal-response pairs are corrupted by measurement noise. We formalize this inverse optimization problem as a distributionally robust program minimizing the worst-case risk that the {\em predicted} decision ({\em i.e.}, the decision implied by a particular candidate objective) differs from the agent's {\em actual} response to a random signal. We show that our framework offers rigorous out-of-sample guarantees for different loss functions used to measure prediction errors and that the emerging inverse optimization problems can be exactly reformulated as (or safely approximated by) tractable convex programs when a new suboptimality loss function is used. We show through extensive numerical tests that the proposed distributionally robust approach to inverse optimization often attains better out-of-sample performance than state-of-the-art approaches.
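The basic observer/agent setup can be made concrete with a toy example. This is a hypothetical illustration only: the agent minimizes a simple scalar quadratic with closed-form optimum x = theta * s, and the observer recovers theta by least-squares prediction loss, a plain stand-in for the paper's distributionally robust program and suboptimality loss.

```python
import numpy as np

def agent_response(theta, s):
    """Agent solves min_x 0.5*x**2 - (theta*s)*x for signal s;
    the first-order condition gives the closed-form optimum x = theta*s."""
    return theta * s

def inverse_fit(signals, actions):
    """Observer recovers the candidate objective parameter theta that
    best explains the observed signal-response pairs, by minimizing the
    squared prediction error sum_i (x_i - theta*s_i)^2 (toy stand-in
    for the paper's distributionally robust formulation)."""
    s = np.asarray(signals, dtype=float)
    x = np.asarray(actions, dtype=float)
    return float(np.dot(s, x) / np.dot(s, s))   # closed-form least squares
```

Adding noise to the observed actions mimics the imperfect-information setting: the least-squares observer still recovers theta approximately, while the paper's contribution is to make such estimates robust to misspecified candidate objectives and distributional uncertainty.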