7 research outputs found

    Modeling the term structure of zero-coupon bonds

    Get PDF
    In this thesis we model the term structure of zero-coupon bonds. First, in the static setting, starting from a set of benchmark fixed income instruments and using norm-optimization techniques in a Hilbert space, we obtain a closed-form expression for a smooth discount curve. Moving on to the dynamic setting, we describe the stochastic modeling of the fixed income market. Finally, we introduce the Heath-Jarrow-Morton (HJM) methodology. We derive the evolution of zero-coupon bond prices implied by the HJM methodology and prove the HJM drift condition for no-arbitrage pricing in the fixed income market in a dynamic setting. Knowing the current discount curve is crucial for pricing and hedging fixed income securities, as it is a basic input to the HJM valuation methodology. Starting from the no-arbitrage prices of a set of benchmark fixed income instruments, we find a smooth discount curve that perfectly reproduces the current market quotes by minimizing a suitably defined norm related to the flatness of the forward curve. The regularity of the estimated discount curve makes it suitable for use as an input to the HJM methodology. This thesis includes a self-contained introduction to the mathematical modeling of the most commonly traded fixed income securities. In addition, we present the mathematical background necessary for modeling the fixed income market in a dynamic setting. Some familiarity with analysis, basic probability theory, and functional analysis is assumed.
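    The curve-fitting idea in the abstract — reproduce benchmark quotes exactly while minimizing a norm that penalizes forward-curve roughness — can be sketched in a few lines. This is a minimal discrete stand-in, not the thesis's Hilbert-space construction: the benchmark prices, the semiannual grid, and the squared-difference roughness penalty are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical benchmark zero-coupon prices: maturity (years) -> price.
benchmarks = {1.0: 0.97, 3.0: 0.90, 5.0: 0.82}

dt = 0.5
grid = np.arange(dt, 5.0 + dt, dt)  # semiannual grid out to 5 years

def discounts(f):
    # Piecewise-constant forward rates f_j: D(t_i) = exp(-sum_{j<=i} f_j * dt).
    return np.exp(-np.cumsum(f) * dt)

def roughness(f):
    # Discrete stand-in for a flatness-related norm on the forward curve.
    return np.sum(np.diff(f) ** 2)

# Equality constraints: the fitted curve must reprice every benchmark exactly.
cons = [{"type": "eq",
         "fun": lambda f, T=T, P=P: discounts(f)[np.searchsorted(grid, T)] - P}
        for T, P in benchmarks.items()]

res = minimize(roughness, x0=np.full(len(grid), 0.03),
               constraints=cons, method="SLSQP")
curve = discounts(res.x)  # smooth discount curve matching the quotes
```

    Between benchmark maturities the optimizer keeps the forward curve as flat as possible, which is the discrete analogue of the smoothness the abstract asks of an HJM input curve.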

    Non-Standard Errors

    Get PDF
    In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher-rated research. Adding peer-review stages reduces NSEs. We further find that this type of uncertainty is underestimated by participants.
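    The DGP/EGP distinction can be made concrete with a toy simulation: the standard error captures sampling uncertainty in one fixed analysis, while the non-standard error is the dispersion of estimates across teams that each make a different analytic choice on the same sample. Everything below (the data, the 164 trimming choices as a stand-in for researcher degrees of freedom) is a hypothetical illustration, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared sample drawn from a population (the DGP).
population = rng.normal(loc=1.0, scale=2.0, size=10_000)
sample = rng.choice(population, size=500, replace=False)

def team_estimate(data, trim):
    # Each team estimates the mean after trimming `trim` extreme points
    # per tail -- a stand-in for analytic choices in the EGP.
    s = np.sort(data)
    return s[trim:len(s) - trim].mean()

# 164 teams, each with its own analytic choice, all on the same sample.
trims = rng.integers(0, 50, size=164)
estimates = np.array([team_estimate(sample, t) for t in trims])

# Standard error: sampling uncertainty of one fixed analysis.
standard_error = sample.std(ddof=1) / np.sqrt(len(sample))
# Non-standard error: dispersion of estimates across teams.
non_standard_error = estimates.std(ddof=1)
```

    Comparing the two quantities is the point of the exercise: even with identical data, the spread induced by analytic variation need not be negligible relative to the classical standard error.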

    Non-Standard Errors

    No full text
    Working paper URL: https://centredeconomiesorbonne.cnrs.fr/publications/
    Documents de travail du Centre d'Economie de la Sorbonne 2021.33 - ISSN: 1955-611X
    See also this working paper on SSRN: https://ssrn.com/abstract=3981597
    In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in sample estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors. To study them, we let 164 teams test six hypotheses on the same sample. We find that non-standard errors are sizeable, on par with standard errors. Their size (i) co-varies only weakly with team merits, reproducibility, or peer rating, (ii) declines significantly after peer feedback, and (iii) is underestimated by participants.

    Non-standard errors

    No full text