
    Permanent and Transitory Factors Affecting the Dynamics of the Term Structure of Interest Rates

    Get PDF
    This paper proposes a novel methodology, based on Common Principal Component analysis, that allows one to estimate the factors driving the term structure of interest rates in the presence of a time-varying covariance structure. The advantages of this method are, first, that, unlike classical principal component analysis, common factors can be estimated without assuming that the volatility of the factors is constant; and second, that the factor structure can be decomposed into permanent and transitory common factors. We conclude that only permanent factors are relevant for modeling the dynamics of interest rates, and that the common principal component approach appears to be more accurate than classical principal component analysis for estimating the risk factor structure.
    Keywords: Term Structure of Interest Rates, Principal Component Analysis, Common Principal Component Analysis
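
    As a rough illustration of the idea, the Python sketch below (on made-up yield data) approximates common principal components by the eigenbasis of a pooled covariance across sub-periods and then reads off sub-period-specific factor variances. The paper's actual estimator is the maximum-likelihood CPC, so treat this as a simplified proxy with invented inputs.

        import numpy as np

        def common_factors(yield_changes, n_periods=4):
            """yield_changes: (T, n_maturities) array of daily yield changes."""
            blocks = np.array_split(yield_changes, n_periods)    # sub-periods
            covs = [np.cov(b, rowvar=False) for b in blocks]     # S_1..S_k
            pooled = sum(covs) / len(covs)                       # average covariance
            eigval, B = np.linalg.eigh(pooled)                   # shared eigenbasis
            B = B[:, ::-1]                                       # sort by variance
            # Factor variances diag(B' S_t B) may differ across sub-periods,
            # whereas classical PCA implicitly holds them constant.
            factor_var = np.array([np.diag(B.T @ S @ B) for S in covs])
            return B, factor_var

        # Example on simulated random-walk yields
        rng = np.random.default_rng(0)
        y = rng.standard_normal((1000, 8)).cumsum(axis=0)
        B, fv = common_factors(np.diff(y, axis=0))
        print(fv.round(3))   # one row of factor variances per sub-period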

    Evolution of Market Uncertainty around Earnings Announcements

    Get PDF
    This paper investigates theoretically and empirically the dynamics of the implied volatility (or implied standard deviation, ISD) around earnings announcement dates. The volatility implied by option prices can be interpreted as the level of volatility expected by the market over the remaining life of the option. We propose a theoretical framework for the evolution of the ISD that takes into account two well-known features of the instantaneous volatility: volatility clustering and the leverage effect. In this context, the ISD should decrease after an earnings announcement, but the post-announcement ISD path depends on the content of the announcement: good news or bad news. An empirical investigation is conducted on the Swiss market over the period 1989-1998.
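
    For readers unfamiliar with the ISD, the sketch below backs out an implied volatility from a European call price by inverting the Black-Scholes formula with a root finder. All inputs are illustrative placeholders, not data from the study.

        from math import log, sqrt, exp
        from scipy.optimize import brentq
        from scipy.stats import norm

        def bs_call(S, K, T, r, sigma):
            d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
            d2 = d1 - sigma * sqrt(T)
            return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

        def implied_vol(price, S, K, T, r):
            # Find the sigma that reproduces the observed option price
            return brentq(lambda s: bs_call(S, K, T, r, s) - price, 1e-6, 5.0)

        # Tracking implied_vol(...) day by day around an announcement
        # traces out the ISD path whose shape the paper models.
        print(implied_vol(price=4.5, S=100, K=100, T=0.25, r=0.01))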

    Measuring the Driving Forces of Predictive Performance: Application to Credit Scoring

    Full text link
    In credit scoring, machine learning models are known to outperform standard parametric models. As they condition access to credit, banking supervisors and internal model validation teams need to monitor their predictive performance and to identify the features with the highest impact on performance. To facilitate this, we introduce the XPER methodology to decompose a performance metric (e.g., AUC, R²) into specific contributions associated with the various features of a classification or regression model. XPER is theoretically grounded on Shapley values and is both model-agnostic and performance-metric-agnostic. Furthermore, it can be implemented either at the model level or at the individual level. Using a novel dataset of car loans, we decompose the AUC of a machine-learning model trained to forecast the default probability of loan applicants. We show that a small number of features can explain a surprisingly large part of the model performance. Furthermore, we find that the features that contribute the most to the predictive performance of the model may not be the ones that contribute the most to individual forecasts (SHAP). We also show how XPER can be used to deal with heterogeneity issues and significantly boost out-of-sample performance.
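
    The toy decomposition below conveys the Shapley logic on simulated data: the value of a coalition of features is taken to be the test AUC of a model refit on that coalition. This retrain-per-coalition shortcut is only in the spirit of XPER (the paper decomposes the metric of a single fitted model), and all data and names in the snippet are invented.

        from itertools import combinations
        from math import factorial
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=2000, n_features=4, random_state=0)
        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
        n = X.shape[1]

        def auc_of(subset):
            if not subset:
                return 0.5              # no features: chance-level AUC
            cols = list(subset)
            m = LogisticRegression().fit(Xtr[:, cols], ytr)
            return roc_auc_score(yte, m.predict_proba(Xte[:, cols])[:, 1])

        # Value of every feature coalition
        cache = {frozenset(s): auc_of(s) for r in range(n + 1)
                 for s in combinations(range(n), r)}

        # Exact Shapley value of each feature for the AUC metric
        shapley = np.zeros(n)
        for j in range(n):
            others = [i for i in range(n) if i != j]
            for r in range(n):
                w = factorial(r) * factorial(n - r - 1) / factorial(n)
                for s in combinations(others, r):
                    shapley[j] += w * (cache[frozenset(s) | {j}]
                                       - cache[frozenset(s)])

        # Contributions sum to AUC(all features) - 0.5, the gain over chance
        print(shapley.round(4), "sum:", shapley.sum().round(4))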

    Multi-CPU and multi-GPU hybrid computations of multi-scale scalar transport

    No full text
    The aim of this work is to propose a hybrid implementation of a semi-Lagrangian particle method on a multi-CPU and multi-GPU architecture. The applications we have in view deal with the transport of a passive scalar in a turbulent flow. For high Schmidt numbers (the ratio of flow viscosity to scalar diffusivity), these problems exhibit two different scales: one related to the flow and the other, a smaller scale, related to the scalar fluctuations. This scale separation motivates the use of hybrid methods in which scalar and flow dynamics can be solved with different algorithms and at different resolutions. The coupling between these scales is done through the velocity field.
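
    To fix ideas, here is a minimal 1-D sketch of one semi-Lagrangian advection step (backtracking characteristics and interpolating on a periodic grid). The actual method is a remeshed particle scheme distributed over CPUs and GPUs, so this is only the serial building block, with placeholder grid and velocity.

        import numpy as np

        def semi_lagrangian_step(scalar, velocity, dx, dt):
            """Advect 'scalar' by tracing characteristics back one step
            and linearly interpolating on a periodic grid."""
            n = scalar.size
            x = np.arange(n) * dx
            departure = (x - velocity * dt) % (n * dx)   # backtracked positions
            idx = np.floor(departure / dx).astype(int) % n
            frac = departure / dx - np.floor(departure / dx)
            return (1 - frac) * scalar[idx] + frac * scalar[(idx + 1) % n]

        # One step of a Gaussian blob advected by a uniform velocity field;
        # note the step remains stable even for large dt (no CFL restriction).
        x = np.linspace(0, 1, 128, endpoint=False)
        blob = np.exp(-200 * (x - 0.5) ** 2)
        u = np.full(128, 0.3)
        print(semi_lagrangian_step(blob, u, dx=1 / 128, dt=0.01).max())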

    What if dividends were tax-exempt?: evidence from a natural experiment

    Get PDF
    We study the effect of dividend taxes on the payout and investment policy of listed firms and discuss the implications for agency problems. To do so, we exploit a unique setting in Switzerland where some, but not all, firms were suddenly able to pay tax-exempt dividends to their shareholders following the corporate tax reform of 2011. Using a difference-in-differences specification, we show that treated firms increased their payout much more than control firms after the tax cut. By contrast, treated firms did not concurrently or subsequently increase investment. We show that the tax-inelasticity of investment was due to a significant drop in retained earnings, as the rise in dividends was not compensated by an equally sized reduction in share repurchases. Furthermore, treated firms did not raise more equity or reduce their cash holdings to compensate for the contraction in retained earnings. Finally, we show that an unintended consequence of cutting dividend taxes is to mitigate the agency problems that arise between insiders and minority shareholders.
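
    A minimal sketch of the difference-in-differences logic on simulated data: the coefficient on the interaction of the treatment and post-reform dummies recovers the treatment effect. Variable names (payout, treated, post) are hypothetical stand-ins, not the paper's dataset.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n = 400
        df = pd.DataFrame({
            "treated": rng.integers(0, 2, n),   # eligible for tax-exempt dividends
            "post": rng.integers(0, 2, n),      # after the 2011 reform
        })
        # Simulated outcome: treated firms raise payout by 2 after the reform
        df["payout"] = (5 + 1.0 * df.treated + 0.5 * df.post
                        + 2.0 * df.treated * df.post + rng.normal(0, 1, n))

        # The coefficient on treated:post is the diff-in-diff estimate (~2)
        model = smf.ols("payout ~ treated * post", data=df).fit()
        print(model.params["treated:post"])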

    Multi-scale problems, high performance computing and hybrid numerical methods

    No full text
    The turbulent transport of a passive scalar is an important and challenging problem in many applications in fluid mechanics. It involves different ranges of scales in the fluid and in the scalar and requires substantial computational resources. In this work we show how hybrid numerical methods, combining Eulerian and Lagrangian schemes, are natural tools to address this multi-scale problem. In particular, we show that in homogeneous turbulence experiments at various Schmidt numbers these methods allow us to recover the theoretical predictions of universal scaling at a minimal cost. We also outline how hybrid methods can take advantage of heterogeneous platforms combining CPU and GPU processors.
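
    The toy 1-D sketch below illustrates the resolution-splitting idea: the velocity lives on a coarse grid, the scalar on a refined grid (as for high Schmidt numbers), and the coupling interpolates the velocity onto the scalar grid before a semi-Lagrangian advection step. All parameters are invented for illustration.

        import numpy as np

        n_flow, refine = 64, 4
        n_scalar = n_flow * refine
        x_flow = np.linspace(0, 1, n_flow, endpoint=False)
        x_scalar = np.linspace(0, 1, n_scalar, endpoint=False)

        u_coarse = np.sin(2 * np.pi * x_flow)          # coarse velocity field
        # Coupling: interpolate velocity onto the fine scalar grid
        u_fine = np.interp(x_scalar, x_flow, u_coarse, period=1.0)

        theta = np.exp(-300 * (x_scalar - 0.5) ** 2)   # fine-scale scalar blob
        dt, dx = 2e-3, 1.0 / n_scalar
        dep = (x_scalar - u_fine * dt) % 1.0           # semi-Lagrangian backtrace
        i = np.floor(dep / dx).astype(int) % n_scalar
        f = dep / dx - np.floor(dep / dx)
        theta_new = (1 - f) * theta[i] + f * theta[(i + 1) % n_scalar]
        print(theta_new.max())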

    A high order purely frequency-based harmonic balance formulation for continuation of periodic solutions

    Full text link
    Combining the harmonic balance method (HBM) with a continuation method is a well-known technique for following the periodic solutions of dynamical systems as a control parameter is varied. However, since deriving the algebraic system containing the Fourier coefficients can be a highly cumbersome procedure, the classical HBM is often limited to polynomial (quadratic and cubic) nonlinearities and/or a few harmonics. Several variations on the classical HBM, such as the incremental HBM or the alternating frequency/time-domain HBM, have been presented in the literature to overcome this shortcoming. Here, we present an alternative approach that can be applied to a very large class of dynamical systems (autonomous or forced) with smooth equations. The main idea is to systematically recast the dynamical system in quadratic polynomial form before applying the HBM. Once the equations have been rendered quadratic, it becomes straightforward to derive the algebraic system and to solve it with the so-called ANM continuation technique. Several classical examples are presented to illustrate the use of this numerical approach.
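
    As a point of reference, the classical single-harmonic HBM applied to the forced Duffing oscillator reduces to one algebraic amplitude equation, solved below with a root finder over a frequency sweep. This is deliberately the low-order classical scheme the paper improves upon, not the quadratic-recast/ANM method itself, and the parameter values are arbitrary.

        import numpy as np
        from scipy.optimize import brentq

        # Forced Duffing oscillator: x'' + 2*zeta*x' + x + x^3 = F*cos(w*t).
        # Substituting x ~ a*cos(w*t + phi) and balancing the fundamental
        # harmonic gives ((1 - w^2 + 0.75*a^2)^2 + (2*zeta*w)^2) * a^2 = F^2.
        zeta, F = 0.05, 0.2

        def amplitude_residual(a, w):
            return ((1 - w**2 + 0.75 * a**2) ** 2
                    + (2 * zeta * w) ** 2) * a**2 - F**2

        # Sweep the forcing frequency below resonance, where the branch is unique
        for w in np.linspace(0.2, 0.9, 8):
            a = brentq(lambda a: amplitude_residual(a, w), 1e-9, 5.0)
            print(f"w = {w:.2f}  amplitude = {a:.4f}")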

    Non-Standard Errors

    Get PDF
    In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher-rated research. Adding peer-review stages reduces NSEs. We further find that this type of uncertainty is underestimated by participants.
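
    A back-of-the-envelope simulation of the concept: if each team's estimate combines sampling noise with an independent wobble from its own design choices, the cross-team dispersion of estimates is an NSE-like quantity. The numbers below are invented, not the study's.

        import numpy as np

        rng = np.random.default_rng(42)
        n_teams = 164
        true_effect = 0.10
        # Each team's estimate = truth + sampling noise (standard error)
        #                      + an EGP wobble from its design choices (NSE)
        sampling_noise = rng.normal(0, 0.02, n_teams)
        design_wobble = rng.normal(0, 0.05, n_teams)
        estimates = true_effect + sampling_noise + design_wobble

        print("mean estimate:", estimates.mean().round(3))
        print("dispersion across teams (NSE-like):", estimates.std().round(3))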

    Systemic Risk Score: A Suggestion

    No full text
    We identify a potential bias in the methodology disclosed in July 2013 by the Basel Committee on Banking Supervision (BCBS) for identifying systemically important banks. Contrary to the original objective, the relative importance of the five risk categories (size, cross-jurisdictional activity, interconnectedness, substitutability/financial institution infrastructure, and complexity) may not be equal, and the resulting systemic risk scores are mechanically dominated by the most volatile categories. In practice, this bias proved serious enough that the substitutability category had to be capped by the BCBS. We show that the bias can be removed by simply standardizing each input prior to computing the systemic risk scores.
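
    The simulation below illustrates both the bias and the proposed fix on invented data: averaging raw category shares lets the most dispersed category drive the cross-bank ranking, while standardizing each category first equalizes the effective weights.

        import numpy as np

        rng = np.random.default_rng(7)
        n_banks = 30
        # Five categories with very different cross-bank dispersions;
        # category 3 is made the most volatile on purpose.
        sigmas = np.array([0.3, 0.3, 0.3, 1.5, 0.3])
        raw = np.exp(rng.normal(0.0, sigmas, (n_banks, 5)))

        shares = raw / raw.sum(axis=0)     # each bank's share of each category
        score_bcbs = shares.mean(axis=1)   # equal nominal weights

        z = (shares - shares.mean(axis=0)) / shares.std(axis=0)
        score_std = z.mean(axis=1)         # standardized inputs, as proposed

        # Correlation of each category with the final score: the volatile
        # category dominates the BCBS-style score, while the correlations
        # are far more balanced after standardization.
        for name, s in [("BCBS", score_bcbs), ("standardized", score_std)]:
            corr = [np.corrcoef(shares[:, j], s)[0, 1] for j in range(5)]
            print(name, np.round(corr, 2))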