8 research outputs found

    Delayed Expected Loss Recognition and the Risk Profile of Banks

    This paper investigates the extent to which delayed expected loan loss recognition (DELR) is associated with greater vulnerability of banks to three distinct dimensions of risk: (1) stock market liquidity risk, (2) downside tail risk of individual banks, and (3) codependence of downside tail risk among banks. We hypothesize that DELR increases vulnerability to downside risk by creating expected loss overhangs that threaten future capital adequacy and by degrading bank transparency, which increases financing frictions and opportunities for risk‐shifting. We find that DELR is associated with higher correlations between bank‐level illiquidity and both aggregate banking sector illiquidity and market returns (i.e., higher liquidity risks) during recessions, suggesting that high DELR banks as a group may simultaneously face elevated financing frictions and enhanced opportunities for risk‐shifting behavior in crisis periods. With respect to downside risk, we find that during recessions DELR is associated with significantly higher risk of individual banks suffering severe drops in their equity values, where this association is magnified for banks with low capital levels. Consistent with increased systemic risk, we find that DELR is associated with significantly higher codependence between downside risk of individual banks and downside risk of the banking sector. We theorize that downside risk vulnerability at the individual bank level can translate into systemic risk by virtue of DELR creating a common source of risk vulnerability across high DELR banks simultaneously, which leads to risk codependence among banks and systemic effects from banks acting as part of a herd.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/111770/1/joar12079.pd
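
    The three risk dimensions above can be illustrated with simple empirical proxies. The sketch below is not the paper's estimation strategy (which relies on regression analysis over recession subsamples); it uses simulated returns and assumed thresholds to show, in rough terms, (1) a liquidity-risk proxy as the correlation of bank-level illiquidity with aggregate illiquidity, (2) downside tail risk as a 5% value-at-risk, and (3) codependence as the frequency with which a bank is in its own lower tail when the sector is in its lower tail.

        # Hypothetical illustration only: simulated data and assumed thresholds,
        # not the paper's actual measures or estimation.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)
        n_days, n_banks = 250, 20

        # Simulated daily equity returns for individual banks and a sector average.
        bank_returns = pd.DataFrame(rng.normal(0, 0.02, (n_days, n_banks)),
                                    columns=[f"bank_{i}" for i in range(n_banks)])
        sector_return = bank_returns.mean(axis=1)

        # (1) Liquidity-risk proxy: correlation of each bank's illiquidity (absolute
        #     return as a crude stand-in) with aggregate banking-sector illiquidity.
        bank_illiq = bank_returns.abs()
        liquidity_risk = bank_illiq.corrwith(bank_illiq.mean(axis=1))

        # (2) Downside tail risk: 5% value-at-risk of each bank's return distribution.
        tail_risk = bank_returns.quantile(0.05)

        # (3) Codependence: share of sector lower-tail days on which the bank is
        #     also in its own lower tail (a simple co-exceedance measure).
        in_tail = bank_returns.le(tail_risk, axis="columns")
        sector_bad = sector_return <= sector_return.quantile(0.05)
        codependence = in_tail[sector_bad].mean()

        print(pd.DataFrame({"liquidity_risk": liquidity_risk,
                            "tail_risk_VaR5": tail_risk,
                            "tail_codependence": codependence}))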

    ENergy and Power Evaluation Program


    Scalability in the Presence of Variability

    Supercomputers are used to solve some of the world’s most computationally demanding problems. Exascale systems, comprising over one million cores and capable of 10^18 floating point operations per second, will probably exist by the early 2020s and will provide unprecedented computational power for parallel computing workloads. Unfortunately, while these machines hold tremendous promise and opportunity for applications in High Performance Computing (HPC), graph processing, and machine learning, it will be a major challenge to fully realize their potential, because doing so requires balanced execution across the entire system and its millions of processing elements. When different processors take different amounts of time to perform the same amount of work, performance imbalance arises, large portions of the system sit idle, and time and energy are wasted. Larger systems incorporate more processors and thus present greater opportunity for imbalance to arise, as well as larger performance/energy penalties when it does. This phenomenon is referred to as performance variability and is the focus of this dissertation. We explain how to design system software to mitigate variability on large-scale parallel machines. Our approaches span (1) the design, implementation, and evaluation of a new high performance operating system to reduce some classes of performance variability, (2) a new performance evaluation framework to holistically characterize key features of variability on new and emerging architectures, and (3) a distributed modeling framework that derives predictions of how and where imbalance is manifesting in order to drive reactive operations such as load balancing and speed scaling. Collectively, these efforts provide a holistic set of tools to promote scalability through the mitigation of variability.
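
    As a concrete illustration of the imbalance described above, a common way to quantify it is the ratio of the slowest processor's time to the mean time across processors for the same unit of work. The sketch below uses simulated per-rank timings; it only demonstrates that metric, and is not the operating system, evaluation framework, or modeling framework developed in the dissertation.

        # Illustrative only: simulated per-rank timings, not measurements from a
        # real system or from the frameworks described in the dissertation.
        import numpy as np

        rng = np.random.default_rng(1)
        n_ranks = 1024

        # Time (seconds) each rank spends on an identical unit of work; a few
        # "slow" ranks model sources of variability such as OS noise or throttling.
        times = rng.normal(1.0, 0.02, n_ranks)
        slow = rng.choice(n_ranks, size=16, replace=False)
        times[slow] *= 1.5

        # Imbalance factor: 1.0 means perfectly balanced; at a synchronization
        # point, every rank waits for the slowest one.
        imbalance = times.max() / times.mean()
        wasted = 1.0 - times.mean() / times.max()  # fraction of core-time spent idle

        print(f"imbalance factor: {imbalance:.3f}")
        print(f"fraction of compute capacity wasted waiting: {wasted:.1%}")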