
    Concurrent Credit Portfolio Losses

    We consider the problem of concurrent portfolio losses in two non-overlapping credit portfolios. To explore the full statistical dependence structure of such portfolio losses, we estimate their empirical pairwise copulas. Instead of a Gaussian dependence, we typically find a strong asymmetry in the copulas: concurrent large portfolio losses are much more likely than concurrent small ones. Studying the dependence of these losses as a function of portfolio size, we moreover reveal that not only large portfolios of thousands of contracts, but also medium-sized and small ones with only a few dozen contracts exhibit notable portfolio loss correlations. Anticipated idiosyncratic effects turn out to be negligible. These are troublesome insights not only for investors in structured fixed-income products, but particularly for the stability of the financial sector.
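    To make the key construction concrete, below is a minimal Python sketch of an empirical pairwise copula and a simple joint-tail statistic. It is not the authors' code; the rank-based pseudo-observations, the quantile level, and the toy loss series are illustrative assumptions. The idea matches the abstract: if concurrent large losses cluster, the joint tail frequency clearly exceeds the independence benchmark.

```python
import numpy as np
from scipy.stats import rankdata

def empirical_copula(x, y):
    """Map two loss series to pseudo-observations (u, v) in (0, 1)^2
    using their empirical marginal ranks (the empirical copula sample)."""
    n = len(x)
    return rankdata(x) / (n + 1), rankdata(y) / (n + 1)

def joint_tail_frequency(u, v, q=0.9):
    """Fraction of periods in which both portfolios exceed their q-quantile
    of losses, versus the benchmark (1 - q)^2 expected under independence."""
    return float(np.mean((u > q) & (v > q))), (1 - q) ** 2

# Toy example (hypothetical data): two loss series driven by a common factor.
rng = np.random.default_rng(0)
common = rng.standard_normal(5000)
loss_a = np.exp(0.8 * common + 0.6 * rng.standard_normal(5000))
loss_b = np.exp(0.8 * common + 0.6 * rng.standard_normal(5000))
u, v = empirical_copula(loss_a, loss_b)
print(joint_tail_frequency(u, v, q=0.9))  # joint tail frequency vs. 0.01
```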

    On Modeling and Assessing Uncertainty Estimates in Neural Learning Systems

    While neural networks are universal function approximators from a theoretical perspective, in practice we face model size constraints and highly sparse data samples from open-world contexts. These limitations of models and data introduce uncertainty, i.e., they render it unclear whether a model's output for a given input datapoint can be relied on. This lack of information hinders the use of learned models in critical applications, as unrecognized erroneous predictions may occur. A promising safeguard against such failures is uncertainty estimation, which seeks to measure a model's input-dependent reliability. Theory, modeling, and operationalization of uncertainty techniques are, however, often studied in isolation. In this work, we combine these perspectives to enable the effective use of uncertainty estimators in practice. In particular, it is necessary to address (the interplay of) three points. First, we need to better understand the theoretical properties of uncertainty estimators, specifically their shortcomings stemming from constrained model capacity. Second, we must find a way to closely model data and error distributions that are not explicitly given. Third, for real-world use cases, we need a deeper understanding of uncertainty estimation requirements and their test-based evaluations.

    Regarding the first point, we study how the estimation of uncertainty is affected (and limited) by a learning system's capacity. Beginning with a simple model for uncertain dynamics, a hidden Markov model, we integrate (neural) word2vec-inspired representation learning into it to control its model complexity more directly and, as a result, identify two regimes of differing model quality. Expanding this analysis of model capacity to fully neural models, we investigate Monte Carlo (MC) dropout, which adds complexity control and uncertainty by randomly dropping neurons. In particular, we analyze the different types of output distributions this procedure can induce. While it is commonly assumed that these output distributions can be treated as Gaussians, we show by explicit construction that wider tails can occur.

    As to the second point, we borrow ideas from MC dropout and construct a novel uncertainty technique for regression tasks: Wasserstein dropout. It captures heteroscedastic aleatoric uncertainty by input-dependent matchings of model output and data distributions, while preserving the beneficial properties of MC dropout. An extensive empirical analysis shows that Wasserstein dropout outperforms various state-of-the-art methods regarding uncertainty quality, both on vanilla test data and under distributional shifts, and it can also be used for critical tasks like object detection for autonomous driving. Moreover, we extend uncertainty assessment beyond distribution-averaged metrics and measure the quality of uncertainty estimation in worst-case scenarios.

    To address the third point, we not only need granular evaluations but also have to consider the context of the intended machine learning use case. To this end, we propose a framework that i) structures and shapes application requirements, ii) guides the selection of a suitable uncertainty estimation method, and iii) provides systematic test strategies that validate this choice. The proposed strategies are data-driven and range from general tests that identify capacity issues to specific ones that validate heteroscedastic calibration or risks stemming from worst- or rare-case scenarios.
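    For orientation, the MC dropout procedure referenced above can be sketched in a few lines. The PyTorch snippet below is a hedged illustration, not the thesis implementation: the architecture, dropout rate, and number of samples are arbitrary choices, and it shows plain MC dropout rather than the proposed Wasserstein dropout. Dropout is simply kept active at prediction time, several stochastic forward passes are collected, and their per-input spread serves as the uncertainty estimate.

```python
import torch
import torch.nn as nn

# Small regression network with dropout layers (sizes are illustrative only).
model = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

def mc_dropout_predict(model, x, n_samples=100):
    """Keep dropout stochastic at test time, run several forward passes,
    and summarize them by a predictive mean and standard deviation."""
    model.train()  # train mode keeps nn.Dropout active during inference
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

x = torch.linspace(-3.0, 3.0, 50).unsqueeze(1)  # dummy inputs
mean, std = mc_dropout_predict(model, x)
print(mean.shape, std.shape)  # per-input prediction and uncertainty
```

    The collected samples form exactly the kind of output distribution whose shape the abstract questions: summarizing them by a mean and standard deviation implicitly treats them as Gaussian, whereas the work shows that wider tails can occur.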

    Guideline for Trustworthy Artificial Intelligence -- AI Assessment Catalog

    Artificial Intelligence (AI) has made impressive progress in recent years and represents a key technology with a crucial impact on the economy and society. However, it is clear that AI and the business models based on it can only reach their full potential if AI applications are developed according to high quality standards and are effectively protected against new AI risks. For instance, AI bears the risk of unfair treatment of individuals when processing personal data, e.g., to support credit lending or staff recruitment decisions. The emergence of these new risks is closely linked to the fact that the behavior of AI applications, particularly those based on Machine Learning (ML), is essentially learned from large volumes of data and is not predetermined by fixed programmed rules. Thus, the trustworthiness of AI applications is crucial and is the subject of numerous major publications by stakeholders in politics, business, and society. In addition, there is broad agreement that the requirements for trustworthy AI, which are often described in an abstract way, must now be made clear and tangible. One challenge here is that the specific quality criteria for an AI application depend heavily on the application context, and possible measures to fulfill them in turn depend heavily on the AI technology used. Lastly, practical assessment procedures are needed to evaluate whether specific AI applications have been developed according to adequate quality standards. This AI assessment catalog addresses exactly this point and is intended for two target groups: firstly, it provides developers with a guideline for systematically making their AI applications trustworthy; secondly, it guides assessors and auditors in examining AI applications for trustworthiness in a structured way.

    Comparing Notes: Recording and Criticism

    This chapter charts the ways in which recording has changed the nature of music criticism. It both provides an overview of the history of recording and music criticism, from the advent of Edison’s Phonograph to the present day, and examines the issues arising from this new technology and the consequent transformation of critical thought and practice.