
    Systemic Risk from Global Financial Derivatives: A Network Analysis of Contagion and Its Mitigation with Super-Spreader Tax

    Financial network analysis is used to provide firm-level, bottom-up, holistic visualizations of the interconnections of financial obligations in global OTC derivatives markets. This helps to identify Systemically Important Financial Intermediaries (SIFIs), analyse the nature of contagion propagation, and monitor and design ways of increasing robustness in the network. Based on 2009 FDIC and individually collected firm-level data covering gross notional, gross positive (negative) fair value, and netted derivatives assets and liabilities for 202 financial firms, including 20 SIFIs, the bilateral flows are empirically calibrated to reflect data-based constraints. This produces a tiered network with a distinct, highly clustered central core of 12 SIFIs that account for 78 percent of all bilateral exposures, and a large number of financial intermediaries (FIs) on the periphery. The topology of the network results in the "Too-Interconnected-To-Fail" (TITF) phenomenon: the failure of any member of the central tier brings down other members, with the contagion coming to an abrupt end only once the 'super-spreaders' have themselves failed. As these SIFIs account for the bulk of capital in the system, ipso facto no bank among the top tier can be allowed to fail, highlighting the untenable implicit socialized guarantees needed for these markets to operate at their current levels. The systemic risk costs of highly connected SIFI nodes are not priced into their holdings of capital or collateral. An eigenvector-centrality-based 'super-spreader' tax is designed and tested for its capacity to reduce the potential socialized losses from the failure of SIFIs.
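    The centrality-based tax can be illustrated concretely. Below is a minimal Python sketch, assuming a toy four-firm exposure matrix and a hypothetical tax rate kappa; it uses networkx to compute eigenvector centrality and scales each firm's levy by its centrality. This only gestures at the paper's calibrated 202-firm design.

    ```python
    # Minimal sketch of an eigenvector-centrality "super-spreader" tax.
    # The 4-firm exposure matrix and the tax scale `kappa` are illustrative
    # assumptions, not the paper's calibrated 202-firm network.
    import numpy as np
    import networkx as nx

    # Bilateral gross exposures (row i owes column j), toy data in $bn.
    exposures = np.array([
        [0.0, 8.0, 6.0, 1.0],
        [7.0, 0.0, 5.0, 0.5],
        [6.0, 4.0, 0.0, 0.5],
        [1.0, 0.5, 0.5, 0.0],
    ])
    firms = ["SIFI_A", "SIFI_B", "SIFI_C", "Periphery_D"]

    G = nx.from_numpy_array(exposures, create_using=nx.DiGraph)
    centrality = nx.eigenvector_centrality_numpy(G, weight="weight")

    kappa = 0.02  # hypothetical tax rate per unit of centrality
    for i, name in enumerate(firms):
        gross = exposures[i].sum()
        tax = kappa * centrality[i] * gross  # levy rises with systemic importance
        print(f"{name}: centrality={centrality[i]:.3f}, tax={tax:.3f}")
    ```

    In this toy network the most interconnected firms attract the largest levies, mirroring the intended effect of pricing 'super-spreader' connectivity rather than gross size alone.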

    Using data-driven rules to predict mortality in severe community acquired pneumonia

    Prediction of patient-centered outcomes in hospitals is useful for performance benchmarking, resource allocation, and guidance regarding active treatment and withdrawal of care. Yet their use by clinicians is limited by the complexity of available tools and the amount of data required. We propose Disjunctive Normal Forms as a novel approach to predict hospital and 90-day mortality from instance-based patient data, comprising demographic, genetic, and physiologic information in a large cohort of patients admitted with severe community-acquired pneumonia. We develop two algorithms to efficiently learn Disjunctive Normal Forms, which yield easy-to-interpret rules that explicitly map data to the outcome of interest. Disjunctive Normal Forms achieve higher predictive performance than a set of state-of-the-art machine learning models and unveil insights unavailable with standard methods. Disjunctive Normal Forms constitute an intuitive set of prediction rules that could be easily implemented to predict outcomes and guide criteria-based clinical decision making and clinical trial execution, and are thus of greater practical usefulness than currently available prediction tools. The Java implementation of the tool, JavaDNF, will be publicly available. © 2014 Wu et al.
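    The appeal of a Disjunctive Normal Form is that a prediction is simply an OR over AND-clauses of threshold conditions, which a clinician can read directly. A minimal Python sketch follows, with invented features and cutoffs purely for illustration; the learned rules, and the JavaDNF tool itself, are not reproduced here.

    ```python
    # Minimal sketch of a Disjunctive Normal Form classifier: the patient is
    # predicted positive (e.g., mortality) if ANY clause is satisfied, and a
    # clause is satisfied if ALL of its conditions hold. Features and
    # thresholds below are illustrative, not the learned rules from the paper.
    patient = {"age": 74, "apache_ii": 28, "lactate": 4.1, "creatinine": 1.2}

    # Each clause is a list of (feature, operator, threshold) conditions.
    dnf_rules = [
        [("age", ">=", 70), ("apache_ii", ">=", 25)],          # clause 1
        [("lactate", ">=", 4.0), ("creatinine", ">=", 2.0)],   # clause 2
    ]

    def satisfies(cond, record):
        feature, op, threshold = cond
        value = record[feature]
        return value >= threshold if op == ">=" else value < threshold

    def predict(rules, record):
        # OR over clauses, AND over conditions within a clause.
        return any(all(satisfies(c, record) for c in clause) for clause in rules)

    print(predict(dnf_rules, patient))  # True: clause 1 is satisfied
    ```

    Because each clause is independently interpretable, a positive prediction can be traced to the exact clause that fired, which is what makes such rules usable at the bedside.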

    Risk Assessment for National Natural Resource Conservation Programs

    This paper reviews the risk assessments prepared by the U.S. Department of Agriculture (USDA) in support of regulations implementing the Conservation Reserve Program (CRP) and the Environmental Quality Incentives Program (EQIP). These two natural resource conservation programs were authorized as part of the 1996 Farm Bill, and the risk assessments were required under the Federal Crop Insurance Reform and Department of Agriculture Reorganization Act of 1994. The framework used for the assessments was appropriate, but the assessments could be improved in the areas of assessment endpoint selection, definition, and estimation. Many of the assessment endpoints were too diffuse or ill-defined to provide an adequate characterization of the program benefits. Two reasons for this lack of clarity were apparent: 1) the large, unprioritized set of natural resource conservation objectives for the two programs, and 2) the lack of agreement about which changes in environmental attributes caused by agriculture should be considered adverse and which negligible. There is also some "double counting" of program benefits: although the CRP and EQIP are, in part, intended to assist agricultural producers with regulatory compliance, the resultant environmental benefits would occur even absent the programs, so they cannot be attributed to the programs themselves. The paper concludes with a set of recommendations for continuing efforts to conduct regulatory analyses of these major conservation programs. The central recommendation is that future risk assessments go beyond identifying the natural resources at greatest risk from agricultural production activities and instead provide scientific input for analyses of the cost-effectiveness of the conservation programs.

    Institutional paraconsciousness and its pathologies

    This analysis extends a recent mathematical treatment of the Baars consciousness model to the analogous, but far more complicated, phenomena of institutional cognition. Individual consciousness is limited to a single, tunable, giant component of interacting cognitive modules, instantiating a Global Workspace. Human institutions, by contrast, support several, sometimes many, such giant components simultaneously, although their behavior remains constrained to a topology generated by cultural context and by the path-dependence inherent to organizational history. Such highly parallel multitasking - institutional paraconsciousness - while clearly limiting inattentional blindness and the consequences of failures within individual workspaces, does not eliminate them, and it introduces new characteristic dysfunctions involving the distortion of information sent between global workspaces. Consequently, organizations (or machines designed along these principles), while highly efficient at certain kinds of tasks, remain subject to canonical and idiosyncratic failure patterns similar to, but more complicated than, those afflicting individuals. Remediation is complicated by the manner in which pathogenic externalities can write images of themselves onto both institutional function and therapeutic intervention, in the context of relentless market selection pressures. The approach is broadly consonant with recent work on collective efficacy, collective consciousness, and distributed cognition.
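    The notion of a single tunable giant component of interacting modules can be made tangible with random-graph percolation. The Python sketch below is an illustrative assumption rather than the paper's information-theoretic formalism: it shows how a giant component emerges once the mean degree of a random graph passes the percolation threshold of 1, with mean degree playing the role of the tuning parameter.

    ```python
    # Illustrative sketch: a giant connected component of interacting modules
    # emerges in a random graph once mean degree passes 1 (percolation).
    # This toy model only illustrates the "tunable giant component" idea,
    # not the paper's information-theoretic formalism.
    import networkx as nx

    n = 500  # cognitive modules
    for mean_degree in (0.5, 1.5, 3.0):  # tuning parameter
        G = nx.gnp_random_graph(n, mean_degree / n, seed=42)
        largest = max(nx.connected_components(G), key=len)
        print(f"mean degree {mean_degree}: largest component spans "
              f"{len(largest) / n:.0%} of modules")
    ```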

    Algorithms in future capital markets: A survey on AI, ML and associated algorithms in capital markets

    This paper reviews Artificial Intelligence (AI), Machine Learning (ML) and associated algorithms in future Capital Markets. New AI algorithms are constantly emerging, with each 'strain' mimicking a new form of human learning, reasoning, knowledge, and decision-making. The main disrupting forms of learning currently include Deep Learning, Adversarial Learning, Transfer Learning and Meta Learning. Although these modes of learning have existed in the AI/ML field for more than a decade, they are now more applicable due to the availability of data, computing power and infrastructure. These forms of learning have produced new models (e.g., Long Short-Term Memory networks, Generative Adversarial Networks) and enable important applications (e.g., Natural Language Processing, Adversarial Examples, Deep Fakes). These new models and applications will drive changes in future Capital Markets, so it is important to understand their computational strengths and weaknesses. Since ML algorithms effectively self-program and evolve dynamically, financial institutions and regulators are increasingly concerned with ensuring that a modicum of human control remains, focusing on Algorithmic Interpretability/Explainability, Robustness and Legality. For example, the concern is that, in the future, an ecology of trading algorithms across different institutions may 'conspire' and become unintentionally fraudulent (cf. LIBOR) or subject to subversion through compromised datasets (e.g., Microsoft Tay). New and unique forms of systemic risk can emerge, potentially arising from excessive algorithmic complexity. The contribution of this paper is to review AI, ML and associated algorithms, their computational strengths and weaknesses, and to discuss their future impact on the Capital Markets.
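    To ground one of the surveyed model classes, the sketch below defines a minimal Long Short-Term Memory classifier in PyTorch that predicts the direction of the next return from a window of past returns. The architecture, synthetic data, and training loop are illustrative assumptions only, not a model from the survey.

    ```python
    # Minimal sketch of an LSTM classifier of the kind surveyed: predict the
    # direction of the next return from a window of past returns. Purely
    # illustrative (random data, a few training steps); not a trading model.
    import torch
    import torch.nn as nn

    class DirectionLSTM(nn.Module):
        def __init__(self, hidden=16):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)  # logit for "next return is up"

        def forward(self, x):                 # x: (batch, window, 1)
            out, _ = self.lstm(x)
            return self.head(out[:, -1])      # use the last hidden state

    torch.manual_seed(0)
    returns = torch.randn(64, 20, 1)             # synthetic return windows
    labels = (returns.sum(dim=1) > 0).float()    # toy target: sign of window sum

    model = DirectionLSTM()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.BCEWithLogitsLoss()
    for step in range(5):                        # a few illustrative steps
        optimizer.zero_grad()
        loss = loss_fn(model(returns), labels)
        loss.backward()
        optimizer.step()
        print(f"step {step}: loss={loss.item():.3f}")
    ```

    Even this toy model shows why interpretability is a live concern: the learned mapping from the return window to the output logit lives in the LSTM's hidden state and is not directly readable, in contrast to rule-based systems.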