
    Abominable KK Failures

    KK is the thesis that if you can know p, you can know that you can know p. Though it’s unpopular, a flurry of considerations has recently emerged in its favour. Here we add fuel to the fire: standard resources allow us to show that any failure of KK will lead to the knowability and assertability of abominable indicative conditionals of the form ‘If I don’t know it, p’. Such conditionals are manifestly not assertable—a fact that KK defenders can easily explain. We survey a variety of KK-denying responses and find them wanting. Those who object to the knowability of such conditionals must either deny the possibility of harmony between knowledge and belief, or deny well-supported connections between conditional and unconditional attitudes. Meanwhile, those who grant knowability owe us an explanation of such conditionals’ unassertability—yet no successful explanations are on offer. Upshot: we have new evidence for KK.

    Merger failures

    This paper proposes an explanation as to why some mergers fail, based on the interaction between the pre- and post-merger processes. We argue that failure may stem from informational asymmetries arising in the pre-merger period, and from problems of cooperation and coordination within recently merged firms. We show that a partner may optimally agree to merge and abstain from putting forth any post-merger effort, counting on the other partner to make the necessary efforts. If both follow the same course of action, the merger goes ahead but fails. Our unique equilibrium allows us to make predictions about which mergers are more likely to fail.

    Keywords: Mergers, Synergies, Asymmetric Information, Complementarities
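    The free-riding incentive described in this abstract can be illustrated with a toy two-player effort game (an illustrative sketch only, not the paper's actual model): each partner chooses to exert post-merger effort or to shirk, effort costs c, and the merger succeeds, yielding synergy s > c to both, whenever at least one partner exerts effort. All names and payoff values here are assumptions for illustration.

    ```python
    from itertools import product

    # Toy post-merger effort game (illustrative; not the paper's model).
    # Synergy s accrues to both partners if the merger succeeds; effort costs c.
    S, C = 3.0, 1.0  # assumed values with s > c > 0

    ACTIONS = ("effort", "shirk")

    def payoff(own, other):
        """Payoff to a partner playing `own` against a partner playing `other`."""
        success = (own == "effort") or (other == "effort")
        return (S if success else 0.0) - (C if own == "effort" else 0.0)

    def pure_nash_equilibria():
        """Enumerate action profiles where neither partner gains by deviating."""
        eq = []
        for a, b in product(ACTIONS, repeat=2):
            a_best = all(payoff(a, b) >= payoff(d, b) for d in ACTIONS)
            b_best = all(payoff(b, a) >= payoff(d, a) for d in ACTIONS)
            if a_best and b_best:
                eq.append((a, b))
        return eq

    print(pure_nash_equilibria())  # [('effort', 'shirk'), ('shirk', 'effort')]
    ```

    In this toy game the only pure equilibria are asymmetric: each partner would like the other to carry the effort. If both gamble on shirking (as in the mixed equilibrium of such games), the merger goes ahead but fails, mirroring the mechanism the abstract describes.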

    The risks of multiple breadbasket failures in the 21st century: a science research agenda

    This report stems from an international, interdisciplinary workshop organized by Knowledge Systems for Sustainability and hosted by the Frederick S. Pardee Center for the Study of the Longer-Range Future, with support from Thomson Reuters, in November 2014. Written by an interdisciplinary team of leading researchers, the report describes a science research agenda toward improved probabilistic modeling and prediction of multiple breadbasket failures and the potential consequences for global food systems. The authors highlight gaps in the existing empirical foundation and analytical capabilities and offer general approaches to address these gaps. They also suggest the need to fuse diverse data sources, recent observations, and new suites of dynamic models capable of connecting agricultural outcomes to elements of the global food system. The goal of these efforts is to provide better information concerning potential systemic risks to breadbaskets in various regions of the world, to inform policies and decisions that have the potential for global impacts.

    Failures in childbirth care

    The study, first published in 2003, looks at the root causes of adverse events and near misses in obstetrics at seven hospital maternity units by interviewing 93 members of staff, identifying the areas of mismanagement in each case, and thematically analysing them.

    Quantitative Games under Failures

    We study a generalisation of sabotage games, a model of dynamic network games introduced by van Benthem. The original definition of the game is inherently finite and therefore does not allow one to model infinite processes. We propose an extension of sabotage games in which the first player (Runner) traverses an arena with dynamic weights determined by the second player (Saboteur). In our model of quantitative sabotage games, Saboteur is given a budget that he can distribute amongst the edges of the graph, whilst Runner attempts to minimise the quantity of budget witnessed while completing his task. We show that, on the one hand, for most of the classical cost functions considered in the literature, the problem of determining whether Runner has a strategy to ensure a cost below some threshold is EXPTIME-complete. On the other hand, if the budget of Saboteur is fixed a priori, then the problem is in PTIME for most cost functions. Finally, we show that restricting the dynamics of the game also leads to better complexity bounds.
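    To give a concrete feel for the quantities involved (this is an illustrative sketch, not the paper's formal game): once Saboteur has fixed a distribution of his budget over the edges, Runner's problem of minimising the total budget witnessed along a traversal reduces to a shortest-path computation over those edge weights. The arena, node names, and budget values below are all assumptions for illustration.

    ```python
    import heapq

    def min_witnessed_budget(graph, start, target):
        """Minimum total budget Runner must witness travelling start -> target.

        graph: {node: [(neighbour, budget_on_edge), ...]} -- Saboteur's
        (here static) allocation of budget over the edges.
        """
        dist = {start: 0.0}
        queue = [(0.0, start)]  # Dijkstra over non-negative budget weights
        while queue:
            d, node = heapq.heappop(queue)
            if node == target:
                return d
            if d > dist.get(node, float("inf")):
                continue  # stale queue entry
            for nxt, w in graph.get(node, []):
                nd = d + w
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt] = nd
                    heapq.heappush(queue, (nd, nxt))
        return float("inf")  # target unreachable

    # Hypothetical arena: Saboteur loaded the direct edge, so Runner detours.
    arena = {"s": [("t", 5.0), ("a", 1.0)], "a": [("t", 1.0)]}
    print(min_witnessed_budget(arena, "s", "t"))  # 2.0
    ```

    The hard part of the actual game, and the source of the EXPTIME-completeness result, is that Saboteur redistributes budget dynamically as Runner moves; this sketch only covers Runner's best response to one fixed allocation.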

    Modelling interdependencies between the electricity and information infrastructures

    The aim of this paper is to provide qualitative models characterizing interdependency-related failures of two critical infrastructures: the electricity infrastructure and the associated information infrastructure. The interdependencies of these two infrastructures are increasing, owing to the growing connection of power grid networks to the global information infrastructure that has followed market deregulation and opening. These interdependencies increase the risk of failures. We focus on cascading, escalating, and common-cause failures, which correspond to the main causes of failures due to interdependencies. We address failures in the electricity infrastructure in combination with accidental failures in the information infrastructure, and then show briefly how malicious attacks on the information infrastructure can be addressed.

    An Exploratory Study of Field Failures

    Field failures, that is, failures caused by faults that escape the testing phase and manifest only after deployment, are unavoidable. Improving verification and validation activities before deployment can identify and promptly remove many, but not all, faults, and users may still experience a number of annoying problems while using their software systems. This paper investigates the nature of field failures, to understand to what extent further improving in-house verification and validation activities can reduce the number of failures in the field, and frames the need for new approaches that operate in the field. We report the results of an analysis of the bug reports of five applications belonging to three different ecosystems, propose a taxonomy of field failures, and discuss the reasons why failures belonging to the identified classes cannot be detected at design time and must instead be addressed at runtime. We observe that many faults (70%) are intrinsically hard to detect at design time.

    Why Catastrophic Organizational Failures Happen

    Excerpt from the introduction: The purpose of this chapter is to examine the major streams of research about catastrophic failures, describing what we have learned about why these failures occur as well as how they can be prevented. The chapter begins by describing the most prominent sociological school of thought with regard to catastrophic failures, namely normal accident theory. That body of thought examines the structure of the organizational systems that are most susceptible to catastrophic failures. Then we turn to several behavioral perspectives on catastrophic failures, assessing a stream of research that has attempted to understand the cognitive, group, and organizational processes that develop and unfold over time, leading ultimately to a catastrophic failure. For an understanding of how to prevent such failures, we then assess the literature on high reliability organizations (HRO). These scholars have examined why some complex organizations operating in extremely hazardous conditions manage to remain nearly error free. The chapter closes by assessing how scholars are trying to extend the HRO literature to develop more extensive prescriptions for managers trying to avoid catastrophic failures.

    Recent bank failures

    Bank failures
