
    Reply to Valverde

    Professor Thompson responds to Valverde's argument, in the last issue, that his approach to Risk puts too much emphasis on the distinction between Risk subjectivism and Risk objectivism. In doing so, he asserts, inter alia, that anchoring Risk judgments in a probabilistic framework does not go far enough in rejecting reigning Risk-analysis notions of real Risk.

    How much of commonsense and legal reasoning is formalizable? A review of conceptual obstacles

    Fifty years of effort in artificial intelligence (AI) and the formalization of legal reasoning have produced both successes and failures. Considerable success in organizing and displaying evidence and its interrelationships has been accompanied by failure to achieve the original ambition of AI as applied to law: fully automated legal decision-making. The obstacles to formalizing legal reasoning have proved to be the same ones that make the formalization of commonsense reasoning so difficult, and are most evident where legal reasoning has to meld with the vast web of ordinary human knowledge of the world. Underlying many of the problems is the mismatch between the discreteness of symbol manipulation and the continuous nature of imprecise natural language, of degrees of similarity and analogy, and of probabilities

    Sequential Two-Player Games with Ambiguity

    If players' beliefs are strictly non-additive, the Dempster-Shafer updating rule can be used to define beliefs off the equilibrium path. We define an equilibrium concept in sequential two-person games where players update their beliefs with the Dempster-Shafer updating rule. We show that in the limit as uncertainty tends to zero, our equilibrium approximates Bayesian Nash equilibrium by imposing context-dependent constraints on beliefs under uncertainty.
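
    As a rough illustration of the updating rule referred to here: the Dempster-Shafer rule conditions a capacity ν on an event B via ν(A|B) = [ν((A∩B)∪Bᶜ) − ν(Bᶜ)] / [1 − ν(Bᶜ)]. The sketch below applies it to an invented three-state example with an epsilon-contaminated capacity; it is not the paper's game-theoretic model, and as the ambiguity parameter tends to zero the update converges to ordinary Bayesian conditioning, echoing the limit result stated above.

# Minimal sketch (not the paper's model): Dempster-Shafer updating of a
# non-additive belief (capacity) on an invented three-state space.
# The epsilon-contaminated capacity and all numbers are assumptions for illustration.

STATES = frozenset({"s1", "s2", "s3"})
EPS = 0.2                               # degree of ambiguity (assumed)
PRIOR = {s: 1.0 / 3.0 for s in STATES}  # additive benchmark belief (assumed)

def capacity(event):
    """Epsilon-contamination: nu(A) = (1 - EPS) * P(A) for A != Omega, nu(Omega) = 1."""
    event = frozenset(event)
    if event == STATES:
        return 1.0
    return (1.0 - EPS) * sum(PRIOR[s] for s in event)

def ds_update(A, B):
    """Dempster-Shafer rule: nu(A|B) = [nu((A∩B) ∪ Bc) - nu(Bc)] / [1 - nu(Bc)]."""
    A, B = frozenset(A), frozenset(B)
    Bc = STATES - B
    return (capacity((A & B) | Bc) - capacity(Bc)) / (1.0 - capacity(Bc))

# Belief in {s1} after learning that s3 did not occur; as EPS -> 0 this
# tends to the Bayesian conditional probability 0.5.
print(ds_update({"s1"}, {"s1", "s2"}))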

    A Taxonomy of Explainable Bayesian Networks

    Artificial Intelligence (AI), and in particular the explainability thereof, has gained phenomenal attention over the last few years. Whilst we usually do not question the decision-making process of these systems in situations where only the outcome is of interest, we do, however, pay close attention when these systems are applied in areas where the decisions directly influence the lives of humans. Noisy and uncertain observations close to the decision boundary, in particular, yield predictions that cannot readily be explained and may foster mistrust among end-users. This has drawn attention to AI methods whose outcomes can be explained. Bayesian networks are probabilistic graphical models that can be used as a tool to manage uncertainty. The probabilistic framework of a Bayesian network allows for explainability in the model, reasoning and evidence. The use of these methods is mostly ad hoc and not as well organised as explainability methods in the wider AI research field. As such, we introduce a taxonomy of explainability in Bayesian networks. We extend the existing categorisation of explainability in the model, reasoning or evidence to include explanation of decisions. The explanations obtained from the explainability methods are illustrated by means of a simple medical diagnostic scenario. The taxonomy introduced in this paper has the potential not only to encourage end-users to efficiently communicate outcomes obtained, but also to support their understanding of how and, more importantly, why certain predictions were made.
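
    To make the kind of model concrete: the sketch below is a deliberately tiny, entirely hypothetical two-node network (Disease → Test) with invented probabilities, queried by direct enumeration. It only illustrates the sort of Bayesian-network reasoning the taxonomy addresses; it is not the paper's medical scenario.

# Hypothetical two-node Bayesian network: Disease -> Test.
# All probabilities are invented for illustration.

P_DISEASE = {True: 0.01, False: 0.99}            # prior P(Disease)
P_POS_GIVEN_DISEASE = {True: 0.95, False: 0.05}  # P(Test = positive | Disease)

def posterior_disease(test_positive):
    """P(Disease = true | Test) by enumerating the joint distribution."""
    def joint(disease):
        p_pos = P_POS_GIVEN_DISEASE[disease]
        return P_DISEASE[disease] * (p_pos if test_positive else 1.0 - p_pos)
    return joint(True) / (joint(True) + joint(False))

# "Explanation of evidence": how much a positive test shifts belief in the disease.
print(f"P(disease | positive test) = {posterior_disease(True):.3f}")  # ~0.161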

    Narration in judiciary fact-finding : a probabilistic explication

    Legal probabilism is the view that juridical fact-finding should be modeled using Bayesian methods. One of the alternatives to it is the narration view, according to which instead we should conceptualize the process in terms of competing narrations of what (allegedly) happened. The goal of this paper is to develop a reconciliatory account, on which the narration view is construed from the Bayesian perspective within the framework of formal Bayesian epistemology
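
    The reconciliation can be made tangible with a toy calculation: treat each competing narration as a hypothesis and update on the evidence with Bayes' theorem. The numbers below are invented for illustration and do not come from the paper.

# Toy sketch with invented numbers: two competing narrations of a case,
# scored against a single piece of evidence via Bayes' theorem.

priors = {"prosecution narration": 0.5, "defence narration": 0.5}  # assumed priors
likelihoods = {"prosecution narration": 0.8,                       # P(evidence | narration), assumed
               "defence narration": 0.2}

evidence = sum(priors[n] * likelihoods[n] for n in priors)
posteriors = {n: priors[n] * likelihoods[n] / evidence for n in priors}
print(posteriors)  # the narration that better predicts the evidence gains probability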

    Regulation for Conservatives: Behavioral Economics and the Case for "Asymmetric Paternalism"

    Regulation by the state can take a variety of forms. Some regulations are aimed entirely at redistribution, such as when we tax the rich and give to the poor. Other regulations seek to counteract externalities by restricting behavior in a way that imposes harm on an individual basis but yields net societal benefits. A good example is taxation to fund public goods such as roads. In such situations, an individual would be better off if she alone were exempt from the tax; she benefits when everyone (including herself) must pay the tax

    Current and Future Challenges in Knowledge Representation and Reasoning

    Knowledge Representation and Reasoning is a central, longstanding, and active area of Artificial Intelligence. Over the years it has evolved significantly; more recently it has been challenged and complemented by research in areas such as machine learning and reasoning under uncertainty. In July 2022 a Dagstuhl Perspectives workshop was held on Knowledge Representation and Reasoning. The goal of the workshop was to describe the state of the art in the field, including its relation with other areas, its shortcomings and strengths, together with recommendations for future progress. We developed this manifesto based on the presentations, panels, working groups, and discussions that took place at the Dagstuhl Workshop. It is a declaration of our views on Knowledge Representation: its origins, goals, milestones, and current foci; its relation to other disciplines, especially to Artificial Intelligence; and on its challenges, along with key priorities for the next decade

    Components of the Czech Koruna Risk Premium in a Multiple-Dealer FX Market

    The paper proposes a continuous time model of an FX market organized as a multiple dealership. The model reflects a number of salient features of the Czech koruna spot market. The dealers have costly access to the best available quotes. They interpret signals from the joint dealer-customer order flow and decide upon their own quotes and trades in the inter-dealer market. Each dealer uses the observed order flow to improve the subjective estimates of the relevant aggregate variables, which are the sources of uncertainty. One of the risk factors is the size of the cross-border dealer transactions in the FX market. These uncertainties have diffusion form and are dealt with according to the principles of portfolio optimization in continuous time. The model is used to explain the country, or risk, premium in the uncovered national return parity equation for the koruna/euro exchange rate. The two country premium terms that I identify in excess of the usual covariance term (a consequence of the 'Jensen inequality effect') are the dealer heterogeneity-induced inter-dealer market order flow component and the dealer Bayesian learning component. As a result, a 'dealer-based total return parity' formula links the exchange rate to both the 'fundamental' factors represented by the differential of the national asset returns, and the microstructural factors represented by heterogeneous dealer knowledge of the aggregate order flow and the fundamentals. Evidence on the cross-border order flow dependence of the Czech koruna risk premium, in accordance with the model prediction, is documented.
    Keywords: Bayesian learning, FX microstructure, optimizing dealer, uncovered parity.
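
    Schematically, the decomposition described above can be written as an uncovered-parity relation with three premium terms; the notation below is a reconstruction introduced here for illustration, not the paper's own.

% Schematic reconstruction; the symbols J, \pi^{OF}, \pi^{L} are illustrative labels, not the paper's notation.
\mathbb{E}_t[\mathrm{d}s_t] \;=\; (r_t - r^{*}_t)\,\mathrm{d}t
  \;+\; \underbrace{J_t\,\mathrm{d}t}_{\text{Jensen/covariance term}}
  \;+\; \underbrace{\pi^{\mathrm{OF}}_t\,\mathrm{d}t}_{\text{inter-dealer order-flow component}}
  \;+\; \underbrace{\pi^{\mathrm{L}}_t\,\mathrm{d}t}_{\text{Bayesian-learning component}}

    where $s_t$ is the log koruna/euro rate and $r_t$, $r^{*}_t$ are the domestic and foreign asset returns.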

    The Pragmatic Turn in Explainable Artificial Intelligence (XAI)

    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies will lack a well-defined goal. Aside from providing a clearer objective for XAI, focusing on understanding also allows us to relax the factivity condition on explanation, which is impossible to fulfill in many machine learning models, and to focus instead on the pragmatic conditions that determine the best fit between a model and the methods and devices deployed to understand it. After an examination of the different types of understanding discussed in the philosophical and psychological literature, I conclude that interpretative or approximation models not only provide the best way to achieve the objectual understanding of a machine learning model, but are also a necessary condition to achieve post hoc interpretability. This conclusion is partly based on the shortcomings of the purely functionalist approach to post hoc interpretability that seems to be predominant in most recent literature
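
    The role the paper assigns to interpretative or approximation models can be illustrated with a standard surrogate-model construction (the sketch below, with synthetic data and arbitrary model choices, assumes scikit-learn is available and shows the general technique, not anything specific to this paper): a shallow decision tree is fitted to the predictions of a black-box classifier and can then be read off as an approximate, human-interpretable account of its behaviour.

# Sketch of a global surrogate ("approximation model") for post hoc interpretation.
# Synthetic data and model choices are assumptions for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# The "black box" whose decisions we want to understand.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Interpretative model: a shallow tree trained to mimic the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how faithfully the surrogate reproduces the black box on the same data.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(X.shape[1])]))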