132 research outputs found

    Principles and Practice of Explainable Machine Learning

    Artificial intelligence (AI) provides many opportunities to improve private and public life. Discovering patterns and structures in large troves of data in an automated manner is a core component of data science, and currently drives applications in diverse areas such as computational biology, law and finance. However, such a highly positive impact is coupled with significant challenges: how do we understand the decisions suggested by these systems so that we can trust them? In this report, we focus specifically on data-driven methods -- machine learning (ML) and pattern recognition models in particular -- so as to survey and distill the results and observations from the literature. The purpose of this report can be especially appreciated by noting that ML models are increasingly deployed in a wide range of businesses. However, with the increasing prevalence and complexity of methods, business stakeholders, at the very least, have a growing number of concerns about the drawbacks of models, data-specific biases, and so on. Analogously, data science practitioners are often not aware of approaches emerging from the academic literature, or may struggle to appreciate the differences between methods, and so end up using industry standards such as SHAP. Here, we have undertaken a survey to help industry practitioners (but also data scientists more broadly) understand the field of explainable machine learning better and apply the right tools. Our latter sections build a narrative around a putative data scientist, and discuss how she might go about explaining her models by asking the right questions.
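
    As a minimal sketch of the kind of tooling the abstract refers to (the model, dataset and plot are illustrative choices, not taken from the report; only the shap and scikit-learn calls are from those libraries' public APIs), a post-hoc explanation of a tree-based classifier might look as follows:

        import shap
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier

        # Illustrative setup: any fitted tree-based model over tabular data would do.
        X, y = load_breast_cancer(return_X_y=True, as_frame=True)
        model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

        # TreeExplainer computes SHAP values, i.e. per-feature attributions
        # for each individual prediction.
        explainer = shap.TreeExplainer(model)
        shap_values = explainer.shap_values(X)

        # A global summary plot ranks features by their average contribution.
        shap.summary_plot(shap_values, X)

    Local attributions of this kind are only one of the tools surveyed; the abstract's point is precisely that practitioners should understand when other methods are more appropriate.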

    Multi-Agent Only-Knowing Revisited

    Levesque introduced the notion of only-knowing to precisely capture the beliefs of a knowledge base. He also showed how only-knowing can be used to formalize non-monotonic behavior within a monotonic logic. Despite its appeal, all attempts to extend only-knowing to the many-agent case have undesirable properties. A belief model by Halpern and Lakemeyer, for instance, appeals to proof-theoretic constructs in the semantics and needs to axiomatize validity as part of the logic. It is also not clear how to generalize their ideas to a first-order case. In this paper, we propose a new account of multi-agent only-knowing which, for the first time, has a natural possible-world semantics for a quantified language with equality. We then provide, for the propositional fragment, a sound and complete axiomatization that faithfully lifts Levesque's proof theory to the many-agent case. We also discuss comparisons to the earlier approach by Halpern and Lakemeyer. Comment: Appears in Principles of Knowledge Representation and Reasoning 201
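
    As a concrete illustration of the single-agent notion being generalized (the example and notation below follow Levesque's original account informally and are not taken from this paper), only-knowing a knowledge base that contains a default yields a non-monotonic conclusion inside a monotonic modal logic:

        % O is the only-knowing modality, B the belief modality.
        \[
          \mathbf{O}\Bigl(\mathrm{Bird}(\mathit{tweety}) \wedge
            \forall x\,\bigl(\mathrm{Bird}(x) \wedge \neg\mathbf{B}\,\neg\mathrm{Flies}(x)
              \rightarrow \mathrm{Flies}(x)\bigr)\Bigr)
          \;\models\; \mathbf{B}\,\mathrm{Flies}(\mathit{tweety})
        \]
        % Adding \neg\mathrm{Flies}(\mathit{tweety}) to the only-known sentence retracts
        % the conclusion: non-monotonic behavior within a monotonic logic. A multi-agent
        % account indexes these modalities by agents (O_a, B_a).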

    Excursions in first-order logic and probability: infinitely many random variables, continuous distributions, recursive programs and beyond

    The unification of first-order logic and probability is a long-standing concern in philosophy, AI and mathematics. In this talk, I will briefly review our recent results on revisiting that unification. Although there are plenty of approaches in communities such as statistical relational learning, automated planning, and neuro-symbolic AI that leverage and develop languages with logical and probabilistic aspects, they almost always restrict the representation as well as the semantic framework in ways that do not fully explain how to combine first-order logic and probability theory in a general way. In many cases, this restriction is justified because it may be necessary to focus on practicality and efficiency. However, the search for a restriction-free mathematical theory remains ongoing. In this article, we discuss our recent results regarding the development of languages that support arbitrary quantification, possibly infinitely many random variables, both discrete and continuous distributions, as well as programming languages built on top of such features to include recursion and branching control.
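
    As a hedged, system-agnostic sketch of the programming-language features mentioned above (plain Python, not the authors' languages; all names are illustrative), a single program can combine recursion, branching control and both discrete and continuous randomness:

        import random

        def geometric(p):
            """A discrete distribution defined by recursion and branching:
            the number of coin flips until the first heads."""
            if random.random() < p:
                return 1
            return 1 + geometric(p)

        def noisy_count(p=0.5, sigma=0.1):
            """Mix discrete and continuous randomness: the random count is
            observed through additive Gaussian noise."""
            return geometric(p) + random.gauss(0.0, sigma)

        # Monte Carlo estimate of the expected noisy count (about 2.0 for p = 0.5).
        samples = [noisy_count() for _ in range(10_000)]
        print(sum(samples) / len(samples))

    A semantics that assigns such programs well-defined distributions, possibly over infinitely many random variables, is the kind of restriction-free account the abstract refers to.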

    Knowledge Representation and Acquisition for Ethical AI: Challenges and Opportunities


    Actions, Continuous Distributions and Meta-Beliefs


    Abstracting Probabilistic Models: Relations, Constraints and Beyond


    Probabilistic Planning by Probabilistic Programming

    Automated planning is a major topic of research in artificial intelligence, and enjoys a long and distinguished history. The classical paradigm assumes a distinguished initial state, comprising a set of facts, and is defined over a set of actions which change that state in one way or another. Planning in many real-world settings, however, is much more involved: an agent's knowledge is almost never simply a set of facts that are true, and the actions that the agent intends to execute never operate exactly the way they are supposed to. Thus, probabilistic planning attempts to incorporate stochastic models directly into the planning process. In this article, we briefly report on probabilistic planning through the lens of probabilistic programming: a programming paradigm that aims to ease the specification of structured probability distributions. In particular, we provide an overview of the features of two systems, HYPE and ALLEGRO, which emphasise different strengths of probabilistic programming that are particularly useful for complex modelling issues raised in probabilistic planning. Among other things, with these systems, one can instantiate planning problems with growing and shrinking state spaces, discrete and continuous probability distributions, and non-unique prior distributions in a first-order setting. Comment: Article at AAAI-18 Workshop on Planning and Inference
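
    HYPE and ALLEGRO have their own modelling languages; purely as a hedged illustration of the underlying idea, and not of either system's syntax, a stochastic action model and a sampling-based evaluation of a fixed plan can be written directly in Python (all names below are made up for the example):

        import random

        def move(state, step):
            """Stochastic action: the intended step succeeds with probability 0.8,
            otherwise the agent stays put (a toy noisy-actuator model)."""
            return state + step if random.random() < 0.8 else state

        def rollout(plan, start=0):
            """Execute a plan, i.e. a sequence of intended steps, under the model."""
            state = start
            for step in plan:
                state = move(state, step)
            return state

        def success_probability(plan, goal, trials=10_000):
            """Monte Carlo estimate of the probability that the plan reaches the goal."""
            return sum(rollout(plan) == goal for _ in range(trials)) / trials

        print(success_probability(plan=[1, 1, 1], goal=3))   # about 0.8 ** 3 = 0.512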

    Tractable Probabilistic Models for Ethical AI
