    A Probabilistic Modelling Approach for Rational Belief in Meta-Epistemic Contexts

    This work is part of the larger project INTEGRITY. Integrity develops a conceptual framework integrating beliefs with individual (and consensual group) decision making and action based on belief awareness. Comments and criticisms are most welcome via email. The text introduces the conceptual (internalism, externalism), quantitative (probabilism) and logical perspectives (logics for reasoning about probabilities by Fagin, Halpern and Megiddo, and MEL by Banerjee and Dubois) for the framework.

    A Probabilistic Modelling Approach for Rational Belief in Meta-Epistemic Contexts

    This work is part of the larger project INTEGRITY. Integrity develops a conceptual framework integrating beliefs with individual (and consensual group) decision making and action based on belief awareness. Comments and criticisms are most welcome via email. Starting with a thorough discussion of the conceptual embedding in existing schools of thought and literature, we develop a framework that aims to be empirically adequate yet scalable to epistemic states in which an agent might testify to uncertainly believe a propositional formula based on the acceptance that a propositional formula is possible, called accepted truth. The familiarity of human agents with probability assignments makes probabilism particularly appealing as a quantitative modelling framework for defeasible reasoning that aspires to empirical adequacy for gradual belief expressed as credence functions. We employ the inner measure induced by the probability measure, going back to Halmos, interpreted as an estimate of uncertainty. Doing so generally avoids requiring a human agent to directly testify probability assignments as strengths of belief and uncertainty. We provide a logical setting for the two concepts, uncertain belief and accepted truth, relying entirely on the formal frameworks of 'Reasoning about Probabilities' developed by Fagin, Halpern and Megiddo and the 'Metaepistemic logic MEL' developed by Banerjee and Dubois. The purport of Probabilistic Uncertainty is a framework that expresses, with a single quantitative concept (an inner measure induced by a probability measure), two epistemological concepts: possibilities as belief simpliciter, called accepted truth, and the agent's credence, called uncertain belief, for a criterion of evaluation, called rationality. The propositions accepted to be possible form the meta-epistemic context(s) in which the agent can reason and testify uncertain belief or suspend judgement.
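
    To make the central quantitative device concrete: the following is a rough Python sketch (not taken from the paper) of the inner measure P_*(A) = sup{P(B) : B measurable, B contained in A} on a finite space whose measurable sets are the unions of the blocks of a partition; all numbers are hypothetical.

        def inner_measure(prob, partition, event):
            # P_*(A): total probability of the partition blocks fully
            # contained in the (possibly non-measurable) event A.
            return sum(prob[block] for block in partition if block <= event)

        # Hypothetical toy model: four worlds, but the agent can only
        # measure the blocks {1,2} and {3,4}.
        partition = [frozenset({1, 2}), frozenset({3, 4})]
        prob = {partition[0]: 0.7, partition[1]: 0.3}

        print(inner_measure(prob, partition, {1, 2, 3}))  # 0.7: only {1,2} fits inside
        print(inner_measure(prob, partition, {3}))        # 0.0: no block fits inside

    Read this way, the gap between P_*(A) and the outer value 1 - P_*(complement of A) is the room the framework leaves for suspending judgement.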

    Flexibly Instructable Agents

    This paper presents an approach to learning from situated, interactive tutorial instruction within an ongoing agent. Tutorial instruction is a flexible (and thus powerful) paradigm for teaching tasks because it allows an instructor to communicate whatever types of knowledge an agent might need in whatever situations might arise. To support this flexibility, however, the agent must be able to learn multiple kinds of knowledge from a broad range of instructional interactions. Our approach, called situated explanation, achieves such learning through a combination of analytic and inductive techniques. It combines a form of explanation-based learning that is situated for each instruction with a full suite of contextually guided responses to incomplete explanations. The approach is implemented in an agent called Instructo-Soar that learns hierarchies of new tasks and other domain knowledge from interactive natural language instructions. Instructo-Soar meets three key requirements of flexible instructability that distinguish it from previous systems: (1) it can take known or unknown commands at any instruction point; (2) it can handle instructions that apply to either its current situation or to a hypothetical situation specified in language (as in, for instance, conditional instructions); and (3) it can learn, from instructions, each class of knowledge it uses to perform tasks.
    Comment: See http://www.jair.org/ for any accompanying file
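
    A highly simplified sketch of the control flow described above: try an analytic explanation first and fall back to an inductive, contextually guided response when the explanation is incomplete. All names and data structures here are invented for illustration; this is not Instructo-Soar's actual implementation or API.

        def try_explain(instruction, domain_rules, situation):
            # Analytic step: can known rules explain why the instructed
            # action is appropriate in the current (or hypothetical) situation?
            for rule in domain_rules:
                if rule["action"] == instruction and rule["when"](situation):
                    return rule
            return None  # explanation incomplete

        def handle_instruction(instruction, domain_rules, situation, ask):
            rule = try_explain(instruction, domain_rules, situation)
            if rule is not None:
                return ("generalize", rule)  # EBL path: store the generalized lesson
            # Contextually guided response: elicit the missing knowledge,
            # then learn a new rule inductively from the answer.
            reason = ask("Why should I '%s' here?" % instruction)
            new_rule = {"action": instruction, "when": lambda s: True, "why": reason}
            domain_rules.append(new_rule)
            return ("induce", new_rule)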

    Reinforcement Learning: A Survey

    This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word ``reinforcement.'' The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.
    Comment: See http://www.jair.org/ for any accompanying file
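
    To anchor the trial-and-error and exploration-versus-exploitation framing in something executable, here is a minimal tabular Q-learning agent with epsilon-greedy exploration on an invented five-state chain MDP. The algorithm family is classic reinforcement learning; this particular toy example is not from the survey.

        import random

        N_STATES, ACTIONS = 5, (0, 1)          # toy chain; 0 = left, 1 = right
        alpha, gamma, epsilon = 0.1, 0.9, 0.1  # step size, discount, exploration rate

        def step(s, a):
            s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
            reward = 1.0 if s2 == N_STATES - 1 else 0.0   # goal at the right end
            return s2, reward, s2 == N_STATES - 1

        Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
        for _ in range(500):
            s, done = 0, False
            while not done:
                # exploration vs. exploitation: random action with probability epsilon
                a = (random.choice(ACTIONS) if random.random() < epsilon
                     else max(ACTIONS, key=lambda b: Q[(s, b)]))
                s2, r, done = step(s, a)
                # delayed reinforcement propagates back through the bootstrap target
                Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
                s = s2

        # greedy policy after learning: should point right (1) in every state
        print({s: max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(N_STATES)})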

    Representational fluidity in embodied (artificial) cognition

    Theories of embodied cognition agree that the body plays some role in human cognition, but disagree on the precise nature of this role. While it is (together with the environment) fundamentally ingrained in the so-called 4E (or multi-E) cognition stance, there also exist interpretations wherein the body is merely an input/output interface for cognitive processes that are entirely computational. In the present paper, we show that even if one takes such a strong computationalist position, the role of the body must be more than an interface to the world. To achieve human cognition, the computational mechanisms of a cognitive agent must be capable not only of appropriate reasoning over a given set of symbolic representations; they must in addition be capable of updating the representational framework itself (leading to the titular representational fluidity). We demonstrate this by considering the necessary properties that an artificial agent with these abilities needs to possess. The core of the argument is that these updates must be falsifiable in the Popperian sense while simultaneously directing representational shifts in a direction that benefits the agent. We show that this is achieved by the progressive, bottom-up symbolic abstraction of low-level sensorimotor connections followed by top-down instantiation of testable perception-action hypotheses. We then discuss the fundamental limits of this representational updating capacity, concluding that only fully embodied learners exhibiting such a priori perception-action linkages are able to sufficiently ground spontaneously-generated symbolic representations and exhibit the full range of human cognitive capabilities. The present paper therefore has consequences both for the theoretical understanding of human cognition and for the design of autonomous artificial agents.
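
    As a toy rendering of that bottom-up/top-down loop (the discretization and hypothesis format are invented for illustration and are not the authors' model): an agent abstracts a symbol from low-level sensor readings, then subjects a perception-action hypothesis to Popperian falsification.

        import random

        def abstract_symbol(reading):
            # Bottom-up: collapse a low-level sensor reading into a coarse symbol.
            return "COLD" if reading <= 0.5 else "HOT"

        def environment(action):
            # Hidden ground truth the agent must discover: 'cool' lowers readings.
            return random.uniform(0.0, 0.4) if action == "cool" else random.uniform(0.3, 1.0)

        # Top-down: instantiate a testable perception-action hypothesis,
        # "after acting 'cool', I will perceive 'COLD'", and try to falsify it.
        hypothesis = {"action": "cool", "predicts": "COLD"}
        failures = sum(
            abstract_symbol(environment(hypothesis["action"])) != hypothesis["predicts"]
            for _ in range(100)
        )
        # A single counterexample falsifies the hypothesis in the Popperian
        # sense; surviving trials merely corroborate it.
        print("falsified" if failures else "corroborated in 100 trials")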