
    SDRL: Interpretable and Data-efficient Deep Reinforcement Learning Leveraging Symbolic Planning

    Deep reinforcement learning (DRL) has achieved great success by learning directly from high-dimensional sensory inputs, yet it is notorious for its lack of interpretability. Interpretability of subtasks is critical in hierarchical decision-making, as it increases the transparency of black-box-style DRL approaches and helps RL practitioners better understand the high-level behavior of the system. In this paper, we introduce symbolic planning into DRL and propose a framework of Symbolic Deep Reinforcement Learning (SDRL) that can handle both high-dimensional sensory inputs and symbolic planning. Task-level interpretability is enabled by relating symbolic actions to options. This framework features a planner -- controller -- meta-controller architecture, whose components take charge of subtask scheduling, data-driven subtask learning, and subtask evaluation, respectively. The three components cross-fertilize each other and eventually converge to an optimal symbolic plan along with the learned subtasks, bringing together the long-term planning capability of symbolic knowledge with end-to-end reinforcement learning directly from high-dimensional sensory input. Experimental results validate the interpretability of the subtasks, along with improved data efficiency compared with state-of-the-art approaches.
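    The three-component loop described in this abstract can be sketched concretely. Below is a minimal, hypothetical Python sketch of how a planner, per-subtask controllers, and a meta-controller could cross-fertilize: the planner schedules subtasks, each controller learns its subtask from low-level experience, and the meta-controller evaluates the learned subtasks to inform the next plan. The toy chain environment, the subtask names (fetch_key, open_door), and the greedy stand-in for the symbolic planner are all illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a planner--controller--meta-controller loop.
# All names and the toy environment are illustrative assumptions.
import random
from collections import defaultdict

class ToySubtaskEnv:
    """10-state chain; reaching state 9 completes the subtask."""
    def reset(self):
        self.s = 0
        return self.s

    def step(self, move):  # move: 0 = left, 1 = right
        self.s = max(0, min(9, self.s + (1 if move == 1 else -1)))
        done = self.s == 9
        return self.s, (1.0 if done else -0.01), done

class Controller:
    """Data-driven subtask learning: tabular Q-learning over raw states."""
    def __init__(self, alpha=0.5, gamma=0.99, eps=0.1):
        self.q = defaultdict(float)
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, s):
        if random.random() < self.eps:
            return random.choice((0, 1))
        return max((0, 1), key=lambda m: self.q[(s, m)])

    def learn(self, s, m, r, s2):
        best = max(self.q[(s2, 0)], self.q[(s2, 1)])
        self.q[(s, m)] += self.alpha * (r + self.gamma * best - self.q[(s, m)])

symbolic_actions = ["fetch_key", "open_door"]       # hypothetical subtasks
controllers = {a: Controller() for a in symbolic_actions}
subtask_value = defaultdict(float)                  # meta-controller's evaluations

for episode in range(200):
    # Planner: schedule subtasks greedily by their currently evaluated value
    # (a stand-in for a real symbolic planner, e.g. an ASP or PDDL solver).
    plan = sorted(symbolic_actions, key=lambda a: -subtask_value[a])
    for action in plan:
        env, ctrl = ToySubtaskEnv(), controllers[action]
        s, ret = env.reset(), 0.0
        for _ in range(50):                         # controller executes the option
            m = ctrl.act(s)
            s2, r, done = env.step(m)
            ctrl.learn(s, m, r, s2)
            ret += r
            s = s2
            if done:
                break
        # Meta-controller: evaluate each subtask via a running average of its return.
        subtask_value[action] += 0.1 * (ret - subtask_value[action])

print({a: round(v, 2) for a, v in subtask_value.items()})
```

    In this toy version, the meta-controller's running-average evaluation plays the role of subtask evaluation, feeding back into the planner's schedule on the next episode, which mirrors the cross-fertilization the abstract describes.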

    Can Rats Reason?

    Since at least the mid-1980s, claims have been made for rationality in rats: for example, that rats are capable of inferential reasoning (Blaisdell, Sawa, Leising, & Waldmann, 2006; Bunsey & Eichenbaum, 1996), that they can make adaptive decisions about future behavior (Foote & Crystal, 2007), or that they are capable of knowledge in propositional-like form (Dickinson, 1985). The stakes are rather high, because these capacities imply concept possession, and on some views (e.g., Rödl, 2007; Savanah, 2012) rationality indicates self-consciousness. I evaluate the case for rat rationality by analyzing five key research paradigms: spatial navigation, metacognition, transitive inference, causal reasoning, and goal orientation. I conclude that the observed behaviors need not imply rationality on the part of the subjects. Rather, the behavior can be accounted for by noncognitive processes such as hard-wired, species-typical predispositions, associative learning, or (nonconceptual) affordance detection. These mechanisms do not necessarily require or implicate the capacity for rationality. As such, there is as yet insufficient evidence that rats can reason. I end by proposing the ‘Staircase Test,’ an experiment designed to provide convincing evidence of rationality in rats.

    Building machines that learn and think about morality

    Lake et al. propose three criteria which, they argue, will bring artificial intelligence (AI) systems closer to human cognitive abilities. In this paper, we explore the application of these criteria to a particular domain of human cognition: our capacity for moral reasoning. In doing so, we explore a set of considerations relevant to the development of AI moral decision-making. Our main focus is on the relation between dual-process accounts of moral reasoning and model-free/model-based forms of machine learning. We also discuss how work in embodied and situated cognition could provide a valuable perspective on future research.
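    The model-free/model-based distinction the abstract draws on can be illustrated with a toy contrast. The sketch below (hypothetical; the tiny MDP and all names are assumptions, not from the paper) computes action values two ways: model-based, by planning over a known transition and reward model (the analogue of deliberative reasoning in dual-process accounts), and model-free, by caching values from sampled experience (the analogue of habitual responding).

```python
# Hypothetical contrast of model-based planning vs. model-free value caching.
import random

# Tiny deterministic MDP: 3 states, 2 actions; state 2 is absorbing.
T = {(0, 0): 1, (0, 1): 2, (1, 0): 2, (1, 1): 0, (2, 0): 2, (2, 1): 2}
R = {(1, 0): 1.0}                      # reward for taking action 0 in state 1
GAMMA = 0.9

def model_based_values():
    """Model-based ('deliberative'): plan by value iteration over the known model."""
    v = {s: 0.0 for s in range(3)}
    for _ in range(100):
        v = {s: max(R.get((s, a), 0.0) + GAMMA * v[T[(s, a)]] for a in (0, 1))
             for s in range(3)}
    return v

def model_free_values(episodes=2000):
    """Model-free ('habitual'): cache action values from sampled experience."""
    q = {(s, a): 0.0 for s in range(3) for a in (0, 1)}
    for _ in range(episodes):
        s = 0
        for _ in range(10):
            a = random.choice((0, 1))              # pure exploration for simplicity
            s2, r = T[(s, a)], R.get((s, a), 0.0)
            q[(s, a)] += 0.1 * (r + GAMMA * max(q[(s2, 0)], q[(s2, 1)]) - q[(s, a)])
            s = s2
    return q

print(model_based_values())
print(model_free_values())
```

    The two approaches converge to similar values here, but they diverge under change: if the reward table R is edited, the model-based planner adapts immediately on the next sweep, while the model-free cache must be relearned from new experience, mirroring the flexibility/speed trade-off that dual-process accounts attribute to deliberative versus habitual moral judgment.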