
    What should a robot learn from an infant? Mechanisms of action interpretation and observational learning in infancy

    The paper provides a summary of our recent research on preverbal infants (using violation-of-expectation and observational learning paradigms) demonstrating that one-year-olds interpret and draw systematic inferences about others’ goal-directed actions, and can rely on such inferences when imitating others’ actions or emulating their goals. To account for these findings it is proposed that one-year-olds apply a non-mentalistic action interpretational system, the ’teleological stance’, which represents actions by relating relevant aspects of reality (action, goal-state, and situational constraints) through the principle of rational action, which assumes that actions function to realize goal-states by the most efficient means available in the actor’s situation. The relevance of these research findings and the proposed theoretical model for realizing the goal of epigenetic robotics of building a ’socially relevant’ humanoid robot is discussed.

    PINTO: Faithful Language Reasoning Using Prompt-Generated Rationales

    Neural language models (LMs) have achieved impressive results on various language-based reasoning tasks by utilizing latent knowledge encoded in their own pretrained parameters. To make this reasoning process more explicit, recent works retrieve a rationalizing LM's internal knowledge by training or prompting it to generate free-text rationales, which can be used to guide task predictions made by either the same LM or a separate reasoning LM. However, rationalizing LMs require expensive rationale annotation and/or computation, without any assurance that their generated rationales improve LM task performance or faithfully reflect LM decision-making. In this paper, we propose PINTO, an LM pipeline that rationalizes via prompt-based learning, and learns to faithfully reason over rationales via counterfactual regularization. First, PINTO maps out a suitable reasoning process for the task input by prompting a frozen rationalizing LM to generate a free-text rationale. Second, PINTO's reasoning LM is fine-tuned to solve the task using the generated rationale as context, while regularized to output less confident predictions when the rationale is perturbed. Across four datasets, we show that PINTO significantly improves the generalization ability of the reasoning LM, yielding higher performance on both in-distribution and out-of-distribution test sets. Also, we find that PINTO's rationales are more faithful to its task predictions than those generated by competitive baselines. Comment: 19 pages, 6 figures, preprint.
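
    A minimal sketch of the counterfactual-regularization idea described in this abstract, written in PyTorch. The names `model`, `pinto_style_loss`, and `lam` are illustrative assumptions, as is the choice of a KL-to-uniform penalty; the abstract only states that the reasoning LM should be less confident when the rationale is perturbed.

```python
import torch
import torch.nn.functional as F

def pinto_style_loss(model, question, rationale, perturbed_rationale, label, lam=1.0):
    """Counterfactual-regularized training step (illustrative sketch).

    `model(question, rationale)` is assumed to return class logits; `label`
    holds gold class indices. The actual pipeline prompts a frozen LM for the
    rationale and fine-tunes a separate reasoning LM.
    """
    # Task loss: answer correctly when given the prompt-generated rationale.
    logits = model(question, rationale)
    task_loss = F.cross_entropy(logits, label)

    # Counterfactual regularization: with a perturbed rationale, push the
    # output distribution toward uniform so the model is less confident.
    perturbed_logits = model(question, perturbed_rationale)
    uniform = torch.full_like(perturbed_logits, 1.0 / perturbed_logits.size(-1))
    reg_loss = F.kl_div(
        F.log_softmax(perturbed_logits, dim=-1), uniform, reduction="batchmean"
    )
    return task_loss + lam * reg_loss
```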

    How Spinal Neural Networks Reduce Discrepancies between Motor Intention and Motor Realization

    This paper attempts a rational, step-by-step reconstruction of many aspects of the mammalian neural circuitry known to be involved in the spinal cord's regulation of opposing muscles acting on skeletal segments. Mathematical analyses and local circuit simulations based on neural membrane equations are used to clarify the behavioral function of five fundamental cell types, their complex connectivities, and their physiological actions. These cell types are: α-MNs, γ-MNs, IaINs, IbINs, and Renshaw cells. It is shown that many of the complexities of spinal circuitry are necessary to ensure near-invariant realization of motor intentions when descending signals of two basic types independently vary over large ranges of magnitude and rate of change. Because these two types of signal afford independent control, or Factorization, of muscle LEngth and muscle TEnsion, our construction was named the FLETE model (Bullock and Grossberg, 1988b, 1989). The present paper significantly extends the range of experimental data encompassed by this evolving model. National Science Foundation (IRI-87-16960, IRI-90-24877); Instituto Tecnológico y de Estudios Superiores de Monterrey.
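
    A toy Python illustration of the factorization idea: two opposing motor-neuron pools obey a shunting membrane equation and receive a reciprocal "length" command plus a shared "tension" (co-contraction) command. The equation form, parameter values, and function name are assumptions for illustration; the actual FLETE circuitry (γ-MNs, Ia/Ib interneurons, Renshaw cells) is not modeled here.

```python
def simulate_opponent_pools(a_cmd, p_cmd, steps=2000, dt=0.001):
    """Toy shunting-membrane sketch of two opposing alpha-MN pools.

    a_cmd in [0, 1]: reciprocal command shifting activity between the pools
        (controls joint position / muscle LEngth).
    p_cmd >= 0: co-contraction command exciting both pools (muscle TEnsion).
    Parameter values are arbitrary illustrative choices.
    """
    decay, upper, lower = 1.0, 1.0, 0.2          # passive decay and shunting bounds
    v1 = v2 = 0.0                                # membrane potentials of the two pools
    for _ in range(steps):
        exc1 = a_cmd + p_cmd                     # descending excitation, pool 1
        exc2 = (1.0 - a_cmd) + p_cmd             # descending excitation, pool 2
        inh1, inh2 = max(v2, 0.0), max(v1, 0.0)  # mutual (reciprocal) inhibition
        v1 += dt * (-decay * v1 + (upper - v1) * exc1 - (v1 + lower) * inh1)
        v2 += dt * (-decay * v2 + (upper - v2) * exc2 - (v2 + lower) * inh2)
    return max(v1, 0.0), max(v2, 0.0)            # output rates of the opposing pools

# Raising p_cmd stiffens both pools; changing a_cmd shifts the balance between them.
print(simulate_opponent_pools(a_cmd=0.7, p_cmd=0.2))
```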

    Sum-of-Parts Models: Faithful Attributions for Groups of Features

    An explanation of a machine learning model is considered "faithful" if it accurately reflects the model's decision-making process. However, explanations such as feature attributions for deep learning are not guaranteed to be faithful, and can produce potentially misleading interpretations. In this work, we develop Sum-of-Parts (SOP), a class of models whose predictions come with grouped feature attributions that are faithful-by-construction. This model decomposes a prediction into an interpretable sum of scores, each of which is directly attributable to a sparse group of features. We evaluate SOP on benchmarks with standard interpretability metrics, and in a case study, we use the faithful explanations from SOP to help astrophysicists discover new knowledge about galaxy formation.
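
    A minimal sketch, assuming PyTorch, of how a sum-of-parts prediction can make grouped attributions faithful by construction: each group's score enters the output only through an explicit sum, so the score is that group's contribution. The class name, the sigmoid group masks, and the shared linear scorer are illustrative assumptions; in particular, the sparsity the paper requires of the groups is not enforced here.

```python
import torch
import torch.nn as nn

class SumOfPartsSketch(nn.Module):
    """Illustrative sum-of-parts style model (not the authors' code)."""

    def __init__(self, num_features, num_groups, num_classes):
        super().__init__()
        # Learned group masks over input features (ideally sparse).
        self.group_logits = nn.Parameter(torch.randn(num_groups, num_features))
        # Shared scorer applied to each masked copy of the input.
        self.scorer = nn.Linear(num_features, num_classes)

    def forward(self, x):                                 # x: (batch, features)
        masks = torch.sigmoid(self.group_logits)          # (groups, features)
        masked = x.unsqueeze(1) * masks.unsqueeze(0)      # (batch, groups, features)
        group_scores = self.scorer(masked)                # (batch, groups, classes)
        prediction = group_scores.sum(dim=1)              # sum of parts: (batch, classes)
        return prediction, group_scores                   # scores are the attributions
```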

    Cultures of Compliance

    There has been a cultural turn in discussion and debates about the promise of corporate compliance efforts. These efforts are occurring quickly, without great confidence in their efficacy; thus the interest in culture. This article explores what a culture of compliance means and why it is so hard to achieve. The dark side that enables non-compliance in organizations is powerful and often hidden from view, working via scripts that rationalize or normalize, denigrations of regulation, and celebrations of beliefs and attitudes that bring with them compliance dangers. The article addresses how both culture and compliance should be judged by those wishing for better corporate behavior.

    Saliency Map Verbalization: Comparing Feature Importance Representations from Model-free and Instruction-based Methods

    Saliency maps can explain a neural model's predictions by identifying important input features, but they are difficult for laypeople to interpret, especially for instances with many features. In order to make them more accessible, we formalize the underexplored task of translating saliency maps into natural language and compare methods that address two key challenges of this approach: what and how to verbalize. In both automatic and human evaluation setups, using token-level attributions from text classification tasks, we compare two novel methods (search-based and instruction-based verbalizations) against conventional feature importance representations (heatmap visualizations and extractive rationales), measuring simulatability, faithfulness, helpfulness, and ease of understanding. Instructing GPT-3.5 to generate saliency map verbalizations yields plausible explanations that include associations, abstractive summarization, and commonsense reasoning, achieving by far the highest human ratings, but they do not faithfully capture numeric information and are inconsistent in their interpretation of the task. In comparison, our search-based, model-free verbalization approach efficiently completes templated verbalizations and is faithful by design, but falls short in helpfulness and simulatability. Our results suggest that saliency map verbalization makes feature attribution explanations more comprehensible and less cognitively challenging for humans than conventional representations. Comment: ACL 2023 Workshop on Natural Language Reasoning and Structured Explanations (NLRSE).
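
    A minimal sketch in Python of what a model-free, template-based verbalization might look like, assuming token-level attribution scores are already available. The template wording, the top-k cut-off, and the function name are illustrative assumptions rather than the paper's actual search-based procedure.

```python
def verbalize_saliency(tokens, scores, label, top_k=3):
    """Fill a fixed template with the most influential tokens (sketch)."""
    # Rank tokens by absolute attribution and keep the most influential ones.
    ranked = sorted(zip(tokens, scores), key=lambda pair: abs(pair[1]), reverse=True)
    top = [tok for tok, _ in ranked[:top_k]]
    return (f"The model predicted '{label}' mainly because of the words "
            + ", ".join(f"'{t}'" for t in top) + ".")

# Example usage with token-level attributions from a text classifier.
print(verbalize_saliency(
    ["the", "movie", "was", "wonderful"], [0.02, 0.10, 0.01, 0.85], "positive"))
```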