
    Value withdrawal explanations: a theoretical tool for programming environments

    Constraint logic programming combines declarativity and efficiency thanks to constraint solvers implemented for specific domains. Value withdrawal explanations have been used efficiently in several constraint programming environments, but no formalization of them exists; this paper is an attempt to fill that gap. Furthermore, we hope that this theoretical tool can help to validate some programming environments. A value withdrawal explanation is a tree describing the withdrawal of a value during domain reduction by local consistency notions and labeling. Domain reduction is formalized by a search tree using two kinds of operators: operators for local consistency notions and operators for labeling. These operators are defined by sets of rules. Proof trees are built with respect to these rules. For each removed value, there exists such a proof tree, which is the withdrawal explanation of this value. (Comment: 14 pages; Alexandre Tessier, editor; WLPE 2002, http://xxx.lanl.gov/abs/cs.SE/020705)
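    A minimal sketch (in Python, with illustrative names such as Removal and revise_lt that are not from the paper) of the idea: a local-consistency operator for the constraint x < y records, for each withdrawn value, a proof tree whose children are the earlier withdrawals it is a consequence of, and a labeling operator records its decisions as premise-free withdrawals.

        from dataclasses import dataclass, field

        @dataclass
        class Removal:
            """Node of an explanation tree: a withdrawn value, the rule that
            withdrew it, and the earlier withdrawals it depends on."""
            var: str
            value: int
            rule: str
            premises: list = field(default_factory=list)

            def show(self, depth=0):
                print("  " * depth + f"{self.var} != {self.value}  [{self.rule}]")
                for p in self.premises:
                    p.show(depth + 1)

        INIT = {"x": {1, 2, 3}, "y": {1, 2, 3}}

        def revise_lt(domains, removed):
            """Local-consistency operator for x < y: withdraw unsupported values
            of x; the premises are the withdrawals of the lost supports."""
            for v in sorted(INIT["x"]):
                if v in domains["x"] and not any(v < w for w in domains["y"]):
                    lost = [w for w in INIT["y"] if v < w]
                    removed[("x", v)] = Removal("x", v, "x<y",
                                                [removed[("y", w)] for w in lost])
                    domains["x"].discard(v)

        domains = {"x": set(INIT["x"]), "y": set(INIT["y"])}
        removed = {}
        revise_lt(domains, removed)      # x != 3: never any support in D(y)
        for w in (1, 3):                 # labeling operator: decide y = 2
            removed[("y", w)] = Removal("y", w, "label y=2")
            domains["y"].discard(w)
        revise_lt(domains, removed)      # x != 2, a consequence of y != 3
        removed[("x", 2)].show()         # x != 2 [x<y]  <-  y != 3 [label y=2]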

    Towards declarative diagnosis of constraint programs over finite domains

    The paper proposes a theoretical approach to the debugging of constraint programs based on a notion of explanation tree. The proposed approach is an attempt to adapt algorithmic debugging to constraint programming. In this theoretical framework for domain reduction, explanations are proof trees explaining value removals. These proof trees are defined by inductive definitions which express the removal of a value as a consequence of other value removals. Explanations may be considered the essence of constraint programming. They are a declarative view of the computation trace. Diagnosis consists in locating an error in an explanation rooted at a symptom. (Comment: In M. Ronsse, K. De Bosschere (eds), proceedings of the Fifth International Workshop on Automated Debugging (AADEBUG 2003), September 2003, Ghent. cs.SE/030902)
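    A hedged sketch of this diagnosis loop, reusing the Removal proof-tree nodes from the sketch above; the oracle stands in for the user's knowledge of the intended semantics and is an assumption, not an interface from the paper.

        def diagnose(symptom, oracle):
            """Algorithmic-debugging descent: starting from a removal the user
            did not expect, follow the first unexpected premise; when every
            premise is expected, the rule applied at this node is the error."""
            for premise in symptom.premises:
                if not oracle(premise):
                    return diagnose(premise, oracle)
            return symptom.rule

        # If the user judges both x != 2 and its premise y != 3 erroneous,
        # the error is located in the labeling decision:
        print(diagnose(removed[("x", 2)], lambda r: False))   # -> label y=2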

    Explanations and Proof Trees

    This paper proposes a model for explanations in a set-theoretical framework using the notions of closure or fixpoint. In this approach, sets of rules associated with monotonic operators make it possible to define proof trees. The proof trees may be considered a declarative view of the trace of a computation. We claim they are explanations of the results of a computation. This notion of explanation is applied to constraint logic programming, where it is used for declarative error diagnosis. It is also applied to constraint programming, where it is used for constraint retraction.
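    A small sketch of the closure construction under an invented toy rule set: the rules induce a monotonic "immediate consequence" operator, its least fixpoint is reached by iteration, and each derived fact keeps the proof tree that explains it.

        RULES = [
            ((), "a"),                  # a fact is a rule with no premises
            (("a",), "b"),
            (("a", "b"), "c"),
        ]

        def closure(rules):
            proofs = {}                 # fact -> proof tree (fact, subtrees)
            changed = True
            while changed:              # iterate the monotonic operator
                changed = False
                for premises, head in rules:
                    if head not in proofs and all(p in proofs for p in premises):
                        proofs[head] = (head, [proofs[p] for p in premises])
                        changed = True
            return proofs               # the least fixpoint, with explanations

        print(closure(RULES)["c"])      # ('c', [('a', []), ('b', [('a', [])])])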

    Simulated Annealing Applied to the Resolution of Phylogenetic Reconstruction with Maximum Parsimony


    Modelling animal conditioning with factored representations in a dual learning system: explaining inter-individual differences at the behavioural and neurophysiological levels

    Pavlovian conditioning, the acquisition of responses to neutral stimuli previously paired with rewards, and instrumental conditioning, the acquisition of goal-oriented responses, are central to our learning capacities. However, despite some evidence of entanglement, they are mainly studied separately. Reinforcement learning (RL), learning by trial and error to reach goals, is central to models of instrumental conditioning, while models of Pavlovian conditioning rely on more dedicated and often incompatible architectures. This complicates the study of their interactions. We aim to find concepts which, combined with RL models, may provide a unifying architecture allowing such a study. We develop a model that combines a classical RL system, learning values over states, with a revised RL system, learning values over individual stimuli and biasing behaviour towards reward-related ones. It explains maladaptive behaviours in pigeons by the detrimental interaction of the two systems, and inter-individual differences in rats by a simple variation, at the population level, in the contribution of each system to the overall behaviour. It explains dopaminergic patterns that are unexpected under the dominant hypothesis that dopamine parallels a reward prediction error signal, by computing such a signal over features rather than states, and it is thereby compatible with an alternative hypothesis that dopamine also contributes to the acquisition of incentive salience, making reward-related stimuli wanted for themselves. The present model shows promising properties for the investigation of Pavlovian conditioning, instrumental conditioning and their interactions.
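    A minimal sketch of the dual-system architecture described above, not the thesis' actual model: the toy autoshaping task, the names q_state and v_feat, and the mixing weight omega are illustrative assumptions.

        import math, random

        ACTIONS = ["approach_lever", "approach_magazine"]
        FEATURE = {"approach_lever": "lever", "approach_magazine": "magazine"}

        q_state = {a: 0.0 for a in ACTIONS}       # classical RL: values over states
        v_feat = {"lever": 0.0, "magazine": 0.0}  # revised RL: values over stimuli
        alpha, omega = 0.1, 0.8                   # learning rate; mixing weight

        def value(a):
            """Mix both systems; omega biases choice toward valued stimuli."""
            return (1 - omega) * q_state[a] + omega * v_feat[FEATURE[a]]

        def choose(beta=3.0):
            weights = [math.exp(beta * value(a)) for a in ACTIONS]
            r = random.uniform(0, sum(weights))
            for a, w in zip(ACTIONS, weights):
                r -= w
                if r <= 0:
                    return a
            return ACTIONS[-1]

        for trial in range(300):          # lever, then food, on every trial
            a = choose()
            reward = 1.0
            # classical system: prediction error computed over the state
            q_state[a] += alpha * (reward - q_state[a])
            # revised system: the error is computed over individual stimuli,
            # so the reward-predictive lever also acquires value
            for f in ("lever", "magazine"):
                v_feat[f] += alpha * (reward - v_feat[f])

    Varying omega across simulated individuals then shifts behaviour between lever-directed and magazine-directed response profiles, the kind of population-level variation the abstract invokes.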

    A Bottom-Up Implementation of Path-Relinking for Phylogenetic Reconstruction Applied to Maximum Parsimony

    In this article we describe a bottom-up implementation of Path-Relinking for phylogenetic trees in the context of solving the Maximum Parsimony problem under the Fitch optimality criterion. This bottom-up implementation is compared to two versions of an existing top-down implementation. We show that our implementation is more efficient and better suited both to comparing trees and to estimating the distance between two trees in terms of the number of transformations.
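    For context, a hedged sketch of the Fitch criterion mentioned above, for one character on an invented rooted binary tree: a bottom-up pass keeps the intersection of the children's state sets when it is non-empty, and otherwise their union at the cost of one change.

        def fitch(tree, leaf_states):
            """Return (state set, parsimony score) for one character.
            Trees are nested 2-tuples, e.g. (("t1", "t2"), ("t3", "t4"))."""
            if isinstance(tree, str):            # leaf: its observed state
                return {leaf_states[tree]}, 0
            ls, lc = fitch(tree[0], leaf_states)
            rs, rc = fitch(tree[1], leaf_states)
            if ls & rs:
                return ls & rs, lc + rc          # agreement: no change here
            return ls | rs, lc + rc + 1          # disagreement: one change

        states = {"t1": "A", "t2": "A", "t3": "C", "t4": "G"}
        print(fitch((("t1", "t2"), ("t3", "t4")), states)[1])   # score: 2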