21 research outputs found

    On Exploiting Hitting Sets for Model Reconciliation

    Full text link
    In human-aware planning, a planning agent may need to explain to a human user why its plan is optimal. A popular approach, called model reconciliation, has the agent reconcile the differences between its model and the human's model so that the plan is also optimal in the human's model. In this paper, we present a logic-based framework for model reconciliation that extends beyond the realm of planning. More specifically, given a knowledge base KB1 entailing a formula φ and a second knowledge base KB2 not entailing it, model reconciliation seeks an explanation, in the form of a cardinality-minimal subset of KB1, whose integration into KB2 makes the entailment possible. Our approach, based on ideas originating in the analysis of inconsistencies, exploits the existing hitting set duality between minimal correction sets (MCSes) and minimal unsatisfiable sets (MUSes) to identify an appropriate explanation. However, unlike those works targeting inconsistent formulas, which assume a single knowledge base, our MCSes and MUSes are computed over two distinct knowledge bases. We conclude with an empirical evaluation of the new approach on planning instances, where it outperforms an existing state-of-the-art solver, and on generic non-planning instances from recent SAT competitions, for which no other solver exists.
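    The hitting set duality the abstract relies on can be illustrated in a few lines: every MUS is an inclusion-minimal hitting set of the MCSes, and vice versa. The following is a brute-force sketch on hand-picked clause-index sets, not the paper's implementation:

    ```python
    from itertools import chain, combinations

    def minimal_hitting_sets(families):
        """Enumerate inclusion-minimal hitting sets of a family of sets (brute force)."""
        universe = sorted(set(chain.from_iterable(families)))
        hits = []
        for r in range(1, len(universe) + 1):
            for cand in combinations(universe, r):
                s = set(cand)
                if all(s & f for f in families):       # s intersects every set
                    if not any(h < s for h in hits):   # no smaller hitting set inside s
                        hits.append(s)
        return hits

    # Toy example: if an unsatisfiable KB has MCSes {1} and {2, 3}, its MUSes
    # are exactly the minimal hitting sets of those MCSes.
    mcses = [{1}, {2, 3}]
    print(minimal_hitting_sets(mcses))  # → [{1, 2}, {1, 3}]
    ```

    Enumerating candidates in order of increasing size guarantees that any previously recorded hitting set that is a proper subset of the current candidate is found, so the minimality check stays correct.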

    Cautious Reasoning in ASP via Minimal models and Unsatisfiable Cores

    Get PDF
    Answer Set Programming (ASP) is a logic-based knowledge representation framework, supporting, among other reasoning modes, the central task of query answering. In the propositional case, query answering amounts to computing cautious consequences of the input program among the atoms in a given set of candidates, where a cautious consequence is an atom belonging to all stable models. Currently, the most efficient algorithms either iteratively verify the existence of a stable model of the input program extended with the complement of one heuristically selected candidate, or introduce a clause enforcing the falsity of at least one candidate, so that the solver is free to choose which candidate to falsify at any time during the computation of a stable model. This paper introduces new algorithms for the computation of cautious consequences, with the aim of driving the solver to search for stable models that discard more candidates. Specifically, one such algorithm enforces minimality on the set of true candidates, where different notions of minimality can be used, and another takes advantage of unsatisfiable-core computation. The algorithms are implemented in WASP, and experiments on benchmarks from the latest ASP competitions show that the new algorithms perform better than the state of the art.
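    The underlying notion is simple: cautious consequences are the candidate atoms true in every stable model, so each model found can only shrink an over-approximation. A minimal sketch with a toy stand-in for the solver (not WASP's actual interface):

    ```python
    def cautious_consequences(candidates, stable_models):
        """stable_models: iterable of sets of true atoms (a stand-in for a solver)."""
        upper = set(candidates)       # over-approximation of the cautious consequences
        for model in stable_models:   # each model can only remove candidates
            upper &= model
            if not upper:             # early exit: nothing left to confirm
                break
        return upper

    models = [{"a", "b", "c"}, {"a", "c"}, {"a", "b", "d"}]
    print(cautious_consequences({"a", "b", "c", "d"}, models))  # → {'a'}
    ```

    The algorithms in the paper differ precisely in how the next stable model is steered so that each iteration discards as many candidates as possible.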

    Monte-Carlo style UCT search for boolean satisfiability

    No full text
    In this paper, we investigate the feasibility of applying algorithms based on Upper Confidence bounds applied to Trees (UCT) [12] to the satisfiability of CNF formulas. We develop a new family of algorithms based on the idea of balancing exploitation (depth-first search) and exploration (breadth-first search), which can be combined with two different techniques for generating random playouts or with a heuristics-based evaluation function. We compare our algorithms with a DPLL-based algorithm and with WalkSAT, using the size of the tree and the number of flips as performance measures. While our algorithms only perform on par with DPLL on instances with little structure, they do quite well on structured instances, where they can effectively reuse information gathered in one iteration on the next. We also discuss the pros and cons of our different algorithms and conclude with a number of avenues for future work. © 2011 Springer-Verlag Berlin Heidelberg.
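    The exploitation/exploration balance UCT performs rests on the UCB1 selection rule. An illustrative sketch with toy numbers (the constant c = √2 is an assumption, not taken from the paper):

    ```python
    import math

    def ucb1(avg_reward, child_visits, parent_visits, c=math.sqrt(2)):
        """Score a child node: exploitation term plus exploration bonus."""
        if child_visits == 0:
            return float("inf")       # unvisited children are always tried first
        return avg_reward + c * math.sqrt(math.log(parent_visits) / child_visits)

    # Pick the child maximizing UCB1: (avg_reward, visits) pairs under a
    # parent node visited 100 times.
    children = [(0.6, 40), (0.5, 10), (0.0, 0)]
    best = max(range(len(children)),
               key=lambda i: ucb1(children[i][0], children[i][1], 100))
    print(best)  # → 2 (the unvisited child wins via the infinite bonus)
    ```

    Rarely visited children receive a large exploration bonus (breadth-first flavor), while high-reward children keep being selected (depth-first flavor), which is the trade-off the abstract describes.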

    Glycemic control after metabolic surgery: a Granger causality and graph analysis

    Get PDF
    The purpose of this study was to examine the contribution of nonesterified fatty acids (NEFA) and incretin to insulin resistance and diabetes amelioration after malabsorptive metabolic surgery that induces steatorrhea. In fact, NEFA infusion reduces glucose-stimulated insulin secretion, and high-fat diets predict diabetes development. Six healthy controls, 11 obese subjects, and 10 type 2 diabetic (T2D) subjects were studied before and 1 mo after biliopancreatic diversion (BPD). Twenty-four-hour plasma glucose, NEFA, insulin, C-peptide, glucagon-like peptide-1 (GLP-1), and gastric inhibitory polypeptide (GIP) time courses were obtained and analyzed by Granger causality and graph analyses. Insulin sensitivity and secretion were computed by the oral glucose minimal model. Before metabolic surgery, NEFA levels had the strongest influence on the other variables in both obese and T2D subjects. After surgery, GLP-1 and C-peptide levels controlled the system in obese and T2D subjects. Twenty-four-hour GIP levels were markedly reduced after BPD. Finally, not only did GLP-1 levels play a central role, but also insulin and C-peptide levels had a comparable relevance in the network of healthy controls. After BPD, insulin sensitivity was completely normalized in both obese and T2D individuals. Increased 24-h GLP-1 circulating levels positively influenced glucose homeostasis in both obese and T2D subjects who underwent a malabsorptive bariatric operation. In the latter, the reduction of plasma GIP levels also contributed to the improvement of glucose metabolism. It is possible that the combination of a pharmaceutical treatment reducing GIP and increasing GLP-1 plasma levels will contribute to better glycemic control in T2D. The application of Granger causality and graph analyses sheds new light on the pathophysiology of metabolic surgery.
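    The core Granger idea used in the study can be sketched in its simplest bivariate, lag-1 form (a toy illustration on synthetic data, far simpler than the paper's analysis): x "Granger-causes" y if adding x's past shrinks the error of predicting y beyond what y's own past achieves.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    x = rng.standard_normal(n)
    y = np.zeros(n)
    for t in range(1, n):                 # y is driven by lagged x plus small noise
        y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

    def rss(design, target):
        """Residual sum of squares of an ordinary least squares fit."""
        coef, *_ = np.linalg.lstsq(design, target, rcond=None)
        resid = target - design @ coef
        return float(resid @ resid)

    target = y[1:]
    restricted = np.column_stack([np.ones(n - 1), y[:-1]])        # y's own past only
    full = np.column_stack([np.ones(n - 1), y[:-1], x[:-1]])      # plus x's past
    print(rss(restricted, target) > 10 * rss(full, target))  # → True
    ```

    In practice the RSS reduction is assessed with an F-test and multiple lags; the sketch only shows why the comparison of nested regressions detects the directed influence.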