SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge
Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, despite recent progress, obtaining
plans that are both feasible (grounded in affordances) and cost-effective (in
plan length) remains a challenge. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate candidate actions (Say), guided by
learnable domain knowledge that evaluates each action's feasibility (Can)
and long-term reward/payoff (Pay), and uses heuristic search to select the
best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches.
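Read operationally, each candidate action receives a combined score from the
three estimates, and the search keeps the best-scoring action sequence. The
Python sketch below illustrates the decomposition under loud assumptions: the
stub scorers say_score, can_score and pay_score are hypothetical placeholders
for the paper's LLM and trained models, and greedy selection stands in for
the heuristic search the abstract describes.

    import math

    def say_score(history, action):
        # LLM log-probability of proposing `action` given the plan so far
        # (stub: uniform over actions).
        return 0.0

    def can_score(history, action):
        # Learned probability that `action` is feasible in the current
        # state (stub).
        return 1.0

    def pay_score(history, action):
        # Learned estimate of `action`'s long-term payoff (stub).
        return 1.0

    def plan(goal, candidate_actions, max_steps=10):
        # Greedily pick the action maximising
        # log p_say + log p_can + log p_pay at every step.
        history = [goal]
        for _ in range(max_steps):
            best = max(
                candidate_actions,
                key=lambda a: say_score(history, a)
                + math.log(max(can_score(history, a), 1e-9))
                + math.log(max(pay_score(history, a), 1e-9)),
            )
            history.append(best)
            if best == "done":
                break
        return history[1:]

Replacing the greedy arg-max with a beam over partial plans would bring the
sketch closer to the heuristic search described above.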
Top-Down Knowledge Compilation for Counting Modulo Theories
Propositional model counting (#SAT) can be solved efficiently when the input
formula is in deterministic decomposable negation normal form (d-DNNF).
Translating an arbitrary formula into a representation that allows inference
tasks, such as counting, to be performed efficiently, is called knowledge
compilation. Top-down knowledge compilation is a state-of-the-art technique for
solving #SAT problems that leverages the traces of exhaustive DPLL search to
obtain d-DNNF representations. While knowledge compilation is well studied for
propositional logic, knowledge compilation for the (quantifier-free) counting
modulo theories setting (#SMT) has received far less attention.
In this paper, we discuss compilation strategies for #SMT. We specifically
advocate for a top-down compiler based on the traces of exhaustive DPLL(T)
search.
Comment: 9 pages; submitted to the Workshop on Counting and Sampling 2023 at
SAT 2023
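As a toy illustration of the trace idea in the purely propositional (#SAT)
case, the Python sketch below runs exhaustive DPLL on a small CNF, keeps the
recursion tree as a d-DNNF-style trace, and counts models in one bottom-up
pass over it. The data structures and the decision heuristic are simplified
inventions for this sketch, not the DPLL(T)-based compiler advocated in the
paper.

    def simplify(clauses, lit):
        # Condition the CNF on `lit` being true: drop satisfied clauses and
        # delete the falsified literal elsewhere; None signals a conflict.
        out = []
        for c in clauses:
            if lit in c:
                continue
            reduced = [l for l in c if l != -lit]
            if not reduced:
                return None
            out.append(reduced)
        return out

    def compile_trace(clauses, variables):
        # Exhaustive DPLL whose recursion tree doubles as a d-DNNF-style
        # trace: ('or', var, low, high) decision nodes, true/false leaves.
        if clauses is None:
            return ('false',)
        if not clauses:
            return ('true', frozenset(variables))  # remaining vars are free
        v = next(iter(variables))                  # naive decision heuristic
        rest = variables - {v}
        return ('or', v,
                compile_trace(simplify(clauses, -v), rest),
                compile_trace(simplify(clauses, v), rest))

    def count_models(node):
        # #SAT in a single bottom-up pass over the compiled trace.
        if node[0] == 'false':
            return 0
        if node[0] == 'true':
            return 2 ** len(node[1])  # each free variable doubles the count
        return count_models(node[2]) + count_models(node[3])

    # (x1 or x2) and (not x1 or x3) over {x1, x2, x3} has 4 models.
    trace = compile_trace([[1, 2], [-1, 3]], frozenset({1, 2, 3}))
    print(count_models(trace))  # -> 4

The payoff of compiling rather than merely counting is reuse: once built, the
d-DNNF supports further tractable queries without repeating the search.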
Neural Probabilistic Logic Programming in Discrete-Continuous Domains
Neural-symbolic AI (NeSy) allows neural networks to exploit symbolic
background knowledge in the form of logic. It has been shown to aid learning in
the limited data regime and to facilitate inference on out-of-distribution
data. Probabilistic NeSy focuses on integrating neural networks with both logic
and probability theory, which additionally allows learning under uncertainty. A
major limitation of current probabilistic NeSy systems, such as DeepProbLog, is
their restriction to finite probability distributions, i.e., discrete random
variables. In contrast, deep probabilistic programming (DPP) excels in
modelling and optimising continuous probability distributions. Hence, we
introduce DeepSeaProbLog, a neural probabilistic logic programming language
that incorporates DPP techniques into NeSy. Doing so yields support for
inference and learning over both discrete and continuous probability
distributions under logical constraints. Our main contributions are 1) the
semantics of DeepSeaProbLog and its corresponding inference algorithm, 2) a
proven asymptotically unbiased learning algorithm, and 3) a series of
experiments that illustrate the versatility of our approach.
Comment: 27 pages, 9 figures
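To make the discrete-continuous setting concrete, the sketch below estimates
a query that mixes a Bernoulli fault indicator with a Gaussian sensor reading
under a logical rule. The model and the alarm rule are invented for
illustration, and plain Monte Carlo sampling stands in for DeepSeaProbLog's
actual inference algorithm.

    import random

    def sample_world():
        # One possible world: a discrete fault flag plus a continuous
        # sensor reading whose distribution depends on the flag.
        faulty = random.random() < 0.1       # Bernoulli(0.1)
        mean = 8.0 if faulty else 2.0
        reading = random.gauss(mean, 1.0)    # Normal(mean, 1)
        return faulty, reading

    def estimate_alarm_probability(n_samples=100_000):
        # Estimate P(alarm) for the rule  alarm :- faulty ; reading > 5.0
        # by counting sampled worlds in which the rule's body holds.
        hits = 0
        for _ in range(n_samples):
            faulty, reading = sample_world()
            if faulty or reading > 5.0:      # logical test on a mixed world
                hits += 1
        return hits / n_samples

    print(f"P(alarm) ~ {estimate_alarm_probability():.3f}")  # ~ 0.101

A system restricted to finite distributions cannot directly express the
comparison reading > 5.0 over a Gaussian, which is exactly the limitation of
DeepProbLog that the abstract highlights.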