ChatABL: Abductive Learning via Natural Language Interaction with ChatGPT
Large language models (LLMs) such as ChatGPT have recently demonstrated
significant potential in mathematical reasoning, providing a valuable
reasoning paradigm consistent with human natural language. However, LLMs
currently have difficulty bridging perception, language understanding, and
reasoning capabilities, because the underlying information flows among them
are incompatible, making it challenging to accomplish tasks autonomously. On
the other hand, abductive learning (ABL) frameworks, which integrate
perception and reasoning, have seen significant success in the inverse
decipherment of incomplete facts, but they are limited by the lack of
semantic understanding of logical reasoning rules and by their dependence on
complicated domain-knowledge representations. This paper presents a novel
method (ChatABL) that integrates LLMs into the ABL framework, aiming to unify
the three abilities in a more user-friendly and understandable manner. The
proposed method uses the strengths of LLMs in understanding and logical
reasoning to correct incomplete logical facts, optimizing the performance of
the perceptual module by summarizing and reorganizing reasoning rules
expressed in natural language. In turn, the perceptual module provides the
necessary reasoning examples to the LLM in natural-language form. The
variable-length handwritten-equation deciphering task, an abstract
formulation of Mayan calendar decoding, is used as a testbed to demonstrate
that ChatABL's reasoning ability exceeds that of most existing
state-of-the-art methods, as supported by comparative studies. To the best of
our knowledge, ChatABL is the first attempt to explore a new pattern for
further approaching human-level cognitive ability via natural language
interaction with ChatGPT.
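The abductive correction step described in the abstract (revising the perceptual module's uncertain predictions so that they satisfy reasoning rules stated in natural language) can be sketched as a toy Python loop. This is an illustration only, not the authors' code: the rule check stands in for the LLM interaction, and all names and probabilities are made up.

```python
import itertools

# Hedged sketch of one abductive correction step in the spirit of ChatABL.
# The "LLM" role -- checking candidate facts against a reasoning rule stated
# in natural language -- is stubbed by a plain Python predicate.
RULE_TEXT = "the first two digits must sum to the third"  # natural-language rule

def rule_holds(facts):
    # Stand-in for the LLM's consistency check against RULE_TEXT.
    a, b, c = facts
    return a + b == c

def abduce(perception_probs):
    """Return the most probable digit assignment consistent with the rule.

    perception_probs[i][d] = P(symbol i is digit d), as output by the
    perceptual module."""
    best, best_p = None, 0.0
    for facts in itertools.product(range(10), repeat=3):
        p = 1.0
        for i, d in enumerate(facts):
            p *= perception_probs[i].get(d, 0.0)
        if rule_holds(facts) and p > best_p:
            best, best_p = facts, p
    return best  # corrected pseudo-labels for retraining the perceptual module

# Perception is fairly confident in "2 + 3 = 6", which violates the rule;
# abduction revises the last symbol to the consistent "2 + 3 = 5".
probs = [{2: 0.9, 3: 0.1}, {3: 0.8, 2: 0.2}, {6: 0.6, 5: 0.4}]
print(abduce(probs))  # prints (2, 3, 5)
```

The corrected facts would then serve as pseudo-labels for the next round of perceptual training, closing the loop the abstract describes.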
DeepProbLog: Neural Probabilistic Logic Programming
We introduce DeepProbLog, a probabilistic logic programming language that
incorporates deep learning by means of neural predicates. We show how existing
inference and learning techniques can be adapted for the new language. Our
experiments demonstrate that DeepProbLog supports (i) both symbolic and
subsymbolic representations and inference, (ii) program induction, (iii)
probabilistic (logic) programming, and (iv) (deep) learning from examples. To
the best of our knowledge, this work is the first to propose a framework where
general-purpose neural networks and expressive probabilistic-logical modeling
and reasoning are integrated in a way that exploits the full expressiveness
and strengths of both worlds and can be trained end-to-end based on examples.
Comment: Accepted for spotlight at NeurIPS 2018.
A Pseudo-Semantic Loss for Autoregressive Models with Logical Constraints
Neuro-symbolic AI bridges the gap between purely symbolic and neural
approaches to learning. This often requires maximizing the likelihood of a
symbolic constraint w.r.t. the neural network's output distribution. Such output
distributions are typically assumed to be fully-factorized. This limits the
applicability of neuro-symbolic learning to the more expressive autoregressive
distributions, e.g., transformers. Under such distributions, computing the
likelihood of even simple constraints is #P-hard. Instead of attempting to
enforce the constraint on the entire output distribution, we propose to do so
on a random, local approximation thereof. More precisely, we optimize the
likelihood of the constraint under a pseudolikelihood-based approximation
centered around a model sample. Our approximation is factorized, allowing the
reuse of solutions to sub-problems, a main tenet for efficiently computing
neuro-symbolic losses. Moreover, it is a local, high-fidelity approximation of
the likelihood, exhibiting low entropy and KL-divergence around the model
sample. We evaluate our approach on Sudoku and shortest-path prediction cast as
autoregressive generation, and observe that we greatly improve upon the base
model's ability to predict logically-consistent outputs. We also evaluate on
the task of detoxifying large language models. Using a simple constraint
disallowing a list of toxic words, we are able to steer the model's outputs
away from toxic generations, achieving SoTA detoxification compared to previous
approaches. Comment: Updated detoxification experiments; moved example toxic
generations to GitHub and added link.
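The pseudolikelihood idea can be made concrete on a toy distribution (all names and numbers below are illustrative, not the paper's code): the exact likelihood of a constraint requires summing over all satisfying assignments of the joint, while the approximation replaces the joint with a factorized product of conditionals centered at a model sample, under which the same sum is cheap to compute.

```python
import itertools
from math import prod

# Toy joint p(y) over length-3 binary sequences, standing in for an
# autoregressive model's output distribution (values are made up).
SEQS = list(itertools.product([0, 1], repeat=3))
P = dict(zip(SEQS, [0.25, 0.20, 0.15, 0.10, 0.10, 0.08, 0.07, 0.05]))

def constraint(y):
    # Symbolic constraint: at most one 1 in the sequence.
    return sum(y) <= 1

def exact_constraint_prob():
    # Exact constraint likelihood: #P-hard in general, easy by brute force here.
    return sum(p for y, p in P.items() if constraint(y))

def pseudo_constraint_prob(sample):
    # Pseudolikelihood approximation centered at `sample`: replace p with the
    # factorized q(y) = prod_i p(y_i | sample_{-i}); the constraint likelihood
    # under q reuses per-position conditionals instead of the full joint.
    n = len(sample)
    conds = []
    for i in range(n):
        den = sum(p for y, p in P.items()
                  if all(y[j] == sample[j] for j in range(n) if j != i))
        conds.append([
            sum(p for y, p in P.items()
                if y[i] == v and all(y[j] == sample[j] for j in range(n) if j != i)) / den
            for v in (0, 1)
        ])
    return sum(prod(conds[i][y[i]] for i in range(n))
               for y in SEQS if constraint(y))

print(round(exact_constraint_prob(), 3))           # prints 0.7
print(round(pseudo_constraint_prob((0, 0, 0)), 3))  # prints 0.694
```

Near the sample, the factorized approximation tracks the exact value closely (0.694 vs. 0.7 here), which is the local-fidelity property the abstract appeals to.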