On the Semantics of Gringo
Input languages of answer set solvers are based on the mathematically simple
concept of a stable model. But many useful constructs available in these
languages, including local variables, conditional literals, and aggregates,
cannot be easily explained in terms of stable models in the sense of the
original definition of this concept and its straightforward generalizations.
Manuals written by designers of answer set solvers usually explain such
constructs using examples and informal comments that appeal to the user's
intuition, without references to any precise semantics. We propose to approach
the problem of defining the semantics of gringo programs by translating them
into the language of infinitary propositional formulas. This semantics allows
us to study equivalent transformations of gringo programs using natural
deduction in infinitary propositional logic. Comment: Proceedings of Answer Set
Programming and Other Computing Paradigms (ASPOCP 2013), 6th International
Workshop, August 25, 2013, Istanbul, Turkey
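The stable-model semantics this abstract builds on can be made concrete with a small sketch. The following is a minimal brute-force implementation of the Gelfond-Lifschitz reduct for ground normal programs; the two-rule toy program and all names are illustrative and not taken from the paper.

```python
# Minimal sketch of stable-model (answer set) semantics for ground
# normal programs. A rule is (head, positive_body, negative_body),
# e.g. "p :- q, not r." is ("p", {"q"}, {"r"}).
from itertools import combinations

def closure(definite_rules):
    """Least model of a definite (negation-free) program."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if head not in model and pos <= model:
                model.add(head)
                changed = True
    return model

def stable_models(rules):
    atoms = {h for h, _, _ in rules} | {a for _, p, n in rules for a in p | n}
    models = []
    for k in range(len(atoms) + 1):
        for bits in combinations(sorted(atoms), k):
            x = set(bits)
            # Gelfond-Lifschitz reduct: drop rules whose negative body
            # intersects X, then delete the remaining negative literals.
            reduct = [(h, p) for h, p, n in rules if not (n & x)]
            # X is stable iff it is the least model of its own reduct.
            if closure(reduct) == x:
                models.append(frozenset(x))
    return models

# "p :- not q."  and  "q :- not p."  yield two stable models: {p} and {q}.
program = [("p", frozenset(), frozenset({"q"})),
           ("q", frozenset(), frozenset({"p"}))]
print(sorted(sorted(m) for m in stable_models(program)))  # [['p'], ['q']]
```

The constructs the abstract discusses (aggregates, conditional literals) go beyond this ground case, which is exactly why the paper resorts to infinitary propositional formulas.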
SDRL: Interpretable and Data-efficient Deep Reinforcement Learning Leveraging Symbolic Planning
Deep reinforcement learning (DRL) has achieved great success by learning
directly from high-dimensional sensory inputs, yet is notorious for its lack of
interpretability. Interpretability of the subtasks is critical in hierarchical
decision-making, as it increases the transparency of the black-box-style DRL
approach and helps RL practitioners understand the high-level behavior
of the system better. In this paper, we introduce symbolic planning into DRL
and propose a framework of Symbolic Deep Reinforcement Learning (SDRL) that can
handle both high-dimensional sensory inputs and symbolic planning. The
task-level interpretability is enabled by relating symbolic actions to
options. This framework features a planner -- controller -- meta-controller
architecture, which takes charge of subtask scheduling, data-driven subtask
learning, and subtask evaluation, respectively. The three components
cross-fertilize each other and eventually converge to an optimal symbolic plan
along with the learned subtasks, bringing together the advantages of long-term
planning capability with symbolic knowledge and end-to-end reinforcement
learning directly from a high-dimensional sensory input. Experimental results
validate the interpretability of subtasks, along with improved data efficiency
compared with state-of-the-art approaches.
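The idea of relating symbolic actions to options can be sketched in a few lines. This is an illustrative toy, not the SDRL paper's code: each symbolic action names an option with its own low-level policy and termination condition, and a meta-controller executes the planner's symbolic plan option by option. The 1-D environment and the `move_to` action are invented for the example.

```python
# Illustrative sketch (not the SDRL implementation) of tying symbolic
# actions to options in a toy 1-D environment where the state is an int.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Option:
    name: str                          # symbolic action this option realizes
    policy: Callable[[int], int]       # low-level action for the current state
    terminates: Callable[[int], bool]  # subtask-completion test

def make_move_to(target: int) -> Option:
    """Option for a hypothetical symbolic action 'move_to(target)'."""
    return Option(
        name=f"move_to({target})",
        policy=lambda s: 1 if s < target else -1,
        terminates=lambda s: s == target,
    )

def execute_plan(plan, state: int) -> int:
    """Meta-controller loop: run each planned option until it terminates."""
    for option in plan:
        while not option.terminates(state):
            state += option.policy(state)  # toy environment dynamics
    return state

plan = [make_move_to(3), make_move_to(5)]
print(execute_plan(plan, 0))  # 5
```

In the paper's framework each option's policy would itself be learned by DRL from sensory input, with the planner revising the symbolic plan based on subtask evaluation; the hand-written policies here stand in for that learned component.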
Representing First-Order Causal Theories by Logic Programs
Nonmonotonic causal logic, introduced by McCain and Turner (McCain, N. and Turner, H. 1997. Causal theories of action and change. In Proceedings of National Conference on Artificial Intelligence (AAAI), Stanford, CA, 460–465) became the basis for the semantics of several expressive action languages. McCain's embedding of definite propositional causal theories into logic programming paved the way to the use of answer set solvers for answering queries about actions described in such languages. In this paper we extend this embedding to nondefinite theories and to the first-order causal logic
Representing First-Order Causal Theories by Logic Programs
Nonmonotonic causal logic, introduced by Norman McCain and Hudson Turner,
became a basis for the semantics of several expressive action languages.
McCain's embedding of definite propositional causal theories into logic
programming paved the way to the use of answer set solvers for answering
queries about actions described in such languages. In this paper we extend this
embedding to nondefinite theories and to first-order causal logic. Comment:
29 pages. To appear in Theory and Practice of Logic Programming (TPLP), May, 201
Empower Large Language Model to Perform Better on Industrial Domain-Specific Question Answering
Large Language Model (LLM) has gained popularity and achieved remarkable
results in open-domain tasks, but its performance in real industrial
domain-specific scenarios is mediocre because it lacks domain-specific
knowledge. This issue has attracted widespread attention, but few relevant
benchmarks are available. In this paper, we provide a benchmark Question Answering
(QA) dataset named MSQA, which is about Microsoft products and IT technical
problems encountered by customers. This dataset contains industry
cloud-specific QA knowledge that is unavailable to general LLMs, so it is
well suited for evaluating methods aimed at improving the domain-specific
capabilities of LLMs. In addition, we propose a new model interaction paradigm
that empowers LLMs to achieve better performance on domain-specific tasks
where they are not proficient. Extensive experiments demonstrate that the
approach following our model fusion framework outperforms the commonly used
LLM with retrieval methods. Comment: 13 pages, 1 figure
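The "LLM with retrieval" baseline this abstract compares against can be sketched generically. This is not the paper's fusion framework: it shows the common pattern of retrieving the domain document most similar to the question and prepending it as context to the prompt sent to an LLM. The word-overlap scorer, the toy corpus, and all names are invented for illustration.

```python
# Generic sketch of a retrieval-augmented QA baseline (not the MSQA
# paper's method): pick the most lexically similar domain document and
# prepend it to the question as context for an LLM.
import re

def tokenize(text: str) -> set:
    """Lowercased word set; a stand-in for a real retriever's features."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question: str, corpus: list) -> str:
    """Return the document sharing the most words with the question."""
    q = tokenize(question)
    return max(corpus, key=lambda doc: len(q & tokenize(doc)))

def build_prompt(question: str, corpus: list) -> str:
    """Assemble the context-augmented prompt an LLM would receive."""
    context = retrieve(question, corpus)
    return f"Context: {context}\nQuestion: {question}\nAnswer:"

# Toy domain corpus, invented for the example.
corpus = [
    "Azure storage accounts support blob, file, queue and table services.",
    "Exchange Online mailboxes are managed through the admin center.",
]
prompt = build_prompt("How do I create a blob storage account?", corpus)
print(prompt.splitlines()[0])
```

A production retriever would use dense embeddings rather than word overlap, but the prompt-assembly step is the same; the paper's contribution is a different interaction paradigm layered on top of this kind of baseline.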