Neuro-symbolic Models for Interpretable Time Series Classification using Temporal Logic Description
Most existing time series classification (TSC) models lack interpretability
and are difficult to inspect. Interpretable machine learning models can aid in
discovering patterns in data as well as give easy-to-understand insights to
domain specialists. In this study, we present Neuro-Symbolic Time Series
Classification (NSTSC), a neuro-symbolic model that leverages signal temporal
logic (STL) and neural networks (NNs) to accomplish TSC tasks using multi-view
data representation and expresses the model as a human-readable, interpretable
formula. In NSTSC, each neuron is linked to a symbolic expression, i.e., an STL
(sub)formula. The output of NSTSC is thus interpretable as an STL formula akin
to natural language, describing temporal and logical relations hidden in the
data. We propose an NSTSC-based classifier that adopts a decision-tree approach
to learn formula structures and accomplish a multiclass TSC task. The proposed
smooth activation functions for weighted STL (wSTL) allow the model to be learned in an
end-to-end fashion. We test NSTSC on a real-world wound healing dataset from
mice and benchmark datasets from the UCR time-series repository, demonstrating
that NSTSC achieves performance comparable to state-of-the-art models.
Furthermore, NSTSC can generate interpretable formulas that match domain
knowledge.
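The abstract does not spell out the smooth activation functions themselves, but the standard route to end-to-end learning over temporal logic is to replace the hard min/max semantics of the STL operators with differentiable surrogates. The following is a minimal sketch of that idea, assuming log-sum-exp relaxations; the function names, the sharpness parameter beta, and the toy signal are illustrative and not taken from the paper.

```python
import numpy as np

def soft_max(x, beta=10.0):
    """Smooth surrogate for max over a window (log-sum-exp)."""
    x = np.asarray(x, dtype=float)
    return np.log(np.sum(np.exp(beta * x))) / beta

def soft_min(x, beta=10.0):
    """Smooth surrogate for min, obtained from soft_max by negation."""
    return -soft_max(-np.asarray(x, dtype=float), beta)

def eventually(robustness, beta=10.0):
    """Smoothed 'eventually' (F): the subformula holds at some time step."""
    return soft_max(robustness, beta)

def always(robustness, beta=10.0):
    """Smoothed 'always' (G): the subformula holds at every time step."""
    return soft_min(robustness, beta)

# Toy example: robustness of the atomic predicate x(t) > 0.5 over a short signal.
signal = np.array([0.2, 0.4, 0.9, 0.7])
rho = signal - 0.5
print(eventually(rho))  # positive: "eventually x > 0.5" is (softly) satisfied
print(always(rho))      # negative: "always x > 0.5" is (softly) violated
```

Because every operation here is differentiable, a classification loss can backpropagate through the formula; a larger beta brings the surrogates closer to the exact min/max, at the cost of gradients that concentrate on a single time step.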
Improving Natural Language Inference Using External Knowledge in the Science Questions Domain
Natural Language Inference (NLI) is fundamental to many Natural Language
Processing (NLP) applications including semantic search and question answering.
The NLI problem has gained significant attention thanks to the release of
large-scale, challenging datasets. Present approaches to the problem largely focus on
learning-based methods that use only textual information in order to classify
whether a given premise entails, contradicts, or is neutral with respect to a
given hypothesis. Surprisingly, the use of methods based on structured
knowledge -- a central topic in artificial intelligence -- has not received
much attention vis-a-vis the NLI problem. While there are many open knowledge
bases that contain various types of reasoning information, their use for NLI
has not been well explored. To address this, we present a combination of
techniques that harness knowledge graphs to improve performance on the NLI
problem in the science questions domain. We present the results of applying our
techniques to text, graph, and text-to-graph based models, and discuss
implications for the use of external knowledge in solving the NLI problem. Our
model achieves new state-of-the-art performance on the NLI problem over the
SciTail science questions dataset.
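The abstract leaves the specific combination techniques to the body of the paper. As a rough, hedged illustration of the general idea of blending a knowledge-graph signal with a text-only entailment score, the sketch below uses a toy in-memory graph; the graph, the one-hop overlap feature, and the blending weight are assumptions made for the example rather than the paper's actual models.

```python
# Toy knowledge graph as adjacency sets; a real system might query ConceptNet
# or a curated science knowledge base instead.
TOY_KG = {
    "ice": {"water", "cold", "solid"},
    "water": {"liquid", "ice", "steam"},
    "melt": {"heat", "liquid"},
}

def concepts(text):
    """Naive concept extraction: words that appear in the graph's vocabulary."""
    return {w for w in text.lower().split() if w in TOY_KG}

def kg_overlap(premise, hypothesis):
    """Fraction of hypothesis concepts reachable within one hop of premise concepts."""
    p, h = concepts(premise), concepts(hypothesis)
    if not h:
        return 0.0
    reachable = p.union(*(TOY_KG[c] for c in p)) if p else set()
    return len(h & reachable) / len(h)

def combined_score(text_prob, premise, hypothesis, weight=0.2):
    """Blend a text-only entailment probability with the graph feature."""
    return (1 - weight) * text_prob + weight * kg_overlap(premise, hypothesis)

print(combined_score(0.6, "Ice will melt when heated", "Ice turns into water"))
```

The paper's graph and text-to-graph models operate on richer structure than this single scalar feature; the sketch only shows how external relational knowledge can enter the decision alongside the text.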
Formally Specifying the High-Level Behavior of LLM-Based Agents
Autonomous, goal-driven agents powered by LLMs have recently emerged as
promising tools for solving challenging problems without the need for
task-specific finetuned models that can be expensive to procure. Currently, the
design and implementation of such agents is ad hoc, as the wide variety of
tasks that LLM-based agents may be applied to naturally means there can be no
one-size-fits-all approach to agent design. In this work we aim to alleviate
the difficulty of designing and implementing new agents by proposing a
minimalistic generation framework that simplifies the process of building
agents. The framework we introduce allows the user to define desired agent
behaviors in a high-level, declarative specification that is then used to
construct a decoding monitor which guarantees the LLM will produce an output
exhibiting the desired behavior. Our declarative approach, in which the
behavior is described without concern for how it should be implemented or
enforced, enables rapid design, implementation, and experimentation with
different LLM-based agents. We demonstrate how the proposed framework can be
used to implement recent LLM-based agents (e.g., ReAct), and show how the
flexibility of our approach can be leveraged to define a new agent with more
complex behavior, the Plan-Act-Summarize-Solve (PASS) agent. Lastly, we
demonstrate that our method outperforms other agents on multiple popular
reasoning-centric question-answering benchmarks.
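The abstract does not show the specification language or how the decoding monitor is built from it, so the following is only a hedged sketch of the underlying idea: a declarative description of allowed step transitions plus a checker that verifies an agent transcript follows a ReAct-style Thought/Action/Observation loop. The spec format and step labels are hypothetical, and a real monitor would constrain the LLM during decoding rather than validate its output after the fact.

```python
import re

# Hypothetical declarative spec: which step labels may follow the current one.
SPEC = {
    "start": ["Thought"],
    "Thought": ["Action", "Answer"],
    "Action": ["Observation"],
    "Observation": ["Thought"],
    "Answer": [],
}

STEP_RE = re.compile(r"^(Thought|Action|Observation|Answer):", re.MULTILINE)

def follows_spec(transcript):
    """Return True if the transcript's step labels respect the declared transitions."""
    state = "start"
    for match in STEP_RE.finditer(transcript):
        step = match.group(1)
        if step not in SPEC[state]:
            return False
        state = step
    return state == "Answer"

ok = follows_spec(
    "Thought: need a lookup\nAction: search(x)\nObservation: found y\n"
    "Thought: done\nAnswer: y"
)
print(ok)  # True: the transcript exhibits the specified step order
```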