Furthest Reasoning with Plan Assessment: Stable Reasoning Path with Retrieval-Augmented Large Language Models
Large Language Models (LLMs), acting as powerful reasoners and generators, exhibit extraordinary performance across various natural language tasks, such as question answering (QA). Among these tasks, Multi-Hop Question Answering (MHQA) is a widely discussed category that requires seamless integration between LLMs and the retrieval of external knowledge. Existing methods employ an LLM to generate reasoning paths and plans, and use an Information Retriever (IR) to iteratively fetch related knowledge, but these approaches have inherent flaws. On one hand, the IR is hindered by the low quality of the queries generated by the LLM. On the other hand, the LLM is easily misguided by irrelevant knowledge returned by the IR. These inaccuracies accumulate over the iterative interaction between IR and LLM and severely degrade final effectiveness. To overcome these barriers, we propose a novel pipeline for MHQA called Furthest-Reasoning-with-Plan-Assessment (FuRePA), comprising an improved framework (Furthest Reasoning) and an attached module (Plan Assessor). 1) Furthest Reasoning masks the previous reasoning path and generated queries from the LLM, encouraging the LLM to generate its chain of thought from scratch in each iteration. This enables the LLM to break free of any misleading earlier thoughts and queries. 2) The Plan Assessor is a trained evaluator that selects an appropriate plan from a group of candidate plans proposed by the LLM. Our methods are evaluated on three highly recognized public multi-hop question answering datasets and outperform the state of the art on most metrics (achieving a 10%-12% gain in answer accuracy).
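The abstract's iterative loop (mask prior reasoning, regenerate a chain of thought, have an assessor pick a plan, then retrieve) can be sketched roughly as below. This is an illustrative reconstruction, not the authors' implementation: `generate_plans`, `assess_plan`, and `retrieve` are stand-in stubs for the LLM, the trained Plan Assessor, and the IR module.

```python
def generate_plans(question, knowledge, n_candidates=3):
    # Stub for the LLM: it sees only the question and the retrieved
    # knowledge -- previous reasoning paths and queries are masked,
    # so each iteration reasons from scratch ("Furthest Reasoning").
    return [f"plan-{i}: answer '{question}' using {len(knowledge)} facts"
            for i in range(n_candidates)]

def assess_plan(plan):
    # Stub for the trained Plan Assessor: scores a candidate plan.
    return len(plan)  # placeholder score

def retrieve(plan):
    # Stub for the IR module: turns the chosen plan into facts.
    return [f"fact for {plan}"]

def furepa(question, max_hops=3):
    knowledge = []  # accumulated retrieved facts across hops
    for _ in range(max_hops):
        candidates = generate_plans(question, knowledge)
        best = max(candidates, key=assess_plan)  # assessor selects one plan
        knowledge.extend(retrieve(best))
    return knowledge
```

The key design point, per the abstract, is that `generate_plans` never receives earlier plans or queries, so a misleading early hop cannot anchor later iterations.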
Psychological evidence for assumptions of path-based inheritance reasoning
The psychological validity of inheritance reasoners is clarified. Elio and Pelletier (1993) presented the first pilot experiment exploring some of these issues. We investigate other foundational assumptions of inheritance reasoning with defaults: transitivity, blocking of transitivity by negative defaults, pre-emption in terms of structurally defined specificity, and structurally defined redundancy of information. Responses were in accord with the assumption of at least limited transitivity; however, reasoning with negative information and structurally defined specificity conditions did not support the predictions of the literature. 'Pre-emptive' links were found to provide additional information leading to indeterminacy, rather than completely overriding information as the literature predicts. On the other hand, the results support the structural identification of certain links as redundant. Other findings suggest that inheritance proof theory might be excessively guided by its syntax.
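The structural notions the experiments probe can be illustrated with a toy path-based reasoner (my own sketch of the literature's standard account, not the authors' materials): positive default links chain transitively, while a direct negative link from a more specific class pre-empts the positive path.

```python
# Toy inheritance network: "penguins are birds", "birds are fliers",
# "penguins are not fliers". The literature predicts the negative
# link pre-empts the transitive positive path; the abstract reports
# that human responses instead treated this as indeterminate.

positive = {("penguin", "bird"), ("bird", "flier")}   # "is typically a"
negative = {("penguin", "flier")}                     # "is typically not a"

def concludes(x, y):
    # A direct negative link pre-empts any positive path (literature's view).
    if (x, y) in negative:
        return False
    if (x, y) in positive:
        return True
    # Limited transitivity: chain through intermediate classes.
    for (a, b) in positive:
        if a == x and concludes(b, y):
            return True
    return None  # no path: indeterminate
```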
Partial logics with two kinds of negation as a foundation for knowledge-based reasoning
We show how to use model classes of partial logic to define the semantics of general knowledge-based reasoning. Their essential benefit is that partial logics allow us to distinguish two sorts of negative information: the absence of information and the explicit rejection or falsification of information. Another general advantage of partial logic, which we discuss in the first part, is that its meta-theory is very close to that of classical logic. In the second part, notions of minimal, paraminimal, and stable models are presented in terms of partial logic, and we show how the resulting definitions can be used to define the semantics of knowledge bases such as relational and deductive databases and extended logic programs.
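The distinction between the two sorts of negative information can be sketched over a partial interpretation, where an atom may be recorded true, recorded false, or simply absent. This is a minimal illustration of the general idea, not the paper's formalism.

```python
# Partial interpretation: absent atoms carry no information either way.
interp = {"bird(tweety)": True, "penguin(tweety)": False}
# "flies(tweety)" is absent: neither asserted nor falsified.

def strong_neg(atom, interp):
    # Explicit rejection/falsification: holds only if the atom
    # is recorded as false.
    return interp.get(atom) is False

def weak_neg(atom, interp):
    # Absence of information (negation-as-failure style): holds
    # whenever the atom is not recorded as true.
    return interp.get(atom) is not True
```

An explicitly falsified atom satisfies both negations; a merely unknown atom satisfies only the weak one, which is exactly the distinction classical two-valued logic cannot express.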
A survey of announcement effects on foreign exchange returns
Researchers have long studied the reaction of foreign exchange returns to macroeconomic announcements in order to infer changes in policy reaction functions and foreign exchange microstructure, including the speed of market reaction to news and how order flow helps impound public and private information into prices. These studies have often been disconnected, however, and this article critically reviews and evaluates the literature on announcement effects on foreign exchange returns.
Belnap's epistemic states and negation-as-failure
Generalizing Belnap's system of epistemic states [Bel77], we obtain the system of disjunctive factbases, which is the paradigm for all other kinds of disjunctive knowledge bases. Disjunctive factbases capture nonmonotonic reasoning based on paraminimal models. In the schema of a disjunctive factbase, certain predicates of the respective domain are declared to be exact, i.e. two-valued, and in turn some of these exact predicates are declared to be subject to the Closed-World Assumption (CWA). Thus, we distinguish three kinds of predicates: inexact predicates, exact predicates subject to the CWA, and exact predicates not subject to the CWA.
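For context, Belnap's four epistemic states can be modelled as sets of classical truth values: a source may have told us "true", "false", both, or nothing. The sketch below is illustrative (it is not the paper's construction); `close_world` shows how the CWA collapses "no information" to "false" for an exact predicate.

```python
# Belnap's four epistemic states as subsets of {"t", "f"}.
NONE  = frozenset()      # told nothing
TRUE  = frozenset("t")   # told true
FALSE = frozenset("f")   # told false
BOTH  = frozenset("tf")  # told both (contradictory sources)

def combine(a, b):
    # Join in the knowledge ordering: pool what both sources said.
    return a | b

def close_world(state):
    # CWA for an exact predicate: absence of information counts as false.
    return FALSE if state == NONE else state
```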
Lipschitz stability of controlled invariant subspaces
Let (A, B) ∈ C^(n×n) × C^(n×m) and let M be an (A,B)-invariant subspace. In this paper the following results are presented: (i) if M ∩ Im B = {0}, necessary and sufficient conditions for the Lipschitz stability of M are given; (ii) if M contains the controllability subspace of the pair (A, B), sufficient conditions for the Lipschitz stability of M are given.
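For readers outside geometric control theory, the standard definition of an (A,B)-invariant subspace (not restated in the abstract) is:

```latex
% M \subseteq \mathbb{C}^n is (A,B)-invariant iff
A M \subseteq M + \operatorname{Im} B,
% equivalently, iff there exists a state feedback F \in \mathbb{C}^{m \times n} with
(A + BF) M \subseteq M.
```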
A contradiction-driven approach to theory formation: conceptual issues, pragmatics in human learning, potentialities
In the educational literature, Discovery Learning is an approach in which learners build up their own knowledge by performing experiments within a domain and inferring rules as a result. This constructivist approach has been widely exploited in the design of computational artefacts with learning purposes, the so-called Discovery Learning Environments (DLEs). One known feature of such environments is the degree of autonomy students need in order to succeed while handling a domain. Additionally, DLE designers are often challenged to get students actually engaged. These questions underlie our concerns with the design and usage of particular DLEs, in which learning events occur as a consequence of detecting and overcoming contradictions during human/machine cooperative work. In this paper, we present an artificial agent capable of handling such a contradiction-driven approach to learning, highlighting the exchanges that the agent should promote with a human learner. The conceptual model supporting the agent's design relies on scientific rationale, particularly the empirical approach guided by theory-experiment confrontation. We reinforce the interest of the model for the design of DLEs by presenting its exploitation in a real learning situation in Law. We also suggest potential instantiations of the model beyond human learning.