Enhanced SPARQL-based design rationale retrieval
Design rationale (DR) is an important category of design knowledge, and its effective reuse depends on successful retrieval. In this paper, an ontology-based DR retrieval approach is presented, which allows users to search by entering ordinary queries such as questions in natural language. First, an ontology-based semantic model of DR is developed, based on an extended issue-based information system (IBIS) DR representation, in order to effectively utilize the semantics embedded in DR, and a database of ontology-based DR that supports SPARQL queries is constructed. Second, two SPARQL query generation methods are proposed: the first automatically generates initial SPARQL queries from natural-language queries using template matching, and the second generates initial SPARQL queries automatically from DR record-based queries. In addition, keyword extension and optimization are conducted to enhance SPARQL-based retrieval. Third, a design rationale retrieval prototype system is implemented. The experimental results show the advantages of the proposed approach.
StrategyLLM: Large Language Models as Strategy Generators, Executors, Optimizers, and Evaluators for Problem Solving
Most existing chain-of-thought (CoT) prompting methods suffer from the issues of generalizability and consistency, as they often rely on instance-specific solutions that may not be applicable to other cases and lack task-level consistency in their reasoning steps. To address these limitations, we propose a comprehensive framework, StrategyLLM, harnessing the capabilities of LLMs to tackle various tasks. The framework improves generalizability by formulating general problem-solving strategies and enhances consistency by producing consistent solutions using these strategies. StrategyLLM employs four LLM-based agents: strategy generator, executor, optimizer, and evaluator, working together to generate, evaluate, and select promising strategies for a given task automatically. The experimental results demonstrate that StrategyLLM outperforms the competitive baseline CoT-SC, which requires human-annotated solutions, on 13 datasets across 4 challenging tasks without human involvement, including math reasoning (39.2% → 43.3%), commonsense reasoning (70.3% → 72.5%), algorithmic reasoning (51.7% → 62.0%), and symbolic reasoning (30.0% → 79.2%).
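The generate–execute–evaluate–optimize loop among the four agents can be sketched as follows. This is a minimal outline under stated assumptions: the paper implements each agent as an LLM-prompted component, whereas here each agent is modeled as a plain callable, and the accuracy threshold, round count, and function signatures are illustrative, not taken from the paper.

```python
from typing import Callable

def strategy_llm(
    generate: Callable[[], list[str]],       # strategy generator agent
    execute: Callable[[str, str], str],      # executor: apply strategy to a question
    score: Callable[[str, str], float],      # evaluator: compare answer with gold
    optimize: Callable[[str], str],          # optimizer: refine a weak strategy
    examples: list[tuple[str, str]],         # (question, gold answer) pairs
    threshold: float = 0.8,                  # assumed accuracy cutoff
    rounds: int = 2,                         # assumed optimization budget
) -> list[tuple[str, float]]:
    """Generate candidate strategies, execute each on held-out examples,
    keep those whose accuracy clears the threshold, and send the rest
    back to the optimizer for refinement. Returns surviving
    (strategy, accuracy) pairs, best first."""
    kept: list[tuple[str, float]] = []
    for strategy in generate():
        for _ in range(rounds):
            acc = sum(score(execute(strategy, q), a)
                      for q, a in examples) / len(examples)
            if acc >= threshold:
                kept.append((strategy, acc))
                break
            strategy = optimize(strategy)  # refine and retry
    return sorted(kept, key=lambda pair: pair[1], reverse=True)
```

With toy callables (e.g. strategies that solve small arithmetic questions), the loop selects every strategy that reaches the threshold, including ones the optimizer repairs along the way, mirroring the automatic strategy selection the abstract describes.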
- …