Keyword Search on RDF Graphs - A Query Graph Assembly Approach
Keyword search provides ordinary users with an easy-to-use interface for querying
RDF data. Given the input keywords, we study in this paper how to assemble a
query graph that represents the user's query intention accurately and
efficiently. Based on the input keywords, we first obtain the elementary query
graph building blocks, such as entity/class vertices and predicate edges. Then,
we formally define the query graph assembly (QGA) problem. Unfortunately, we
prove that QGA is NP-complete. To address this, we design heuristic lower
bounds and propose a bipartite graph
matching-based best-first search algorithm. The algorithm's time complexity
depends only on the number of input keywords and on a tunable parameter,
namely the maximum number of candidate entity/class vertices and predicate
edges allowed to match each keyword. Although QGA is intractable, both
quantities are small in practice. Furthermore, the time complexity does not
depend on the RDF graph size, which guarantees good scalability of our system
on large RDF graphs. Experiments on DBpedia and
Freebase confirm the superiority of our system in both effectiveness and
efficiency.
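The assembly-by-search idea can be illustrated with a toy best-first search: each keyword has a small set of candidate graph elements with matching costs, and the search picks one candidate per keyword while pruning with an admissible lower bound. This is only a minimal sketch under assumed names and a made-up cost model; here the bound is simply the sum of each remaining keyword's cheapest candidate (a relaxation of a bipartite-matching lower bound), and the paper's actual lower bounds and graph-assembly constraints are more involved.

```python
import heapq

def assemble(candidates):
    """Best-first search over candidate assignments.

    candidates: one dict per keyword, mapping a candidate graph
    element to its matching cost (illustrative cost model).
    Returns (total_cost, chosen elements).
    """
    l = len(candidates)

    def h(i):
        # Admissible lower bound for keywords i..l-1: cheapest
        # candidate of each, a relaxation of bipartite matching.
        return sum(min(c.values()) for c in candidates[i:])

    # Heap entries: (estimated total, cost so far, next keyword, chosen)
    heap = [(h(0), 0.0, 0, ())]
    while heap:
        f, g, i, chosen = heapq.heappop(heap)
        if i == l:                      # every keyword matched
            return g, list(chosen)
        for elem, cost in candidates[i].items():
            if elem in chosen:          # one element per keyword at most
                continue
            g2 = g + cost
            heapq.heappush(heap, (g2 + h(i + 1), g2, i + 1, chosen + (elem,)))
    return None
```

Note that the search effort depends only on the number of keywords and on how many candidates each keyword is allowed, not on the size of the underlying RDF graph, which mirrors the scalability argument in the abstract.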
LEAGUE: Guided Skill Learning and Abstraction for Long-Horizon Manipulation
To assist with everyday human activities, robots must solve complex
long-horizon tasks and generalize to new settings. Recent deep reinforcement
learning (RL) methods show promise in fully autonomous learning, but they
struggle to reach long-term goals in large environments. On the other hand,
Task and Motion Planning (TAMP) approaches excel at solving and generalizing
across long-horizon tasks, thanks to their powerful state and action
abstractions. But they assume predefined skill sets, which limits their
real-world applications. In this work, we combine the benefits of these two
paradigms and propose an integrated task planning and skill learning framework
named LEAGUE (Learning and Abstraction with Guidance). LEAGUE leverages the
symbolic interface of a task planner to guide RL-based skill learning and
creates an abstract state space to enable skill reuse. More importantly, LEAGUE
learns manipulation skills in situ within the task planning system, continuously
growing its capability and the set of tasks it can solve. We evaluate
LEAGUE on four challenging simulated task domains and show that LEAGUE
outperforms baselines by large margins. We also show that the learned skills
can be reused to accelerate learning in new task domains and transferred to a
physical robot platform. Comment: Accepted to RA-L 202
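The planner-guided loop the abstract describes, where a symbolic plan decides which skill to invoke and a skill is learned on demand and then reused, might be sketched very loosely as follows. Everything here (the task name, the operator strings, the stand-in "policy" labels in place of RL training) is hypothetical scaffolding for illustration, not the authors' implementation.

```python
def plan(task):
    # Toy stand-in for a symbolic task planner: returns a fixed
    # skeleton of operators for a known task (names are illustrative).
    skeletons = {"stow_block": ["pick(block)", "place(block, bin)"]}
    return skeletons[task]

class SkillLibrary:
    """Grows a set of reusable skills, one per symbolic operator."""
    def __init__(self):
        self.skills = {}  # operator -> learned policy (a label here)

    def get_or_learn(self, op):
        # Reuse an existing skill if one matches this operator;
        # otherwise "learn" it (stand-in for an RL training run).
        if op not in self.skills:
            self.skills[op] = f"policy_for:{op}"
        return self.skills[op]

def execute(task, library):
    # Follow the symbolic plan, invoking (or learning) one skill
    # per operator, so the library grows with each new task.
    return [library.get_or_learn(op) for op in plan(task)]
```

The design point this mirrors is that the planner's abstractions, not the raw state, index the skills, which is what makes reuse across tasks possible.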
…