Optimal Scene Graph Planning with Large Language Model Guidance
Recent advances in metric, semantic, and topological mapping have equipped
autonomous robots with semantic concept grounding capabilities to interpret
natural language tasks. This work aims to leverage these new capabilities with
an efficient task planning algorithm for hierarchical metric-semantic models.
We consider a scene graph representation of the environment and utilize a large
language model (LLM) to convert a natural language task into a linear temporal
logic (LTL) automaton. Our main contribution is to enable optimal hierarchical
LTL planning with LLM guidance over scene graphs. To achieve efficiency, we
construct a hierarchical planning domain that captures the attributes and
connectivity of the scene graph and the task automaton, and provide semantic
guidance via an LLM heuristic function. To guarantee optimality, we design an
LTL heuristic function that is provably consistent and supplements the
potentially inadmissible LLM guidance in multi-heuristic planning. We
demonstrate efficient planning of complex natural language tasks in scene
graphs of virtualized real environments.
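The multi-heuristic scheme described in this abstract, where a provably consistent heuristic anchors a potentially inadmissible LLM-derived heuristic, can be sketched in a simplified two-queue form. The code below is an illustrative reduction in the spirit of Multi-Heuristic A*, not the paper's implementation; the function names, the lazy-deletion bookkeeping, and the suboptimality factor `w` are all assumptions.

```python
import heapq

def multi_heuristic_astar(start, goal, neighbors, h_admissible, h_guide, w=2.0):
    """Simplified two-queue multi-heuristic search (sketch).

    h_admissible: a consistent heuristic; expansions from its queue
                  anchor the search and bound suboptimality by w.
    h_guide:      an arbitrary, possibly inadmissible heuristic, e.g.
                  a semantic estimate produced by an LLM, used only
                  for guidance.
    neighbors(s) yields (successor, edge_cost) pairs.
    """
    g = {start: 0.0}
    parent = {start: None}
    anchor = [(h_admissible(start), start)]   # keyed by g + h_admissible
    guided = [(h_guide(start), start)]        # keyed by g + h_guide
    closed = set()

    def expand(s):
        closed.add(s)
        for t, cost in neighbors(s):
            new_g = g[s] + cost
            if new_g < g.get(t, float("inf")):
                g[t] = new_g
                parent[t] = s
                heapq.heappush(anchor, (new_g + h_admissible(t), t))
                heapq.heappush(guided, (new_g + h_guide(t), t))

    while anchor:
        # lazily discard entries whose state was already expanded
        while anchor and anchor[0][1] in closed:
            heapq.heappop(anchor)
        if not anchor:
            break
        while guided and guided[0][1] in closed:
            heapq.heappop(guided)

        # follow the (possibly inadmissible) guide only while its best
        # key stays within factor w of the admissible anchor key
        if guided and guided[0][0] <= w * anchor[0][0]:
            _, s = heapq.heappop(guided)
        else:
            _, s = heapq.heappop(anchor)

        if s == goal:
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return list(reversed(path)), g[goal]
        expand(s)
    return None, float("inf")
```

The anchor queue alone is plain A* with a consistent heuristic; the gate `guided[0][0] <= w * anchor[0][0]` is what lets an aggressive guide speed up expansion without losing the bounded-suboptimality guarantee.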
Cautious Planning with Incremental Symbolic Perception: Designing Verified Reactive Driving Maneuvers
This work presents a step towards utilizing incrementally-improving symbolic
perception knowledge of the robot's surroundings for provably correct reactive
control synthesis applied to an autonomous driving problem. Combining abstract
models of motion control and information gathering, we show that
assume-guarantee specifications (a subclass of Linear Temporal Logic) can be
used to define and resolve traffic rules for cautious planning. We propose a
novel representation called symbolic refinement tree for perception that
captures the incremental knowledge about the environment and embodies the
relationships between various symbolic perception inputs. The incremental
knowledge is leveraged for synthesizing verified reactive plans for the robot.
The case studies demonstrate the efficacy of the proposed approach in
synthesizing control inputs even in the case of partially occluded environments.
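The symbolic refinement tree described above can be pictured as a hierarchy in which a coarse perception symbol (e.g. "occupied region") is progressively refined into more specific ones as sensing improves. The sketch below is one plausible organization of such a structure under stated assumptions; `RefinementNode`, `refine`, `observe`, and `most_refined` are illustrative names, not the paper's interface.

```python
class RefinementNode:
    """A perception symbol that can be refined into more specific
    symbols as incremental knowledge arrives, e.g.
    'occupied' -> 'vehicle' -> 'parked_car'."""

    def __init__(self, symbol, parent=None):
        self.symbol = symbol
        self.parent = parent
        self.children = {}       # symbol -> RefinementNode
        self.observed = False    # has perception confirmed this symbol?

    def refine(self, symbol):
        """Add (or retrieve) a more specific child symbol."""
        return self.children.setdefault(symbol, RefinementNode(symbol, self))

    def observe(self):
        """Mark this symbol as perceived; every coarser ancestor
        symbol is then implied and marked observed as well."""
        node = self
        while node is not None:
            node.observed = True
            node = node.parent

    def most_refined(self):
        """Return the most specific observed symbol reachable from
        this node, following the observed branch, or None."""
        for child in self.children.values():
            if child.observed:
                return child.most_refined()
        return self if self.observed else None
```

A reactive planner could then query `most_refined()` at each step and apply the most specific traffic rule whose symbol is currently confirmed, falling back to cautious behaviour for coarse symbols.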
Software tools for the cognitive development of autonomous robots
Robotic systems are evolving towards higher degrees of autonomy. This paper reviews the cognitive tools currently available to autonomous robots for pursuing abstract or long-term goals, as well as for learning and modifying their own behaviour.
A Survey of Knowledge Representation in Service Robotics
Within the realm of service robotics, researchers have placed a great amount
of effort into learning, understanding, and representing motions as
manipulations for task execution by robots. The task of robot learning and
problem-solving is very broad, as it integrates a variety of tasks such as
object detection, activity recognition, task/motion planning, localization,
knowledge representation and retrieval, and the intertwining of
perception/vision and machine learning techniques. In this paper, we solely
focus on knowledge representations and notably how knowledge is typically
gathered, represented, and reproduced to solve problems as done by researchers
in the past decades. In accordance with the definition of knowledge
representations, we discuss the key distinction between such representations
and useful learning models that have extensively been introduced and studied in
recent years, such as machine learning, deep learning, probabilistic modelling,
and semantic graphical structures. Along with an overview of such tools, we
discuss long-standing problems in robot learning and the solutions,
technologies, and developments (if any) that have contributed to solving
them. Finally, we discuss key principles that
should be considered when designing an effective knowledge representation.
Comment: Accepted for RAS Special Issue on Semantic Policy and Action Representations for Autonomous Robots (22 pages)