The State of Lifelong Learning in Service Robots: Current Bottlenecks in Object Perception and Manipulation
Service robots are appearing more and more in our daily lives. The development of service robots combines multiple fields of research, from object perception to object manipulation. The state of the art continues to improve toward a proper coupling between object perception and manipulation. This coupling is necessary for service robots not only to perform various tasks in a reasonable amount of time but also to continually adapt to new environments and safely
interact with non-expert human users. Nowadays, robots can recognize various objects and quickly plan a collision-free trajectory to grasp a target object in predefined settings. In most cases, however, these capabilities rely on large amounts of training data: the robot's knowledge is fixed after the training phase, and any change in the environment requires complicated, time-consuming, and expensive re-programming by human experts. Such approaches are therefore still too rigid for real-life applications in unstructured environments, where a significant portion of the environment is unknown and cannot be directly sensed or controlled. In such
environments, no matter how extensive the training data used for batch
learning, a robot will always face new objects. Therefore, apart from batch
learning, the robot should be able to continually learn about new object
categories and grasp affordances from very few training examples on-site.
Moreover, apart from robot self-learning, non-expert users could interactively
guide the process of experience acquisition by teaching new concepts, or by
correcting insufficient or erroneous concepts. In this way, the robot will
constantly learn how to help humans in everyday tasks by gaining more and more
experiences without the need for re-programming.
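The on-site learning the abstract calls for can be illustrated with a minimal sketch: a nearest-class-mean learner that adds new object categories from a few user-taught examples without retraining on old data. The class name, feature vectors, and category labels below are illustrative assumptions, not taken from the paper.

```python
from collections import defaultdict
import numpy as np

class IncrementalObjectLearner:
    """Minimal nearest-class-mean learner: new categories can be taught
    on-site from a few examples, with no batch retraining."""

    def __init__(self):
        self.prototypes = {}          # category name -> mean feature vector
        self.counts = defaultdict(int)

    def teach(self, category, feature):
        """A (possibly non-expert) user adds one labelled example."""
        feature = np.asarray(feature, dtype=float)
        n = self.counts[category]
        if n == 0:
            self.prototypes[category] = feature.copy()
        else:
            # incremental mean update: no stored dataset, no re-programming
            self.prototypes[category] += (feature - self.prototypes[category]) / (n + 1)
        self.counts[category] += 1

    def recognize(self, feature):
        """Return the category whose prototype is nearest to the feature."""
        feature = np.asarray(feature, dtype=float)
        return min(self.prototypes,
                   key=lambda c: np.linalg.norm(self.prototypes[c] - feature))

learner = IncrementalObjectLearner()
learner.teach("mug", [1.0, 0.0])
learner.teach("mug", [0.8, 0.2])
learner.teach("plate", [0.0, 1.0])
```

Because each correction only shifts a prototype, a user can also fix an erroneous concept by teaching a few counter-examples, matching the interactive experience acquisition described above.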
A Survey of Knowledge Representation in Service Robotics
Within the realm of service robotics, researchers have placed a great amount
of effort into learning, understanding, and representing motions as
manipulations for task execution by robots. The task of robot learning and
problem-solving is very broad, as it integrates a variety of tasks such as
object detection, activity recognition, task/motion planning, localization,
knowledge representation and retrieval, and the intertwining of
perception/vision and machine learning techniques. In this paper, we solely
focus on knowledge representations and notably how knowledge is typically
gathered, represented, and reproduced to solve problems as done by researchers
in the past decades. In accordance with the definition of knowledge
representations, we discuss the key distinction between such representations
and useful learning models that have extensively been introduced and studied in
recent years, such as machine learning, deep learning, probabilistic modelling,
and semantic graphical structures. Along with an overview of such tools, we discuss the problems that have existed in robot learning and the solutions, technologies, or developments (if any) that have contributed to solving them. Finally, we discuss key principles that
should be considered when designing an effective knowledge representation.
Comment: Accepted for RAS Special Issue on Semantic Policy and Action Representations for Autonomous Robots, 22 pages.
Belief-Space Planning for Resourceful Manipulation and Mobility
Robots are increasingly expected to work in partially observable and unstructured environments. They need to select actions that exploit perceptual and motor resourcefulness to manage uncertainty based on the demands of the task and environment. The research in this dissertation makes two primary contributions. First, it develops a new concept in resourceful robot platforms called the UMass uBot and introduces the sixth and seventh in the uBot series. uBot-6 introduces multiple postural configurations that enable different modes of mobility and manipulation to meet the needs of a wide variety of tasks and environmental constraints. uBot-7 extends this with the use of series elastic actuators (SEAs) to improve manipulation capabilities and support safer operation around humans. The resourcefulness of these robots is complemented with a belief-space planning framework that enables task-driven action selection in the context of the partially observable environment. The framework uses a compact but expressive state representation based on object models. We extend an existing affordance-based object model, called an aspect transition graph (ATG), with geometric information. This enables object-centric modeling of features and actions, making the model much more expressive without increasing the complexity. A novel task representation enables the belief-space planner to perform general object-centric tasks ranging from recognition to manipulation of objects. The approach supports the efficient handling of multi-object scenes. The combination of the physical platform and the planning framework are evaluated in two novel, challenging, partially observable planning domains. The ARcube domain provides a large population of objects that are highly ambiguous. Objects can only be differentiated using multi-modal sensor information and manual interactions. 
In the dexterous mobility domain, a robot can employ multiple mobility modes to complete navigation tasks under a variety of possible environment constraints. The performance of the proposed approach is evaluated using experiments in simulation and on a real robot.
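The core of belief-space planning over highly ambiguous objects, as in the ARcube domain, is a discrete Bayesian belief update: actions are chosen to gather observations that concentrate the belief on one hypothesis. The following is a minimal sketch of that update step under assumed likelihood values; it is not the dissertation's planner.

```python
def update_belief(belief, likelihoods):
    """One Bayesian filtering step over a discrete set of object hypotheses.

    belief:      prior probability per hypothesis
    likelihoods: P(observation | hypothesis) for the observation just made

    Ambiguous objects yield similar likelihoods, keeping the belief spread
    out -- which is what drives the planner toward disambiguating actions
    such as manual interactions that reveal hidden faces.
    """
    posterior = [b * l for b, l in zip(belief, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Illustrative example: three visually identical cubes; flipping one reveals
# a face that is far more likely under the first hypothesis.
belief = [1/3, 1/3, 1/3]
belief = update_belief(belief, likelihoods=[0.8, 0.1, 0.1])
```

In a full planner this update would be applied to beliefs over aspect nodes of the object models, with actions selected to maximize expected information gain toward the task goal.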
A review and comparison of ontology-based approaches to robot autonomy
Within the next decades, robots will need to be able to execute a large variety of tasks autonomously in a large variety of environments. To reduce the resulting programming effort, a knowledge-enabled approach to robot programming can be adopted to organize information in re-usable knowledge pieces. However, for ease of reuse, there needs to be an agreement on the meaning of terms. A common approach is to represent these terms using ontology languages that conceptualize the respective domain. In this work, we review projects that use ontologies to support robot autonomy. We systematically search for projects that fulfill a set of inclusion criteria and compare them with each other with respect to the scope of their ontology, the types of cognitive capabilities supported by the use of ontologies, and their application domain. Peer reviewed. Postprint (author's final draft).
Object Handovers: a Review for Robotics
This article surveys the literature on human-robot object handovers. A
handover is a collaborative joint action where an agent, the giver, gives an
object to another agent, the receiver. The physical exchange starts when the
receiver first contacts the object held by the giver and ends when the giver
fully releases the object to the receiver. However, important cognitive and
physical processes begin before the physical exchange, including initiating
implicit agreement with respect to the location and timing of the exchange.
From this perspective, we structure our review into the two main phases
delimited by the aforementioned events: 1) a pre-handover phase, and 2) the
physical exchange. We focus our analysis on the two actors (giver and receiver)
and report the state of the art of robotic givers (robot-to-human handovers)
and the robotic receivers (human-to-robot handovers). We report a comprehensive
list of qualitative and quantitative metrics commonly used to assess the
interaction. While focusing our review on the cognitive level (e.g.,
prediction, perception, motion planning, learning) and the physical level
(e.g., motion, grasping, grip release) of the handover, we also briefly discuss
the concepts of safety, social context, and ergonomics. We compare the
behaviours displayed during human-to-human handovers to the state of the art of
robotic assistants, and identify the major areas of improvement for robotic
assistants to reach performance comparable to human interactions. Finally, we
propose a minimal set of metrics that should be used in order to enable a fair
comparison among the approaches.
Comment: Review paper, 19 pages.
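The two-phase structure described above, with the physical exchange delimited by first contact and full release, can be sketched as a small state machine. The event names below are illustrative labels for the delimiting events the review identifies, not terminology from the paper.

```python
from enum import Enum, auto

class HandoverPhase(Enum):
    PRE_HANDOVER = auto()       # implicit agreement, reaching, pose selection
    PHYSICAL_EXCHANGE = auto()  # begins at the receiver's first contact
    DONE = auto()               # giver has fully released the object

def advance(phase, event):
    """Advance the handover on a delimiting event; ignore irrelevant events."""
    transitions = {
        (HandoverPhase.PRE_HANDOVER, "receiver_contact"): HandoverPhase.PHYSICAL_EXCHANGE,
        (HandoverPhase.PHYSICAL_EXCHANGE, "giver_release"): HandoverPhase.DONE,
    }
    return transitions.get((phase, event), phase)
```

Metrics for the interaction can then be attached per phase, e.g. fluency and timing in the pre-handover phase versus grip forces during the physical exchange.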
Reasoning and understanding grasp affordances for robot manipulation
This doctoral research focuses on developing new methods that enable an artificial agent
to grasp and manipulate objects autonomously. More specifically, we are using the concept
of affordances to learn and generalise robot grasping and manipulation techniques. [75] defined affordances as the ability of an agent to perform a certain action with an object in a given environment. In robotics, affordances define the possibilities for an agent to perform actions with an object. Therefore, by understanding the relation between actions, objects, and the effects of these actions, the agent understands the task at hand, which provides the robot with the potential to bridge perception to action. The significance of affordances in robotics has been studied from varied perspectives, including psychology and the cognitive sciences.
Many efforts have been made to pragmatically employ the concept of affordances as it
provides the potential for an artificial agent to perform tasks autonomously. We start by reviewing and finding common ground amongst different strategies that use affordances for
robotic tasks. We build on the identified grounds to provide guidance on including the concept of affordances as a medium to boost autonomy for an artificial agent. To this end, we
outline common design choices to build an affordance relation; and their implications on
the generalisation capabilities of the agent when facing previously unseen scenarios. Based
on our exhaustive review, we conclude that prior research on object affordance detection is effective; however, it has, among others, the following technical gaps: (i) the methods are limited to a single object ↔ affordance hypothesis, (ii) they cannot guarantee task completion or any level of performance for a manipulation task performed alone, and (iii) the same holds for tasks performed in collaboration with other agents. In this research thesis, we propose solutions to these technical challenges.
In an incremental fashion, we start by addressing the limited generalisation capabilities of the then state-of-the-art methods by strengthening the perception-to-action connection through the construction of a Knowledge Base (KB). We then leverage the information encapsulated in the KB to design and implement a reasoning and understanding method based on a statistical relational learner (SRL) that allows us to cope with uncertainty in testing environments and thus improve generalisation capabilities in affordance-aware manipulation tasks. The KB in conjunction with our SRL is the basis for our designed solutions that guarantee task completion when the robot is performing a task alone as well as in collaboration with other agents. We finally expose and discuss a range of interesting avenues that have the potential to advance the capabilities of a robotic agent through the use of the concept of affordances for manipulation tasks. A summary of the contributions of this thesis
can be found at: https://bit.ly/grasp_affordance_reasonin
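The action-object-effect relation underlying such a knowledge base can be sketched as a simple mapping; this is a toy illustration of the general idea, not the thesis's KB or SRL, and all objects, actions, and effects below are made-up examples.

```python
class AffordanceKB:
    """Toy affordance knowledge base: (object, action) -> expected effect."""

    def __init__(self):
        self.relations = {}  # object -> {action: expected effect}

    def add(self, obj, action, effect):
        self.relations.setdefault(obj, {})[action] = effect

    def affordances(self, obj):
        """All actions an object affords. Note this naturally supports
        multiple affordance hypotheses per object, unlike methods limited
        to a single object <-> affordance pair."""
        return sorted(self.relations.get(obj, {}))

    def effect(self, obj, action):
        return self.relations.get(obj, {}).get(action)

kb = AffordanceKB()
kb.add("mug", "grasp-handle", "object held")
kb.add("mug", "pour", "contents transferred")
kb.add("knife", "cut", "object divided")
```

A statistical relational learner would replace the deterministic `effect` lookup with a distribution over effects, which is what allows reasoning under uncertainty about whether an action will complete the task.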
Spatial representation for planning and executing robot behaviors in complex environments
Robots are already improving our well-being and productivity in
different domains such as industry, health care, and indoor
service applications. However, we are still far from developing (and
releasing) a fully functional robotic agent that can autonomously
operate in tasks that require human-level
cognitive capabilities. Robotic systems on the market, in fact, are
designed to address specific applications and can only run
pre-defined behaviors to robustly repeat a few tasks (e.g., assembling
object parts, vacuum cleaning). Their internal representation of the
world is usually constrained to the task they are performing and
does not allow for generalization to other
scenarios. Unfortunately, such a paradigm only applies to a very
limited set of domains, where the environment can be assumed to be
static and its dynamics can be handled before
deployment. Additionally, robots configured in this way will
eventually fail if their "handcrafted" representation of the
environment does not match the external world.
Hence, to enable more sophisticated cognitive skills, we investigate
how to design robots to properly represent the environment and
behave accordingly. To this end, we formalize a representation of
the environment that enhances the robot's spatial knowledge to
explicitly include a representation of its own actions. Spatial
knowledge constitutes the core of the robot's understanding of the
environment; however, it is not sufficient to represent what the
robot is capable of doing in it. To overcome this limitation, we
formalize SK4R, a spatial knowledge representation for robots which
enhances spatial knowledge with a novel and "functional"
point of view that explicitly models robot actions. To this end, we
exploit the concept of affordances, introduced to express
opportunities (actions) that objects offer to an agent. To encode
affordances within SK4R, we define the "affordance
semantics" of actions, which is used to annotate an environment and
to represent to what extent robot actions support goal-oriented
behaviors.
We demonstrate the benefits of a functional representation of the
environment in multiple robotic scenarios that traverse and
contribute different research topics relating to: robot knowledge
representations, social robotics, multi-robot systems and robot
learning and planning. We show how a domain-specific representation,
that explicitly encodes affordance semantics, provides the robot
with a more concrete understanding of the environment and of the
effects that its actions have on it. The goal of our work is to
design an agent that no longer executes an action out of mere
pre-defined routine; rather, it executes an action because it
"knows" that the resulting state leads one step closer to
success in its task.
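Annotating an environment with affordance semantics can be pictured as a spatial map whose cells score how well each robot action is supported there. This is only a schematic sketch of the idea; SK4R's actual formalization differs, and the cells, actions, and scores below are invented for illustration.

```python
class AffordanceMap:
    """Toy spatial map: each cell carries, per action, a support score in
    [0, 1] expressing to what extent that action is afforded there."""

    def __init__(self):
        self.annotations = {}  # (x, y) -> {action: support score}

    def annotate(self, cell, action, support):
        self.annotations.setdefault(cell, {})[action] = support

    def best_cell(self, action):
        """Where in the environment is this action best supported?
        A planner can use this to pick the state that moves the task forward."""
        scored = {c: anns.get(action, 0.0)
                  for c, anns in self.annotations.items()}
        return max(scored, key=scored.get)

env = AffordanceMap()
env.annotate((0, 0), "dock", 0.9)    # charging station affords docking
env.annotate((3, 1), "grasp", 0.7)   # table region affords grasping
env.annotate((3, 1), "dock", 0.2)
```

Under this reading, an action is selected because the map "knows" which reachable state best supports it, rather than as a fixed routine.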