Multiform Adaptive Robot Skill Learning from Humans
Object manipulation is a basic element of everyday human life. Robotic
manipulation has progressed from maneuvering single-rigid-body objects with
firm grasping to maneuvering soft objects and handling contact-rich actions.
Meanwhile, technologies such as robot learning from demonstration have enabled
humans to intuitively train robots. This paper discusses a new level of robotic
learning-based manipulation. In contrast to the single form of learning from
demonstration, we propose a multiform learning approach that integrates
additional forms of skill acquisition, including adaptive learning from
definition and evaluation. Moreover, going beyond state-of-the-art technologies
of handling purely rigid or soft objects in a pseudo-static manner, our work
allows robots to learn to handle partly rigid, partly soft objects with
time-critical skills and sophisticated contact control. Such capability of
robotic manipulation offers a variety of new possibilities in human-robot
interaction.
Comment: Accepted to the 2017 Dynamic Systems and Control Conference (DSCC), Tysons Corner, VA, October 11-1
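As a rough illustration of how the multiple forms of skill acquisition described above might be organized, the following Python sketch groups a human-provided definition, demonstrations, and evaluations into one skill record. All names, fields, and limits are hypothetical placeholders and not the paper's implementation.

```python
# Hypothetical sketch: one record holding the three forms of skill acquisition
# (definition, demonstration, evaluation). Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class SkillDefinition:
    """Human-provided definition: symbolic phases and hard constraints."""
    phases: list              # e.g. ["approach", "contact", "flip", "catch"]
    max_contact_force: float  # N, safety limit defined by the human
    time_budget: float        # s, time-critical requirement

@dataclass
class Demonstration:
    """One kinesthetic or teleoperated demonstration."""
    timestamps: list
    joint_positions: list     # one joint vector per timestamp

@dataclass
class Evaluation:
    """Human score of a robot attempt, used to adapt the learned skill."""
    attempt_id: int
    score: float              # e.g. 0 (failed) .. 1 (perfect)

@dataclass
class MultiformSkill:
    definition: SkillDefinition
    demonstrations: list = field(default_factory=list)
    evaluations: list = field(default_factory=list)

    def admissible(self, peak_force: float, duration: float) -> bool:
        """Check an executed attempt against the human-defined constraints."""
        return (peak_force <= self.definition.max_contact_force
                and duration <= self.definition.time_budget)
```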
Robot Composite Learning and the Nunchaku Flipping Challenge
Advanced motor skills are essential for robots to physically coexist with
humans. Much research on robot dynamics and control has achieved success on
hyper robot motor capabilities, but mostly through heavily case-specific
engineering. Meanwhile, in terms of robots acquiring skills in a ubiquitous
manner, robot learning from human demonstration (LfD) has achieved great
progress, but still has limitations in handling dynamic skills and compound
actions. In this paper, we present a composite learning scheme which goes
beyond LfD and integrates robot learning from human definition, demonstration,
and evaluation. The method tackles advanced motor skills that require dynamic,
time-critical maneuvers, complex contact control, and handling partly soft,
partly rigid objects. We also introduce the "nunchaku flipping challenge", an
extreme test that places hard requirements on all three of these aspects. Continuing
from our previous presentations, this paper introduces the latest update of the
composite learning scheme and the physical success of the nunchaku flipping
challenge.
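To make the composite scheme more concrete, here is a minimal sketch of how the three learning forms could be combined: a policy initialized from demonstrations, clipped to human-defined limits, and refined by hill climbing on a human evaluation score. The functions and numbers are illustrative assumptions, not the paper's actual algorithm.

```python
# Hedged sketch of a composite learning loop: demonstrations initialize the
# policy, the human definition bounds it, and evaluation scores refine it.
import random

def fit_from_demonstrations(demos):
    """Average demonstrated parameter vectors as a crude initial policy."""
    n = len(demos[0])
    return [sum(d[i] for d in demos) / len(demos) for i in range(n)]

def clip_to_definition(params, lower, upper):
    """Enforce human-defined bounds (e.g. force or velocity limits)."""
    return [min(max(p, lo), hi) for p, lo, hi in zip(params, lower, upper)]

def refine_from_evaluation(params, evaluate, iterations=50, step=0.05):
    """Simple hill climbing on a scalar human evaluation score."""
    best, best_score = params, evaluate(params)
    for _ in range(iterations):
        candidate = [p + random.uniform(-step, step) for p in best]
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

if __name__ == "__main__":
    demos = [[0.9, 1.2], [1.1, 0.8], [1.0, 1.0]]
    params = fit_from_demonstrations(demos)
    params = clip_to_definition(params, lower=[0.0, 0.0], upper=[1.5, 1.5])
    params = refine_from_evaluation(params, evaluate=lambda p: -abs(p[0] - 1.05))
```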
Recovering from External Disturbances in Online Manipulation through State-Dependent Revertive Recovery Policies
Robots are increasingly entering uncertain and unstructured environments.
Within these, robots are bound to face unexpected external disturbances like
accidental human or tool collisions. Robots must develop the capacity to
respond to unexpected events: not only identifying the sudden anomaly, but
also deciding how to handle it. In this work, we contribute a recovery
policy that allows a robot to recover from various anomalous scenarios across
different tasks and conditions in a consistent and robust fashion. The system
organizes tasks as a sequence of nodes composed of internal modules such as
motion generation and introspection. When an introspection module flags an
anomaly, the recovery strategy is triggered and reverts the task execution by
selecting a target node as a function of a state dependency chart. The new
skill allows the robot to overcome the effects of the external disturbance and
conclude the task. Our system recovers from accidental human and tool
collisions in a number of tasks. Importantly, we test the robustness of the
recovery system by triggering anomalies at each node in the task graph,
showing robust recovery everywhere in the task. We also trigger multiple and
repeated anomalies at each of the nodes of the task, showing that the
recovery system can consistently recover anywhere in the
presence of strong and pervasive anomalous conditions. Robust recovery systems
will be key enablers for long-term autonomy in robot systems. Supplemental
information including code, data, graphs, and result analysis can be found at [1].
Comment: 8 pages, 8 figures, 1 table
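The node-and-revert mechanism described above can be pictured with a small sketch: the task is a sequence of nodes, a hypothetical state dependency chart maps the node where an anomaly is flagged to the node execution should revert to, and a loop resumes from that target node. The node names and the chart are invented for illustration and are not the authors' code.

```python
# Illustrative sketch of a state-dependent revertive recovery policy.
TASK_NODES = ["home", "approach", "pick", "transport", "place"]

# Hypothetical state dependency chart: anomaly at key -> revert to value.
REVERT_TO = {
    "approach": "home",
    "pick": "approach",
    "transport": "pick",      # e.g. object knocked away: re-pick it
    "place": "transport",
}

def execute_node(node, introspect):
    """Run one node; 'introspect' flags an anomaly for that node."""
    print(f"executing {node}")
    return not introspect(node)          # True = nominal completion

def run_task(introspect, max_recoveries=10):
    i, recoveries = 0, 0
    while i < len(TASK_NODES):
        node = TASK_NODES[i]
        if execute_node(node, introspect):
            i += 1                        # nominal: advance to next node
        else:
            recoveries += 1
            if recoveries > max_recoveries:
                raise RuntimeError("recovery budget exhausted")
            target = REVERT_TO.get(node, "home")
            print(f"anomaly at {node}, reverting to {target}")
            i = TASK_NODES.index(target)  # revert and resume from target
    print("task concluded")

if __name__ == "__main__":
    # Inject a single anomaly during 'transport' to exercise the recovery path.
    flagged = {"count": 0}
    def introspect(node):
        if node == "transport" and flagged["count"] == 0:
            flagged["count"] += 1
            return True
        return False
    run_task(introspect)
```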
Teaching robots parametrized executable plans through spoken interaction
While operating in domestic environments, robots will necessarily
face difficulties not envisioned by their developers at programming
time. Moreover, the tasks to be performed by a robot will often
have to be specialized and/or adapted to the needs of specific users
and specific environments. Hence, learning how to operate by interacting
with the user seems to be a key enabling feature to support the
introduction of robots in everyday environments.
In this paper we contribute a novel approach for learning, through
the interaction with the user, task descriptions that are defined as a
combination of primitive actions. The proposed approach takes
a significant step forward by making task descriptions parametric
with respect to domain-specific semantic categories. Moreover, by
mapping the task representation into a task representation language,
we are able to express complex execution paradigms and to revise
the learned tasks in a high-level fashion. The approach is evaluated
in multiple practical applications with a service robot.
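As a sketch of what such a parametrized task description might look like, the example below combines primitive actions whose arguments are semantic categories and then grounds them with concrete objects at execution time. The primitives, categories, and the bring_drink task are illustrative assumptions, not the representation language used in the paper.

```python
# Minimal sketch of a task description that is parametric over semantic
# categories; names are made up for illustration.
from dataclasses import dataclass

@dataclass
class Primitive:
    name: str       # e.g. "goto", "grasp", "handover"
    parameter: str  # a semantic category, e.g. "Room", "Beverage", "Person"

@dataclass
class TaskDescription:
    name: str
    steps: list     # ordered list of Primitive

    def instantiate(self, bindings):
        """Ground the semantic categories with concrete objects."""
        return [(p.name, bindings[p.parameter]) for p in self.steps]

# "Bring a drink": parametric over which beverage and which person.
bring_drink = TaskDescription(
    name="bring_drink",
    steps=[
        Primitive("goto", "Room"),
        Primitive("grasp", "Beverage"),
        Primitive("handover", "Person"),
    ],
)

plan = bring_drink.instantiate(
    {"Room": "kitchen", "Beverage": "orange_juice", "Person": "guest_1"}
)
# plan -> [("goto", "kitchen"), ("grasp", "orange_juice"), ("handover", "guest_1")]
```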
A Survey of Knowledge Representation in Service Robotics
Within the realm of service robotics, researchers have placed a great amount
of effort into learning, understanding, and representing motions as
manipulations for task execution by robots. The task of robot learning and
problem-solving is very broad, as it integrates a variety of tasks such as
object detection, activity recognition, task/motion planning, localization,
knowledge representation and retrieval, and the intertwining of
perception/vision and machine learning techniques. In this paper, we focus
solely on knowledge representations and notably on how knowledge is typically
gathered, represented, and reproduced to solve problems, as done by researchers
over the past decades. In accordance with the definition of knowledge
representations, we discuss the key distinction between such representations
and useful learning models that have extensively been introduced and studied in
recent years, such as machine learning, deep learning, probabilistic modelling,
and semantic graphical structures. Along with an overview of such tools, we
discuss the problems that have existed in robot learning and the solutions,
technologies, or developments (if any) that have contributed to solving them.
Finally, we discuss key principles that should be considered when designing an
effective knowledge representation.
Comment: Accepted for RAS Special Issue on Semantic Policy and Action Representations for Autonomous Robots - 22 Pages
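For readers unfamiliar with the representations surveyed, a toy example of a semantic graphical structure may help: a pouring manipulation encoded as typed nodes and relations, plus one retrieval query over it. The schema is purely illustrative and not taken from the survey.

```python
# Toy semantic graph for a pouring manipulation: typed nodes plus relations.
knowledge_graph = {
    "nodes": {
        "pour":   {"type": "action"},
        "cup":    {"type": "object", "affords": ["contain", "grasp"]},
        "kettle": {"type": "object", "affords": ["pour-from", "grasp"]},
        "water":  {"type": "substance"},
    },
    "edges": [
        ("pour", "source", "kettle"),
        ("pour", "target", "cup"),
        ("pour", "transfers", "water"),
    ],
}

def objects_affording(graph, affordance):
    """Retrieve objects by affordance, a typical query over such a representation."""
    return [n for n, attrs in graph["nodes"].items()
            if affordance in attrs.get("affords", [])]

print(objects_affording(knowledge_graph, "grasp"))   # ['cup', 'kettle']
```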
Data-driven learning for robot physical intelligence
Physical intelligence, which emphasizes physical capabilities such as dexterous manipulation and dynamic mobility, is essential for robots to physically coexist with humans. Much research on robot physical intelligence has achieved success on hyper robot motor capabilities, but mostly through heavily case-specific engineering. Meanwhile, in terms of robots acquiring skills in a ubiquitous manner, robot learning from human demonstration (LfD) has achieved great progress, but still has limitations in handling dynamic skills and compound actions. In this dissertation, a composite learning scheme which goes beyond LfD and integrates robot learning from human definition, demonstration, and evaluation is proposed. This method tackles advanced motor skills that require dynamic, time-critical maneuvers, complex contact control, and handling partly soft, partly rigid objects. In addition, the power of crowdsourcing is brought in to tackle the case-specific engineering problem in robot physical intelligence. Crowdsourcing has demonstrated great potential in the recent development of artificial intelligence. Constant learning from a large group of human mentors breaks the limit of learning from one or a few mentors in individual cases, and has achieved success in image recognition, translation, and many other cyber applications. A robot learning scheme that allows a robot to synthesize new physical skills using knowledge acquired from crowdsourced human mentors is proposed. The work is expected to provide a long-term and large-scale measure to produce advanced robot physical intelligence.
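One simple way to picture the crowdsourcing idea is as reliability-weighted aggregation of skill parameters proposed by many human mentors. The sketch below shows such an aggregation under that assumption; it is illustrative only and not the dissertation's actual pipeline.

```python
# Hedged sketch: synthesize one skill parameter set from many crowdsourced
# mentor suggestions, weighting each mentor by a reliability score.
def aggregate_crowdsourced_skill(suggestions, reliabilities):
    """
    suggestions: list of parameter vectors, one per mentor
    reliabilities: one non-negative weight per mentor
    Returns the reliability-weighted mean parameter vector.
    """
    total = sum(reliabilities)
    dim = len(suggestions[0])
    return [
        sum(w * s[i] for s, w in zip(suggestions, reliabilities)) / total
        for i in range(dim)
    ]

# Three mentors propose stiffness/velocity gains; the second is most trusted.
skill = aggregate_crowdsourced_skill(
    suggestions=[[200.0, 0.4], [250.0, 0.5], [400.0, 0.9]],
    reliabilities=[1.0, 2.0, 0.5],
)
print(skill)   # a single synthesized parameter set
```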
NASA Center for Intelligent Robotic Systems for Space Exploration
NASA's program for the civilian exploration of space is a challenge to scientists and engineers to help maintain and further develop the United States' position of leadership in a focused sphere of space activity. Such an ambitious plan requires the contribution and further development of many scientific and technological fields. One research area essential for the success of these space exploration programs is Intelligent Robotic Systems. These systems represent a class of autonomous and semi-autonomous machines that can perform human-like functions with or without human interaction. They are fundamental for activities too hazardous for humans or too distant or complex for remote telemanipulation. To meet this challenge, Rensselaer Polytechnic Institute (RPI) has established an Engineering Research Center for Intelligent Robotic Systems for Space Exploration (CIRSSE). The Center was created with a five-year, $5.5 million NASA grant based on a proposal submitted by a team from the Robotics and Automation Laboratories. The Robotics and Automation Laboratories of RPI are the result of the 1987 merger of the Robotics and Automation Laboratory of the Department of Electrical, Computer, and Systems Engineering (ECSE) and the Research Laboratory for Kinematics and Robotic Mechanisms of the Department of Mechanical Engineering, Aeronautical Engineering, and Mechanics (ME,AE,&M). This report is an examination of the activities that are centered at CIRSSE.
A Method for Learning a Petri Net Model Based on Region Theory
The deployment of robots in real-life applications is growing. For better control and analysis of robots, modeling and learning are hot topics in the field. This paper proposes a method for learning a Petri net model from a limited number of robot attempts. The method can supplement the information obtained from the robot system and then derive an accurate Petri net based on region theory. We take the building-block world as an example to illustrate the presented method and prove its validity with two theorems. Moreover, the method described in this paper has been implemented in a program and tested on a set of examples. The results of the experiments show that our algorithm is feasible and effective.
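To fix notation for the kind of model being learned, the sketch below defines a small Petri net (places, transitions, pre/post arcs, and a marking) with block-world-style names and fires two transitions. The region-theory synthesis step itself is not reproduced here, and all names are illustrative.

```python
# Minimal Petri net structure: places, transitions, pre/post arcs, marking.
class PetriNet:
    def __init__(self, places, transitions, pre, post, marking):
        self.places = places            # e.g. {"on_table", "held", "stacked"}
        self.transitions = transitions  # e.g. {"pick", "stack"}
        self.pre = pre                  # transition -> places it consumes from
        self.post = post                # transition -> places it produces to
        self.marking = dict(marking)    # place -> token count

    def enabled(self, t):
        return all(self.marking.get(p, 0) >= 1 for p in self.pre[t])

    def fire(self, t):
        if not self.enabled(t):
            raise ValueError(f"{t} is not enabled")
        for p in self.pre[t]:
            self.marking[p] -= 1
        for p in self.post[t]:
            self.marking[p] = self.marking.get(p, 0) + 1

# Block-world-style example: pick a block from the table, then stack it.
net = PetriNet(
    places={"on_table", "held", "stacked"},
    transitions={"pick", "stack"},
    pre={"pick": ["on_table"], "stack": ["held"]},
    post={"pick": ["held"], "stack": ["stacked"]},
    marking={"on_table": 1},
)
net.fire("pick")
net.fire("stack")
print(net.marking)   # {'on_table': 0, 'held': 0, 'stacked': 1}
```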
Integration and coordination in a cognitive vision system
In this paper, we present a case study that exemplifies
general ideas of system integration and coordination.
The application field of assistant technology provides an
ideal test bed for complex computer vision systems including
real-time components, human-computer interaction, dynamic
3-d environments, and information retrieval aspects.
In our scenario the user is wearing an augmented reality device
that supports her/him in everyday tasks by presenting
information that is triggered by perceptual and contextual
cues. The system integrates a wide variety of visual functions
like localization, object tracking and recognition, action
recognition, interactive object learning, etc. We show
how different kinds of system behavior are realized using
the Active Memory Infrastructure, which provides the technical
basis for distributed computation and a data- and event-driven
integration approach.
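The data- and event-driven integration style can be pictured with a small publish/subscribe sketch: components insert typed entries into a shared memory and subscribed components are notified immediately. This is a generic illustration, not the Active Memory Infrastructure's actual API.

```python
# Generic sketch of an "active memory": a shared store plus event notification.
from collections import defaultdict

class ActiveMemory:
    def __init__(self):
        self._store = defaultdict(list)        # data type -> list of entries
        self._subscribers = defaultdict(list)  # data type -> callbacks

    def subscribe(self, data_type, callback):
        self._subscribers[data_type].append(callback)

    def insert(self, data_type, entry):
        self._store[data_type].append(entry)
        for cb in self._subscribers[data_type]:
            cb(entry)                           # event-driven notification

memory = ActiveMemory()

# A recognition component reacts whenever the tracker publishes an object.
memory.subscribe("tracked_object", lambda e: print("recognize:", e["label"]))

# The tracker inserts a percept; the subscriber fires immediately.
memory.insert("tracked_object", {"label": "cup", "pose": (0.4, 0.1, 0.8)})
```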