Supervised Autonomous Locomotion and Manipulation for Disaster Response with a Centaur-like Robot
Mobile manipulation is one of the key challenges in the field of
search and rescue (SAR) robotics, requiring robots with flexible locomotion and
manipulation abilities. Since the tasks are mostly unknown in advance, the
robot has to adapt to a wide variety of terrains and workspaces during a
mission. The centaur-like robot Centauro has a hybrid legged-wheeled base and
an anthropomorphic upper body to carry out complex tasks in environments too
dangerous for humans. Due to its high number of degrees of freedom, controlling
the robot with direct teleoperation approaches is challenging and exhausting.
Supervised autonomy approaches promise to increase the quality and speed of
control while retaining the flexibility to solve unknown tasks. We developed a
set of operator assistance functionalities with different levels of autonomy to
control the robot for challenging locomotion and manipulation tasks. The
integrated system was evaluated in disaster response scenarios and showed
promising performance.Comment: In Proceedings of IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS), Madrid, Spain, October 201
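The abstract does not give implementation details for its assistance functionalities. Purely as an illustration, a minimal sketch of how command arbitration across autonomy levels might look (all names, levels, and the blending scheme are hypothetical, not taken from the paper):

```python
from enum import Enum

class AutonomyLevel(Enum):
    DIRECT_TELEOPERATION = 0   # operator commands the robot directly
    ASSISTED = 1               # operator commands are corrected by assistance
    SUPERVISED_AUTONOMY = 2    # robot executes primitives after operator approval

def select_command(level, operator_cmd, autonomous_cmd, operator_approved):
    """Arbitrate between operator and autonomous velocity commands."""
    if level is AutonomyLevel.DIRECT_TELEOPERATION:
        return operator_cmd
    if level is AutonomyLevel.ASSISTED:
        # blend: operator stays in the loop, assistance nudges the command
        return [0.5 * o + 0.5 * a for o, a in zip(operator_cmd, autonomous_cmd)]
    # supervised autonomy: the autonomous command runs only once approved
    return autonomous_cmd if operator_approved else [0.0] * len(autonomous_cmd)
```

The higher the level, the less bandwidth the operator needs, which is the motivation the abstract gives for moving beyond direct teleoperation.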
Semantic Robot Programming for Goal-Directed Manipulation in Cluttered Scenes
We present the Semantic Robot Programming (SRP) paradigm as a convergence of
robot programming by demonstration and semantic mapping. In SRP, a user can
directly program a robot manipulator by demonstrating a snapshot of their
intended goal scene in workspace. The robot then parses this goal as a scene
graph comprised of object poses and inter-object relations, assuming known
object geometries. Task and motion planning is then used to realize the user's
goal from an arbitrary initial scene configuration. Even when faced with
different initial scene configurations, SRP enables the robot to seamlessly
adapt to reach the user's demonstrated goal. For scene perception, we propose
the Discriminatively-Informed Generative Estimation of Scenes and Transforms
(DIGEST) method to infer the initial and goal states of the world from RGBD
images. The efficacy of SRP with DIGEST perception is demonstrated for the task
of tray-setting with a Michigan Progress Fetch robot. Scene perception and task
execution are evaluated with a public household occlusion dataset and our
cluttered scene dataset.
Comment: published in ICRA 201
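The scene-graph representation described above (object poses plus inter-object relations, with the goal reached when the demonstrated relations hold) can be sketched in a few lines. This is a hedged illustration of the idea only; the class and relation names are hypothetical and not DIGEST's actual data structures:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class SceneObject:
    name: str
    pose: tuple  # (x, y, z) position; orientation omitted for brevity

@dataclass
class SceneGraph:
    objects: dict = field(default_factory=dict)  # name -> SceneObject
    relations: set = field(default_factory=set)  # (relation, a, b) triples

    def add(self, obj):
        self.objects[obj.name] = obj

    def relate(self, relation, a, b):
        self.relations.add((relation, a, b))

def goal_satisfied(current: SceneGraph, goal: SceneGraph) -> bool:
    """The demonstrated goal holds when every goal relation is in the scene."""
    return goal.relations <= current.relations
```

Because the goal is a set of relations rather than absolute poses, the same demonstration generalizes across different initial scene configurations, which is the adaptivity the abstract emphasizes.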
Symbol Emergence in Robotics: A Survey
Humans can learn the use of language through physical interaction with their
environment and semiotic communication with other people. It is very important
to obtain a computational understanding of how humans can form a symbol system
and obtain semiotic skills through their autonomous mental development.
Recently, many studies have been conducted on the construction of robotic
systems and machine-learning methods that can learn the use of language through
embodied multimodal interaction with their environment and other systems.
Understanding human social interactions and developing a robot that can
smoothly communicate with human users over the long term requires an
understanding of the dynamics of symbol systems. The
embodied cognition and social interaction of participants gradually change a
symbol system in a constructive manner. In this paper, we introduce a field of
research called symbol emergence in robotics (SER). SER is a constructive
approach towards an emergent symbol system. The emergent symbol system is
socially self-organized through both semiotic communications and physical
interactions with autonomous cognitive developmental agents, i.e., humans and
developmental robots. Specifically, we describe some state-of-the-art research
topics concerning SER, e.g., multimodal categorization, word discovery, and a
double articulation analysis, that enable a robot to obtain words and their
embodied meanings from raw sensorimotor information, including visual
information, haptic information, auditory information, and acoustic speech
signals, in a totally unsupervised manner. Finally, we suggest future
directions of research in SER.
Comment: submitted to Advanced Robotics
A Whole-Body Pose Taxonomy for Loco-Manipulation Tasks
Exploiting interaction with the environment is a promising and powerful way
to enhance stability of humanoid robots and robustness while executing
locomotion and manipulation tasks. Recently, some works have started to show
advances in this direction by considering humanoid locomotion with multi-contacts,
but to be able to fully develop such abilities in a more autonomous way, we
need to first understand and classify the variety of possible poses a humanoid
robot can achieve to balance. To this end, we propose the adaptation of a
successful idea widely used in the field of robot grasping to the field of
humanoid balance with multi-contacts: a whole-body pose taxonomy classifying
the set of whole-body robot configurations that use the environment to enhance
stability. We have revised the classification criteria used to develop grasping
taxonomies, focusing on structuring and simplifying the large number of
possible poses the human body can adopt. We propose a taxonomy with 46 poses,
containing three main categories, considering number and type of supports as
well as possible transitions between poses. The taxonomy induces a
classification of motion primitives based on the pose used for support, and a
set of rules to store and generate new motions. We present preliminary results
that apply known segmentation techniques to motion data from the KIT whole-body
motion database. Using motion capture data with multi-contacts, we can identify
support poses providing a segmentation that can distinguish between locomotion
and manipulation parts of an action.
Comment: 8 pages, 7 figures, 1 table with a full-page figure that appears on a
landscape page; 2015 IEEE/RSJ International Conference on Intelligent Robots
and Systems
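The core idea above, labeling each instant of a motion by its support pose and segmenting the motion into runs of constant support, can be sketched simply. The category names and contact-link naming convention below are hypothetical placeholders, not the paper's actual 46-pose taxonomy:

```python
def classify_support_pose(contacts):
    """Coarse support-pose class from the set of links currently in contact,
    e.g. {"left_foot", "right_hand"}. Categories loosely mirror a count-based
    taxonomy: feet only, feet plus hands, and everything else."""
    feet = {c for c in contacts if "foot" in c}
    hands = {c for c in contacts if "hand" in c}
    if feet and not hands:
        return "standing"        # locomotion-like support
    if feet and hands:
        return "hand-supported"  # multi-contact balance using the environment
    return "other"

def segment_by_support(contact_sequence):
    """Segment a motion into (label, length) runs of constant support pose."""
    segments = []
    for contacts in contact_sequence:
        label = classify_support_pose(contacts)
        if segments and segments[-1][0] == label:
            segments[-1][1] += 1
        else:
            segments.append([label, 1])
    return [(label, length) for label, length in segments]
```

A change of support class marks a segment boundary, which is how a support-pose taxonomy can separate locomotion phases from manipulation phases in captured motion data.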
Towards Autonomous Selective Harvesting: A Review of Robot Perception, Robot Design, Motion Planning and Control
This paper provides an overview of the current state-of-the-art in selective
harvesting robots (SHRs) and their potential for addressing the challenges of
global food production. SHRs have the potential to increase productivity,
reduce labour costs, and minimise food waste by selectively harvesting only
ripe fruits and vegetables. The paper discusses the main components of SHRs,
including perception, grasping, cutting, motion planning, and control. It also
highlights the challenges in developing SHR technologies, particularly in the
areas of robot design, motion planning, and control. It further discusses
the potential benefits of integrating AI, soft robotics, and data-driven
methods to enhance the performance and robustness of SHR systems. Finally, the
paper identifies several open research questions in the field and highlights
the need for further research and development efforts to advance SHR
technologies to meet the challenges of global food production. Overall, this
paper provides a starting point for researchers and practitioners interested in
developing SHRs and highlights the need for more research in this field.
Comment: Preprint, to appear in the Journal of Field Robotics
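The component pipeline the review describes (perception, target selection, motion planning, grasping, cutting) can be outlined as one selection step. This is an illustrative sketch under assumed interfaces; the function, threshold, and detection format are hypothetical, not from any surveyed system:

```python
def harvest_step(detections, ripeness_threshold=0.8):
    """One cycle of a selective-harvesting sketch: take (fruit_id, ripeness)
    pairs from a perception module and return the ids selected for harvesting,
    most ripe first. Planning, grasping, and cutting are left as placeholders."""
    targets = sorted(
        (d for d in detections if d[1] >= ripeness_threshold),
        key=lambda d: d[1],
        reverse=True,
    )
    plan = []
    for fruit_id, _score in targets:
        # placeholder: motion planning, grasp execution, and cutting go here
        plan.append(fruit_id)
    return plan
```

The thresholding step is what makes the harvesting selective: unripe produce is left on the plant, which is how SHRs reduce food waste relative to bulk harvesting.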