Analyzing Whole-Body Pose Transitions in Multi-Contact Motions
When executing whole-body motions, humans are able to use a large variety of
support poses which utilize not only the feet, but also the hands, knees and
elbows to enhance stability. While there are many works analyzing the
transitions involved in walking, very few analyze human motion where more
complex supports occur.
In this work, we analyze complex support pose transitions in human motion
involving locomotion and manipulation tasks (loco-manipulation). We have
applied a method for the detection of human support contacts from motion
capture data to a large-scale dataset of loco-manipulation motions involving
multi-contact supports, providing a semantic representation of them. Our
results provide a statistical analysis of the support poses used, their
transitions, and the time spent in each of them. In addition, our data
partially validates the taxonomy of whole-body support poses presented in our
previous work.
We believe that this work extends our understanding of human motion for
humanoids, with a long-term objective of developing methods for autonomous
multi-contact motion planning.

Comment: 8 pages, IEEE-RAS International Conference on Humanoid Robots
(Humanoids) 201
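The statistical analysis the abstract describes (support pose counts, transitions between poses, and the time spent in each) can be sketched as below. This is a hypothetical illustration of such bookkeeping over a per-frame label sequence, not the authors' actual detection pipeline; the function name and pose labels are invented for the example.

```python
from collections import Counter

def transition_stats(pose_sequence):
    """Count support-pose transitions and dwell times (in frames)
    from a sequence of per-frame support-pose labels."""
    transitions = Counter()   # (from_pose, to_pose) -> count
    dwell = Counter()         # pose -> total frames spent in it
    run = 1
    for prev, cur in zip(pose_sequence, pose_sequence[1:]):
        if cur == prev:
            run += 1          # still in the same support pose
        else:
            transitions[(prev, cur)] += 1
            dwell[prev] += run
            run = 1
    dwell[pose_sequence[-1]] += run   # close out the final run
    return transitions, dwell

# Example: stand on feet, add a hand contact, release it again
seq = ["feet", "feet", "feet+hand", "feet+hand", "feet+hand", "feet"]
transitions, dwell = transition_stats(seq)
```

Aggregating these counters over a whole dataset would yield the kind of transition statistics and dwell-time distributions the abstract reports.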
Grasping bulky objects with two anthropomorphic hands
© 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

This paper presents an algorithm to compute precision grasps for bulky objects using two anthropomorphic hands. We use objects modeled as point clouds obtained from a sensor camera or from a CAD model. We then process the point clouds, dividing them into two sets of slices in which we look for triplets of points. Each triplet must satisfy some physical conditions based on the structure of the hands. The triplets of points from each set of slices are then evaluated to find a combination that satisfies the force closure condition (FC). Once a valid pair of triplets has been found, the inverse kinematics of the system is computed in order to determine whether the corresponding points are reachable by the hands; if so, motion planning and a collision check are performed to assess whether the final grasp configuration of the system is suitable. The paper includes some application examples of the proposed approach.
Managing a Fleet of Autonomous Mobile Robots (AMR) using Cloud Robotics Platform
In this paper, we provide details of implementing a system for managing a
fleet of autonomous mobile robots (AMRs) operating in factory or warehouse
premises. While the robots are themselves autonomous in their motion and
obstacle avoidance, the target destination for each robot is provided by a
global planner. The global planner and the ground vehicles (robots) constitute
a multi-agent system (MAS) which communicates over a wireless network. Three
different approaches are explored for the implementation. The first two
approaches make use of the distributed-computing-based networked robotics
architecture and communication framework of the Robot Operating System (ROS)
itself, while the third approach uses the Rapyuta Cloud Robotics framework.
The comparative performance of these approaches is analyzed through simulation
as well as real-world experiments with actual robots. These analyses provide
an in-depth understanding of the inner workings of the Cloud Robotics Platform
in contrast to the usual ROS framework. The insight gained through this
exercise will be valuable for students as well as practicing engineers
interested in implementing similar systems elsewhere. In the process, we also
identify a few critical limitations of the current Rapyuta platform and
provide suggestions to overcome them.

Comment: 14 pages, 15 figures, journal paper
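The global planner's role of handing each robot a target destination can be sketched as a greedy nearest-robot assignment. This is a hypothetical minimal example of the planner side only; the function and data shapes are invented, and a real fleet manager would publish these goals to the AMRs over ROS topics or the cloud platform's messaging layer.

```python
import math

def assign_tasks(robots, tasks):
    """Greedy global planner sketch: assign each pending task to the
    nearest still-idle robot.
    robots: dict robot_id -> (x, y) current position
    tasks:  dict task_id  -> (x, y) goal position
    Returns dict task_id -> robot_id."""
    assignments = {}
    free = dict(robots)                       # robots not yet assigned
    for task_id, goal in tasks.items():
        if not free:
            break                             # more tasks than robots
        nearest = min(free, key=lambda r: math.dist(free[r], goal))
        assignments[task_id] = nearest
        del free[nearest]
    return assignments
```

Greedy assignment is simple but order-dependent; a production planner might instead solve a global assignment problem (e.g. Hungarian algorithm) to minimize total travel distance.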
On Neuromechanical Approaches for the Study of Biological Grasp and Manipulation
Biological and robotic grasp and manipulation are undeniably similar at the
level of mechanical task performance. However, their underlying fundamental
biological vs. engineering mechanisms are, by definition, dramatically
different and can even be antithetical. Even our approach to each is
diametrically opposite: inductive science for the study of biological systems
vs. engineering synthesis for the design and construction of robotic systems.
The past 20 years have seen several conceptual advances in both fields and the
quest to unify them. Chief among them is the reluctant recognition that their
underlying fundamental mechanisms may actually share limited common ground,
while exhibiting many fundamental differences. This recognition is particularly
liberating because it allows us to resolve and move beyond multiple paradoxes
and contradictions that arose from the initial reasonable assumption of a large
common ground. Here, we begin by introducing the perspective of neuromechanics,
which emphasizes that real-world behavior emerges from the intimate
interactions among the physical structure of the system, the mechanical
requirements of a task, the feasible neural control actions to produce it, and
the ability of the neuromuscular system to adapt through interactions with the
environment. This allows us to articulate a succinct overview of a few salient
conceptual paradoxes and contradictions regarding under-determined vs.
over-determined mechanics, under- vs. over-actuated control, prescribed vs.
emergent function, learning vs. implementation vs. adaptation, prescriptive vs.
descriptive synergies, and optimal vs. habitual performance. We conclude by
presenting open questions and suggesting directions for future research. We
hope this frank assessment of the state-of-the-art will encourage and guide
these communities to continue to interact and make progress in these important
areas.
Shared Autonomy via Hindsight Optimization
In shared autonomy, user input and robot autonomy are combined to control a
robot to achieve a goal. Often, the robot does not know a priori which goal the
user wants to achieve, and must both predict the user's intended goal, and
assist in achieving that goal. We formulate the problem of shared autonomy as a
Partially Observable Markov Decision Process with uncertainty over the user's
goal. We utilize maximum entropy inverse optimal control to estimate a
distribution over the user's goal based on the history of inputs. Ideally, the
robot assists the user by solving for an action which minimizes the expected
cost-to-go for the (unknown) goal. As solving the POMDP to select the optimal
action is intractable, we use hindsight optimization to approximate the
solution. In a user study, we compare our method to a standard
predict-then-blend approach. We find that our method enables users to
accomplish tasks more quickly while utilizing less input. However, when asked
to rate each system, users were mixed in their assessment, citing a tradeoff
between maintaining control authority and accomplishing tasks quickly.
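The hindsight optimization step described above can be sketched in QMDP style: assume the goal uncertainty resolves after the current step, so the robot simply picks the action minimizing expected cost-to-go under its current belief over goals. The function name and dictionary layout below are illustrative assumptions, not the paper's implementation.

```python
def hindsight_action(belief, q_values, actions):
    """QMDP-style hindsight optimization sketch.
    belief:   dict goal -> probability (e.g. from max-entropy IOC
              over the history of user inputs)
    q_values: dict (goal, action) -> cost-to-go assuming that goal
              is the user's true goal
    Returns the action minimizing expected cost-to-go."""
    def expected_cost(action):
        return sum(p * q_values[(goal, action)]
                   for goal, p in belief.items())
    return min(actions, key=expected_cost)

# Two candidate goals; "left" is cheap if goal A is correct.
belief = {"A": 0.7, "B": 0.3}
q = {("A", "left"): 1.0, ("A", "right"): 5.0,
     ("B", "left"): 4.0, ("B", "right"): 2.0}
```

This approximation avoids solving the full POMDP: each per-goal MDP cost-to-go can be computed independently, and the belief only enters through the weighted sum at action-selection time.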