Cooperative SLAM-based object transportation by two humanoid robots in a cluttered environment
In this work, we tackle the problem of making two humanoid robots navigate in a cluttered environment while transporting a very large object that simply cannot be moved by a single robot. We present a complete navigation scheme, from the incremental construction of a map of the environment and the computation of collision-free trajectories to the control needed to execute those trajectories. We present experiments conducted on real Nao robots, equipped with RGB-D sensors mounted on their heads, moving an object around obstacles. Our experiments show that a significantly large object can be transported without changing the robots' main hardware, thereby demonstrating the capability of humanoid robots in real-life situations.
Using humanoid robots to study human behavior
Our understanding of human behavior advances as our humanoid robotics work progresses, and vice versa. This team's work focuses on trajectory formation and planning, learning from demonstration, oculomotor control and interactive behaviors. They are programming robotic behavior based on how we humans "program" behavior in, or train, each other.
Learning Task Constraints from Demonstration for Hybrid Force/Position Control
We present a novel method for learning hybrid force/position control from
demonstration. We learn a dynamic constraint frame aligned to the direction of
desired force using Cartesian Dynamic Movement Primitives. In contrast to
approaches that utilize a fixed constraint frame, our approach easily
accommodates tasks with rapidly changing task constraints over time. We
activate only one degree of freedom for force control at any given time,
ensuring motion is always possible orthogonal to the direction of desired
force. Since we utilize demonstrated forces to learn the constraint frame, we
are able to compensate for forces not detected by methods that learn only from
the demonstrated kinematic motion, such as frictional forces between the
end-effector and the contact surface. We additionally propose novel extensions
to the Dynamic Movement Primitive (DMP) framework that encourage robust
transition from free-space motion to in-contact motion in spite of environment
uncertainty. We incorporate force feedback and a dynamically shifting goal to
reduce forces applied to the environment and retain stable contact while
enabling force control. Our methods exhibit low impact forces on contact and
low steady-state tracking error. Comment: Under review.
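The force-feedback goal shift described in this abstract can be illustrated with a minimal sketch: a 1-D point attractor (a DMP transformation system without the forcing term) pressing against a simulated stiff surface, with an assumed goal-update rule. The gains, surface stiffness, and update law here are illustrative choices, not the paper's.

```python
import numpy as np

def dmp_contact_with_goal_shift(x0=0.05, g=-0.02, f_des=5.0,
                                k_env=5000.0, K=100.0, D=20.0,
                                k_shift=0.0005, dt=0.002, steps=3000):
    """1-D critically damped attractor pressing on a stiff surface at x = 0.
    The goal is shifted in proportion to the contact-force error, so the
    system settles at the desired force rather than at the original
    (penetrating) goal. All constants are illustrative assumptions."""
    x, v, g_eff = x0, 0.0, g
    f_sensed = 0.0
    for _ in range(steps):
        pen = max(0.0, -x)             # penetration into the surface
        f_sensed = k_env * pen         # simulated contact force
        # assumed update rule: retract the goal when the force is too high
        g_eff += k_shift * (f_sensed - f_des) * dt
        a = K * (g_eff - x) - D * v    # critically damped attractor
        v += a * dt
        x += v * dt
    return x, f_sensed
```

In this sketch the contact force converges to f_des (here 5 N) because the goal drifts until the force error vanishes, which is the mechanism the abstract describes for retaining stable contact with low applied force.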
Intuitive Instruction of Industrial Robots : A Knowledge-Based Approach
With more advanced manufacturing technologies, small and medium sized enterprises can compete with low-wage labor by providing customized and high quality products. For small production series, robotic systems can provide a cost-effective solution. However, for robots to be able to perform on par with human workers in manufacturing industries, they must become flexible and autonomous in their task execution and swift and easy to instruct. This will enable small businesses with short production series or highly customized products to use robot coworkers without consulting expert robot programmers. The objective of this thesis is to explore programming solutions that can reduce the programming effort of sensor-controlled robot tasks. The robot motions are expressed using constraints, and multiple simple constrained motions can be combined into a robot skill. The skill can be stored in a knowledge base together with a semantic description, which enables reuse and reasoning. The main contributions of the thesis are 1) development of ontologies for knowledge about robot devices and skills, 2) a user interface that provides simple programming of dual-arm skills for non-experts and experts, 3) a programming interface for task descriptions in unstructured natural language in a user-specified vocabulary and 4) an implementation where low-level code is generated from the high-level descriptions. The resulting system greatly reduces the number of parameters exposed to the user, is simple to use for non-experts and reduces the programming time for experts by 80%. The representation is described on a semantic level, which means that the same skill can be used on different robot platforms. The research is presented in seven papers, the first describing the knowledge representation and the second the knowledge-based architecture that enables skill sharing between robots.
The third paper presents the translation from high-level instructions to low-level code for force-controlled motions. The following two papers evaluate the simplified programming prototype for non-expert and expert users. The last two present how program statements are extracted from unstructured natural language descriptions.
A survey of robot manipulation in contact
In this survey, we present the current status of robots performing manipulation tasks that require varying contact with the environment, such that the robot must either implicitly or explicitly control the contact force with the environment to complete the task. Robots can perform a growing share of the manipulation tasks that are still done by humans, and there is a growing number of publications on the topics of (1) performing tasks that always require contact and (2) mitigating uncertainty by leveraging the environment in tasks that, under perfect information, could be performed without contact. Recent trends have seen robots perform tasks previously left to humans, such as massage, while in classical tasks, such as peg-in-hole, there is more efficient generalization to other similar tasks, better error tolerance, and faster planning or learning. Thus, in this survey we cover the current state of robots performing such tasks, starting by surveying the different in-contact tasks robots can perform, then observing how these tasks are controlled and represented, and finally presenting the learning and planning of the skills required to complete them.
DMPs-based skill learning for redundant dual-arm robotic synchronized cooperative manipulation
Dual-arm robot manipulation is applicable to many domains, such as industrial, medical, and home-service settings. Learning from demonstration (LfD) is a highly effective paradigm for robotic learning, in which a robot learns directly from human actions and can then act autonomously in new tasks, avoiding complicated analytical motion programming. However, the learned skills are not easy to generalize to new cases with special constraints, such as varying limits on the relative distance between the robot's end effectors during human-like cooperative manipulation. In this paper, we propose a dynamic movement primitives (DMPs) based skill-learning framework for redundant dual-arm robots. The method adds a coupling acceleration term to the DMP function, inspired by the transient performance control of Barrier Lyapunov Functions (BLFs). The coupling acceleration term is computed from the constant joint-distance and varying relative-distance limits of the end effectors during object-approaching actions. In addition, we integrate the generated actions in joint space with the redundancy resolution of the dual-arm robot to complete a human-like manipulation. Simulations in MATLAB and Gazebo environments confirm the effectiveness of the proposed method.
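The effect of a barrier-style coupling acceleration can be sketched with two 1-D point attractors standing in for the two arms' DMPs. The barrier term grows without bound as the relative-distance error approaches its limit, keeping the pair inside the allowed tube; the gains and the exact barrier form are assumptions, not the paper's formulation.

```python
import numpy as np

def dual_attractors_with_coupling(x0, goals, d_des, e_max,
                                  K=25.0, D=10.0, k_c=1.0,
                                  dt=0.002, steps=4000):
    """Two 1-D point attractors with a barrier-style coupling
    acceleration that keeps the relative-distance error
    e = (x1 - x2) - d_des inside (-e_max, e_max), in the spirit of
    the BLF-inspired coupling term. Gains are illustrative."""
    x = np.array(x0, dtype=float)
    v = np.zeros(2)
    goals = np.array(goals, dtype=float)
    max_abs_e = 0.0
    for _ in range(steps):
        e = (x[0] - x[1]) - d_des
        max_abs_e = max(max_abs_e, abs(e))
        # barrier-style coupling: grows without bound as |e| -> e_max
        c = k_c * e / (e_max**2 - e**2)
        a = K * (goals - x) - D * v
        a[0] -= c                      # push the pair back toward d_des
        a[1] += c
        v += a * dt
        x += v * dt
    return x, max_abs_e
```

With goals 1.0 apart but a desired separation of 0.6 and a tolerance of 0.1, the two attractors settle with a separation inside (0.5, 0.7): the coupling overrides the individual goal attractors to respect the relative-distance limit throughout the motion.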
Learning by Demonstration and Robust Control of Dexterous In-Hand Robotic Manipulation Skills
Dexterous robotic manipulation of unknown objects can open the way to novel tasks and applications of robots in semi-structured and unstructured settings, from advanced industrial manufacturing to exploration of harsh environments. However, it is challenging for at least three reasons: the desired motion of the object might be too complex to describe analytically; precise models of the manipulated objects are not available; and the controller must simultaneously ensure both a robust grasp and an effective in-hand motion. To address these issues, we propose to learn in-hand robotic manipulation tasks from human demonstrations, using Dynamical Movement Primitives (DMPs), and to reproduce them with a robust compliant controller based on the Virtual Springs Framework (VSF), which employs real-time feedback of the contact forces measured at the robot's fingertips. With this solution, the generalization capabilities of DMPs transfer successfully to the dexterous in-hand manipulation problem: we demonstrate this with real-world experiments of in-hand translation and rotation of unknown objects.
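The virtual-spring idea can be sketched as one spring per fingertip pulling toward a virtual anchor (e.g. a point inside the grasped object), with the commanded force saturated to keep the grasp compliant. The spring form and the saturation rule here are assumptions for illustration, not the VSF paper's exact formulation.

```python
import numpy as np

def virtual_spring_force(x_tip, x_anchor, k=200.0, f_max=8.0):
    """Commanded fingertip force of one hypothetical 'virtual spring':
    proportional to the offset from a virtual anchor point, with the
    magnitude clipped at f_max so contact stays compliant."""
    f = k * (np.asarray(x_anchor, dtype=float) - np.asarray(x_tip, dtype=float))
    n = float(np.linalg.norm(f))
    return f if n <= f_max else f * (f_max / n)
```

A controller built this way regulates grasp forces by moving the anchors rather than commanding forces directly, which is what lets measured fingertip forces be fed back in real time.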
Learning Task Priorities from Demonstrations
Bimanual operations in humanoids offer the possibility to carry out more than
one manipulation task at the same time, which in turn introduces the problem of
task prioritization. We address this problem from a learning from demonstration
perspective, by extending the Task-Parameterized Gaussian Mixture Model
(TP-GMM) to Jacobian and null space structures. The proposed approach is tested
on bimanual skills but can be applied in any scenario where the prioritization
between potentially conflicting tasks needs to be learned. We evaluate the
proposed framework in: two different tasks with humanoids requiring the
learning of priorities and a loco-manipulation scenario, showing that the
approach can be exploited to learn the prioritization of multiple tasks in
parallel. Comment: Accepted for publication in IEEE Transactions on Robotics.
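The fixed two-task priority structure that the TP-GMM extension learns to modulate can be sketched with the classic null-space projection for differential inverse kinematics; the learning of when and how priorities apply, which is the paper's contribution, is not shown here.

```python
import numpy as np

def prioritized_dq(J1, dx1, J2, dx2):
    """Two-task prioritized differential IK: the primary task (J1, dx1)
    is met exactly when feasible, and the secondary task (J2, dx2) acts
    only in the null space of the primary Jacobian."""
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1          # null-space projector
    dq1 = J1_pinv @ dx1                              # primary solution
    # secondary velocity, corrected for what the primary already achieves
    dq2 = np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ dq1)
    return dq1 + N1 @ dq2
```

When the two tasks conflict, the secondary one is sacrificed; learning the Jacobian and null-space structures, as in the abstract, amounts to learning which task takes the role of J1 at each moment.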