Motion primitive based random planning for loco-manipulation tasks
Several advanced control laws are available for complex robotic systems such as humanoid robots and mobile manipulators. Controllers are usually developed either for locomotion or for manipulation, and the resulting motions are executed sequentially, so the full potential of the robotic platform is not exploited. In this work we consider the problem of loco-manipulation planning for a robot with given parametrized control laws known as primitives. Such primitives may not have been designed to be executed simultaneously, and composing them can easily lead to instability. With the proposed approach, primitive combinations that guarantee stability of the system are obtained, resulting in complex whole-body behavior. A formal definition of motion primitives is provided, and a random sampling approach on a manifold of limited dimension is investigated. Probabilistic completeness and asymptotic optimality are also proved. The proposed approach is tested both on a mobile manipulator and on the humanoid robot Walk-Man, performing loco-manipulation tasks.
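The random-sampling idea above can be sketched in a few lines. This is a minimal illustration, not the paper's method: it assumes a hypothetical `is_stable` oracle and a box-shaped parameter space standing in for the limited-dimension manifold.

```python
import random

def sample_primitive_combination(primitives, is_stable, max_iters=1000, seed=0):
    # Draw random parameter vectors for the primitives until a
    # combination passes the (hypothetical) stability check.
    rng = random.Random(seed)
    for _ in range(max_iters):
        candidate = {name: rng.uniform(lo, hi)
                     for name, (lo, hi) in primitives.items()}
        if is_stable(candidate):
            return candidate
    return None  # no stable combination found within the budget

# Toy usage: two primitives, "stable" when their gains sum below 1.
prims = {"walk_gain": (0.0, 1.0), "reach_gain": (0.0, 1.0)}
combo = sample_primitive_combination(
    prims, lambda c: c["walk_gain"] + c["reach_gain"] < 1.0)
```

The stability oracle is where the real work lives; the sampler itself only needs to draw from the reduced parameter manifold.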
Hierarchical Planning and Control for Box Loco-Manipulation
Humans perform everyday tasks using a combination of locomotion and manipulation skills. Building a system that can handle both skills is essential to creating virtual humans. We present a physically-simulated human capable of solving box rearrangement tasks, which requires a combination of both skills. We propose a hierarchical control architecture, where each level solves the task at a different level of abstraction, and the result is a physics-based simulated virtual human capable of rearranging boxes in a cluttered environment. The control architecture integrates a planner, diffusion models, and physics-based motion imitation of sparse motion clips using deep reinforcement learning. Boxes can vary in size, weight, shape, and placement height. Code and trained control policies are provided.
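The three-level structure described above can be caricatured as follows. Everything here is a stand-in (greedy assignment for the planner, linear interpolation for the diffusion model, waypoint-following for motion imitation); it only illustrates how each level hands its result to the next.

```python
box_positions = {"A": (0.0, 0.0), "B": (2.0, 1.0)}

def assign_boxes(boxes, slots):
    # Top level (planner stand-in): greedily pair each box with a slot.
    return list(zip(boxes, slots))

def waypoints(start, goal, steps=4):
    # Mid level (stand-in for the diffusion model): straight-line
    # interpolation between 2-D positions.
    return [tuple(s + (g - s) * t / steps for s, g in zip(start, goal))
            for t in range(steps + 1)]

def execute(plan):
    # Low level (stand-in for physics-based motion imitation): walk the
    # waypoints and record each box's final position.
    final = {}
    for box, slot in plan:
        path = waypoints(box_positions[box], slot)
        final[box] = path[-1]
    return final

plan = assign_boxes(["A", "B"], [(3.0, 3.0), (0.0, 2.0)])
result = execute(plan)
```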
Versatile Multi-Contact Planning and Control for Legged Loco-Manipulation
Loco-manipulation planning skills are pivotal for expanding the utility of robots in everyday environments. These skills can be assessed based on a system's ability to coordinate complex holistic movements and multiple contact interactions when solving different tasks. However, existing approaches have merely been able to shape such behaviors with hand-crafted state machines, densely engineered rewards, or pre-recorded expert demonstrations. Here, we propose a minimally-guided framework that automatically discovers whole-body trajectories jointly with contact schedules for solving general loco-manipulation tasks in pre-modeled environments. The key insight is that multi-modal problems of this nature can be formulated and treated within the context of integrated Task and Motion Planning (TAMP). An effective bilevel search strategy is achieved by incorporating domain-specific rules and adequately combining the strengths of different planning techniques: trajectory optimization and informed graph search coupled with sampling-based planning. We showcase emergent behaviors for a quadrupedal mobile manipulator exploiting both prehensile and non-prehensile interactions to perform real-world tasks such as opening/closing heavy dishwashers and traversing spring-loaded doors. These behaviors are also deployed on the real system using a two-layer whole-body tracking controller.
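The bilevel idea, a discrete outer search over contact schedules scored by a continuous inner optimizer, can be sketched as below. Both layers are toy stand-ins (the real framework uses trajectory optimization and informed graph search, not enumeration); the names and cost model are assumptions for illustration only.

```python
import itertools

MODES = ("stand", "push", "grasp")

def inner_cost(schedule):
    # Stand-in for trajectory optimization: return a cost when the
    # schedule is feasible (here: no immediately repeated contact mode),
    # or None when infeasible.
    if any(a == b for a, b in zip(schedule, schedule[1:])):
        return None
    return len(schedule) + schedule.count("grasp")  # prefer few grasps

def outer_search(horizon):
    # Discrete layer: enumerate contact schedules that end in "grasp",
    # score each with the continuous layer, keep the best.
    best, best_cost = None, float("inf")
    for schedule in itertools.product(MODES, repeat=horizon):
        if schedule[-1] != "grasp":
            continue
        cost = inner_cost(schedule)
        if cost is not None and cost < best_cost:
            best, best_cost = schedule, cost
    return best, best_cost

schedule, cost = outer_search(3)
```

The point of the bilevel split is that the discrete layer never needs gradients and the continuous layer never needs to reason about mode sequences.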
Deep Imitation Learning for Humanoid Loco-manipulation through Human Teleoperation
We tackle the problem of developing humanoid loco-manipulation skills with deep imitation learning. The difficulty of collecting task demonstrations and training policies for humanoids with a high degree of freedom presents substantial challenges. We introduce TRILL, a data-efficient framework for training humanoid loco-manipulation policies from human demonstrations. In this framework, we collect human demonstration data through an intuitive Virtual Reality (VR) interface. We employ the whole-body control formulation to transform task-space commands by human operators into the robot's joint-torque actuation while stabilizing its dynamics. By employing high-level action abstractions tailored for humanoid loco-manipulation, our method can efficiently learn complex sensorimotor skills. We demonstrate the effectiveness of TRILL in simulation and on a real-world robot for performing various loco-manipulation tasks. Videos and additional materials can be found on the project page: https://ut-austin-rpl.github.io/TRILL. Comment: Submitted to Humanoids 202
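The task-space-to-joint-torque mapping mentioned above is commonly done through the manipulator Jacobian, tau = J(q)^T f. The sketch below uses an illustrative planar 2-link arm, not TRILL's actual whole-body formulation.

```python
import math

def jacobian_2link(q1, q2, l1=1.0, l2=1.0):
    # Geometric Jacobian of a planar 2-link arm (illustrative model).
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def task_force_to_torque(q, f):
    # tau = J(q)^T f maps a desired task-space force to joint torques.
    J = jacobian_2link(*q)
    return [J[0][0] * f[0] + J[1][0] * f[1],
            J[0][1] * f[0] + J[1][1] * f[1]]

# Push the end-effector straight up with the arm stretched out.
tau = task_force_to_torque((0.0, 0.0), (0.0, 1.0))
```

With the arm horizontal, an upward unit force loads the shoulder twice as much as the elbow, as the lever arms (2.0 and 1.0) dictate.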
Active Vision for Scene Understanding
Visual perception is one of the most important sources of information for both humans and robots. A particular challenge is the acquisition and interpretation of complex unstructured scenes. This work contributes to active vision for humanoid robots. A semantic model of the scene is created, which is extended by successively changing the robot's view in order to explore interaction possibilities of the scene
Exploring with Sticky Mittens: Reinforcement Learning with Expert Interventions via Option Templates
Long-horizon robot learning tasks with sparse rewards pose a significant challenge for current reinforcement learning algorithms. A key feature enabling humans to learn challenging control tasks is that they often receive expert intervention, which lets them understand the high-level structure of the task before mastering low-level control actions. We propose a framework for leveraging expert intervention to solve long-horizon reinforcement learning tasks. We consider option templates, which are specifications encoding a potential option that can be trained using reinforcement learning. We formulate expert intervention as allowing the agent to execute option templates before learning an implementation. This enables the agent to use an option before committing costly resources to learning it. We evaluate our approach on three challenging reinforcement learning problems, showing that it outperforms state-of-the-art approaches by two orders of magnitude
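The option-template idea, executing the expert's implementation of an option until a learned policy replaces it, can be sketched as below. The interface is a hypothetical simplification, not the paper's formal specification.

```python
class OptionTemplate:
    # An option template: the agent may call the expert's implementation
    # until its own learned policy exists.
    def __init__(self, name, expert_execute):
        self.name = name
        self.expert_execute = expert_execute
        self.learned_policy = None  # filled in later by RL training

    def execute(self, state):
        policy = self.learned_policy or self.expert_execute
        return policy(state)

# Toy grid task: the expert intervention moves the agent straight to
# the key, so higher-level learning can proceed before low-level
# control of "fetch_key" is mastered.
fetch_key = OptionTemplate("fetch_key", lambda s: {**s, "agent": s["key"]})
state = {"agent": (0, 0), "key": (4, 2)}
state = fetch_key.execute(state)
```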
A novel framework to improve motion planning of robotic systems through semantic knowledge-based reasoning
The need to improve motion planning techniques for manipulator robots, and new effective strategies to manipulate different objects to perform more complex tasks, is crucial for various real-world applications where robots cooperate with humans. This paper proposes a novel framework that aims to improve the motion planning of a robotic agent (a manipulator robot) through semantic knowledge-based reasoning. The Semantic Web Rule Language (SWRL) was used to infer new knowledge based on the known environment and the robotic system. Ontological knowledge, e.g., semantic maps, was generated through a deep neural network trained to detect and classify objects in the environment where the robotic agent performs. Manipulation constraints were deduced, and the environment corresponding to the agent's manipulation workspace was created so the planner could interpret it to generate a collision-free path. For reasoning with the ontology, different SPARQL queries were used. The proposed framework was implemented and validated in a real experimental setup, using the planning framework ROSPlan to perform the planning tasks. The proposed framework proved to be a promising strategy to improve motion planning of robotic systems, showing the benefits of artificial intelligence for knowledge representation and reasoning in robotics.
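The rule-based inference described above can be illustrated with a tiny forward-chaining loop over triples. The rule below is an assumption for illustration, not the paper's actual SWRL ontology: on(x, y) AND reachable(y) => manipulable(x).

```python
def forward_chain(triples):
    # Repeatedly apply the (assumed) rule until no new facts appear,
    # mimicking how an SWRL reasoner derives manipulation constraints.
    derived = set(triples)
    changed = True
    while changed:
        changed = False
        for s, p, o in list(derived):
            if p == "on" and (o, "is", "reachable") in derived:
                fact = (s, "is", "manipulable")
                if fact not in derived:
                    derived.add(fact)
                    changed = True
    return derived

# Toy knowledge base: a cup on a reachable table.
kb = {("cup", "on", "table"), ("table", "is", "reachable")}
facts = forward_chain(kb)
```

A real system would instead query the ontology (e.g. with SPARQL) and hand the derived constraints to the planner; the fixed-point loop is the essential mechanism.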