10 research outputs found

    Optimal control goal manifolds for planar nonprehensile throwing

    Abstract: This paper presents a throwing motion planner based on a goal manifold for a two-point boundary value problem. The article outlines algorithmic and geometric issues in planar throwing of rigid objects with a nonprehensile end-effector. Special attention is paid to the challenge of controlling a desired six-dimensional object state with a planar 3-DoF robot. Contact modeling is discussed using a state vector of the coupled robot and object dynamics. Robustness against uncertainty due to varying model parameters, such as object inertia and friction between the end-effector and the object, is investigated. An approach for obtaining manifolds of terminal constraints from the goal configuration is described, and a classification of these constraints is given. Finally, conditions on feasible trajectory generation for successful execution of the computed optimal controls are discussed.
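As a rough illustration of the goal-manifold idea (a hedged sketch under simplifying assumptions, not the paper's formulation), the set of planar release states that carry a point object to a target landing point under gravity alone can be parameterised by flight time: each admissible flight time yields one point on a one-dimensional manifold of terminal constraints. The dynamics, parameters, and function names below are illustrative.

```python
# Minimal sketch (assumptions, not the paper's formulation): the set of planar
# release states (x, z, vx, vz) that carry a point object to a target landing
# point under gravity alone, parameterised by flight time T. Each admissible T
# gives one point on a 1-D manifold of terminal (release) constraints.
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def release_state_for(target, release_pos, T):
    """Release velocity so that a ballistic flight of duration T ends at `target`."""
    px, pz = release_pos
    tx, tz = target
    vx = (tx - px) / T
    vz = (tz - pz) / T + 0.5 * G * T
    return np.array([px, pz, vx, vz])

def goal_manifold(target, release_pos, T_range=(0.2, 1.0), n=20):
    """Sample the family of admissible release states, one per flight time."""
    return np.array([release_state_for(target, release_pos, T)
                     for T in np.linspace(*T_range, n)])

if __name__ == "__main__":
    states = goal_manifold(target=(2.0, 0.0), release_pos=(0.0, 1.0))
    print(states[:3])  # a few points on the release-state manifold
```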

    Nonprehensile Dynamic Manipulation: A Survey

    Nonprehensile dynamic manipulation can reasonably be considered the most complex manipulation task, and it is arguably still far from being fully solved and applied in robotics. This survey collects the results reached so far by the research community on planning and control in the nonprehensile dynamic manipulation domain. A discussion of current open issues is included as well.

    Versatile Multi-Contact Planning and Control for Legged Loco-Manipulation

    Loco-manipulation planning skills are pivotal for expanding the utility of robots in everyday environments. These skills can be assessed based on a system's ability to coordinate complex holistic movements and multiple contact interactions when solving different tasks. However, existing approaches have only been able to shape such behaviors with hand-crafted state machines, densely engineered rewards, or pre-recorded expert demonstrations. Here, we propose a minimally guided framework that automatically discovers whole-body trajectories jointly with contact schedules for solving general loco-manipulation tasks in pre-modeled environments. The key insight is that multi-modal problems of this nature can be formulated and treated within the context of integrated Task and Motion Planning (TAMP). An effective bilevel search strategy is achieved by incorporating domain-specific rules and adequately combining the strengths of different planning techniques: trajectory optimization and informed graph search coupled with sampling-based planning. We showcase emergent behaviors for a quadrupedal mobile manipulator exploiting both prehensile and non-prehensile interactions to perform real-world tasks such as opening/closing heavy dishwashers and traversing spring-loaded doors. These behaviors are also deployed on the real system using a two-layer whole-body tracking controller.
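The bilevel search described above can be sketched, in heavily simplified form, as an outer informed graph search over discrete contact/mode sequences with an inner trajectory-optimisation call scoring each candidate. The mode set, cost, and heuristic below are hypothetical placeholders, not the authors' implementation.

```python
# Hedged sketch of a bilevel TAMP-style search: an outer informed graph search
# over discrete contact/mode sequences, with an inner trajectory-optimisation
# call scoring each candidate. All names and the toy cost are illustrative.
import heapq
import itertools

MODES = ["reach", "push", "grasp", "release"]  # hypothetical contact modes

def trajectory_cost(sequence):
    """Stand-in for the inner trajectory optimiser: returns a scalar cost
    (or None if the sequence is not yet a terminal, feasible plan)."""
    if sequence and sequence[-1] == "release":
        return len(sequence)          # toy: shorter feasible plans are better
    return None

def heuristic(sequence):
    return 0 if sequence and sequence[-1] == "release" else 1

def bilevel_search(max_depth=4):
    counter = itertools.count()       # tie-breaker for the priority queue
    frontier = [(heuristic(()), next(counter), ())]
    while frontier:
        _, _, seq = heapq.heappop(frontier)
        cost = trajectory_cost(list(seq))
        if cost is not None:
            return list(seq), cost    # first feasible plan found by the search
        if len(seq) < max_depth:
            for mode in MODES:
                new = seq + (mode,)
                heapq.heappush(frontier,
                               (len(new) + heuristic(new), next(counter), new))
    return None, None

if __name__ == "__main__":
    print(bilevel_search())
```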

    What is Robotics: Why Do We Need It and How Can We Get It?

    Robotics is an emerging synthetic science concerned with programming work. Robot technologies are quickly advancing beyond the insights of the existing science. More secure intellectual foundations will be required to achieve better, more reliable, and safer capabilities as their penetration into society deepens. Presently missing foundations include the identification of fundamental physical limits, the development of new dynamical systems theory, and the invention of physically grounded programming languages. The new discipline needs a departmental home in the universities, which it can justify both intellectually and by its capacity to attract new, diverse populations inspired by the age-old human fascination with robots. For more information: Kod*la

    Dyadic collaborative manipulation formalism for optimizing human-robot teaming

    Dyadic Collaborative Manipulation (DcM) is a term we use to refer to a team of two individuals, the agent and the partner, jointly manipulating an object. The two individuals partner together to form a distributed system, augmenting their manipulation abilities. Effective collaboration between the two individuals during joint action depends on: (i) the breadth of the agent's action repertoire, (ii) the level of model acquaintance between the two individuals, (iii) the ability to adapt one's own actions online to the actions of the partner, and (iv) the ability to estimate the partner's intentions and goals. Key to the successful completion of co-manipulation tasks with changing goals is the agent's ability to change grasp-holds, especially in large-object co-manipulation scenarios. Hence, in this work we developed a Trajectory Optimization (TO) method to enhance the repertoire of actions of robotic agents by enabling them to plan and execute hybrid motions, i.e., motions that include discrete contact transitions, continuous trajectories, and force profiles. The effectiveness of the TO method is investigated numerically and in simulation, in a number of manipulation scenarios with both a single-arm and a bimanual robot. Transitions from free motion to contact are a challenging problem in robotics, in part due to their hybrid nature, and disregarding the effects of impacts at the motion planning level often results in intractable impulsive contact forces. To address this challenge, we introduce an impact-aware multi-mode TO method that combines hybrid dynamics and hybrid control in a coherent fashion. A key concept in our approach is the incorporation of an explicit contact force transmission model into the TO method. This allows the simultaneous optimization of the contact forces, contact timings, continuous motion trajectories, and compliance, while satisfying task constraints. To demonstrate the benefits of our method, we compared it against standard compliance control and an impact-agnostic TO method in physical simulations. We also experimentally validated the proposed method with a robot manipulator on the task of halting a large-momentum object. Further, we propose a principled formalism to address the joint planning problem in DcM scenarios and solve the joint problem holistically via model-based optimization by representing the human's behavior as task-space forces. The task of finding the partner-aware contact points, forces, and the respective timing of grasp-hold changes is carried out by a TO method using non-linear programming. Using simulations, the capability of the optimization method is investigated in terms of robot policy changes (trajectories, timings, grasp-holds) in response to potential changes of the collaborative partner's policies. We also realized, in hardware, effective co-manipulation of a large object by the human and the robot, including eminent grasp changes as well as optimal dyadic interactions to realize the joint task. To address the online adaptation challenge of joint motion plans in dyads, we propose an efficient bilevel formulation that combines graph search methods with trajectory optimization, enabling robotic agents to adapt their policy on the fly in accordance with changes of the dyadic task.
This method is the first to empower agents with the ability to plan online in hybrid spaces, optimizing over discrete contact locations, contact sequence patterns, continuous trajectories, and force profiles for co-manipulation tasks. This is particularly important in large-object co-manipulation tasks that require on-the-fly plan adaptation. We demonstrate the efficacy of the bilevel optimization in simulation and with robot experiments by investigating the effect of robot policy changes in response to real-time alterations of the goal. This thesis provides insight into joint manipulation setups performed by human-robot teams. In particular, it studies computational models of joint action and exploits the uncharted hybrid action space, which is especially relevant in general manipulation and co-manipulation tasks. It contributes towards developing a framework for DcM that is capable of planning motions in the contact-force space, realizing these motions while considering impacts and joint-action relations, and adapting these motion plans on the fly with respect to changes of the co-manipulation goals.
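As a toy illustration of trading off contact timing against contact force, in the spirit of the impact-aware optimisation described above but not the thesis' actual formulation, the sketch below chooses a contact start time and a constant force that halt a moving object before a deadline while penalising large forces. The model, parameters, and solver choice are assumptions.

```python
# Toy contact-timing/force trade-off (illustrative only): choose the contact
# start time t_c and a constant contact force F that halt an object of mass m
# moving at v0 before a deadline T, while keeping the contact force small.
import numpy as np
from scipy.optimize import minimize

m, v0, T = 5.0, 2.0, 1.0          # mass [kg], initial speed [m/s], deadline [s]

def objective(x):
    t_c, F = x
    return F**2                    # penalise large contact forces

def stop_by_deadline(x):
    t_c, F = x
    return T - (t_c + m * v0 / F)  # >= 0: object stops before the deadline

res = minimize(objective, x0=[0.1, 20.0],
               bounds=[(0.0, T), (1e-3, None)],
               constraints=[{"type": "ineq", "fun": stop_by_deadline}])
t_c, F = res.x
print(f"contact at t={t_c:.2f}s with force F={F:.1f}N")  # expect F near m*v0/T
```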

    Robust physics-based robotic manipulation in real-time

    This thesis presents planners and controllers for robust physics-based manipulation in real-time. By physics-based manipulation, I refer to manipulation tasks where a physics model is required to predict the consequences of robot actions, for example, when a robot pushes obstacles aside in a fridge to retrieve an object behind them. There are two major problems with physics-based planning using traditional techniques. First, uncertainty, both in physics predictions and in state estimation, can result in the failure of many physics-based plans when executed in the real world. Second, the computational expense of making physics-based predictions can make planning slow and can be a major bottleneck for real-time control. I address both of these problems in this thesis. To address uncertainty, first, I present an online re-planning algorithm based on trajectory optimization. It reacts, in real-time, to changes in physics predictions to successfully complete a manipulation task. Second, some open-loop physics-based plans succeed in the real world under uncertainty. How can one generate such robust open-loop plans with guarantees? I provide conditions for robustness in the real world based on contraction theory, and I present a robust planner and controller that autonomously switch between robust open-loop execution and closed-loop control to complete a manipulation task. Third, a robot can be optimistic in the face of uncertainty and adapt its actions to the accuracy requirements of a task. I present such a task-adaptive planner that embraces uncertainty, pushing fast for easy tasks and slowly for more difficult ones. To address the problem of computationally expensive physics-based predictions, I present learned and analytical coarse physics models for single- and multi-object manipulation. They are cheap to compute but can be inaccurate. Fine physics models, on the other hand, provide the best predictions but are computationally expensive. I present algorithms that combine coarse and fine physics models through parallel-in-time integration. The result is an orders-of-magnitude reduction in physics-based planning and control time.
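The coarse-plus-fine idea via parallel-in-time integration can be illustrated with a minimal Parareal-style iteration; the scalar dynamics and propagators below are stand-ins for the thesis' physics models, so this is a sketch of the general technique rather than the actual planner.

```python
# Minimal Parareal-style sketch of combining a cheap coarse model with an
# expensive fine model over a horizon. The models are illustrative stand-ins.
import numpy as np

def coarse_step(x, dt):            # cheap, slightly inaccurate propagator
    return x + dt * (-0.9 * x)     # approximate dynamics for x' = -x

def fine_step(x, dt, substeps=50): # expensive, accurate propagator
    h = dt / substeps
    for _ in range(substeps):
        x = x + h * (-1.0 * x)     # "true" dynamics x' = -x
    return x

def parareal(x0, dt, n_steps, n_iters):
    # initial guess from the coarse model alone
    X = [x0]
    for _ in range(n_steps):
        X.append(coarse_step(X[-1], dt))
    for _ in range(n_iters):
        # the fine corrections for each interval are independent and could run in parallel
        fine = [fine_step(X[k], dt) for k in range(n_steps)]
        coarse_old = [coarse_step(X[k], dt) for k in range(n_steps)]
        X_new = [x0]
        for k in range(n_steps):
            X_new.append(coarse_step(X_new[-1], dt) + fine[k] - coarse_old[k])
        X = X_new
    return np.array(X)

print(parareal(x0=1.0, dt=0.2, n_steps=5, n_iters=2)[-1])  # approaches exp(-1)
```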

    Compliant control of uni/multi-robotic arms with dynamical systems

    Accomplishment of many interactive tasks hinges on the compliance of humans. In everyday life, humans demonstrate an impressive capability of complying their behavior, and more particularly their motions, with the environment. In humans, compliance emerges from different facets. For example, many daily activities involve reaching and grasping tasks, where compliance appears in the form of coordination. Humans comply their hands' motions with each other and with that of the object not only to establish a stable contact and to control the impact force but also to overcome sensorimotor imprecisions. Even though compliance has been studied from different aspects in humans, it is primarily related to impedance control in robotics. In this thesis, we leverage the properties of autonomous dynamical systems (DS) for immediate re-planning and introduce active compliant motion generators for controlling robots in three different scenarios, where compliance does not necessarily mean impedance and hence is not directly related to control in the force/velocity domain. In the first part of the thesis, we propose an active compliant strategy for catching objects in flight, which is less sensitive to the timely control of the interception. The soft catching strategy consists in having the robot follow the object for a short period of time. This leaves more time for the fingers to close on the object at the interception and offers more robustness than a 'hard' catching method in which the hand waits for the object at the chosen interception point. We show theoretically that the resulting DS will intercept the object at the interception point, at the right time, and with the desired velocity direction. Stability and convergence of the approach are assessed through Lyapunov stability theory. In the second part, we propose a unified compliant control architecture for coordinately reaching for and grabbing a moving object with a multi-arm robotic system. Due to the complexity of the task and of the system, each arm complies not only with the object's motion but also with the motion of the other arms, in both task and joint spaces. At the task-space level, we propose a unified dynamical system that endows the multi-arm system with both synchronous and asynchronous behaviors and with the capability of smoothly transitioning between the two modes. At the joint-space level, the compliance between the arms is achieved by introducing a centralized inverse kinematics (IK) solver under self-collision avoidance constraints, formulated as a quadratic program (QP) and solved in real-time. In the last part, we propose a compliant dynamical system for stably transitioning from free motion to contact. By modulating the robot's velocity in three regions, we show theoretically and empirically that the robot can (I) stably touch the contact surface, (II) at a desired location, and (III) leave the surface or stop on it at a desired point.
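A minimal sketch of a velocity-modulated dynamical system for the free-motion-to-contact transition described in the last part: a linear DS attracts the end-effector to a point on a surface, and its normal velocity is scaled down inside an approach band so contact is made gently. The regions, gains, and geometry are illustrative assumptions, not the thesis' modulation law.

```python
# Hedged sketch of a velocity-modulated DS for free-motion-to-contact
# transitions. Regions, gains, and the surface model are illustrative choices.
import numpy as np

TARGET = np.array([0.5, 0.0])       # desired contact point on the surface z=0
APPROACH_BAND = 0.1                 # height below which the approach is slowed

def nominal_ds(x, gain=2.0):
    """Globally convergent linear DS: x_dot = -gain * (x - target)."""
    return -gain * (x - TARGET)

def modulated_ds(x):
    v = nominal_ds(x)
    z = x[1]                        # height above the surface
    if z <= 0.0:                    # region 3: in contact, stop normal motion
        v[1] = 0.0
    elif z < APPROACH_BAND:         # region 2: approach, scale normal velocity
        v[1] *= z / APPROACH_BAND
    return v                        # region 1: free motion, unmodified DS

# Forward-Euler rollout toward the surface
x, dt = np.array([0.0, 0.4]), 0.01
for _ in range(1000):
    x = x + dt * modulated_ds(x)
print(x)   # converges near the desired contact point [0.5, 0.0]
```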

    Manipulating Objects using Compliant, Unactuated Tails: Modeling and Planning

    Ropes and rope-like objects (e.g., chains, cords, lines, whips, or lassos) are comparatively cheap, simple, and useful in daily life. For a long time, humans have used such structures for manipulation tasks in qualitatively different ways, such as pulling, fastening, attaching, tying, knotting, and whipping. Nevertheless, these structures have received little attention in robotics. Because they are unactuated, such structures are regarded as difficult to model, plan for, and control. In this dissertation, we are interested in a mobile robot system using a flexible rope-like structure attached to its end, akin to a ‘tail’. Our goal is to investigate how mobile robots can use compliant, unactuated structures for various manipulation tasks. Robots that use a tail to manipulate objects face challenges in modeling and planning that involve behaviors, dynamics, and combinatorial optimization. In this dissertation, we propose several methods to deal with these difficulties of modeling and planning. In addition, we solve variants of object manipulation problems wherein multiple classes of objects are to be transported by multiple cooperative robots using ropes. First, we examine motion primitives, where the primitives are designed to simplify modeling and planning issues. We explore several sets of motion primitives, where each primitive contributes some aspect lacking in the others. These primitives are forward models of the system’s behavior that predict the position and orientation of the object being manipulated within the workspace. Then, to solve manipulation problems, we design a planner that seeks a sequence of motion primitives using a sampling-based motion planning approach coupled with a particle-based representation to treat error propagation of the motions. Our proposed planner is used to optimize motion sequences based on a specified preference over a set of objectives, such as execution time, navigation cost, or collision likelihood. The solutions deal with different preferences effectively, and we analyze the complementary nature of dynamic and quasi-static motions, showing that there exist regimes where transitions among them are indeed desirable, as reflected in the plans produced. Second, we explore a variety of interesting primitives that lead to new approaches for object manipulation problems. We examine ways two robots can join the ends of their tails so that a pair of conjoined robots can encircle objects, gaining the advantage of greater towing capacity when working cooperatively; individual robots, however, have the advantage of allowing greater concurrency when objects are distant from one another. We solve a new manipulation problem in which a collection of objects must be moved to goal locations by multiple robots that may form conjoined pairs. To maximize efficiency, the robots balance working as a tightly knit sub-team with individual operation. We develop heuristics that give satisfactory solutions in reasonable time. The results we report include data from physical robots executing plans produced by our planner, collecting objects both by individual action and by coupled-pair operation. We expect that our research results will help explain how a flexible, compliant appendage added to a robot can be useful for more than just agility.
The proposed techniques, which use simple motion models to characterize the complicated system dynamics, can contribute to several areas of robotics research: motion planning, minimalist manipulators, behavior-based control, and multi-robot coordination. In addition, we expect that the proposed methods can enhance the performance of various manipulation tasks, as well as efficient search, adaptive sampling, and coverage in unknown, unstructured environments.
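To give a flavour of planning over noisy motion primitives with particle-based error propagation (an illustrative sketch, not the dissertation's planner), the example below random-shoots short primitive sequences, propagates a particle set through each hypothetical forward model, and keeps the sequence whose particle mean lands closest to the goal.

```python
# Illustrative sketch: sampling over a small set of motion primitives, each a
# noisy forward model of the towed object's planar position, with a particle
# set propagated through each candidate sequence to track error accumulation.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical primitives: nominal displacement (dx, dy) and a noise scale.
PRIMITIVES = {
    "drag_forward": (np.array([0.5, 0.0]), 0.02),
    "swing_left":   (np.array([0.3, 0.3]), 0.05),
    "swing_right":  (np.array([0.3, -0.3]), 0.05),
}

def apply(primitive, particles):
    move, noise = PRIMITIVES[primitive]
    return particles + move + rng.normal(scale=noise, size=particles.shape)

def plan(goal, horizon=6, n_particles=100, n_rollouts=200):
    """Random-shooting search for the primitive sequence whose particle mean
    lands closest to the goal (the spread could be penalised as well)."""
    names = list(PRIMITIVES)
    best_seq, best_cost = None, np.inf
    for _ in range(n_rollouts):
        seq = rng.choice(names, size=horizon)
        particles = np.zeros((n_particles, 2))
        for p in seq:
            particles = apply(p, particles)
        cost = np.linalg.norm(particles.mean(axis=0) - goal)
        if cost < best_cost:
            best_seq, best_cost = list(seq), cost
    return best_seq, best_cost

print(plan(goal=np.array([2.0, 0.6])))
```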

    Robot manipulator skill learning and generalising through teleoperation

    Robot manipulators have been widely used for simple, repetitive, and accurate tasks in industrial plants, such as pick-and-place, assembly, and welding, but they are still hard to deploy in human-centred environments for dexterous manipulation tasks, such as medical examination and robot-assisted healthcare. These tasks relate not only to motion planning and control but also to the compliant interaction behaviour of robots, e.g., simultaneous motion control, force regulation, and impedance adaptation in dynamic and unknown environments. Recently, with the development of collaborative robots (cobots) and machine learning, robot skill learning and generalisation have attracted increasing attention from the robotics, machine learning, and neuroscience communities. Nevertheless, learning complex and compliant manipulation skills, such as manipulating deformable objects, scanning the human body, and folding clothes, is still challenging for robots. Teleoperation, also known as remote operation or telerobotics, has been studied since the 1950s, with applications such as space exploration, telemedicine, marine vehicles, and emergency response. One of its advantages is combining the precise control of robots with human intelligence to perform dexterous and safety-critical tasks from a distance. In addition, telepresence allows remote operators to feel the actual interaction between the robot and the environment, including vision, sound, and haptic feedback. Especially with the development of augmented reality (AR), virtual reality (VR), and wearable devices, intuitive and immersive teleoperation has received increasing attention from the robotics and computer science communities. Thus, various human-robot collaboration (HRC) interfaces based on these technologies have been developed to integrate robot control and telemanipulation by human operators so that robots can learn skills from human beings. In this context, robot skill learning can benefit teleoperation by automating repetitive and tedious tasks, and teleoperated demonstration and interaction by human teachers allow the robot to learn progressively and interactively. Therefore, in this dissertation, we study human-robot skill transfer and generalisation through intuitive teleoperation interfaces for contact-rich manipulation tasks, including medical examination, manipulating deformable objects, grasping soft objects, and composite layup in manufacturing. The motivation and objectives of this thesis are introduced in Chapter 1. In Chapter 2, a literature review on manipulation skill acquisition through teleoperation is carried out, and the motivation and objectives of this thesis are discussed further. Overall, the main content of this thesis has three parts. Part 1 (Chapter 3) introduces the development and controller design of teleoperation systems with multimodal feedback, which is the foundation of this project for robot learning from human demonstration and interaction. In Part 2 (Chapters 4, 5, 6 and 7), we study a primitive skill library, a behaviour-tree-based modular method, and a perception-enhanced method to improve the generalisation capability of learning from human demonstrations, and several applications are employed to evaluate the effectiveness of these methods. In Part 3 (Chapter 8), we study deep multimodal neural networks to encode manipulation skills, especially multimodal perception information.
This part includes physical experiments on robot-assisted ultrasound scanning applications. Chapter 9 summarises the contributions and potential directions of this thesis. Keywords: Learning from demonstration; Teleoperation; Multimodal interface; Human-in-the-loop; Compliant control; Human-robot interaction; Robot-assisted sonography.
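One common way to encode a demonstrated trajectory as a reusable skill in a primitive library is a dynamic movement primitive (DMP); the thesis may use a different representation, so the one-dimensional DMP below is an illustrative assumption rather than its actual skill encoding.

```python
# Minimal 1-D dynamic movement primitive (DMP) sketch for encoding a
# demonstrated trajectory as a reusable, goal-adaptable skill (illustrative).
import numpy as np

class DMP1D:
    def __init__(self, n_basis=20, alpha=25.0, beta=6.25, alpha_s=4.0):
        self.n, self.alpha, self.beta, self.alpha_s = n_basis, alpha, beta, alpha_s
        self.c = np.exp(-alpha_s * np.linspace(0, 1, n_basis))      # basis centres
        self.h = 1.0 / np.diff(self.c, append=self.c[-1] * 0.5)**2  # basis widths
        self.w = np.zeros(n_basis)

    def _features(self, s):
        psi = np.exp(-self.h * (s - self.c)**2)
        return psi * s / psi.sum()

    def fit(self, y, dt):
        """Learn the forcing term from a demonstrated trajectory y(t)."""
        self.y0, self.g, self.tau = y[0], y[-1], dt * (len(y) - 1)
        yd = np.gradient(y, dt)
        ydd = np.gradient(yd, dt)
        s = np.exp(-self.alpha_s * np.arange(len(y)) * dt / self.tau)
        f_target = self.tau**2 * ydd - self.alpha * (self.beta * (self.g - y) - self.tau * yd)
        Phi = np.array([self._features(si) for si in s])
        self.w = np.linalg.lstsq(Phi, f_target, rcond=None)[0]

    def rollout(self, dt, goal=None):
        """Reproduce the skill, optionally generalising to a new goal."""
        g = self.g if goal is None else goal
        y, yd, s, out = self.y0, 0.0, 1.0, []
        for _ in range(round(self.tau / dt)):
            f = self._features(s) @ self.w
            ydd = (self.alpha * (self.beta * (g - y) - self.tau * yd) + f) / self.tau**2
            yd += ydd * dt
            y += yd * dt
            s += -self.alpha_s * s / self.tau * dt
            out.append(y)
        return np.array(out)

# Encode a hypothetical demonstrated reach and replay it toward a new goal.
t = np.linspace(0, 1, 200)
demo = np.sin(0.5 * np.pi * t)
dmp = DMP1D(); dmp.fit(demo, dt=t[1] - t[0])
print(dmp.rollout(dt=t[1] - t[0], goal=2.0)[-1])   # ends near the new goal 2.0
```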