14 research outputs found

    Legged Robots for Object Manipulation: A Review

    Legged robots can have a unique role in manipulating objects in dynamic, human-centric, or otherwise inaccessible environments. Although most legged robotics research to date focuses on traversing these challenging environments, many legged platform demonstrations have also included "moving an object" as a way of doing tangible work. Legged robots can be designed to manipulate a particular type of object (e.g., a cardboard box, a soccer ball, or a larger piece of furniture), either by themselves or collaboratively. The objective of this review is to collect and learn from these examples, both to organize the work done so far in the community and to highlight interesting open avenues for future work. The review categorizes existing works into four main manipulation methods: object interactions without grasping, manipulation with walking legs, dedicated non-locomotive arms, and legged teams. Each method has different design and autonomy features, which are illustrated by available examples in the literature. Based on a few simplifying assumptions, we further provide quantitative comparisons of the possible relative sizes of the manipulated object with respect to the robot. Taken together, these examples suggest new directions for research in legged robot manipulation, such as multifunctional limbs, terrain modeling, or learning-based control, to support new deployments in challenging indoor/outdoor scenarios in warehouses and construction sites, in preserved natural areas, and especially in home robotics.
    Comment: Preprint of the paper submitted to Frontiers in Mechanical Engineering

    Introduction to the Special Issue on Aerial Manipulation

    The papers in this special section focus on aerial manipulation, intended as the grasping, positioning, assembly, and disassembly of mechanical parts, measurement instruments, and other objects, performed by a flying robot equipped with arms and grippers. Aerial manipulators can be helpful in industrial and service applications that are considered very dangerous for a human operator: think of tasks like the inspection of a bridge, the inspection and repair of high-voltage electric lines, or the repair of rotor blades. These tasks are both unsafe and expensive because they require professional climbers and/or specialists in the field. A drone with manipulation capabilities can instead assist the human operator in these jobs or, at least, in the most hazardous and critical situations. Such devices can operate in dangerous tasks, such as reaching the underside of a bridge deck or the highest points of a plant or a building, and can eliminate dangerous work at height; aerial platforms can also increase the total number of inspections of a plant, monitoring the wear of its components. Without doubt, aerial manipulation will improve the quality of work for many workers.

    Motion planning for manipulation and/or navigation tasks with emphasis on humanoid robots

    This thesis addresses the motion planning problem for various robotic platforms. This is a fundamental problem, and it is particularly challenging for humanoid robots for a number of reasons. The first is the high number of degrees of freedom. The second is that a humanoid robot is not a free-flying system in its configuration space: its motions must be generated appropriately. Finally, the implicit requirement that the robot maintain equilibrium, either static or dynamic, typically constrains the trajectory of the robot's center of mass. In particular, we are interested in problems in which the robot must execute a task, possibly requiring stepping, in environments cluttered with obstacles. To solve this problem, we propose offline probabilistic motion planning techniques such as Rapidly-exploring Random Trees (RRTs), which find a solution by means of a tree built in an appropriately defined configuration space. The novelty of the approach is that it does not separate locomotion from task execution; this makes it possible to generate whole-body movements while fulfilling the task. The task can be assigned as a trajectory or a single point in the task space, or even by combining tasks of different natures (e.g., manipulation and navigation tasks). The proposed method is also able to deform the task if the assigned one is too difficult to fulfill: it automatically detects when the task should be deformed and which kind of deformation to apply. However, there are situations, especially when robots and humans share the same workspace, in which the robot must be equipped with reactive capabilities (such as avoiding moving obstacles) that provide a basic level of safety. The final part of the thesis addresses the rearrangement planning problem, which is of interest for manipulation tasks in which the robot has to interact with objects in the environment. Roughly speaking, the goal is to plan the motion of a robot that is assigned a task (e.g., move a target object into a goal region) while being allowed to displace some of the movable objects in the environment. The problem is difficult because we must plan in continuous, high-dimensional state and action spaces, and the physical constraints induced by the nonprehensile interaction between the robot and the objects in the scene must be respected. Our insight is to embed physics models in the planning stage, allowing robot manipulation and simultaneous object interaction. Throughout the thesis, we evaluate the proposed planners through experiments on different robotic platforms.
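
    To make the planning primitive concrete, the following is a minimal, hedged sketch of the basic RRT loop the abstract refers to, written in Python for a generic box-shaped configuration space. The sampling bounds, goal bias, step size, and the is_collision_free callback are illustrative assumptions and do not reflect the whole-body, equilibrium-constrained planner developed in the thesis.

        import math
        import random

        def rrt(start, goal, is_collision_free, bounds, step=0.1, goal_tol=0.1, max_iters=5000):
            """Basic RRT in a d-dimensional box (illustrative only)."""
            nodes = [tuple(start)]
            parent = {0: None}
            for _ in range(max_iters):
                # Sample a random configuration, occasionally biased toward the goal.
                q_rand = tuple(goal) if random.random() < 0.05 else tuple(
                    random.uniform(lo, hi) for lo, hi in bounds)
                # Find the nearest node already in the tree.
                i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], q_rand))
                q_near = nodes[i_near]
                d = math.dist(q_near, q_rand)
                if d == 0.0:
                    continue
                # Steer from q_near toward q_rand by at most one step.
                alpha = min(step, d) / d
                q_new = tuple(a + alpha * (b - a) for a, b in zip(q_near, q_rand))
                if not is_collision_free(q_near, q_new):
                    continue
                nodes.append(q_new)
                parent[len(nodes) - 1] = i_near
                # Reconstruct the path once the goal region is reached.
                if math.dist(q_new, goal) < goal_tol:
                    path, i = [], len(nodes) - 1
                    while i is not None:
                        path.append(nodes[i])
                        i = parent[i]
                    return list(reversed(path))
            return None  # no path found within the iteration budget

        # Toy usage: a 2-D configuration space with one circular obstacle
        # (the collision check is applied only to segment endpoints for brevity).
        free = lambda q_from, q_to: math.dist(q_to, (0.5, 0.5)) > 0.2
        path = rrt((0.0, 0.0), (1.0, 1.0), free, bounds=[(0.0, 1.0), (0.0, 1.0)])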

    Modeling, analysis and control of robot-object nonsmooth underactuated Lagrangian systems: A tutorial overview and perspectives

    So-called robot-object Lagrangian systems are a class of nonsmooth, underactuated, complementarity Lagrangian systems with a specific structure: an "object" and a "robot", of which only the robot is actuated. The object dynamics can thus be controlled only through the action of the contact Lagrange multipliers, which represent the interaction forces between the robot and the object. Juggling, walking, running and hopping machines, robotic systems that manipulate objects, tapping and pushing systems, kinematic chains with joint clearance, crawling and climbing robots, some cable-driven manipulators, and some circuits with set-valued nonsmooth components all belong to this class. This article first presents their main features, then surveys application examples that belong to the robot-object class, and finally reviews the main tools and control strategies that have been proposed in the Automatic Control and Robotics literature. Some comments and open issues conclude the article.
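
    As a hedged illustration of the structure described above (a generic textbook-style form, not an equation reproduced from the article), such a robot-object system can be written with stacked coordinates q = (q_r, q_o), where only the robot block q_r receives the input u and the object block q_o is driven solely through the contact multipliers:

        \[
        M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q) =
        \begin{pmatrix} u \\ 0 \end{pmatrix} + \nabla h(q)\,\lambda ,
        \qquad 0 \le \lambda \;\perp\; h(q) \ge 0 ,
        \]

    Here h(q) >= 0 collects the gap functions between robot and object (a contact closes when the corresponding component vanishes), and the multipliers \lambda represent the contact forces, which are the only channel through which the unactuated coordinates q_o can be influenced.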

    Tactile Perception And Visuotactile Integration For Robotic Exploration

    As the close perceptual sibling of vision, the sense of touch has historically received less attention than it deserves in both human psychology and robotics. In robotics, this may be attributed to at least two reasons. First, touch sensing suffers from a vicious cycle: sensor technology is immature, so industry demand stays low, and there is then little incentive to make the sensors that exist in research labs easy to manufacture and market. Second, it stems from a fear of making contact with the environment: contact is avoided in every way so that visually perceived states do not change before a carefully estimated and ballistically executed physical interaction. Fortunately, the latter viewpoint is starting to change. Work in interactive perception and contact-rich manipulation is on the rise, and good reasons are steering the manipulation and locomotion communities' attention toward deliberate physical interaction with the environment prior to, during, and after a task. We approach the problem of perception prior to manipulation, using the sense of touch, for the purpose of understanding the surroundings of an autonomous robot. The overwhelming majority of work in perception for manipulation is based on vision. While vision is a fast and global modality, it is insufficient as the sole modality, especially in settings where the ambient light or the objects themselves do not lend themselves to vision, such as darkness, smoky or dusty rooms in search and rescue, underwater scenes, transparent and reflective objects, and retrieving items from inside a bag. Even in normal lighting conditions, the target object and fingers are usually occluded from view by the gripper during a manipulation task. Moreover, vision-based grasp planners, typically trained in simulation, often make errors that cannot be foreseen until contact. As a step toward addressing these problems, we first present a global shape-based feature descriptor for object recognition using non-prehensile tactile probing alone. We then investigate making the tactile modality, local and slow by nature, more efficient for the task by predicting the most cost-effective moves using active exploration. Finally, to combine the local and physical advantages of touch with the fast and global advantages of vision, we propose and evaluate a learning-based method for visuotactile integration for grasping.
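
    As a hedged sketch of the general idea of combining the two modalities (a simple late-fusion baseline on synthetic data, not the specific learning architecture developed in the thesis), the Python snippet below concatenates a visual feature vector with a tactile feature vector and trains a logistic-regression grasp-success predictor; the feature dimensions, labels, and training data are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative feature dimensions (assumptions, not from the thesis).
        N, D_VIS, D_TAC = 200, 16, 8

        # Synthetic visual and tactile features and grasp-success labels.
        x_vis = rng.normal(size=(N, D_VIS))
        x_tac = rng.normal(size=(N, D_TAC))
        y = (x_vis[:, 0] + x_tac[:, 0] > 0).astype(float)  # toy labelling rule

        # Late fusion: concatenate the two modalities into one feature vector.
        x = np.hstack([x_vis, x_tac])
        w = np.zeros(D_VIS + D_TAC)
        b = 0.0

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        # Plain gradient descent on the logistic loss.
        lr = 0.1
        for _ in range(500):
            p = sigmoid(x @ w + b)
            w -= lr * (x.T @ (p - y)) / N
            b -= lr * float(np.mean(p - y))

        accuracy = np.mean((sigmoid(x @ w + b) > 0.5) == y)
        print(f"training accuracy of the fused predictor: {accuracy:.2f}")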

    From Deployments Of Elder Care Service Robots To The Design Of Affordable Low-Complexity End-Effectors And Novel Manipulation Techniques

    This thesis investigates both behavioral and technical aspects of human-robot interaction (HRI) in elder care settings, in view of an affordable platform capable of executing desired tasks. The behavioral investigation combines a qualitative study with focus groups and surveys, not only from the elders' standpoint but also from that of healthcare professionals, to identify suitable tasks for a service robot in such environments. Through multiple deployments of various robot embodiments at actual elder care facilities (such as a low-income Supportive Apartment Living, SAL, and Program of All-Inclusive Care, PACE, centers) and interaction with older adults, design guidelines are developed to improve both interaction and usability. This needs assessment informed the technical investigation of this work, in which we first propose picking and placing objects using end-effectors without internal mobility (zero degrees of freedom, DOF), considering both quasi-static (tipping and regrasping as in-hand manipulation) and dynamic approaches. We also propose maximizing grasping versatility by allowing robots to grasp multiple objects sequentially using a single end-effector and actuator. These novel manipulation techniques and end-effector designs focus on minimizing robot hardware usage and cost while still performing complex tasks and complying with the safety constraints imposed by the elder care facilities.
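
    As a hedged illustration of the kind of quasi-static reasoning behind tipping an object with a zero-DOF end-effector (a simplified sketch under strong assumptions, not the analysis from the thesis), the function below checks whether a horizontal push applied at a given height topples a uniform box about its far bottom edge before making it slide; the box dimensions and friction coefficient are made-up example values.

        def tips_before_sliding(width, push_height, mu):
            """
            Quasi-static check for a uniform box pushed horizontally at height push_height.
            Tipping about the far bottom edge requires F >= m*g*width/(2*push_height);
            sliding starts at F >= mu*m*g. The box tips first if the former threshold is
            lower, and m*g cancels out of the comparison.
            """
            tip_force_over_weight = width / (2.0 * push_height)  # F_tip / (m*g)
            return tip_force_over_weight < mu

        # Illustrative example values (assumptions, not from the thesis).
        print(tips_before_sliding(width=0.06, push_height=0.12, mu=0.5))  # True: tall, narrow box tips
        print(tips_before_sliding(width=0.20, push_height=0.05, mu=0.5))  # False: it slides instead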

    Dyadic collaborative manipulation formalism for optimizing human-robot teaming

    Dyadic Collaborative Manipulation (DcM) is a term we use to refer to a team of two individuals, the agent and the partner, jointly manipulating an object. The two individuals partner together to form a distributed system, augmenting their manipulation abilities. Effective collaboration between the two individuals during joint action depends on: (i) the breadth of the agent's action repertoire, (ii) the level of model acquaintance between the two individuals, (iii) the ability of each to adapt their own actions online to the actions of the partner, and (iv) the ability to estimate the partner's intentions and goals. Key to the successful completion of co-manipulation tasks with changing goals is the agent's ability to change grasp-holds, especially in large-object co-manipulation scenarios. Hence, in this work we developed a Trajectory Optimization (TO) method to enhance the action repertoire of robotic agents by enabling them to plan and execute hybrid motions, i.e., motions that include discrete contact transitions, continuous trajectories, and force profiles. The effectiveness of the TO method is investigated numerically and in simulation, in a number of manipulation scenarios with both a single-arm and a bimanual robot. The transition from free motion to contact is a challenging problem in robotics, in part due to its hybrid nature, and disregarding the effects of impacts at the motion planning level often results in intractable impulsive contact forces. To address this challenge, we introduce an impact-aware multi-mode TO method that combines hybrid dynamics and hybrid control in a coherent fashion. A key concept in our approach is the incorporation of an explicit contact force transmission model into the TO method, which allows the simultaneous optimization of the contact forces, contact timings, continuous motion trajectories, and compliance while satisfying task constraints. To demonstrate the benefits of our method, we compared it against standard compliance control and an impact-agnostic TO method in physical simulations, and we experimentally validated it with a robot manipulator on the task of halting a large-momentum object. Further, we propose a principled formalism to address the joint planning problem in DcM scenarios and solve the joint problem holistically via model-based optimization, representing the human's behavior as task-space forces. The task of finding the partner-aware contact points, forces, and the respective timing of grasp-hold changes is carried out by a TO method using non-linear programming. Using simulations, the capability of the optimization method is investigated in terms of robot policy changes (trajectories, timings, grasp-holds) in response to potential changes of the collaborative partner's policy. We also realized, in hardware, effective co-manipulation of a large object by the human and the robot, including grasp changes and optimal dyadic interactions to realize the joint task. To address the online adaptation of joint motion plans in dyads, we propose an efficient bilevel formulation that combines graph search methods with trajectory optimization, enabling robotic agents to adapt their policy on the fly in accordance with changes of the dyadic task.
    This method is the first to empower agents with the ability to plan online in hybrid spaces, optimizing over discrete contact locations, contact sequence patterns, continuous trajectories, and force profiles for co-manipulation tasks. This is particularly important in large-object co-manipulation tasks that require on-the-fly plan adaptation. We demonstrate in simulation and with robot experiments the efficacy of the bilevel optimization by investigating the effect of robot policy changes in response to real-time alterations of the goal. This thesis provides insight into joint manipulation setups performed by human-robot teams. In particular, it studies computational models of joint action and exploits the largely uncharted hybrid action space, which is especially relevant in general manipulation and co-manipulation tasks. It contributes toward developing a framework for DcM that is capable of planning motions in the contact-force space, realizing these motions while considering impacts and joint-action relations, and adapting these motion plans on the fly with respect to changes of the co-manipulation goals.
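
    To convey the flavor of a bilevel scheme that couples a discrete search over contact sequences with an inner continuous trajectory optimization (a heavily simplified, hedged sketch: the candidate grasp-holds, the 1-D cost model, and the use of exhaustive enumeration in place of a graph search are all illustrative assumptions, not the formulation from the thesis), consider the following Python example.

        from itertools import permutations
        import numpy as np
        from scipy.optimize import minimize

        # Candidate grasp-hold positions along a 1-D object edge (illustrative values).
        CANDIDATE_HOLDS = [0.0, 0.4, 0.9, 1.3]
        START, GOAL = 0.0, 1.3
        TIME_WEIGHT = 2.0  # trade-off between motion effort and total duration

        def inner_trajectory_cost(sequence):
            """Inner level: for a fixed contact sequence, optimize the segment durations."""
            dists = np.abs(np.diff(np.asarray(sequence)))
            def cost(durations):
                # effort ~ (distance/duration)^2 * duration, plus a penalty on total time
                return float(np.sum(dists**2 / durations) + TIME_WEIGHT * np.sum(durations))
            res = minimize(cost, np.ones(len(dists)), method="L-BFGS-B",
                           bounds=[(1e-3, None)] * len(dists))
            return res.fun, res.x

        def bilevel_search():
            """Outer level: enumerate orders of intermediate grasp-holds (stand-in for a graph search)."""
            intermediates = [p for p in CANDIDATE_HOLDS if p not in (START, GOAL)]
            best = (np.inf, None, None)
            for k in range(len(intermediates) + 1):
                for order in permutations(intermediates, k):
                    seq = (START, *order, GOAL)
                    cost, durations = inner_trajectory_cost(seq)
                    if cost < best[0]:
                        best = (cost, seq, durations)
            return best

        cost, seq, durations = bilevel_search()
        print("best contact sequence:", seq, "segment durations:", durations, "cost:", round(cost, 3))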

    Becoming Human with Humanoid

    Nowadays, our expectations of robots have increased significantly. Robots, which initially only performed simple jobs, are now expected to be smarter and more dynamic. People want a robot that resembles a human (a humanoid) and has the emotional intelligence to perform action-reaction interactions. This book consists of two sections. The first section focuses on emotional intelligence, while the second discusses the control of robots. The contents of the book present the outcomes of research conducted by scholars in the robotics field to accommodate the needs of society and industry.