    Reasoning and understanding grasp affordances for robot manipulation

    This doctoral research focuses on developing new methods that enable an artificial agent to grasp and manipulate objects autonomously. More specifically, we use the concept of affordances to learn and generalise robot grasping and manipulation techniques. [75] defined affordances as the ability of an agent to perform a certain action with an object in a given environment. In robotics, affordances define the possibility of an agent to perform actions with an object. Therefore, by understanding the relation between actions, objects and the effects of these actions, the agent understands the task at hand, which gives the robot the potential to bridge perception to action. The significance of affordances in robotics has been studied from varied perspectives, such as psychology and cognitive sciences. Many efforts have been made to pragmatically employ the concept of affordances, as it offers an artificial agent the potential to perform tasks autonomously. We start by reviewing and finding common ground amongst different strategies that use affordances for robotic tasks. We build on the identified grounds to provide guidance on including the concept of affordances as a medium to boost autonomy for an artificial agent. To this end, we outline common design choices to build an affordance relation, and their implications on the generalisation capabilities of the agent when facing previously unseen scenarios. Based on our exhaustive review, we conclude that prior research on object affordance detection is effective; however, among others, it has the following technical gaps: (i) the methods are limited to a single object ↔ affordance hypothesis, (ii) they cannot guarantee task completion or any level of performance for the manipulation task when the robot acts alone, nor (iii) when it collaborates with other agents. In this research thesis, we propose solutions to these technical challenges. In an incremental fashion, we start by addressing the limited generalisation capabilities of the then state-of-the-art methods by strengthening the perception-to-action connection through the construction of a Knowledge Base (KB). We then leverage the information encapsulated in the KB to design and implement a reasoning and understanding method based on a statistical relational learner (SRL) that allows us to cope with uncertainty in testing environments and, thus, improve generalisation capabilities in affordance-aware manipulation tasks. The KB in conjunction with our SRL forms the basis of our designed solutions that guarantee task completion when the robot is performing a task alone as well as when in collaboration with other agents. We finally expose and discuss a range of interesting avenues that have the potential to advance the capabilities of a robotic agent through the use of the concept of affordances for manipulation tasks. A summary of the contributions of this thesis can be found at: https://bit.ly/grasp_affordance_reasonin
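
    The abstract above names two concrete ingredients, a Knowledge Base (KB) of affordance relations and a statistical relational learner (SRL) that reasons over it. As a rough illustration only, the following Python sketch shows what a toy KB of (object, affordance, action, effect) facts and a weighted-rule affordance query could look like; the schema, rule set and weights are assumptions for illustration, not the thesis' actual design.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AffordanceFact:
        obj: str          # object category, e.g. "mug"
        affordance: str   # e.g. "graspable", "pourable"
        action: str       # action that exercises the affordance
        effect: str       # expected effect of executing the action

    # Toy KB entries linking objects, affordances, actions and effects
    # (hypothetical examples, not the thesis' schema).
    KB = [
        AffordanceFact("mug", "graspable", "pick", "object_in_hand"),
        AffordanceFact("mug", "pourable", "tilt", "liquid_transferred"),
        AffordanceFact("knife", "graspable", "pick", "object_in_hand"),
    ]

    # Weighted rules in the spirit of statistical relational learning:
    # weight ~ confidence that a perceived feature implies an affordance.
    RULES = {
        ("has_handle", "graspable"): 1.5,
        ("is_container", "pourable"): 2.0,
    }

    def affordance_score(features: set, affordance: str) -> float:
        """Sum the weights of fired rules; higher = more plausible."""
        return sum(w for (feat, aff), w in RULES.items()
                   if aff == affordance and feat in features)

    # Query a previously unseen object by its perceived features.
    print(affordance_score({"has_handle", "is_container"}, "pourable"))  # 2.0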

    Toward Effective Physical Human-Robot Interaction

    With the fast advancement of technology, in recent years robotics technology has significantly matured and produced robots that are able to operate in unstructured environments such as domestic environments, offices, hospitals and other human-inhabited locations. In this context, the interaction and cooperation between humans and robots have become an important and challenging aspect of robot development. Among the various kinds of possible interactions, in this Ph.D. thesis I am particularly interested in physical human-robot interaction (pHRI). In order to study how a robot can successfully engage in physical interaction with people, and which factors are crucial during this kind of interaction, I investigated how humans and robots can hand over objects to each other. To study this specific interactive task, I developed two robotic prototypes and conducted human-robot user studies. Although various aspects of human-robot handovers have been deeply investigated in the state of the art, during my studies I focused on three issues that have rarely been investigated so far: (i) human presence and motion analysis during the interaction, in order to infer non-verbal communication cues and to synchronize the robot's actions with the human's motion; (ii) development and evaluation of human-aware proactive robot behaviors that enable robots to behave actively in the proximity of the human body, in order to negotiate the handover location and to perform the transfer of the object; and (iii) consideration of object grasp affordances during the handover, in order to make the interaction more comfortable for the human.
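
    Of the three issues listed, the first (synchronizing the robot with human motion) lends itself to a small worked example. The sketch below is a simplification with assumed thresholds and a made-up frame layout rather than anything from the thesis: it triggers the object release once the tracked human hand is both slow and close to the negotiated handover location.

    import math

    def hand_speed(p_prev, p_curr, dt):
        """Finite-difference speed of the tracked hand (positions in metres)."""
        return math.dist(p_prev, p_curr) / dt

    def ready_to_release(p_prev, p_curr, p_handover, dt,
                         speed_thresh=0.05, dist_thresh=0.10):
        """Release once the hand is slow and near the agreed transfer point."""
        near = math.dist(p_curr, p_handover) < dist_thresh
        slow = hand_speed(p_prev, p_curr, dt) < speed_thresh
        return near and slow

    # Example: hand nearly stationary, ~5 cm from the handover location.
    print(ready_to_release((0.50, 0.20, 1.00), (0.501, 0.20, 1.00),
                           (0.52, 0.25, 1.00), dt=0.033))  # True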

    Behaviour-driven motion synthesis

    Heightened demand for alternatives to human exposure to strenuous and repetitive labour, as well as to hazardous environments, has led to an increased interest in real-world deployment of robotic agents. Targeted applications require robots to be adept at synthesising complex motions rapidly across a wide range of tasks and environments. To this end, this thesis proposes leveraging abstractions of the problem at hand to ease and speed up the solving. We formalise abstractions to hint relevant robotic behaviour to a family of planning problems, and integrate them tightly into the motion synthesis process to make real-world deployment in complex environments practical. We investigate three principal challenges of this proposition. Firstly, we argue that behavioural samples in the form of trajectories are of particular interest for guiding robotic motion synthesis. We formalise a framework with behavioural semantic annotation that enables the storage and bootstrapping of sets of problem-relevant trajectories. Secondly, in the core of this thesis, we study strategies to exploit behavioural samples in task instantiations that differ significantly from those stored in the framework. We present two novel strategies to efficiently leverage offline-computed behavioural samples: (i) online modulation based on geometry-tuned potential fields, and (ii) experience-guided exploration based on trajectory segmentation and malleability. Thirdly, we demonstrate that behavioural hints can be extracted on-the-fly to tackle highly constrained, ever-changing complex problems for which there is no prior knowledge. We propose a multi-layer planner that first solves a simplified version of the problem at hand, and then informs the search for a solution in the constrained space. Our contributions on efficient motion synthesis via behaviour guidance augment robots' capabilities to deal with more complex planning problems, and do so more effectively than related approaches in the literature by computing better-quality paths in lower response time. We demonstrate our contributions, in both laboratory experiments and field trials, on a spectrum of planning problems and robotic platforms, ranging from high-dimensional humanoids and robotic arms with a focus on autonomous manipulation in resembling environments, to high-dimensional kinematic motion planning with a focus on autonomous safe navigation in unknown environments. While this thesis was motivated by challenges in motion synthesis, we have explored the applicability of our findings in disparate robotic fields, such as grasp and task planning. We have made some of our contributions open-source, hoping they will be of use to the robotics community at large.

    Funding: The CDT in Robotics and Autonomous Systems at Heriot-Watt University and The University of Edinburgh; The ORCA Hub EPSRC project (EP/R026173/1); The Scottish Informatics and Computer Science Alliance (SICSA)
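
    Strategy (i), online modulation via potential fields, can be pictured with a short sketch: a stored behavioural trajectory is deformed waypoint-by-waypoint by a repulsive field around an obstacle that was absent when the sample was recorded. The gains, radii and the 2D setting below are illustrative assumptions, not the thesis' tuned formulation.

    import numpy as np

    def repulsive_offset(q, obstacle, influence=0.3, gain=0.001):
        """Push a waypoint away from the obstacle, fading with distance."""
        d = np.linalg.norm(q - obstacle)
        if d >= influence or d == 0.0:
            return np.zeros_like(q)
        return gain * (1.0 / d - 1.0 / influence) * (q - obstacle) / d

    def modulate(trajectory, obstacle):
        """Apply the field to every waypoint of a stored (N, 2) trajectory."""
        return np.array([q + repulsive_offset(q, obstacle) for q in trajectory])

    stored = np.linspace([0.0, 0.0], [1.0, 0.0], 11)  # straight-line sample
    adapted = modulate(stored, obstacle=np.array([0.5, 0.05]))
    print(adapted[5])  # the mid-waypoint is deflected away from the obstacle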

    Functional Autonomy Techniques for Manipulation in Uncertain Environments

    As robotic platforms are put to work in an ever more diverse array of environments, their ability to deploy visuomotor capabilities without supervision is complicated by the potential for unforeseen operating conditions. This is a particular challenge within the domain of manipulation, where significant geometric, semantic, and kinetic understanding across the space of possible manipulands is necessary to allow effective interaction. To facilitate the adoption of robotic platforms in such environments, this work investigates the application of functional, or behavior-level, autonomy to the task of manipulation in uncertain environments. Three functional autonomy techniques are presented to address subproblems within the domain. First, the task of reactive selection between a set of actions that incur a probabilistic cost to advance the same goal metric, in the presence of an operator action preference, is formulated as the Obedient Multi-Armed Bandit (OMAB) problem, under the purview of reinforcement learning. A policy for the problem is presented and evaluated against a novel performance metric, disappointment (analogous to the prototypical MAB's regret), in comparison to adaptations of existing MAB policies. This is posed for both stationary and non-stationary cost distributions, within the context of two example planetary exploration applications: multi-modal mobility and surface excavation. Second, a computational model that derives semantic meaning from the outcome of manipulation tasks is developed, which leverages physics simulation and clustering to learn symbolic failure modes. A deep network extracts visual signatures for each mode that may then guide failure recovery. The model is demonstrated through application to the archetypal manipulation task of placing objects into a container, as well as stacking of cuboids, and evaluated against both synthetic verification sets and real depth images. Third, an approach is presented for visual estimation of the minimum-magnitude grasping wrench necessary to extract massive objects from an unstructured pile, subject to a given end effector's grasping limits, formulated for each object as a "wrench space stiction manifold". Properties are estimated from segmented RGBD point clouds, and a geometric adjacency graph is used to infer incident wrenches upon each object, allowing candidate extraction object/force-vector pairs to be selected from the pile that are likely to be within the system's capability.
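
    The OMAB formulation above is amenable to a compact numerical sketch: several actions advance the same goal at stochastic cost, and the policy's exploration is biased toward an operator-preferred action. The epsilon-greedy rule and the regret-style "disappointment" tally below are stand-ins under stated assumptions; the abstract does not give the thesis' actual policy or the precise definition of its metric.

    import random

    def omab_run(costs, preferred, steps=1000, eps=0.1, pref_bias=0.5):
        """Toy obedient bandit: minimise cost while favouring one action."""
        n = len(costs)
        est, cnt = [0.0] * n, [0] * n
        best_mean = min(costs)                     # oracle, bookkeeping only
        disappointment = 0.0
        for _ in range(steps):
            if random.random() < eps:              # explore...
                arm = preferred if random.random() < pref_bias \
                      else random.randrange(n)     # ...biased to the preference
            else:                                  # exploit lowest estimate
                arm = min(range(n), key=lambda a: est[a])
            c = random.gauss(costs[arm], 0.1)      # stochastic cost sample
            cnt[arm] += 1
            est[arm] += (c - est[arm]) / cnt[arm]  # incremental mean update
            disappointment += costs[arm] - best_mean
        return disappointment

    random.seed(0)
    print(omab_run(costs=[1.0, 0.8, 1.2], preferred=2))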

    Integrated Task and Motion Planning of Multi-Robot Manipulators in Industrial and Service Automation

    Efficient coordination of several robot arms to carry out given independent or cooperative tasks in a common workspace, while avoiding collisions, is an appealing research problem that has been studied in different robotic fields, with industrial and service applications. Coordination of several robot arms in a shared environment is challenging because the complexity of collision-free path planning increases with the number of robots sharing the same workspace. Although research on different aspects of this problem, such as task planning, motion planning and robot control, has made great progress, the integration of these components is not well studied in the literature. This thesis focuses on integrating task and motion planning for multi-robot-arm systems by introducing a practical and optimal interface layer for such systems. For a given set of specifications and a sequence of tasks for a multi-arm system, the studied system design aims to automatically construct the necessary waypoints, the sequence of arms to be operated, and the algorithms required for the robots to reliably execute manipulation tasks. The contributions of the thesis are three-fold. First, an algorithm is introduced to integrate the task and motion planning layers in order to achieve optimal and collision-free task execution. A representation via a shared space graph (SSG) is introduced to check whether two arms share certain parts of the workspace and to quantify the cooperation of such arm pairs, which is essential in the selection of the arm sequence and the scheduling of each arm in the sequence to perform a task or a sub-task. The introduced algorithm allows robots to autonomously reason about a structured environment, performs the sequence planning of the robots to operate, and provides robot and object paths for each task to achieve a set of goals. Second, an integrated motion and task planning methodology is introduced for systems of multiple mobile and fixed-base robot arms performing different tasks simultaneously in a shared workspace. We introduce the concept of a dynamic shared space graph (D-SSG) to continuously check whether two arms share certain parts of the workspace at different time steps and to quantify the cooperation of such arm pairs, which is essential to the selection of arm sequences and the scheduling of each arm in the sequence to perform a task or a sub-task. The introduced algorithm allows robots to autonomously reason about complex environments involving humans, to plan the high-level decisions (sequence planning) of the robots to operate, and to calculate robot and object paths for each task to achieve a set of goals. The third contribution is the design of an integration algorithm between the low-level motion planning and high-level symbolic task planning layers to produce alternative plans in case of kinematic and geometric changes in the environment, preventing failure of the high-level task plan. In order to verify the methodological contributions of the thesis with a solid implementation basis, implementations and tests are presented in the open-source robotics planning environments ROS, MoveIt and Gazebo. A detailed analysis of these implementations and test results is provided as well.
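
    The shared space graph (SSG) can be illustrated with a deliberately small sketch: arms are nodes, an edge joins two arms whose reachable workspaces intersect, and the edge weight quantifies the overlap that the scheduler must arbitrate. The circular workspace model and the numbers below are illustrative assumptions, not the representation used in the thesis.

    import itertools, math

    arms = {  # base position (x, y) and reach radius in metres (made up)
        "arm_a": ((0.0, 0.0), 0.9),
        "arm_b": ((1.2, 0.0), 0.9),
        "arm_c": ((4.0, 0.0), 0.9),
    }

    def overlap(a, b):
        """Depth of intersection of two circular workspaces (0 if disjoint)."""
        (pa, ra), (pb, rb) = a, b
        return max(0.0, ra + rb - math.dist(pa, pb))

    # Build the SSG: keep only arm pairs that actually share workspace.
    ssg = {(i, j): round(overlap(arms[i], arms[j]), 3)
           for i, j in itertools.combinations(arms, 2)
           if overlap(arms[i], arms[j]) > 0.0}

    print(ssg)  # {('arm_a', 'arm_b'): 0.6} -- arm_c needs no coordination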