
    Robot tool use: A survey

    Using human tools can significantly benefit robots in many application domains, allowing them to solve problems that they could not solve without tools. However, robot tool use is a challenging task; indeed, tool use was initially considered to be the ability that distinguishes human beings from other animals. We identify three skills required for robot tool use: perception, manipulation, and high-level cognition. While general manipulation tasks and tool use tasks require the same level of perception accuracy, tool use poses unique manipulation and cognition challenges. In this survey, we first define robot tool use. The definition highlights the skills required for robot tool use, and these skills coincide with an affordance model that defines a three-way relation between actions, objects, and effects. We also compile a taxonomy of robot tool use with insights from the animal tool use literature. Our definition and taxonomy lay a theoretical foundation for future robot tool use studies and also serve as practical guidelines for robot tool use applications. We first categorize tool use based on the context of the task: the contexts are highly similar for the same task (e.g., cutting) in non-causal tool use, while the contexts for causal tool use are diverse. We further categorize causal tool use, based on the task complexity suggested in animal tool use studies, into single-manipulation tool use and multiple-manipulation tool use. Single-manipulation tool use is sub-categorized based on tool features and prior experiences of tool use; this type of tool use may be considered a building block of causal tool use. Multiple-manipulation tool use combines these building blocks in different ways, and the different combinations categorize its sub-types. Moreover, we identify the skills required in each sub-type of the taxonomy. We then review previous studies on robot tool use based on the taxonomy and describe how the relations are learned in these studies. We conclude with a discussion of current applications of robot tool use and open questions for future robot tool use.

    Exploring Natural User Abstractions For Shared Perceptual Manipulator Task Modeling & Recovery

    State-of-the-art domestic robot assistants are essentially autonomous mobile manipulators capable of exerting human-scale precision grasps. To maximize utility and economy, non-technical end-users would need to be nearly as efficient as trained roboticists in controlling and collaborating on manipulation task behaviors. This remains a significant challenge, given that many WIMP-style tools require at least superficial proficiency in robotics, 3D graphics, and computer science for rapid task modeling and recovery. Research on robot-centric collaboration has nevertheless garnered momentum in recent years; robots now plan in partially observable environments that maintain geometries and semantic maps, presenting opportunities for non-experts to cooperatively control task behavior with autonomous-planning agents that exploit this knowledge. However, as autonomous systems are not immune to errors under perceptual difficulty, a human-in-the-loop is needed to bias autonomous planning towards recovery conditions that resume the task and avoid similar errors. In this work, we explore interactive techniques that allow non-technical users to model task behaviors and perceive cooperatively with a service robot under robot-centric collaboration. We evaluate stylus and touch modalities through which users can intuitively and effectively convey natural abstractions of high-level tasks, semantic revisions, and geometries about the world. Experiments are conducted with 'pick-and-place' tasks in an ideal 'Blocks World' environment using a Kinova JACO six degree-of-freedom manipulator. Possibilities for the architecture and interface are demonstrated with the following features: (1) semantic 'Object' and 'Location' grounding that describes function and ambiguous geometries; (2) task specification with an unordered list of goal predicates; and (3) guided task recovery with implied scene geometries and trajectories via symmetry cues and configuration-space abstraction. Empirical results from four user studies show that our interface was much preferred over the control condition, demonstrating high learnability and ease-of-use that enabled our non-technical participants to model complex tasks, provide effective recovery assistance, and exercise teleoperative control.

    Generalized Anthropomorphic Functional Grasping with Minimal Demonstrations

    This article investigates the challenge of achieving functional tool-use grasping with high-DoF anthropomorphic hands, with the aim of enabling such hands to perform tasks that require human-like manipulation and tool use. Accomplishing human-like grasping on real robots presents many challenges, including obtaining diverse functional grasps for a wide variety of objects, generalizing across kinematically diverse robot hands, and precisely completing object shapes from single-view perception. To tackle these challenges, we propose a six-step grasp synthesis algorithm based on fine-grained contact modeling that generates physically plausible and human-like functional grasps for category-level objects with minimal human demonstrations. With contact-based optimization and a learned dense shape correspondence, the proposed algorithm is adaptable to various objects in the same category and a broad range of robot hand models. To further demonstrate the robustness of the framework, over 10K functional grasps are synthesized to train our neural network, named DexFG-Net, which generates diverse sets of human-like functional grasps based on the reconstructed object model produced by a shape completion module. The proposed framework is extensively validated in simulation and on a real robot platform. Simulation experiments demonstrate that our method outperforms baseline methods by a large margin in terms of grasp functionality and success rate. Real robot experiments show that our method achieved overall success rates of 79% and 68% for tool-use grasps on 3D-printed and real test objects, respectively, using a 5-Finger Schunk Hand. The experimental results indicate a step towards human-like grasping with anthropomorphic hands. Comment: 20 pages, 23 figures, and 7 tables

    Behaviour-driven motion synthesis

    Heightened demand for alternatives to human exposure to strenuous and repetitive labour, as well as to hazardous environments, has led to increased interest in the real-world deployment of robotic agents. Targeted applications require robots to be adept at synthesising complex motions rapidly across a wide range of tasks and environments. To this end, this thesis proposes leveraging abstractions of the problem at hand to ease and speed up the solving process. We formalise abstractions that hint at relevant robotic behaviour for a family of planning problems, and integrate them tightly into the motion synthesis process to make real-world deployment in complex environments practical. We investigate three principal challenges of this proposition. Firstly, we argue that behavioural samples in the form of trajectories are of particular interest for guiding robotic motion synthesis. We formalise a framework with behavioural semantic annotation that enables the storage and bootstrapping of sets of problem-relevant trajectories. Secondly, in the core of this thesis, we study strategies to exploit behavioural samples in task instantiations that differ significantly from those stored in the framework. We present two novel strategies to efficiently leverage offline-computed behavioural samples: (i) online modulation based on geometry-tuned potential fields, and (ii) experience-guided exploration based on trajectory segmentation and malleability. Thirdly, we demonstrate that behavioural hints can be extracted on-the-fly to tackle highly constrained, ever-changing complex problems for which there is no prior knowledge. We propose a multi-layer planner that first solves a simplified version of the problem at hand, and then uses that solution to inform the search in the constrained space. Our contributions on efficient motion synthesis via behaviour guidance augment robots' capabilities to deal with more complex planning problems, and do so more effectively than related approaches in the literature by computing better-quality paths in lower response time. We demonstrate our contributions, in both laboratory experiments and field trials, on a spectrum of planning problems and robotic platforms, ranging from high-dimensional humanoids and robotic arms with a focus on autonomous manipulation in realistic environments, to high-dimensional kinematic motion planning with a focus on autonomous safe navigation in unknown environments. While this thesis was motivated by challenges in motion synthesis, we have explored the applicability of our findings to disparate robotic fields, such as grasp and task planning. We have made some of our contributions open-source, hoping they will be of use to the robotics community at large. This work was supported by the CDT in Robotics and Autonomous Systems at Heriot-Watt University and The University of Edinburgh, the ORCA Hub EPSRC project (EP/R026173/1), and the Scottish Informatics and Computer Science Alliance (SICSA).

    Visuo-Haptic Grasping of Unknown Objects through Exploration and Learning on Humanoid Robots

    This thesis addresses the grasping of unknown objects by humanoid robots. To this end, visual information is combined with haptic exploration to generate grasp hypotheses. In addition, a grasp metric is learned from simulated training data; it estimates the success probability of each grasp hypothesis and selects the one with the highest estimated probability of success. This hypothesis is then used to grasp objects with a reactive control strategy. The two core contributions of this work are the haptic exploration of unknown objects and the grasping of unknown objects using a novel data-driven grasp metric.

    Grounded Semantic Reasoning for Robotic Interaction with Real-World Objects

    Robots are increasingly transitioning from specialized, single-task machines to general-purpose systems that operate in unstructured environments, such as homes, offices, and warehouses. In these real-world domains, robots need to manipulate novel objects while adapting to changes in environments and goals. Semantic knowledge, which concisely describes target domains with symbols, can potentially reveal the meaningful patterns shared between problems and environments. However, existing robots are yet to effectively reason about semantic data encoding complex relational knowledge or jointly reason about symbolic semantic data and multimodal data pertinent to robotic manipulation (e.g., object point clouds, 6-DoF poses, and attributes detected with multimodal sensing). This dissertation develops semantic reasoning frameworks capable of modeling complex semantic knowledge grounded in robot perception and action. We show that grounded semantic reasoning enables robots to more effectively perceive, model, and interact with objects in real-world environments. 
Specifically, this dissertation makes the following contributions: (1) a survey providing a unified view for the diversity of works in the field by formulating semantic reasoning as the integration of knowledge sources, computational frameworks, and world representations; (2) a method for predicting missing relations in large-scale knowledge graphs by leveraging type hierarchies of entities, effectively avoiding ambiguity while maintaining generalization of multi-hop reasoning patterns; (3) a method for predicting unknown properties of objects in various environmental contexts, outperforming prior knowledge graph and statistical relational learning methods due to the use of n-ary relations for modeling object properties; (4) a method for purposeful robotic grasping that accounts for a broad range of contexts (including object visual affordance, material, state, and task constraint), outperforming existing approaches in novel contexts and for unknown objects; (5) a systematic investigation into the generalization of task-oriented grasping that includes a benchmark dataset of 250k grasps, and a novel graph neural network that incorporates semantic relations into end-to-end learning of 6-DoF grasps; (6) a method for rearranging novel objects into semantically meaningful spatial structures based on high-level language instructions, more effectively capturing multi-object spatial constraints than existing pairwise spatial representations; (7) a novel planning-inspired approach that iteratively optimizes placements of partially observed objects subject to both physical constraints and semantic constraints inferred from language instructions.

    Mind and Matter

    Do brains create material reality in thinking processes, or is it the other way around, with things shaping the mind? Where is the location of meaning-making? How do neural networks become established by means of multimodal pattern replications, and how are they involved in conceptualization? How are resonance textures within cellular entities extended in the body and the mind by means of mirroring processes? In which ways do they correlate with consciousness and self-consciousness? Is it possible to explain out-of-awareness unconscious processes? What holds together the relationship between experiential reality; bodily processes like memory, reason, or imagination; and sign-systems and simulation structures like metaphor and metonymy visible in human language? This volume attempts to answer some of these questions.

    Models, Simulations, and the Reduction of Complexity

    Modern science is a model-building activity. But how are models constructed? How are they related to theories and data? How do they explain complex scientific phenomena, and what role do computer simulations play? To address these questions, which are highly relevant to scientists as well as to philosophers of science, 8 leading natural, engineering, and social scientists reflect upon their modeling work, and 8 philosophers provide a commentary.

    Environments of Intelligence

    What is the role of the environment, and of the information it provides, in cognition? More specifically, might there be a role for certain artefacts to play in this context? These are questions that motivate "4E" theories of cognition (as being embodied, embedded, extended, enactive). In his take on that family of views, Hajo Greif first defends and refines a concept of information as primarily natural and environmentally embedded in character, a concept that had been eclipsed by information-processing views of cognition. He continues with an inquiry into the cognitive bearing of some artefacts that are sometimes referred to as 'intelligent environments'. Without necessarily having much to do with Artificial Intelligence, such artefacts may ultimately modify our informational environments. With respect to human cognition, the most notable effect of digital computers is not that they might be able, or become able, to think, but that they alter the way we perceive, think, and act. The Open Access version of this book, available at http://www.tandfebooks.com/doi/view/10.4324/9781315401867, has been made available under a Creative Commons CC-BY licence.