
    Estimating an articulated tool's kinematics via visuo-tactile based robotic interactive manipulation

    Li Q, Ückermann A, Haschke R, Ritter H. Estimating an articulated tool's kinematics via visuo-tactile based robotic interactive manipulation. Presented at the IEEE IROS, Madrid, Spain. The use of articulated tools by autonomous robots is still a challenging task. One of the difficulties is automatically estimating the tool's kinematics model. This model cannot be obtained from a single passive observation, because some information, such as a rotation axis (hinge), can only be detected while the tool is being used. Inspired by a baby using both hands while playing with an articulated toy, we employ a dual-arm robotic setup and propose an interactive manipulation strategy based on visual-tactile servoing to estimate the tool's kinematics model. In our proposed method, one hand holds the tool's handle stably, while the other arm, equipped with a tactile finger, flips the movable part of the articulated tool. An innovative visuo-tactile servoing controller is introduced to implement the flipping task by integrating vision and tactile feedback in a compact control loop. To deal with the temporary invisibility of the movable part in the camera view, a data fusion method that integrates the visual measurement of the movable part with the fingertip's motion trajectory is used to optimally estimate the orientation of the tool's movable part. The tool's important kinematic parameters are estimated by geometric calculations while the movable part is flipped by the finger. We evaluate our method by flipping the pivoting cleaning head (flap) of a wiper and estimating the wiper's kinematic parameters. We demonstrate that the flap of the wiper is flipped robustly, even when the flap is briefly invisible. The orientation of the flap is tracked well compared to ground-truth data, and the kinematic parameters of the wiper are estimated correctly.
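    The abstract does not spell out the fusion algorithm; the sketch below assumes a scalar Kalman-style filter over the flap's hinge angle, in which the angular increment inferred from the fingertip trajectory drives the prediction and the camera measurement, when the flap is visible, drives the correction. All names (fuse_flap_angle, dtheta_finger, var_motion) are illustrative, not taken from the paper.

        def fuse_flap_angle(theta_prev, var_prev, dtheta_finger, var_motion,
                            theta_vision=None, var_vision=1e-2):
            """One predict/update step fusing fingertip motion with vision."""
            # Predict: propagate the flap angle with the increment inferred
            # from the fingertip trajectory (available even during occlusion).
            theta_pred = theta_prev + dtheta_finger
            var_pred = var_prev + var_motion
            if theta_vision is None:      # flap temporarily out of view
                return theta_pred, var_pred
            # Update: correct with the visual measurement of the flap.
            gain = var_pred / (var_pred + var_vision)
            theta_est = theta_pred + gain * (theta_vision - theta_pred)
            var_est = (1.0 - gain) * var_pred
            return theta_est, var_est

    Feeding the filter a visual measurement only on frames where the flap is detected would mirror the behaviour described above: the estimate degrades gracefully during occlusion and is corrected once the flap reappears.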

    Tool-Use Model to Reproduce the Goal Situations Considering Relationship Among Tools, Objects, Actions and Effects Using Multimodal Deep Neural Networks

    We propose a tool-use model that enables a robot to act toward a provided goal. It is important to consider the features of four factors at the same time: tools, objects, actions, and effects, because they are related to each other and one factor can influence the others. The tool-use model is constructed with deep neural networks (DNNs) using multimodal sensorimotor data: image, force, and joint-angle information. To allow the robot to learn tool use, we collect training data by controlling the robot to perform various object operations using several tools with multiple actions that lead to different effects. The tool-use model is then trained, learning sensorimotor coordination and acquiring the relationships among tools, objects, actions, and effects in its latent space. We can give the robot a task goal by providing an image showing the target placement and orientation of the object. Using the goal image with the tool-use model, the robot detects the features of tools and objects and automatically determines how to act to reproduce the target effects. The robot then generates actions adjusted to real-time situations, even when the tools and objects are unknown and more complicated than the trained ones.
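    The abstract does not specify the network architecture; below is a minimal, hypothetical PyTorch sketch of one way to fuse the three modalities into a shared latent space and predict the next joint command. Layer sizes, latent_dim, and the class name ToolUseModel are assumptions for illustration only.

        import torch
        import torch.nn as nn

        class ToolUseModel(nn.Module):
            """Encode image, force, and joint angles into one latent
            vector, then predict the next joint command from it."""
            def __init__(self, latent_dim=64, force_dim=6, joint_dim=7):
                super().__init__()
                self.image_enc = nn.Sequential(      # 64x64 RGB input
                    nn.Conv2d(3, 16, 4, stride=2), nn.ReLU(),
                    nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
                    nn.Flatten(), nn.LazyLinear(latent_dim))
                self.force_enc = nn.Linear(force_dim, latent_dim)
                self.joint_enc = nn.Linear(joint_dim, latent_dim)
                self.fusion = nn.Sequential(
                    nn.Linear(3 * latent_dim, latent_dim), nn.Tanh())
                self.policy = nn.Linear(latent_dim, joint_dim)

            def forward(self, image, force, joints):
                z = torch.cat([self.image_enc(image),
                               self.force_enc(force),
                               self.joint_enc(joints)], dim=-1)
                latent = self.fusion(z)              # shared latent space
                return self.policy(latent), latent

        # Hypothetical usage with one camera frame, a 6-axis force
        # reading, and 7 joint angles:
        model = ToolUseModel()
        action, latent = model(torch.randn(1, 3, 64, 64),
                               torch.randn(1, 6),
                               torch.randn(1, 7))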

    Robot tool use: A survey

    Using human tools can significantly benefit robots in many application domains. Such an ability would allow robots to solve problems that they were unable to solve without tools. However, robot tool use is a challenging task. Tool use was initially considered to be the ability that distinguishes human beings from other animals. We identify three skills required for robot tool use: perception, manipulation, and high-level cognition skills. While both general manipulation tasks and tool-use tasks require the same level of perception accuracy, there are unique manipulation and cognition challenges in robot tool use. In this survey, we first define robot tool use. The definition highlights the skills required for robot tool use, which coincide with an affordance model that defines a three-way relation between actions, objects, and effects. We also compile a taxonomy of robot tool use with insights from the animal tool use literature. Our definition and taxonomy lay a theoretical foundation for future robot tool use studies and also serve as practical guidelines for robot tool use applications. We first categorize tool use based on the context of the task: the contexts are highly similar for the same task (e.g., cutting) in non-causal tool use, whereas the contexts for causal tool use are diverse. We further categorize causal tool use, based on the task complexity suggested in animal tool use studies, into single-manipulation tool use and multiple-manipulation tool use. Single-manipulation tool use is sub-categorized based on tool features and prior experiences of tool use; this type of tool use may be considered a building block of causal tool use. Multiple-manipulation tool use combines these building blocks in different ways, and the different combinations categorize multiple-manipulation tool use. Moreover, we identify the different skills required for each sub-type in the taxonomy. We then review previous studies on robot tool use based on the taxonomy and describe how the relations are learned in these studies. We conclude with a discussion of current applications of robot tool use and open questions to address in future robot tool use research.
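    The taxonomy is described only in prose; the following Python sketch shows one possible way to encode its categories and the three-way affordance relation as data structures. The enum values and field names are illustrative assumptions, not the survey's own notation.

        from dataclasses import dataclass, field
        from enum import Enum, auto

        class Causality(Enum):
            NON_CAUSAL = auto()        # contexts highly similar per task
            CAUSAL_SINGLE = auto()     # single-manipulation tool use
            CAUSAL_MULTIPLE = auto()   # combinations of single-manipulation blocks

        @dataclass
        class Affordance:
            """Three-way relation among an action, an object, and the
            effect the action produces on that object."""
            action: str
            obj: str
            effect: str

        @dataclass
        class ToolUseTask:
            tool: str
            category: Causality
            affordances: list[Affordance] = field(default_factory=list)

        # Example: driving a nail is causal, single-manipulation tool use.
        hammering = ToolUseTask(
            tool="hammer",
            category=Causality.CAUSAL_SINGLE,
            affordances=[Affordance("strike", "nail",
                                    "nail driven into the board")])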

    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    No abstract available

    Interaction dynamics and autonomy in cognitive systems

    The concept of autonomy is of crucial importance for understanding life and cognition. Whereas cellular and organismic autonomy is grounded in the self-production of the material infrastructure sustaining the existence of living beings as such, we are interested in how biological autonomy can be expanded into forms of autonomous agency, where autonomy as a form of organization is extended into the behaviour of an agent in interaction with its environment (and not its material self-production). In this thesis, we focus on the development of operational models of sensorimotor agency, exploring the construction of a domain of interactions that creates a dynamical interface between agent and environment. We present two main contributions to the study of autonomous agency. First, we contribute to the development of a modelling route for testing, comparing, and validating hypotheses about neurocognitive autonomy. Through the design and analysis of specific neurodynamical models embedded in robotic agents, we explore how an agent is constituted in a sensorimotor space as an autonomous entity able to adaptively sustain its own organization. Using two simulation models and different dynamical analyses and measurements of complex patterns in their behaviour, we are able to tackle some theoretical obstacles preventing the understanding of sensorimotor autonomy and to generate new predictions about the nature of autonomous agency in the neurocognitive domain. Second, we explore the extension of sensorimotor forms of autonomy into the social realm. We analyse two cases from an experimental perspective: the constitution of a collective subject in a sensorimotor social interactive task, and the emergence of an autonomous social identity in a large-scale, technologically mediated social system. Through the analysis of coordination mechanisms and emergent complex patterns, we gather experimental evidence indicating that, in some cases, social autonomy might emerge based on mechanisms of coordinated sensorimotor activity and interaction, constituting forms of collective autonomous agency.
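    The abstract mentions neurodynamical models embedded in robotic agents without naming them; continuous-time recurrent neural networks (CTRNNs) are a common substrate for such agents, and the sketch below, with assumed parameter names, shows one Euler-integration step of a CTRNN. This is a plausible illustration, not the thesis's actual model.

        import numpy as np

        def ctrnn_step(y, I, W, tau, bias, dt=0.01):
            """One Euler step of
            tau_i * dy_i/dt = -y_i + sum_j W_ij * sigma(y_j + bias_j) + I_i."""
            rates = 1.0 / (1.0 + np.exp(-(y + bias)))   # sigmoid firing rates
            dydt = (-y + W @ rates + I) / tau
            return y + dt * dydt

        # Tiny two-neuron agent driven by a constant sensory input.
        y = np.zeros(2)
        W = np.array([[4.5, -2.0], [2.0, 4.5]])
        tau = np.array([1.0, 1.0])
        bias = np.array([-2.25, -2.25])
        for _ in range(1000):
            y = ctrnn_step(y, I=np.array([0.5, 0.0]), W=W, tau=tau, bias=bias)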