17 research outputs found

    Robot-aided cloth classification using depth information and CNNs

    We present a system that classifies garments extracted from a pile of clothes. A robot arm picks a garment from the pile and shows it to a depth camera. Using only depth images of a partial view of the garment as input, a deep convolutional neural network is trained to classify different types of garments. The robot can rotate the garment along the vertical axis to provide different views, increasing prediction confidence and avoiding confusion. Besides obtaining very high classification scores, and in contrast to previous approaches to cloth classification that match the sensed data against a database, our system provides a fast and occlusion-robust solution to the problem.
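    As an illustration of the multi-view strategy described above, the sketch below fuses per-view CNN class probabilities by averaging and signals when the fused confidence is high enough to stop rotating the garment. This is a minimal reconstruction, not the authors' code; the function name and the threshold value are hypothetical.

```python
# Minimal sketch (not the authors' code): fuse per-view CNN predictions
# by averaging class probabilities across rotations of the garment.
import numpy as np

def fuse_views(per_view_logits, threshold=0.9):
    """per_view_logits: (n_views, n_classes) raw CNN outputs.
    Returns (predicted class, fused confidence, confident-enough flag)."""
    # Softmax each view's logits into class probabilities.
    z = per_view_logits - per_view_logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    fused = probs.mean(axis=0)             # average over the views seen so far
    pred = int(fused.argmax())
    # If the flag is False, the robot rotates the garment for another view.
    return pred, float(fused[pred]), bool(fused[pred] >= threshold)
```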

    Robot manipulation in human environments: Challenges for learning algorithms

    Summary of the work presented at the Dagstuhl Seminar 2014, held in Dagstuhl (Germany), 17–21 February 2014. Supported by the European projects PACO-PLUS, GARNICS and IntellAct, the Spanish projects PAU and PAU+, and the Catalan grant SGR-155.

    From the Turing test to science fiction: the challenges of social robotics

    The Turing test (1950) sought to distinguish whether the party engaged in a computer conversation was a human or a machine [6]. Science fiction has immortalized several humanoid robots full of humanity, and it is nowadays speculating about the roles the human being and the machine may play in this “pas à deux” in which we are irremissibly engaged [12]. Where is current robotics research heading? Industrial robots are giving way to social robots designed to aid in healthcare, education, entertainment and services. In the near future, robots will assist disabled and elderly people, do chores, act as playmates for youngsters and adults, and even work as nannies and reinforcement teachers. This imposes new requirements on robotics research, since social robots must be easy to program by non-experts [10], intrinsically safe [3], able to perceive and manipulate deformable objects [2, 8], tolerant to inaccurate perceptions and actions [4, 7] and, above all, endowed with a strong learning capacity [1, 9] and high adaptability [14] to non-predefined and dynamic environments. Taking as examples projects developed at the Institut de Robòtica i Informàtica Industrial (CSIC-UPC), some of the scientific, technological and ethical challenges [5, 11, 13] that this robotic evolution entails will be showcased.

    Pointcloud-based Identification of Optimal Grasping Poses for Cloth-like Deformable Objects

    In this paper, the problem of identifying optimal grasping poses for cloth-like deformable objects is addressed by means of a four-step algorithm that processes the data coming from a 3D camera. The first step segments the source pointcloud, while the second step computes a wrinkledness measure that robustly detects graspable regions of a cloth. In the third step, each individual wrinkle is identified by fitting a piecewise curve. Finally, in the fourth step, a target grasping pose for each detected wrinkle is estimated. Compared to deep learning approaches, where a good-quality dataset or trained model must be available, our general algorithm can be employed in very different scenarios with minor parameter tweaking. Results showing the application of our method to the clothes bin-picking task are presented.
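    The abstract does not spell out the wrinkledness measure; one common proxy for highly curved, graspable cloth regions in a pointcloud is the local surface-variation ratio of each point's neighbourhood, sketched below. This is an assumption-laden illustration, not the paper's exact measure, and the neighbourhood radius is made up.

```python
# Hedged sketch of a "wrinkledness" score: regions whose local surface
# deviates strongly from a plane are treated as wrinkled, hence graspable.
import numpy as np
from scipy.spatial import cKDTree

def wrinkledness(points, radius=0.02):
    """points: (N, 3) cloth pointcloud in metres.
    Returns a per-point score; higher means more wrinkled."""
    points = np.asarray(points)
    tree = cKDTree(points)
    scores = np.zeros(len(points))
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, radius)
        if len(idx) < 5:
            continue                        # too few neighbours to judge
        nbrs = points[idx] - points[idx].mean(axis=0)
        # Surface variation: smallest eigenvalue of the local covariance
        # over their sum; 0 on a plane, larger on folds and wrinkles.
        w = np.linalg.eigvalsh(nbrs.T @ nbrs)
        scores[i] = w[0] / w.sum()
    return scores
```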

    External force estimation for textile grasp detection

    Our current work on external force estimation without an end-effector force sensor is presented. To verify whether a grasp of a textile has been successful, the external wrench applied on the robot is computed online with a state observer based on an LWPR [3] model of the task.
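    The algebraic core of such sensorless estimation can be sketched as follows: a learned model (LWPR in the paper; any torque regressor here) predicts the joint torques the task should produce, and the unexplained residual is mapped to a 6D wrench through the Jacobian transpose. A minimal sketch, not the authors' observer:

```python
# Minimal sketch: external wrench from the joint-torque residual,
# standing in for the LWPR-based observer described in the abstract.
import numpy as np

def external_wrench(tau_measured, tau_predicted, jacobian):
    """tau_*: (n_joints,) torques; jacobian: (6, n_joints) geometric Jacobian.
    Returns the estimated end-effector wrench [Fx, Fy, Fz, Tx, Ty, Tz]."""
    residual = tau_measured - tau_predicted   # torques the model cannot explain
    # Least-squares solve of J^T w = residual maps torques to a wrench;
    # thresholding its magnitude can signal a successful textile grasp.
    wrench, *_ = np.linalg.lstsq(jacobian.T, residual, rcond=None)
    return wrench
```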

    A Grasping-centered Analysis for Cloth Manipulation

    Compliant and soft hands have gained a lot of attention in the past decade because of their ability to adapt to the shape of objects, increasing their effectiveness for grasping. However, when it comes to grasping highly flexible objects such as textiles, we face the dual problem: it is the object that adapts to the shape of the hand or gripper. In this context, classic grasp analysis and grasping taxonomies are not suitable for describing grasps of textile objects. This work proposes a novel definition of textile object grasps that abstracts from the robotic embodiment or hand shape and recovers concepts from the early neuroscience literature on hand prehension skills. This framework enables us to identify which grasps have been used in the literature so far to perform robotic cloth manipulation, and allows for a precise definition of all the tasks that have been tackled in terms of manipulation primitives based on regrasps. In addition, we also review which grippers have been used. Our analysis shows that the vast majority of cloth manipulations have relied on only one type of grasp, and at the same time we identify several tasks that need a greater variety of grasp types to be executed successfully. Our framework is generic, provides a classification of cloth manipulation primitives, and can inspire gripper design and benchmark construction for cloth manipulation. Comment: 13 pages, 4 figures, 4 tables. Accepted for publication in IEEE Transactions on Robotics.
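    As a rough illustration of how an embodiment-agnostic framework of this kind could be encoded, the sketch below models grasp types as an enumeration and a manipulation primitive as a sequence of regrasps. The type and grasp names are invented for illustration and are not the paper's taxonomy.

```python
# Illustrative encoding only; the grasp names below are hypothetical.
from dataclasses import dataclass
from enum import Enum, auto
from typing import List

class ClothGrasp(Enum):
    PINCH = auto()         # a few cloth layers held between fingertips
    TWO_POINT = auto()     # bimanual pinch at two separate grasp points
    PLANAR_PRESS = auto()  # cloth pressed flat against a surface

@dataclass
class Regrasp:
    release: ClothGrasp    # grasp given up in this step
    acquire: ClothGrasp    # grasp established in this step

@dataclass
class ManipulationPrimitive:
    name: str
    steps: List[Regrasp]   # a task is a sequence of regrasps

unfold = ManipulationPrimitive(
    name="unfold-and-flatten",
    steps=[Regrasp(ClothGrasp.PINCH, ClothGrasp.TWO_POINT),
           Regrasp(ClothGrasp.TWO_POINT, ClothGrasp.PLANAR_PRESS)])
```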

    Visual grasp point localization, classification and state recognition in robotic manipulation of cloth: an overview

    Cloth manipulation by robots is gaining popularity among researchers because of its relevance, mainly (but not only) in domestic and assistive robotics. The required science and technologies are beginning to ripen for the challenges posed by the manipulation of soft materials, and many contributions have appeared in recent years. This survey provides a systematic review of existing techniques for the basic perceptual tasks of grasp point localization, state estimation and classification of cloth items, from the perspective of their manipulation by robots. This choice is grounded on the fact that any manipulative action requires instructing the robot where to grasp, and most garment handling activities depend on the correct recognition of the type to which the particular cloth item belongs and of its state. The high inter- and intraclass variability of garments, the continuous nature of the possible deformations of cloth and the evident difficulty of predicting their localization and extent on the garment piece are challenges that have encouraged researchers to provide a plethora of methods to confront such problems, with some promising results. The present review constitutes the first effort to furnish a structured framework of these works, with the aim of helping future contributors gain both insight and perspective on the subject.

    Data-efficient Learning of Robotic Clothing Assistance using Bayesian Gaussian Process Latent Variable Models

    Motor-skill learning for complex robotic tasks is a challenging problem due to high task variability. Robotic clothing assistance is one such challenging problem that can greatly improve the quality of life of the elderly and disabled. In this study, we propose a data-efficient representation that encodes task-specific motor skills of the robot using Bayesian nonparametric latent variable models. The effectiveness of the proposed motor-skill representation is demonstrated in two ways: (1) through a real-time controller that can be used as a tool for learning from demonstration to impart novel skills to the robot, and (2) by demonstrating that policy search reinforcement learning in such a task-specific latent space outperforms learning in the high-dimensional joint configuration space of the robot. We implement our proposed framework in a practical setting with a dual-arm robot performing clothing assistance tasks.
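    A minimal sketch of the latent-space idea, assuming demonstration joint trajectories are stacked row-wise in a (hypothetical) file and using the GPy library's Bayesian GPLVM as a stand-in for the authors' implementation:

```python
# Sketch only: learn a low-dimensional latent skill space from
# demonstrations, then decode latent points back to joint postures.
import numpy as np
import GPy

Y = np.load("demo_trajectories.npy")   # (n_samples, n_joints); hypothetical file
m = GPy.models.BayesianGPLVM(Y, input_dim=2, num_inducing=30)
m.optimize(messages=False, max_iters=1000)

# Policy search can now explore the 2D latent space instead of the
# high-dimensional joint space; decoding a candidate latent point:
z = np.zeros((1, 2))
q_mean, q_var = m.predict(z)           # posterior mean/variance of the joints
```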

    A framework for robotic clothing assistance by imitation learning

    The recent demographic trend across developed nations shows a dramatic increase in the aging population, falling fertility rates and a shortage of caregivers. Hence, the demand for service robots to assist with dressing, an essential Activity of Daily Living (ADL), is increasing rapidly. Robotic clothing assistance is a challenging task, since the robot has to deal with two demanding tasks simultaneously: (a) non-rigid and highly flexible cloth manipulation and (b) safe human–robot interaction while assisting humans whose posture may vary during the task. Humans, on the other hand, can deal with these tasks rather easily. In this paper, we propose a framework for robotic clothing assistance by imitation learning from a human demonstration to a compliant dual-arm robot. In this framework, we divide the dressing task into three phases, i.e. the reaching phase, the arm dressing phase, and the body dressing phase. We model the arm dressing phase as a global trajectory modification using Dynamic Movement Primitives (DMP), while we model the body dressing phase as a local trajectory modification using a Bayesian Gaussian Process Latent Variable Model (BGPLVM). We show that the proposed framework, developed towards assisting the elderly, generalizes to various people and successfully performs a sleeveless shirt dressing task. We also present participants' feedback on a public demonstration at the International Robot Exhibition (iREX) 2017. To our knowledge, this is the first work performing a full dressing of a sleeveless shirt on a human subject with a humanoid robot.
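    The arm dressing phase relies on the standard DMP property that changing the goal globally reshapes a demonstrated trajectory. Below is a one-dimensional rollout in the textbook DMP formulation; the paper's exact parameterisation may differ, and all gains here are illustrative.

```python
# Minimal discrete DMP rollout (standard formulation, illustrative gains).
import numpy as np

def dmp_rollout(x0, g, weights, centers, widths,
                tau=1.0, dt=0.001, alpha=25.0, beta=6.25, alpha_s=4.0):
    """Integrate one DMP dimension from start x0 to goal g.
    weights/centers/widths parameterise the learned RBF forcing term."""
    x, v, s = x0, 0.0, 1.0
    traj = []
    for _ in range(int(1.0 / dt)):
        psi = np.exp(-widths * (s - centers) ** 2)          # RBF activations
        f = (psi @ weights) / (psi.sum() + 1e-10) * s * (g - x0)
        dv = (alpha * (beta * (g - x) - v) + f) / tau       # transformation system
        v += dv * dt
        x += (v / tau) * dt
        s += (-alpha_s * s / tau) * dt                      # canonical system
        traj.append(x)
    return np.array(traj)   # shifting g reshapes the whole trajectory
```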

    Sequential Decision Making under Uncertainty for Sensor Management in Mobile Robotics

    Sensor management refers to the control of the degrees of freedom in a sensing system. The objective of sensor management is to improve performance, e.g. by obtaining more accurate information or by achieving other operational goals. Sensor management is viewed as a sequential decision making process, where decisions at any time are made conditional on the past decisions and measurement data. At the time of deciding a control action for a sensing system, the measurement data that will be obtained are unknown. Thus, informally speaking, a solution to a sensor management problem is a policy that determines which sensing action to undertake given the current information on the state of the process under investigation and contingent on any possible realisation of future measurement data outcomes. This thesis studies sensor management by framing the contingent planning problem in the partially observable Markov decision process (POMDP) framework. In particular, applications in mobile robotics are considered. Mobile robots are viewed as controllable sensor platforms. Based on earlier work on POMDP based robot control, and distinguishing between the two cases of either exploiting or gathering information, we define four canonical sensor management problem types in mobile robotics. In each of the problem types, we exploit the structural properties of their inputs to improve the efficiency of applicable contingent planning algorithms. In particular, we consider sensor management problems for information gathering where the utility of the possible control policies is quantified by mutual information (MI). We identify the relationship between the POMDP formulation of an environment monitoring problem and another contingent planning problem known as a multi-armed bandit (MAB). In a robotic exploration task, we derive a novel approximation for MI. Through both simulation and real-world experiments in mobile robotics domains, we determine the applicability, advantages, and disadvantages of a POMDP based approach to sensor management in mobile robotics.
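    For the information-gathering case, the one-step (myopic) version of such a policy is easy to state: pick the sensing action that maximises the mutual information between the state and the next measurement under the current belief. The toy sketch below does this for a discrete model; it illustrates the criterion only and is not one of the thesis' algorithms.

```python
# Toy myopic sensor management: choose the action maximising
# MI(state; observation) = H(belief) - E_o[H(posterior | o)].
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def best_action(belief, obs_models):
    """belief: (n_states,) prior over states.
    obs_models: dict action -> (n_obs, n_states) likelihoods P(o | s, a)."""
    best, best_mi = None, -np.inf
    for a, L in obs_models.items():
        p_o = L @ belief                       # predictive observation distribution
        mi = entropy(belief)
        for o in range(L.shape[0]):
            post = L[o] * belief               # unnormalised posterior after o
            if post.sum() > 0:
                mi -= p_o[o] * entropy(post / post.sum())
        if mi > best_mi:
            best, best_mi = a, mi
    return best, best_mi
```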