
    Learning cloth manipulation with demonstrations

    Recent advances in Deep Reinforcement Learning and in the computational capabilities of GPUs have led to a variety of research on the learning side of robotics, with the main aim of making autonomous robots capable of learning how to solve a task on their own, with minimal engineering on the planning, vision, or control side. Efforts have been made to learn the manipulation of rigid objects with the help of human demonstrations, specifically in tasks such as stacking multiple blocks on top of each other or inserting a pin into a hole. These Deep RL algorithms successfully learn how to complete tasks involving the manipulation of rigid objects, but autonomous manipulation of textile objects such as clothes through Deep RL remains largely unstudied in the community. The main objectives of this work are: 1) implementing state-of-the-art Deep RL algorithms for rigid object manipulation and gaining a deep understanding of how these algorithms work; 2) creating an open-source simulation environment for simulating textile objects such as clothes; 3) designing Deep RL algorithms for learning autonomous manipulation of textile objects through demonstrations. Peer Reviewed. Preprint.
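
    The abstract does not specify how demonstrations enter the Deep RL loop; a common recipe in learning-from-demonstration work is to seed the replay buffer with demonstration transitions and keep sampling a fixed fraction of them during training. The sketch below illustrates only that recipe; the buffer sizes, sampling ratio and transition format are illustrative assumptions, not the authors' implementation.

        import random
        from collections import deque

        class MixedReplayBuffer:
            """Replay buffer that always keeps demonstration transitions and
            mixes them with agent experience when sampling (illustrative sketch)."""

            def __init__(self, capacity=100_000, demo_fraction=0.25):
                self.demos = []                      # demonstration transitions (never evicted)
                self.agent = deque(maxlen=capacity)  # agent transitions (FIFO eviction)
                self.demo_fraction = demo_fraction

            def add_demo(self, transition):
                self.demos.append(transition)

            def add_agent(self, transition):
                self.agent.append(transition)

            def sample(self, batch_size):
                n_demo = min(int(batch_size * self.demo_fraction), len(self.demos))
                n_agent = min(batch_size - n_demo, len(self.agent))
                return random.sample(self.demos, n_demo) + random.sample(list(self.agent), n_agent)

        # Usage: transitions are (state, action, reward, next_state, done) tuples.
        buffer = MixedReplayBuffer(demo_fraction=0.5)
        buffer.add_demo(("s0", "a0", 1.0, "s1", False))   # from a human demonstration
        buffer.add_agent(("s1", "a1", 0.0, "s2", True))   # from the learning agent
        batch = buffer.sample(2)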

    Learning RGB-D descriptors of garment parts for informed robot grasping

    Robotic handling of textile objects in household environments is an emerging application that has recently received considerable attention thanks to the development of domestic robots. Most current approaches follow a multiple re-grasp strategy for this purpose, in which clothes are sequentially grasped from different points until one of them yields a desired configuration. In this work we propose a vision-based method, built on the Bag of Visual Words approach, that combines appearance and 3D information to detect parts suitable for grasping in clothes, even when they are highly wrinkled. We also contribute a new, annotated garment part dataset that can be used for benchmarking classification, part detection, and segmentation algorithms. The dataset is used to evaluate our approach and several state-of-the-art 3D descriptors for the task of garment part detection. Results indicate that appearance is a reliable source of information, but that augmenting it with 3D information can help the method perform better with new clothing items. This research is partially funded by the Spanish Ministry of Science and Innovation under Project PAU+ DPI2011-2751, the EU Project IntellAct FP7-ICT2009-6-269959 and the ERA-Net Chistera Project ViSen PCIN-2013-047. A. Ramisa worked under the JAE-Doc grant from CSIC and FSE. Peer Reviewed.
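
    As a rough illustration of a Bag of Visual Words pipeline that concatenates appearance and 3D cues, the sketch below clusters local descriptors into a vocabulary, builds per-image word histograms, and trains a classifier on the stacked histograms. The descriptors, vocabulary size and classifier are placeholder assumptions, not the descriptors or settings evaluated in the paper.

        # Minimal bag-of-visual-words sketch, assuming precomputed local descriptors
        # (e.g. an appearance descriptor and a 3D descriptor per keypoint).
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.svm import SVC

        def build_vocabulary(all_descriptors, n_words=50):
            return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(all_descriptors)

        def bovw_histogram(descriptors, vocabulary):
            words = vocabulary.predict(descriptors)
            hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
            return hist / max(hist.sum(), 1.0)   # L1-normalised word histogram

        # Synthetic stand-ins for local appearance (128-D) and 3D (33-D) descriptors.
        rng = np.random.default_rng(0)
        app_desc = [rng.normal(size=(200, 128)) + label for label in (0, 1, 0, 1)]
        d3_desc  = [rng.normal(size=(200, 33)) + label for label in (0, 1, 0, 1)]
        labels   = [0, 1, 0, 1]                  # e.g. 1 = graspable garment part

        app_vocab = build_vocabulary(np.vstack(app_desc))
        d3_vocab  = build_vocabulary(np.vstack(d3_desc))

        # Concatenate appearance and 3D histograms into one image-level feature.
        X = np.array([np.hstack([bovw_histogram(a, app_vocab), bovw_histogram(d, d3_vocab)])
                      for a, d in zip(app_desc, d3_desc)])
        clf = SVC(kernel="linear").fit(X, labels)
        print(clf.predict(X))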

    Teaching grasping points using natural movements

    Work presented at the 18th Catalan Conference on Artificial Intelligence (CCIA), held in Valencia (Spain), 21-23 October 2015. Research on robots performing everyday tasks at home has pursued the problem of manipulating everyday objects. Among these tasks, grasping a cloth is particularly challenging, as the textile is highly deformable and it is not straightforward to define a generic grasping point. In this paper, we address this problem by introducing a new robot interaction method that enables inexperienced users to control the robot in a natural way. When the robot proposes a grasping point, the user is able to teach the robot a new one. The data collected using this method is then used to train a system using a linear regression method, which produces better grasping points and allows better manipulation actions. The experiments demonstrate the validity of the new interaction method and its potential to improve the grasping-point selection algorithm. Peer Reviewed.
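
    A minimal sketch of the regression idea described above, assuming hypothetical hand-crafted cloth features and a handful of user-taught grasping points; the feature set and data are illustrative, not the paper's.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Each row: features describing a cloth configuration (e.g. wrinkledness,
        # bounding-box size); each target: the (x, y) grasping point taught by the user.
        X_train = np.array([[0.2, 0.30], [0.8, 0.45], [0.5, 0.60], [0.1, 0.20]])
        y_train = np.array([[0.10, 0.12], [0.42, 0.31], [0.28, 0.40], [0.05, 0.09]])

        model = LinearRegression().fit(X_train, y_train)
        print(model.predict([[0.4, 0.5]]))   # proposed grasping point for a new cloth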

    Learning robot policies using a high-level abstraction persona-behaviour simulator

    © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. Collecting data in Human-Robot Interaction (HRI) for training learning agents can be a hard task to accomplish. This is especially true when the target users are older adults with dementia, since it usually requires hours of interaction and puts quite a lot of workload on the user. This paper addresses this problem by importing the Personas technique into HRI to create fictional patients' profiles. We propose a Persona-Behaviour Simulator tool that provides, with high-level abstraction, the user's actions during an HRI task, and we apply it to cognitive training exercises for older adults with dementia. It consists of a Persona Definition that characterizes a patient along four dimensions and a Task Engine that provides information regarding the task complexity. We build a simulated environment where the high-level user's actions are provided by the simulator and the robot's initial policy is learned using a Q-learning algorithm. The results show that the current simulator provides a reasonable initial policy for a defined Persona profile. Moreover, the learned robot assistance has proved to be robust to potential changes in the user's behaviour. In this way, we can speed up the fine-tuning of the rough policy during the real interactions to tailor the assistance to the given user. We believe the presented approach can be easily extended to account for other types of HRI tasks, for example when input data is required to train a learning algorithm but data collection is very expensive or unfeasible. We advocate that simulation is a convenient tool in these cases. Peer Reviewed. Postprint (author's final draft).
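
    A tabular Q-learning sketch against a toy simulated user, in the spirit of learning an initial policy from a persona simulator. The state/action abstraction and the simulate_user stand-in below are hypothetical placeholders, not the actual Persona-Behaviour Simulator interface.

        import random
        from collections import defaultdict

        ACTIONS = ["wait", "encourage", "give_hint", "solve_step"]

        def simulate_user(state, robot_action):
            """Toy stand-in for a simulated Persona: returns (next_state, reward, done)."""
            progress, confusion = state
            if robot_action == "give_hint" and confusion > 0:
                confusion -= 1
            elif robot_action == "encourage" and confusion == 0:
                progress += 1
            reward = 1.0 if progress >= 3 else -0.1
            done = progress >= 3
            return (progress, confusion), reward, done

        Q = defaultdict(float)
        alpha, gamma, epsilon = 0.1, 0.95, 0.2

        for episode in range(500):
            state, done = (0, 2), False
            for _ in range(50):                      # cap episode length
                if random.random() < epsilon:        # explore
                    action = random.choice(ACTIONS)
                else:                                # exploit best known action
                    action = max(ACTIONS, key=lambda a: Q[(state, a)])
                next_state, reward, done = simulate_user(state, action)
                best_next = max(Q[(next_state, a)] for a in ACTIONS)
                Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
                state = next_state
                if done:
                    break

        print(max(ACTIONS, key=lambda a: Q[((0, 2), a)]))   # learned first action for a new session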

    Adapting robot task planning to user preferences: an assistive shoe dressing example

    The final publication is available at link.springer.com. Healthcare robots will be the next big advance in humans' domestic welfare, with robots able to assist elderly people and users with disabilities. However, each user has his/her own preferences, needs and abilities. Therefore, robotic assistants will need to adapt to them, behaving accordingly. Towards this goal, we propose a method to adapt the robot's behavior to the user's preferences using symbolic task planning. A user model is built from the user's answers to simple questions with a fuzzy inference system, and it is then integrated into the planning domain. We describe an adaptation method in which penalizations are applied to the planner's rules depending on both the user's satisfaction and the execution outcome. We demonstrate the application of the adaptation method in a simple shoe-fitting scenario, with experiments performed in a simulated user environment. The results show quick behavior adaptation, even when the user behavior changes, as well as robustness to a wrong inference of the initial user model. Finally, some insights from a non-simulated, real-world shoe-fitting setup are also provided. Peer Reviewed. Postprint (author's final draft).
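
    One simple way to picture rule penalization in a cost-sensitive planning setting is to attach a cost to each operator and raise or lower it after every execution according to the outcome and the inferred satisfaction. The operator names and update rule below are illustrative assumptions, not the paper's exact penalization scheme.

        costs = {"insert_foot_frontal": 1.0, "insert_foot_lateral": 1.0, "ask_user_to_lift_leg": 1.0}

        def update_cost(action, success, satisfaction, penalty=0.5, reward=0.2):
            """satisfaction in [0, 1], e.g. inferred from the user's answers."""
            if not success:
                costs[action] += penalty                        # failed executions become costlier
            else:
                costs[action] += penalty * (1.0 - satisfaction) - reward * satisfaction
            costs[action] = max(costs[action], 0.1)             # keep costs positive for the planner

        def cheapest_action():
            # A cost-sensitive planner would minimise the total plan cost; picking the
            # cheapest applicable operator here is only a stand-in for that search.
            return min(costs, key=costs.get)

        update_cost("insert_foot_frontal", success=True, satisfaction=0.2)
        print(cheapest_action())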

    On inferring intentions in shared tasks for industrial collaborative robots

    Inferring human operators' actions in shared collaborative tasks plays a crucial role in enhancing the cognitive capabilities of industrial robots. In these incipient collaborative robotic applications, humans and robots should share not only space but also forces and the execution of a task. In this article, we present a robotic system that is able to identify different human intentions and to adapt its behavior accordingly, by means of force data alone. To accomplish this aim, three major contributions are presented: (a) force-based operator intent recognition, (b) a force-based dataset of physical human-robot interaction, and (c) validation of the whole system in a scenario inspired by a realistic industrial application. This work is an important step towards a more natural and user-friendly manner of physical human-robot interaction in scenarios where humans and robots collaborate in the accomplishment of a task. Peer Reviewed. Postprint (published version).
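
    As a hedged illustration of force-based intent recognition, the sketch below summarises a short window of wrench readings with simple statistics and classifies the operator's intention with a nearest-neighbour classifier; the features, intention labels and classifier are assumptions, not the system described in the article.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        def window_features(wrench_window):
            """wrench_window: (T, 6) array of [fx, fy, fz, tx, ty, tz] samples."""
            return np.hstack([wrench_window.mean(axis=0),
                              wrench_window.std(axis=0),
                              np.abs(wrench_window).max(axis=0)])

        rng = np.random.default_rng(1)
        push = rng.normal([5, 0, 0, 0, 0, 0], 1.0, size=(20, 30, 6))   # 20 "push" windows
        pull = rng.normal([-5, 0, 0, 0, 0, 0], 1.0, size=(20, 30, 6))  # 20 "pull" windows

        X = np.array([window_features(w) for w in np.vstack([push, pull])])
        y = ["push"] * 20 + ["pull"] * 20

        clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
        print(clf.predict([window_features(rng.normal([4, 0, 0, 0, 0, 0], 1.0, (30, 6)))]))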

    Towards safety in physically assistive robots: eating assistance

    Safety is one of the basic elements for building trust in robots. This paper studies remedies to unavoidable collisions using robotic assistive feeding as an example task. Firstly, we propose an attention mechanism so the user can control the robot using gestures and thus prevent collisions. Secondly, when unwanted contacts are unavoidable, we compare two safety strategies: active safety, using a force sensor to monitor the maximum allowed forces; and passive safety, using compliant controllers. Experimental evaluation shows that the gesture mechanism is effective for controlling the robot. Also, the impact forces obtained with both methods are similar, and thus either can be used independently. Additionally, users who experienced deliberate impacts declared that the impact was not harmful. Peer Reviewed. Postprint (author's final draft).
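
    The active-safety strategy can be pictured as a monitoring loop that stops the arm whenever the measured contact force exceeds a maximum allowed value. The sketch below assumes hypothetical read_force and send_velocity callbacks and an illustrative threshold; it is not the controllers used in the experiments.

        import numpy as np

        MAX_FORCE_N = 15.0            # illustrative threshold, not the paper's value

        def safe_motion_step(read_force, send_velocity, velocity):
            """Execute one control step only if the contact force is below the limit."""
            force = np.linalg.norm(read_force())      # magnitude of measured force [N]
            if force > MAX_FORCE_N:
                send_velocity(np.zeros(3))            # stop immediately on excessive contact
                return False
            send_velocity(velocity)
            return True

        # Usage with dummy callbacks standing in for the real sensor / robot drivers.
        ok = safe_motion_step(read_force=lambda: np.array([3.0, 0.0, 1.0]),
                              send_velocity=lambda v: None,
                              velocity=np.array([0.0, 0.0, 0.05]))
        print("motion allowed:", ok)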

    Context-aware Human Motion Prediction

    The problem of predicting human motion given a sequence of past observations is at the core of many applications in robotics and computer vision. Current state-of-the-art approaches formulate this problem as a sequence-to-sequence task, in which a history of 3D skeletons feeds a Recurrent Neural Network (RNN) that predicts future movements, typically in the order of 1 to 2 seconds. However, one aspect that has been overlooked so far is the fact that human motion is inherently driven by interactions with objects and/or other humans in the environment. In this paper, we explore this scenario using a novel context-aware motion prediction architecture. We use a semantic-graph model where the nodes parameterize the human and objects in the scene and the edges their mutual interactions. These interactions are iteratively learned through a graph attention layer, fed with the past observations, which now include both object and human body motions. Once this semantic graph is learned, we inject it into a standard RNN to predict future movements of the human/s and object/s. We consider two variants of our architecture, either freezing the contextual interactions in the future or updating them. A thorough evaluation on the "Whole-Body Human Motion Database" shows that in both cases, our context-aware networks clearly outperform baselines in which the context information is not considered. Comment: Accepted at CVPR.
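
    A minimal PyTorch sketch of the overall idea, not the authors' architecture: one attention layer mixes the human node with the object nodes at every frame, and a GRU over the resulting context sequence predicts the next pose. All dimensions, layer sizes and the dummy data are illustrative assumptions.

        import torch
        import torch.nn as nn

        class ContextAwarePredictor(nn.Module):
            def __init__(self, node_dim=16, hidden=64, pose_dim=16):
                super().__init__()
                self.score = nn.Linear(2 * node_dim, 1)   # attention score on (human, node) pairs
                self.gru = nn.GRU(node_dim, hidden, batch_first=True)
                self.decoder = nn.Linear(hidden, pose_dim)

            def forward(self, nodes):
                # nodes: (batch, time, n_nodes, node_dim); node 0 is the human.
                B, T, N, D = nodes.shape
                human = nodes[:, :, 0, :]                                     # (B, T, D)
                pairs = torch.cat([human.unsqueeze(2).expand(B, T, N, D), nodes], dim=-1)
                attn = torch.softmax(self.score(pairs).squeeze(-1), dim=-1)   # (B, T, N)
                context = (attn.unsqueeze(-1) * nodes).sum(dim=2)             # (B, T, D)
                out, _ = self.gru(context)
                return self.decoder(out[:, -1])            # predicted next pose (B, pose_dim)

        model = ContextAwarePredictor()
        dummy = torch.randn(2, 10, 3, 16)                  # 2 sequences, 10 frames, 3 nodes
        print(model(dummy).shape)                          # torch.Size([2, 16])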

    Zoom control to compensate camera translation within a robot egomotion estimation approach

    The final publication is available at link.springer.com. Zoom control has not received the attention one would expect in view of how much it enriches the competences of a vision system. The possibility of changing the size of object projections not only permits analysing objects at a higher resolution, but it may also improve tracking and, therefore, subsequent 3D motion estimation and reconstruction results. Of further interest to us, zoom control enables much larger camera motions, while fixating on the same target, than would be possible with fixed focal-length cameras. This work is partially funded by the EU PACO-PLUS project FP6-2004-IST-4-27657. The authors thank Gabriel Pi for their contribution in preparing the experiments. Peer Reviewed. Postprint (author's final draft).
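
    Under a pinhole model, the projected size of a target of physical size S at depth Z is s = f*S/Z, so keeping the projection size constant while the camera translates amounts to scaling the focal length with the target depth. The snippet below only illustrates this relation with made-up numbers; it is not the control law used in the paper.

        def compensating_focal_length(f_old, z_old, z_new):
            """Focal length that keeps the target's projected size constant (pinhole model)."""
            return f_old * z_new / z_old

        f0 = 8.0      # mm, focal length at the initial position
        z0 = 1.0      # m, initial target depth
        z1 = 1.5      # m, target depth after the camera moves away
        print(compensating_focal_length(f0, z0, z1))   # 12.0 mm keeps the same image size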

    Monocular object pose computation with the foveal-peripheral camera of the humanoid robot Armar-III

    Active contour modelling is useful for fitting non-textured objects, and algorithms have been developed to recover the motion of an object and its uncertainty. Here we show that these algorithms can also be used with point features matched on textured objects, and that active contours and point matches complement each other in a natural way. In the same manner, we also show that depth-from-zoom algorithms, developed for zooming cameras, can also be exploited in the foveal-peripheral eye configuration present in the Armar-III humanoid robot. Peer Reviewed.
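
    As a generic illustration of how two pose estimates with uncertainties, such as one from a contour fit and one from point matches, can be combined, the sketch below applies a standard covariance-weighted (information-form) fusion; this is a textbook combination rule, not necessarily the authors' specific algorithm.

        import numpy as np

        def fuse_estimates(x1, P1, x2, P2):
            """Fuse two Gaussian estimates (mean, covariance) of the same pose."""
            info1, info2 = np.linalg.inv(P1), np.linalg.inv(P2)
            P = np.linalg.inv(info1 + info2)
            x = P @ (info1 @ x1 + info2 @ x2)
            return x, P

        x_contour = np.array([0.10, 0.02, 0.50])           # pose from the contour fit
        P_contour = np.diag([0.04, 0.04, 0.01])
        x_points  = np.array([0.12, 0.00, 0.48])           # pose from point matches
        P_points  = np.diag([0.01, 0.01, 0.09])

        x, P = fuse_estimates(x_contour, P_contour, x_points, P_points)
        print(x)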