    Combining Self-Supervised Learning and Imitation for Vision-Based Rope Manipulation

    Manipulation of deformable objects, such as ropes and cloth, is an important but challenging problem in robotics. We present a learning-based system in which a robot takes as input a sequence of images of a human manipulating a rope from an initial to a goal configuration, and outputs a sequence of actions that reproduce the human demonstration, using only monocular images as input. To perform this task, the robot learns a pixel-level inverse dynamics model of rope manipulation directly from images in a self-supervised manner, using about 60K interactions with the rope collected autonomously by the robot. The human demonstration provides a high-level plan of what to do, and the low-level inverse model is used to execute the plan. We show that by combining the high- and low-level plans, the robot can successfully manipulate a rope into a variety of target shapes using only a sequence of human-provided images for direction. Comment: 8 pages, accepted to the International Conference on Robotics and Automation (ICRA) 2017.
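
    To make the pipeline concrete, here is a minimal sketch of a pixel-level inverse dynamics model in the spirit described above: a network that looks at the current and next frame and predicts the action taken between them, trained on the robot's own interaction data. The architecture, sizes, and names (e.g. `InverseModel`) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an inverse dynamics model over image pairs (illustration
# only; architecture and names are assumptions, not the authors' code).
import torch
import torch.nn as nn

class InverseModel(nn.Module):
    """Predicts the action that transforms frame I_t into frame I_{t+1}."""
    def __init__(self, n_actions: int):
        super().__init__()
        # Shared encoder applied to both the current and the next frame.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # The action head sees both embeddings and outputs action logits.
        self.head = nn.Sequential(
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, img_t, img_next):
        z = torch.cat([self.encoder(img_t), self.encoder(img_next)], dim=1)
        return self.head(z)

# Self-supervised training: the robot's own (image, action, next image)
# triples provide the labels, so no human annotation is needed.
model = InverseModel(n_actions=20)       # action discretization is assumed
loss_fn = nn.CrossEntropyLoss()
img_t = torch.randn(8, 3, 64, 64)        # batch of current frames
img_next = torch.randn(8, 3, 64, 64)     # frames observed after the action
actions = torch.randint(0, 20, (8,))     # actions the robot executed
loss = loss_fn(model(img_t, img_next), actions)
loss.backward()
```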

    DeRi-IGP: Manipulating Rigid Objects Using Deformable Objects via Iterative Grasp-Pull

    Heterogeneous system manipulation, i.e., manipulating rigid objects via deformable (soft) objects, is an emerging field that remains in its early stages of research. Existing works in this field suffer from limited action and operational spaces, poor generalization ability, and expensive development. To address these challenges, we propose a universally applicable and effective moving primitive, Iterative Grasp-Pull (IGP), and a sample-based framework, DeRi-IGP, to solve the heterogeneous system manipulation task. The DeRi-IGP framework uses the robots' local onboard RGBD sensors to observe the environment, which comprises a soft-rigid body system. It then uses this information to iteratively grasp and pull a soft body (e.g., a rope) to move the attached rigid body to a desired location. We evaluate the effectiveness of our framework on various heterogeneous manipulation tasks and compare its performance with several state-of-the-art baselines. The results show that DeRi-IGP outperforms the other methods by a significant margin. In addition, we demonstrate the advantage of IGP's large operational space in a long-distance object acquisition task in both simulated and real environments.
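
    The iterative grasp-pull idea lends itself to a simple sample-based loop: observe, sample candidate (grasp point, pull direction) pairs, score them with a forward model, and execute the best one until the rigid body reaches the goal. The sketch below is a hedged illustration of that loop; every function here, including the toy forward model, is a hypothetical placeholder, not the DeRi-IGP code.

```python
# Schematic sample-based iterative grasp-pull loop (illustration only; all
# names are hypothetical placeholders, not the DeRi-IGP implementation).
import numpy as np

def predicted_distance(rigid_pos, goal, grasp_point, pull_dir):
    """Toy forward model: assume the pull drags the rigid body a small step
    along the pull direction. A real system would learn or simulate this;
    this toy version ignores grasp_point entirely."""
    step = 0.1 * pull_dir / (np.linalg.norm(pull_dir) + 1e-8)
    return np.linalg.norm(rigid_pos + step - goal)

def iterative_grasp_pull(observe, grasp, pull, goal, n_samples=64,
                         tol=0.05, max_iters=50):
    """Grasp and pull the rope repeatedly until the rigid body reaches goal."""
    for _ in range(max_iters):
        rigid_pos, rope_points = observe()          # e.g. from onboard RGBD
        if np.linalg.norm(rigid_pos - goal) < tol:  # close enough: success
            return True
        # Sample candidate (grasp point on the rope, pull direction) pairs
        # and keep the one predicted to bring the rigid body nearest the goal.
        candidates = [(rope_points[np.random.randint(len(rope_points))],
                       np.random.uniform(-1.0, 1.0, size=2))
                      for _ in range(n_samples)]
        g, d = min(candidates,
                   key=lambda c: predicted_distance(rigid_pos, goal, *c))
        grasp(g)
        pull(d)
    return False
```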

    DeRi-Bot: Learning to Collaboratively Manipulate Rigid Objects via Deformable Objects

    Recent research efforts have yielded significant advancements in manipulating objects under homogeneous settings, where the robot is required to manipulate either rigid or deformable (soft) objects. However, manipulation under heterogeneous setups that involve both rigid and one-dimensional (1D) deformable objects remains an unexplored area of research. Such setups are common in various scenarios that involve the transportation of heavy objects via ropes, e.g., on factory floors, at disaster sites, and in forestry. To address this challenge, we introduce DeRi-Bot, the first framework that enables the collaborative manipulation of rigid objects with deformable objects. Our framework comprises an Action Prediction Network (APN) and a Configuration Prediction Network (CPN) to model the complex patterns and stochasticity of soft-rigid body systems. We demonstrate the effectiveness of DeRi-Bot in moving rigid objects to a target position with ropes connected to robotic arms. Furthermore, DeRi-Bot is a distributed method that can accommodate an arbitrary number of robots or human partners without reconfiguration or retraining. We evaluate our framework in both simulated and real-world environments and show that it achieves promising results with strong generalization across different types of objects and multi-agent settings, including human-robot collaboration. Comment: This paper has been accepted by IEEE RA-L.
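
    As a rough illustration of how an action network and a configuration network can be combined at decision time, the sketch below samples noisy action proposals and ranks them by how close the predicted resulting configuration is to a target. The shapes, the noise model, and the scoring rule are assumptions for illustration, not the published APN/CPN models.

```python
# Hedged sketch of an APN/CPN decision step (toy networks; the real DeRi-Bot
# models and their inputs/outputs are more involved).
import torch
import torch.nn as nn

obs_dim, act_dim, cfg_dim, n_samples = 32, 4, 16, 128

apn = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                    nn.Linear(64, act_dim))           # proposes an action
cpn = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(),
                    nn.Linear(64, cfg_dim))           # predicts the outcome

obs = torch.randn(1, obs_dim).expand(n_samples, obs_dim)
target_cfg = torch.randn(cfg_dim)

# Perturb the APN proposal to get a diverse candidate set, then rank the
# candidates by the distance between the CPN-predicted configuration of the
# soft-rigid system and the target configuration.
actions = apn(obs) + 0.1 * torch.randn(n_samples, act_dim)
pred_cfg = cpn(torch.cat([obs, actions], dim=1))
best_action = actions[(pred_cfg - target_cfg).norm(dim=1).argmin()]
```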

    Multi-rotor Aerial Vehicles in Physical Interactions: A Survey

    Research on Multi-rotor Aerial Vehicles (MAVs) has advanced remarkably over the past two decades, propelling the field forward at an accelerated pace. Through motion control and the integration of specialized mechanisms, researchers have unlocked the potential of MAVs to perform a wide range of tasks in diverse scenarios. Notably, the literature has highlighted distinctive attributes that give MAVs a competitive edge in physical interaction compared to other robotic systems. In this survey, we present a categorization of the various types of physical interactions in which MAVs are involved, supported by comprehensive case studies. We examine the approaches researchers have employed to address different challenges with MAVs and their applications, including the development of different types of controllers to handle the uncertainties inherent in these interactions. By analyzing the strengths and limitations of different methodologies and discussing potential enhancements, this survey aims to illuminate the path for future research on MAVs with high actuation capabilities.

    Tactile information improves visual object discrimination in kea, Nestor notabilis, and capuchin monkeys, Sapajus spp.

    In comparative visual cognition research, the influence of information acquired by nonvisual senses has received little attention. Systematic studies of how the integration of information from sight and touch affects animal perception are sparse. Here, we investigated whether tactile input improves the visual discrimination ability of a bird, the kea, and of capuchin monkeys, two species with acute vision that are known for their tendency to handle objects. To this end, we assessed whether, at the attainment of a criterion, accuracy and/or learning speed in the visual modality were enhanced by haptic (i.e. active tactile) exploration of an object. Subjects were trained to select the positive stimulus between two cylinders of the same shape and size but with different surface structures. In the Sight condition, one pair of cylinders was inserted into transparent Plexiglas tubes, which prevented the animals from haptically perceiving the objects' surfaces. In the Sight and Touch condition, the cylinders were not inserted into tubes, which allowed the subjects to perceive the objects' surfaces both visually and haptically. We found that both kea and capuchins (1) showed comparable levels of accuracy at the attainment of the learning criterion in both conditions, but (2) required fewer trials to reach the criterion in the Sight and Touch condition. This shows that both kea and capuchins can integrate information acquired through the visual and tactile modalities. To our knowledge, this represents the first evidence of visuotactile integration in a bird species. Overall, our findings demonstrate that acquiring tactile information while manipulating objects facilitates visual discrimination of objects in two phylogenetically distant species.

    PhysInk: sketching physical behavior

    Describing device behavior is a common task that is currently not well supported by general animation or CAD software. We present PhysInk, a system that enables users to demonstrate 2D behavior by sketching and directly manipulating objects on a physics-enabled stage. Unlike previous tools that simply capture the user's animation, PhysInk captures an understanding of the behavior in a timeline. This enables useful capabilities such as causality-aware editing and finding physically correct equivalent behavior. We envision PhysInk being used as a physics teacher's sketchpad or as a WYSIWYG tool for game designers.
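
    The key design choice here is storing the demonstration as structured events with causal links rather than as raw keyframes. A toy version of such a timeline record might look like the following; the fields and names are invented for illustration and are not PhysInk's actual data model.

```python
# Toy causality-aware timeline record (invented for illustration).
from dataclasses import dataclass, field

@dataclass
class TimelineEvent:
    event_id: int
    t: float                     # simulation time of the event
    body: str                    # object involved, e.g. "ball" or "lever"
    kind: str                    # e.g. "collision", "release", "at-rest"
    causes: list[int] = field(default_factory=list)  # upstream event ids

def descendants(events, root_id):
    """Events causally downstream of root_id: the ones an edit invalidates,
    so only they need re-simulation instead of the whole animation."""
    out = set()
    frontier = {root_id}
    while frontier:
        frontier = {e.event_id for e in events
                    if frontier & set(e.causes)} - out
        out |= frontier
    return out
```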

    Control of posture with FES systems

    One of the major obstacles to restoring functional FES-supported standing in paraplegia is the lack of a suitable control strategy. The main issue is how to integrate the purposeful actions of the non-paralyzed upper body, as it interacts with the environment while standing, with the actions of the artificial FES control system supporting the paralyzed lower extremities. In this paper we review our approach to this question, which focuses on three inter-related areas: investigations of the basic mechanisms of functional postural responses in neurologically intact subjects; re-training of the residual sensory-motor activities of the upper body in paralyzed individuals; and development of closed-loop FES control systems for support of the paralyzed joints.
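
    For the closed-loop part, the simplest illustration is a feedback law that maps joint-angle error to stimulation intensity. The PD sketch below is a minimal toy with hypothetical gains and limits; practical FES controllers must additionally cope with muscle nonlinearity, response delay, and fatigue.

```python
# Minimal PD-style closed-loop FES sketch (toy gains and limits; not a
# clinically validated controller).
def fes_stimulation(theta_ref, theta, theta_prev, dt, kp=8.0, kd=1.5,
                    u_max=100.0):
    """Return a stimulation intensity (e.g. pulse width) for one joint."""
    error = theta_ref - theta
    d_error = -(theta - theta_prev) / dt   # damp on measured joint velocity
    u = kp * error + kd * d_error
    return max(0.0, min(u_max, u))         # stimulation must stay in range
```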

    Continuum robots and underactuated grasping

    We discuss the capabilities of continuum (continuous backbone) robot structures in the performance of underactuated grasping. Continuum robots offer the potential of robust grasps over a wide variety of object classes, due to their ability to adapt their shape to interact with the environment via non-local continuum contact conditions. Furthermore, this capability can be achieved with simple, low degree-of-freedom hardware. However, there are practical issues which currently limit the application of continuum robots to grasping. We discuss these issues and illustrate them via an experimental continuum grasping case study. This paper was presented at the IFToMM/ASME International Workshop on Underactuated Grasping (UG2010), 19 August 2010, Montréal, Canada.
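
    For readers unfamiliar with continuum structures, the planar constant-curvature approximation gives a feel for how a continuous backbone's shape is parameterized; this is a standard textbook model, not the specific model used in the paper.

```python
# Planar constant-curvature section: a backbone of arc length L bent with
# curvature kappa traces a circular arc (standard approximation).
import math

def cc_tip_pose(L, kappa):
    """Tip (x, y, heading) of a planar constant-curvature section that
    starts at the origin heading along +x."""
    if abs(kappa) < 1e-9:                   # straight-backbone limit
        return L, 0.0, 0.0
    phi = kappa * L                         # total bend angle
    return math.sin(phi) / kappa, (1.0 - math.cos(phi)) / kappa, phi
```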