    Robotic Ironing with 3D Perception and Force/Torque Feedback in Household Environments

    As robotic systems become more popular in household environments, the complexity of required tasks also increases. In this work we focus on a domestic chore deemed dull by a majority of the population: the task of ironing. The presented algorithm improves on the limited number of previous works by joining 3D perception with force/torque sensing, with emphasis on finding a practical solution that is feasible to implement in a domestic setting. Our algorithm obtains a point cloud representation of the working environment. From this point cloud, the garment is segmented and a custom Wrinkleness Local Descriptor (WiLD) is computed to determine the location of the present wrinkles. Using this descriptor, the most suitable ironing path is computed and, based on it, the manipulation algorithm performs the force-controlled ironing operation. Experiments have been performed with a humanoid robot platform, proving that our algorithm is able to successfully detect wrinkles present in garments and iteratively reduce the wrinkleness using an unmodified iron.
    Comment: Accepted for publication at the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, Canada, September 24-28, 2017
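    The abstract does not give the WiLD formula, so the following is only a minimal sketch of one plausible formulation, assuming a point cloud with estimated surface normals: a point's wrinkleness is scored by how much the normals disagree within a small neighbourhood (flat cloth scores near zero, wrinkles score high). The function name and radius are illustrative, not the paper's.

        # Minimal sketch of a local wrinkleness score over a garment point cloud
        # (an assumed stand-in for WiLD, whose exact definition is not given here).
        import numpy as np
        from scipy.spatial import cKDTree

        def wrinkleness(points, normals, radius=0.02):
            """points: (N, 3) garment points; normals: (N, 3) unit normals."""
            tree = cKDTree(points)
            scores = np.zeros(len(points))
            for i, p in enumerate(points):
                idx = tree.query_ball_point(p, radius)
                if len(idx) < 3:
                    continue  # too few neighbours to judge local shape
                mean_n = normals[idx].mean(axis=0)
                mean_n /= np.linalg.norm(mean_n)
                # 1 - average alignment with the mean normal: ~0 on flat cloth,
                # larger where neighbouring normals disagree (i.e. wrinkles)
                scores[i] = 1.0 - float(np.mean(normals[idx] @ mean_n))
            return scores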

    Contour Tracking Control for Mobile Robots applicable to Large-scale Assembly and Additive Manufacturing in Construction

    In the construction industry, as well as during the assembly of large-scale components, the required workspaces usually cannot be served by a stationary robot. Instead, mobile robots are used to increase the accessible space. Here, the problem arises that the accuracy of such systems is not sufficient to meet the tolerance requirements of the components to be produced. Furthermore, there is an additional difficulty in the trajectory planning process, since the exact dimensions of the pre-manufactured parts are unknown. Hence, existing static planning methods cannot be applied to every application. Recent approaches present dynamic planning algorithms based on specific component characteristics. For example, the latest methods follow the contour by a force-controlled motion or detect features with a camera. However, in several applications, such as welding or additive manufacturing in construction, no contact force is generated that could be controlled. Vision-based approaches are generally restricted by the varying materials and lighting conditions often found in large-scale construction. For these reasons, we propose a more robust approach without measuring contact forces, which, for example, applies to large-scale additive manufacturing. We base our algorithm on a high-precision 2D line laser, capable of detecting different feature contours regardless of material or lighting. The laser is mounted on the robot's end-effector and provides a depth profile of the component's surface. From this depth data, we determine the target contour and control the manipulator to follow it. Simultaneously, we vary the robot's speed to adjust the feed rate depending on the contour's shape, maintaining a constant material application rate. As a proof of concept, we apply the algorithm to the additive manufacturing of two-layer linear structures made from spray PU foam. When making these structures, each layer must be positioned precisely on the previous layer to obtain a straight wall and prevent elastic buckling or plastic collapse. Initial experiments show improved layer alignment, to within 10% of the layer width, as well as better layer height consistency and process reliability.
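    As a rough illustration of the two steps described above, the sketch below first extracts a layer-edge point from a single line-laser depth profile and then scales the feed speed with the local path curvature so that the material deposited per unit length stays roughly constant. The edge-detection rule, the gains and the curvature law are assumptions, not the paper's controller.

        # Illustrative sketch: contour extraction from a laser profile and
        # curvature-dependent feed speed (assumed rules, not the paper's).
        import numpy as np

        def contour_point(profile, z_jump=5.0):
            """profile: (N, 2) array of (x, z) laser samples across the part.
            Returns the x position of the largest depth step (the layer edge)."""
            dz = np.abs(np.diff(profile[:, 1]))
            i = int(np.argmax(dz))
            if dz[i] < z_jump:
                return None  # no clear edge in this scan
            return profile[i, 0]

        def feed_speed(path, i, v_nominal=0.05, k_gain=0.5):
            """Slow down on tight curves: v = v_nominal / (1 + k * curvature)."""
            a = path[i] - path[i - 1]
            b = path[i + 1] - path[i]
            # discrete curvature: turning angle per unit arc length
            angle = np.arctan2(a[0] * b[1] - a[1] * b[0], float(np.dot(a, b)))
            kappa = abs(angle) / max(np.linalg.norm(a), 1e-9)
            return v_nominal / (1.0 + k_gain * kappa)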

    Design and Development of Sensor Integrated Robotic Hand

    Most of the automated systems using robots as agents use a few sensors, according to need. However, there are situations where the tasks carried out by the end-effector, or the robot hand, need multiple sensors. To make the best use of these sensors and behave autonomously, the hand requires a set of appropriate types of sensors integrated in a proper manner. The present research work aims at developing a sensor-integrated robot hand that can collect information related to the assigned tasks, assimilate it correctly and then act as appropriate. The process of development involves selecting sensors of the right types and specifications, locating them at proper places in the hand, checking their functionality individually and calibrating them for the envisaged process. Since the sensors need to be integrated so that they perform in the desired manner collectively, an integration platform is created using an NI PXIe-1082. A set of algorithms is developed for achieving the integrated model. The entire process is first modelled and simulated offline for possible modification, in order to ensure that all the sensors contribute towards the autonomy of the hand for the desired activity. This work also involves the design of a two-fingered gripper. The design is made in such a way that it is capable of carrying out the desired tasks and can accommodate all the sensors within its fold. The developed sensor-integrated hand has been put to work and its performance has been tested. This hand can be very useful for part-assembly work in industry for parts of any shape, within a limit on part size. The broad aim is to design, model, simulate and develop an advanced robotic hand. Sensors for contact, pressure, force, torque, position and surface profile/shape, using suitable sensing elements, are introduced into the robot hand. The human hand is a complex structure with a large number of degrees of freedom and multiple sensing capabilities, apart from the associated sensing assistance from other organs. The present work is envisaged to add multiple sensors to a two-fingered robotic hand having motion capabilities and constraints similar to the human hand. Although there has been a good amount of research and development in this field during the last two decades, a lot remains to be explored and achieved. The objective of the proposed work is to design, simulate and develop a sensor-integrated robotic hand. Its potential applications lie in industrial environments and in the healthcare field. The industrial applications include electronic assembly tasks, lighter inspection tasks, etc. Applications in healthcare could be in the areas of rehabilitation and assistive techniques. The work also aims to establish the requirements of the robotic hand for the target application areas and to identify the suitable kinds and models of sensors that can be integrated into the hand control system. The functioning of motors in the robotic hand and the integration of appropriate sensors for the desired motion are explained for the control of the various elements of the hand. Additional sensors, capable of collecting external information and information about the object to be manipulated, are explored. Processes are designed using various software and hardware tools, such as MATLAB for mathematical computation, the OpenCV library and a LabVIEW 2013 DAQ system as applicable, validated theoretically and finally implemented to develop an intelligent robotic hand.
    The multiple smart sensors are installed on a standard six-degree-of-freedom KAWASAKI RS06L articulated industrial manipulator, with either the two-finger pneumatic SCHUNK robotic hand or the designed prototype, and the robot control programs are integrated in such a manner as to allow easy grasping in an industrial pick-and-place operation where the characteristics of the object can vary or are unknown. The effectiveness of the recommended structure is demonstrated by experiments involving calibration of the sensors and the manipulator. The dissertation concludes with a summary of the contribution and the scope of further work.
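    As a rough illustration of the sensor-gated grasping described above, the sketch below closes the gripper until the fingertip force channel reports a target contact force. The hand interface (read_force, step_close, position) is a hypothetical stand-in for the NI PXIe/LabVIEW channels used in the actual work, and the force target is an assumed value.

        # Hypothetical sensor-gated grasp loop; hardware calls are stand-ins
        # for the real NI PXIe/LabVIEW channels, not an actual API.
        import time

        FORCE_TARGET = 2.0   # N, assumed safe grasp force for light parts
        TIMEOUT = 5.0        # s, give up if no contact is made

        def grasp(hand):
            t0 = time.time()
            while time.time() - t0 < TIMEOUT:
                if hand.read_force() >= FORCE_TARGET:   # fingertip force channel
                    return hand.position()              # grasp achieved
                hand.step_close()                       # small closing increment
                time.sleep(0.01)
            raise RuntimeError("no contact detected before timeout")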

    Shape Recovery from Robot Contour Tracking with Force Feedback

    In this paper we describe a process for shape recovery from robot contour-tracking operations with force feedback. Shape recovery is an important task for self-teaching robots and for exploratory operations in unknown environments. We describe an algorithm that directs a position-controlled robot around an unknown planar contour using steady-state contact-force information. Shape recovery from planar contouring is not a trivial problem: it is experimentally found that there is significant distortion of the original contour if direct kinematics is used to recover the object's shape, as we are unable to recover the exact position of the robot tool due to the errors present in the kinematic model of the arm and the non-linearities of the drive train. Drive-train errors include joint compliance, gear backlash and gear eccentricity. A mathematical model of the errors generated by the drive train has been addressed previously. In this paper a compensation process is explored for the purposes of planar shape recovery. It is found through experimentation that joint compliance is the most convenient error to compensate for in practice. Improvements in the shapes recovered from robot contouring are seen with our compensation. Experimental details and difficulties are also discussed.
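    A minimal sketch of the joint-compliance compensation idea, assuming each joint deflects by the externally induced torque divided by its stiffness: the encoder readings are corrected for this deflection before forward kinematics is evaluated. The stiffness values and the fk routine are placeholders, not the paper's identified model.

        # Sketch of joint-compliance compensation (placeholder stiffnesses).
        import numpy as np

        JOINT_STIFFNESS = np.array([8e4, 8e4, 5e4, 2e4, 2e4, 1e4])  # Nm/rad, assumed

        def compensated_tool_pose(q_measured, tau_external, fk):
            """q_measured: encoder joint angles (rad);
            tau_external: joint torques induced by the contact force (Nm);
            fk: forward-kinematics function mapping joint angles to tool pose."""
            dq = tau_external / JOINT_STIFFNESS  # elastic deflection per joint
            return fk(q_measured + dq)           # tool pose with deflection removed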

    A robotic engine assembly pick-place system based on machine learning

    The industrial revolution brought humans and machines together in building a better future. While on the one hand there is a need to replace repetitive jobs with machines to increase efficiency and production volume, on the other hand intelligent and autonomous machines still have a long way to go to achieve the dexterity of a human. The current scenario requires a system which can utilise the best of both the human and the machine. This thesis studies an industrial use-case scenario where human and machine combine their skills to build an autonomous pick-and-place system. This study takes a small step towards the human-robot consortium, primarily focusing on developing a vision-based system for object detection followed by a manipulator pick-and-place operation. This thesis can be divided into two parts: (1) scene analysis, where a Convolutional Neural Network (CNN) is used for object detection, followed by generation of grasping points using the object edge image and an algorithm developed during this thesis; and (2) implementation, which focuses on motion generation while handling external disturbances to perform a successful pick-and-place operation. In addition, human involvement is required in the form of teaching trajectory points for the robot to follow. This trajectory is used to generate an image data set for a new object type and thereafter a new object-detection model. The author primarily focuses on building a system framework in which the complexities of robot programming, such as generating trajectory points and specifying grasping positions, are not required: the system automatically detects the object and performs the pick-and-place operation, relieving the user from robot programming. The system is composed of a depth camera and a manipulator. The camera is the only sensor available for scene analysis, and the action is performed using a Franka manipulator; the two components work in request-response mode over ROS. This thesis introduces newer approaches such as dividing a workspace image into its constituent object images before performing object detection, creating training data, and generating grasp points based on object shape along the length of the object. The thesis also presents a case study where three different objects are chosen as test objects. The experiments demonstrate the methods applied and the efficiency attained. The case study also provides a glimpse of future research and development areas.
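    The thesis's exact grasp-point algorithm is not given in the abstract; the sketch below shows one plausible shape-based variant, assuming a binary edge image of a single segmented object: the principal axes of the edge pixels give the object's length direction, and a parallel-jaw grasp is placed across the perpendicular width axis.

        # Illustrative shape-based grasp points (assumed variant, not the
        # thesis's algorithm): grasp across the object's minor (width) axis.
        import numpy as np

        def grasp_points(edge_img):
            """edge_img: binary (H, W) edge image of one segmented object.
            Returns two pixel locations for a parallel-jaw grasp."""
            ys, xs = np.nonzero(edge_img)
            pts = np.column_stack([xs, ys]).astype(float)
            mean = pts.mean(axis=0)
            # principal axes of the edge pixels: the eigenvector with the
            # smaller variance is the width axis, where the fingers close
            _, eigvecs = np.linalg.eigh(np.cov((pts - mean).T))
            minor = eigvecs[:, 0]
            half_width = np.abs((pts - mean) @ minor).max()
            p1 = mean - half_width * minor
            p2 = mean + half_width * minor
            return tuple(p1.round().astype(int)), tuple(p2.round().astype(int))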

    Flexible Force-Vision Control for Surface Following using Multiple Cameras

    A flexible method for six-degree-of-freedom combined vision/force control for interaction with a stiff, uncalibrated environment is presented. An edge-based rigid-body tracker is used in an observer-based controller and combined with a six-degree-of-freedom force or impedance controller. The effects of error sources such as image-space measurement noise and calibration errors are considered. Finally, the method is validated in simulations and in a surface-following experiment using an industrial robot.
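    A minimal sketch of one way such a combination can be arranged, assuming the surface normal is known: the vision tracker servos the in-plane position error, while an admittance term regulates the contact force along the normal. The gains and the tangential/normal split are assumptions, not the paper's observer-based six-degree-of-freedom formulation.

        # Assumed tangential-vision / normal-force split (not the paper's law).
        import numpy as np

        KP_POS = 1.5       # 1/s, gain on tangential pose error (assumed)
        KF = 5e-4          # m/(N*s), admittance gain on force error (assumed)
        F_DESIRED = 10.0   # N, target contact force (assumed)

        def velocity_command(x_tracked, x_desired, f_normal, n_hat):
            """x_tracked: tool position from the rigid-body tracker (m);
            x_desired: reference position on the surface path (m);
            f_normal: measured contact force along the surface normal (N);
            n_hat: unit surface normal. Returns a Cartesian velocity command."""
            e = x_desired - x_tracked
            e_tan = e - (e @ n_hat) * n_hat        # vision corrects in-plane error
            v_norm = KF * (F_DESIRED - f_normal)   # admittance keeps contact force
            return KP_POS * e_tan + v_norm * n_hat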

    Development of Multi-Robotic Arm System for Sorting System Using Computer Vision

    This paper develops a multi-robotic-arm system and a stereo vision system to sort objects into the right positions according to size and shape attributes. The robotic arm system consists of one master and three slave robots associated with three conveyor belts. Each robotic arm is controlled by a robot controller based on a microcontroller. A master controller is used for the vision system and for communicating with the slave robotic arms using the Modbus RTU protocol over an RS485 serial interface. The stereo vision system is built to determine the 3D coordinates of an object. Instead of rebuilding the entire disparity map, which is computationally expensive, the centroids of the object in the two images are calculated to determine the depth value. The 3D coordinates of the object are then calculated using the pinhole camera model. Objects are picked up and placed on a conveyor branch according to their shape. The conveyor transports each object to the location of a slave robot. Based on the size attribute it receives from the master, the slave robot picks the object and places it in the right position. Experimental results reveal the effectiveness of the system. The system can be used in industrial processes to reduce the required time and improve the performance of the production line.
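    For a rectified stereo pair, the centroid-plus-pinhole computation described above reduces to the standard relations Z = f*B/d, X = (u - cx)*Z/fx and Y = (v - cy)*Z/fy, where d is the horizontal disparity between the two centroids and B is the baseline. A minimal sketch, with placeholder calibration values:

        # Centroid-based stereo triangulation via the pinhole model.
        # Calibration constants below are placeholders for the real values.
        import numpy as np

        FX, FY = 700.0, 700.0   # focal lengths in pixels (assumed)
        CX, CY = 320.0, 240.0   # principal point (assumed)
        BASELINE = 0.06         # m between the two cameras (assumed)

        def centroid(mask):
            """Pixel centroid (u, v) of a binary object mask."""
            vs, us = np.nonzero(mask)
            return us.mean(), vs.mean()

        def triangulate(mask_left, mask_right):
            (ul, vl), (ur, _) = centroid(mask_left), centroid(mask_right)
            d = ul - ur                   # rectified pair: same row, u shifted
            z = FX * BASELINE / d         # depth from the stereo relation
            x = (ul - CX) * z / FX        # back-project through the pinhole
            y = (vl - CY) * z / FY
            return np.array([x, y, z])    # metres, left-camera frame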