1,026 research outputs found

    Detection and Physical Interaction with Deformable Linear Objects

    Deformable linear objects (e.g., cables, ropes, and threads) commonly appear in our everyday lives. However, perception of these objects and the study of physical interaction with them is still a growing area. There are already successful methods to model and track deformable linear objects, but methods that can automatically extract the initial conditions these methods require in non-trivial situations are limited and have been introduced to the community only recently. Moreover, while physical interaction with these objects has been demonstrated with ground manipulators, there have been no studies on physical interaction with and manipulation of deformable linear objects using aerial robots. This workshop paper describes our recent work on detecting deformable linear objects, which uses the segmentation output of existing methods to automatically provide the initialization required by the tracking methods. It handles crossings, can fill gaps and occlusions in the segmentation, and outputs a model suitable for physical interaction and simulation. We then present our work on using the method for tasks such as routing and manipulation with ground and aerial robots, and discuss our feasibility analysis on extending physical interaction with these objects to aerial manipulation applications.
    Comment: Presented at ICRA 2022 2nd Workshop on Representing and Manipulating Deformable Objects (https://deformable-workshop.github.io/icra2022/)
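The core initialization problem described above — tracking methods need an *ordered* sequence of points along the cable, while segmentation yields only an unordered pixel set — can be illustrated with a minimal sketch. This is not the paper's method (which also handles crossings and occlusions); it is a simplified greedy chaining of mask pixels, with all names and the synthetic mask being assumptions for illustration:

```python
import numpy as np

def order_mask_points(mask):
    """Greedily order foreground pixels of a thin binary mask into a chain.

    A simplified stand-in for the initialization step: trackers typically
    need an ordered polyline along the cable, while segmentation only
    provides an unordered set of foreground pixels.
    """
    pts = np.argwhere(mask)                 # (N, 2) array of (row, col)
    remaining = list(map(tuple, pts))
    current = min(remaining)                # extremal pixel, likely an endpoint
    chain = [current]
    remaining.remove(current)
    while remaining:
        arr = np.array(remaining)
        d = np.linalg.norm(arr - np.array(current), axis=1)
        current = remaining.pop(int(np.argmin(d)))  # walk to nearest neighbor
        chain.append(current)
    return np.array(chain)

# a synthetic diagonal "cable" mask
mask = np.zeros((10, 10), dtype=bool)
for k in range(10):
    mask[k, k] = True
chain = order_mask_points(mask)
```

A real implementation would operate on a skeletonized mask and resolve the ambiguities at crossings that this greedy walk cannot.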

    Contact geometry and mechanics predict friction forces during tactile surface exploration

    When we touch an object, complex frictional forces are produced, aiding us in perceiving surface features that help to identify the object at hand, and also facilitating grasping and manipulation. However, even during controlled tactile exploration, sliding friction forces fluctuate greatly, and it is unclear how they relate to the surface topography or mechanics of contact with the finger. We investigated the sliding contact between the finger and different relief surfaces, using high-speed video and force measurements. Informed by these experiments, we developed a friction force model that accounts for surface shape and contact mechanical effects, and is able to predict sliding friction forces for different surfaces and exploration speeds. We also observed that local regions of disconnection between the finger and surface develop near high relief features, due to the stiffness of the finger tissues. Every tested surface had regions that were never contacted by the finger; we refer to these as "tactile blind spots". The results elucidate friction force production during tactile exploration, may aid efforts to connect sensory and motor function of the hand to properties of touched objects, and provide crucial knowledge to inform the rendering of realistic experiences of touch contact in virtual reality.
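The paper's model accounts for surface relief and contact mechanics; the underlying contact-mechanical idea — that finger friction is adhesion-dominated and therefore not simply proportional to load — can be sketched as follows. The exponent comes from Hertzian contact-area scaling; the constants are illustrative assumptions, not fitted values from the paper:

```python
import numpy as np

def finger_friction(normal_force, c=0.6, gamma=2.0 / 3.0):
    """Adhesion-dominated friction estimate: F_f = c * N**gamma.

    For a soft fingertip, real contact area grows sublinearly with load
    (gamma ~ 2/3 for a Hertzian sphere), so the effective friction
    "coefficient" F_f / N decreases as normal force increases.
    Constants here are illustrative, not values from the paper.
    """
    normal_force = np.asarray(normal_force, dtype=float)
    return c * normal_force ** gamma

loads = np.array([0.25, 0.5, 1.0, 2.0])   # normal forces in newtons
friction = finger_friction(loads)
mu_eff = friction / loads                  # effective coefficient per load
```

Under this toy model, `mu_eff` decreases with load, one of the reasons measured finger friction deviates from the Coulomb picture during exploration.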

    Robotic manipulation: planning and control for dexterous grasp


    Using Surfaces and Surface Relations in an Early Cognitive Vision System

    The final publication is available at Springer via http://dx.doi.org/10.1007/s00138-015-0705-y
    We present a deep hierarchical visual system with two parallel hierarchies for edge and surface information. In the two hierarchies, complementary visual information is represented at different levels of granularity, together with the associated uncertainties and confidences. At all levels, geometric and appearance information is coded explicitly in 2D and 3D, allowing this information to be accessed separately and linked between the different levels. We demonstrate the advantages of such hierarchies in three applications covering grasping, viewpoint-independent object representation, and pose estimation.
    European Community’s Seventh Framework Programme FP7/IC

    Design and Development of Sensor Integrated Robotic Hand

    Most automated systems using robots as agents use only a few sensors, according to need. However, there are situations where the tasks carried out by the end-effector, or by the robot hand, need multiple sensors. To make the best use of these sensors and behave autonomously, the hand requires a set of appropriate types of sensors integrated in a proper manner. The present research work aims at developing a sensor-integrated robot hand that can collect information related to the assigned tasks, assimilate it correctly, and then act as appropriate. The process of development involves selecting sensors of the right types and specifications, locating them at proper places in the hand, checking their functionality individually, and calibrating them for the envisaged process. Since the sensors need to be integrated so that they perform in the desired manner collectively, an integration platform is created using NI PXIe-1082. A set of algorithms is developed for achieving the integrated model. The entire process is first modelled and simulated offline, allowing for possible modification, to ensure that all the sensors contribute towards the autonomy of the hand for the desired activity. This work also involves the design of a two-fingered gripper. The design is made in such a way that it is capable of carrying out the desired tasks and can accommodate all the sensors within its fold. The developed sensor-integrated hand has been put to work and its performance has been tested. This hand can be very useful for part-assembly work in industries for parts of any shape, within a limit on part size. The broad aim is to design, model, simulate, and develop an advanced robotic hand. Sensors for picking up contact pressure, force, torque, position, and surface profile/shape, using suitable sensing elements, are to be introduced into the robot hand.
The human hand is a complex structure with a large number of degrees of freedom and has multiple sensing capabilities, apart from the associated sensing assistance from other organs. The present work is envisaged to add multiple sensors to a two-fingered robotic hand having motion capabilities and constraints similar to the human hand. Although there has been a good amount of research and development in this field during the last two decades, a lot remains to be explored and achieved. The objective of the proposed work is to design, simulate, and develop a sensor-integrated robotic hand. Its potential applications lie in industrial environments and in the healthcare field. The industrial applications include electronic assembly tasks, light inspection tasks, etc. Applications in healthcare could be in the areas of rehabilitation and assistive technologies. The work also aims to establish the requirements of the robotic hand for the target application areas and to identify the suitable kinds and models of sensors that can be integrated into the hand control system. The functioning of the motors in the robotic hand and the integration of appropriate sensors for the desired motion are explained for the control of the various elements of the hand. Additional sensors, capable of collecting external information and information about the object to be manipulated, are explored. Processes are designed using various software and hardware tools, such as MATLAB for mathematical computation, the OpenCV library, and the LabVIEW 2013 DAQ system, as applicable; validated theoretically; and finally implemented to develop an intelligent robotic hand.
The multiple smart sensors are installed on a standard six-degree-of-freedom industrial robot, a KAWASAKI RS06L articulated manipulator, together with the two-finger pneumatic SCHUNK robotic hand or the designed prototype, and the robot control programs are integrated in a manner that allows easy application of grasping in an industrial pick-and-place operation where the characteristics of the object can vary or are unknown. The effectiveness of the recommended structure is proven by experiments involving calibration of the sensors and the manipulator. The dissertation concludes with a summary of the contribution and the scope of further work.

    Object Localization Using Stereo Vision


    Joint segmentation and tracking of object surfaces in depth movies along human/robot manipulations

    A novel framework for joint segmentation and tracking of object surfaces in depth videos is presented. Initially, the 3D colored point cloud obtained using the Kinect camera is used to segment the scene into surface patches, defined by quadratic functions. The computed segments, together with their functional descriptions, are then used to partition the depth image of the subsequent frame in a manner consistent with the preceding frame. This way, solutions established in previous frames can be reused, which improves the efficiency of the algorithm and the coherency of the segmentations along the movie. The algorithm is tested on scenes showing human and robot manipulations of objects. We demonstrate that the method can successfully segment and track the human/robot arm and object surfaces throughout the manipulations. The performance is evaluated quantitatively by measuring the temporal coherency of the segmentations and the segmentation covering using ground truth. The method provides a visual front-end designed for robotic applications, and can potentially be used in the context of manipulation recognition, visual servoing, and robot-grasping tasks.
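The "surface patches defined by quadratic functions" above amount to fitting each segment a model of the form z = ax² + by² + cxy + dx + ey + f. A minimal least-squares sketch of that fitting step (the tracking and re-partitioning logic of the paper is not reproduced; the synthetic data and function names are assumptions):

```python
import numpy as np

def fit_quadratic_patch(x, y, z):
    """Least-squares fit of z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f.

    In the described framework each segmented surface patch carries such a
    functional description, which can then be reused to partition the next
    depth frame consistently. This sketch shows only the fitting step.
    """
    A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def eval_quadratic(coeffs, x, y):
    a, b, c, d, e, f = coeffs
    return a * x**2 + b * y**2 + c * x * y + d * x + e * y + f

# synthetic points sampled from a known paraboloid z = x^2 + 2*y^2 + 3
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = rng.uniform(-1, 1, 200)
z = x**2 + 2 * y**2 + 3
coeffs = fit_quadratic_patch(x, y, z)
```

Evaluating the fitted function on new depth pixels is what lets a patch "claim" consistent regions in the next frame.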

    Hierarchical control of complex manufacturing processes

    The need to change the control objective during a process has been reported for many systems in manufacturing, robotics, etc. However, few works have been devoted to systematically investigating proper strategies for these types of problems. In this dissertation, two approaches to such problems are suggested for fast-varying systems. The first approach addresses problems where some of the objectives are statically related to the states of the system. Hierarchical Optimal Control was proposed to simplify the nonlinearity caused by adding the statically related objectives to the control problem. The proposed method was implemented for contour-position control of motion systems as well as force-position control of end milling processes. It was shown that, for a motion control system where contour tracking is important, the controller can reduce the contour error even when the axial control signals are saturating. Also, for end milling processes it was shown that when machining sharp edges, where excessive cutting forces can cause tool breakage, the proposed controller can bound the force without sacrificing position-tracking performance. The second proposed approach (Hierarchical Model Predictive Control) addressed problems where all the objectives are dynamically related. In this method, neural-network approximation was used to convert a nonlinear optimization problem into an explicit form feasible for real-time implementation. This method was implemented for force-velocity control of ram-based freeform extrusion fabrication of ceramics, achieving excellent extrusion results under different changes in the control objective during the process --Abstract, page iv
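The "explicit form" idea in the second approach — move the expensive optimization offline and replace the online solver with a cheap function evaluation — can be sketched in a toy form. Here a brute-force grid search plays the role of the nonlinear optimizer and linear interpolation stands in for the neural-network approximator; the cost function and all names are illustrative assumptions, not the dissertation's formulation:

```python
import numpy as np

def optimal_control(x, u_grid=np.linspace(-1.0, 1.0, 201)):
    """Brute-force solve min_u (x + u)^2 + 0.1*u^2 subject to |u| <= 1.

    Stands in for the nonlinear optimization that would be too slow to
    run online at every control step.
    """
    cost = (x + u_grid) ** 2 + 0.1 * u_grid ** 2
    return u_grid[np.argmin(cost)]

# offline phase: tabulate the optimizer over the expected state range
x_grid = np.linspace(-2.0, 2.0, 401)
u_table = np.array([optimal_control(x) for x in x_grid])

def explicit_controller(x):
    """Online phase: a cheap interpolation instead of an optimization.

    A trained neural network would play this role in the described method;
    np.interp is the simplest stand-in for this one-dimensional toy.
    """
    return float(np.interp(x, x_grid, u_table))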