
    Visual Tactile Sensor Based Force Estimation for Position-Force Teleoperation

    Vision-based tactile sensors have gained extensive attention in the robotics community. These sensors are expected to be capable of extracting contact information, i.e., haptic information, during in-hand manipulation, which makes them a natural match for haptic feedback applications. In this paper, we propose a contact force estimation method using the vision-based tactile sensor DIGIT and apply it to a position-force teleoperation architecture for force feedback. The force is estimated by building a depth map of the DIGIT gel surface to measure its deformation and applying a regression algorithm to the estimated depth data and ground-truth force data to obtain the depth-force relationship. The experiment is performed by constructing a grasping force feedback system with a haptic device as the leader robot and a parallel robot gripper as the follower robot, where the DIGIT sensor is attached to the tip of the gripper to estimate the contact force. The preliminary results show the capability of using the low-cost vision-based sensor for force feedback applications. Comment: IEEE CBS 202
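    As a rough illustration of the depth-force regression step described above, the sketch below fits a low-order polynomial between a scalar gel-deformation feature (mean depth change) and ground-truth force; the feature choice, the polynomial degree, and all numbers are illustrative assumptions, not details taken from the paper.

        # Hedged sketch: regress contact force from a depth-map deformation feature.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import PolynomialFeatures

        # Hypothetical calibration data: mean depth change of the gel surface (mm)
        # and the corresponding ground-truth force from a reference sensor (N).
        depth_change = np.array([0.02, 0.05, 0.11, 0.18, 0.27, 0.35]).reshape(-1, 1)
        force_gt = np.array([0.1, 0.4, 1.0, 1.9, 3.1, 4.2])

        # Fit the depth-force relationship with a degree-2 polynomial model.
        model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
        model.fit(depth_change, force_gt)

        # At run time, estimate the contact force from a newly measured depth change.
        print(model.predict(np.array([[0.20]])))   # ~2 N for these made-up numbers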

    Vision/Force Control of Parallel Robots (Commande Vision/Force de robots parallèles)

    In this paper, force and position control of parallel kinematic machines is discussed. Cartesian-space computed torque control is applied to achieve force and position servoing directly in the task space within a sensor-based control architecture. The originality of the approach lies in the use of a vision system as an exteroceptive pose measurement of a parallel machine tool for force control purposes.
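    For context, a generic Cartesian-space computed torque law has the form sketched below; the notation and gains are illustrative, and this is not the paper's exact controller, which additionally exploits the vision-measured pose for force control.

        # Hedged sketch of task-space computed torque control with a vision-measured pose.
        import numpy as np

        def computed_torque(x, xd, x_des, xd_des, xdd_des, J, Lambda, mu, Kp, Kd):
            """x, xd: measured pose/velocity (pose from the vision system);
            x_des, xd_des, xdd_des: desired pose, velocity, acceleration;
            J: task Jacobian; Lambda: task-space inertia; mu: Coriolis/gravity
            terms mapped to the task space; Kp, Kd: gain matrices."""
            a_ref = xdd_des + Kd @ (xd_des - xd) + Kp @ (x_des - x)  # reference acceleration
            F = Lambda @ a_ref + mu                                  # task-space force
            return J.T @ F                                           # joint torques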

    Flexible Force-Vision Control for Surface Following using Multiple Cameras

    A flexible method for six-degree-of-freedom combined vision/force control for interaction with a stiff, uncalibrated environment is presented. An edge-based rigid-body tracker is used in an observer-based controller and combined with a six-degree-of-freedom force or impedance controller. The effects of error sources such as image-space measurement noise and calibration errors are considered. Finally, the method is validated in simulations and in a surface following experiment using an industrial robot.
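    A minimal rendering of a target impedance for surface following is given below, under the assumption of a standard Cartesian impedance relation M*e_dd + D*e_d + K*e = f_ext - f_des; the gains and this particular formulation are assumptions, not the paper's observer-based design.

        # Hedged sketch: task-space acceleration command from a target impedance.
        import numpy as np

        def impedance_accel(e, e_dot, f_ext, f_des, M, D, K):
            """e, e_dot: 6-D pose/velocity error (pose error from the vision tracker);
            f_ext, f_des: measured and desired contact wrench; M, D, K: impedance gains."""
            return np.linalg.solve(M, (f_ext - f_des) - D @ e_dot - K @ e)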

    GelFlow: Self-supervised Learning of Optical Flow for Vision-Based Tactile Sensor Displacement Measurement

    High-resolution, multi-modality information acquired by vision-based tactile sensors can support more dexterous manipulation by robot fingers. Optical flow is low-level information obtained directly by vision-based tactile sensors and can be transformed into other modalities such as force, geometry, and depth. Current vision-based tactile sensors employ optical flow methods from OpenCV to estimate the deformation of markers in the gel. However, these methods are not precise enough to accurately measure the displacement of markers during large elastic deformations of the gel, which can significantly impact the accuracy of downstream tasks. This study proposes a self-supervised, deep-learning-based optical flow method to achieve high-accuracy displacement measurement for vision-based tactile sensors. The proposed method employs a coarse-to-fine strategy to handle large deformations by constructing a multi-scale feature pyramid from the input image. To better deal with the elastic deformation of the gel, a Helmholtz velocity decomposition constraint and an elastic deformation constraint are adopted to address the distortion rate and the area change rate, respectively. A local flow fusion module is designed to smooth the optical flow, taking into account prior knowledge of the blurring effect of gel deformation. We trained the proposed self-supervised network on an open-source dataset and compared it with traditional and deep-learning-based optical flow methods. The results show that the proposed method achieves the highest displacement measurement accuracy, demonstrating its potential to enable more precise measurement in downstream tasks using vision-based tactile sensors.
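    The coarse-to-fine idea can be sketched as follows: build an image pyramid, estimate flow at the coarsest level, then upsample, warp, and refine at each finer level. The per-level estimator below is a classical OpenCV placeholder purely to illustrate the structure; in GelFlow that step is the learned, self-supervised network.

        # Hedged sketch of coarse-to-fine flow refinement (not the GelFlow network).
        import cv2
        import numpy as np

        def build_pyramid(img, levels=3):
            pyr = [img]
            for _ in range(levels - 1):
                pyr.append(cv2.pyrDown(pyr[-1]))
            return pyr[::-1]                      # coarsest level first

        def warp(img, flow):
            h, w = img.shape[:2]
            gx, gy = np.meshgrid(np.arange(w), np.arange(h))
            map_x = (gx + flow[..., 0]).astype(np.float32)
            map_y = (gy + flow[..., 1]).astype(np.float32)
            return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)

        def coarse_to_fine_flow(img1, img2, levels=3):
            """img1, img2: grayscale uint8 tactile images."""
            pyr1, pyr2 = build_pyramid(img1, levels), build_pyramid(img2, levels)
            flow = np.zeros((*pyr1[0].shape[:2], 2), np.float32)
            for lvl, (a, b) in enumerate(zip(pyr1, pyr2)):
                if lvl > 0:                       # upsample and rescale the previous flow
                    flow = 2.0 * cv2.resize(flow, (a.shape[1], a.shape[0]))
                warped_b = warp(b, flow)          # backward-warp the second image
                residual = cv2.calcOpticalFlowFarneback(
                    a, warped_b, None, 0.5, 3, 15, 3, 5, 1.2, 0)
                flow = flow + residual            # accumulate the residual flow
            return flow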

    Dynamic-vision-based force measurements using convolutional recurrent neural networks

    In this paper, a novel dynamic Vision-Based Measurement method is proposed to measure contact force independently of object size. A neuromorphic camera (Dynamic Vision Sensor) is utilized to observe intensity changes within the silicone membrane where the object is in contact. Three deep Long Short-Term Memory neural networks combined with convolutional layers are developed and implemented to estimate the contact force from intensity changes over time. Thirty-five experiments are conducted using three objects of different sizes to validate the proposed approach. We demonstrate that the networks with memory gates are robust against variable contact sizes, as the networks learn object size in the early stage of a grasp. Moreover, spatial and temporal features enable the sensor to estimate the contact force accurately every 10 ms. The results are promising, with a Mean Squared Error of less than 0.1 N for grasping and holding contact force using the leave-one-out cross-validation method.
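    A hedged sketch of the kind of convolutional-recurrent architecture the abstract describes is given below; the layer sizes, input resolution, and single-force output are assumptions for illustration only.

        # Hedged sketch: per-frame CNN encoder + LSTM + regression head for force.
        import torch
        import torch.nn as nn

        class ConvLSTMForceEstimator(nn.Module):
            def __init__(self, hidden=64):
                super().__init__()
                self.encoder = nn.Sequential(            # per-frame feature extractor
                    nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
                    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
                self.lstm = nn.LSTM(32, hidden, batch_first=True)
                self.head = nn.Linear(hidden, 1)         # scalar force estimate

            def forward(self, frames):                   # frames: (B, T, 1, H, W)
                B, T = frames.shape[:2]
                feats = self.encoder(frames.flatten(0, 1)).view(B, T, -1)
                out, _ = self.lstm(feats)                # memory over the grasp
                return self.head(out)                    # force per time step

        # Example: a sequence of 20 accumulated event frames (10 ms each).
        force_seq = ConvLSTMForceEstimator()(torch.randn(2, 20, 1, 64, 64))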

    Visual kinematic force estimation in robot-assisted surgery – application to knot tying

    Robot-assisted surgery has potential advantages but lacks force feedback, which can lead to errors such as broken stitches or tissue damage. More experienced surgeons can judge the tool-tissue forces visually, and an automated way of capturing this skill is desirable. Methods to measure force tend to involve complex measurement devices or visual tracking of tissue deformation. We investigate whether surgical forces can be estimated simply from the discrepancy between kinematic and visual measurements of the tool position. We show that combined visual and kinematic force estimation can be achieved without external measurements or modelling of tissue deformation. After initial alignment when no force is applied to the tool, visual and kinematic estimates of tool position diverge under force. We plot visual/kinematic displacement against force using vision and marker-based tracking. We demonstrate the ability to discern the forces involved in knot tying and to visualize the displacement force using the publicly available JIGSAWS dataset as well as clinical examples of knot tying with the da Vinci surgical system. The ability to visualize or feel forces using this method may offer an advantage to those learning robotic surgery, as well as adding to the information available to more experienced surgeons.
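    The core idea, reading a force estimate off the visual/kinematic discrepancy, can be sketched as below under the simplifying assumption of a linear displacement-force mapping with a calibrated gain; the gain value and frame alignment are illustrative, not from the paper.

        # Hedged sketch: force from the gap between kinematic and visual tool positions.
        import numpy as np

        def estimate_force(p_kinematic, p_visual, stiffness_gain):
            """p_kinematic, p_visual: 3-D tool-tip positions (m), aligned while
            no force is applied; stiffness_gain: assumed linear gain (N/m)."""
            displacement = np.asarray(p_visual) - np.asarray(p_kinematic)
            return stiffness_gain * displacement       # 3-D force estimate (N)

        # Illustrative numbers: a 2 mm discrepancy maps to roughly 0.6 N.
        print(estimate_force([0.100, 0.020, 0.050], [0.102, 0.020, 0.049], 300.0))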

    sCAM: An Untethered Insertable Laparoscopic Surgical Camera Robot

    Fully insertable robotic imaging devices represent a promising future for minimally invasive laparoscopic vision. Emerging research efforts in this field have resulted in several proof-of-concept prototypes. One common drawback of these designs derives from their clumsy tethering wires, which not only cause operational interference but also reduce camera mobility. Meanwhile, these insertable laparoscopic cameras are manipulated without any pose information or haptic feedback, which results in open-loop motion control and raises concerns about surgical safety caused by inappropriate use of force. This dissertation proposes, implements, and validates an untethered insertable laparoscopic surgical camera (sCAM) robot. Contributions presented in this work include: (1) feasibility of an untethered, fully insertable laparoscopic surgical camera, (2) camera-tissue interaction characterization and force sensing, (3) pose estimation, visualization, and feedback with the sCAM, and (4) robot-assisted closed-loop laparoscopic camera control. Borrowing the principle of spherical motors, camera anchoring and actuation are achieved through transabdominal magnetic coupling in a stator-rotor manner. To avoid tethering wires, laparoscopic vision and control communication are realized with dedicated wireless links based on onboard power. A non-invasive indirect approach is proposed to provide real-time camera-tissue interaction force measurement, which, assisted by camera-tissue interaction modeling, predicts the stress distribution over the tissue surface. Meanwhile, the camera pose is remotely estimated and visualized using complementary filtering based on onboard motion sensing. Facilitated by the force measurement and pose estimation, robot-assisted closed-loop control has been realized in a double-loop control scheme with shared autonomy between surgeons and the robotic controller. The sCAM has brought robotic laparoscopic imaging one step further toward less invasiveness and more dexterity. Initial ex vivo test results have verified the functions of the implemented sCAM design and the proposed force measurement and pose estimation approaches, demonstrating the technical feasibility of a tetherless insertable laparoscopic camera. Robot-assisted control has shown its potential to free surgeons from the low-level, intricate camera manipulation workload and to improve precision and intuitiveness in laparoscopic imaging.
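    The pose-estimation step relies on complementary filtering of onboard motion sensing; a textbook complementary filter for roll/pitch is sketched below. The sensor set, axes, and blending gain are generic assumptions rather than details of the sCAM implementation.

        # Hedged sketch: standard complementary filter fusing gyro and accelerometer.
        import math

        def complementary_filter(roll, pitch, gyro, accel, dt, alpha=0.98):
            """gyro: (gx, gy, gz) in rad/s; accel: (ax, ay, az) in m/s^2."""
            # Integrate gyroscope rates (responsive, but drifts over time).
            roll_gyro = roll + gyro[0] * dt
            pitch_gyro = pitch + gyro[1] * dt
            # Tilt from the accelerometer (noisy, but drift-free on average).
            roll_acc = math.atan2(accel[1], accel[2])
            pitch_acc = math.atan2(-accel[0], math.hypot(accel[1], accel[2]))
            # Blend: trust the gyro at high frequency, the accelerometer at low.
            roll = alpha * roll_gyro + (1.0 - alpha) * roll_acc
            pitch = alpha * pitch_gyro + (1.0 - alpha) * pitch_acc
            return roll, pitch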

    Cable Tension Monitoring using Non-Contact Vision-based Techniques

    In cable-stayed bridges, the structural system of tensioned cables plays a critical role in structural and functional integrity. Tensile forces in the cables are therefore one of the essential indicators in structural health monitoring (SHM). In this thesis, a video image processing technology integrated with cable dynamic analysis is proposed as a non-contact vision-based measurement technique, which provides a user-friendly, cost-effective, and computationally efficient solution for displacement extraction, frequency identification, and cable tension monitoring. In contrast to conventional contact sensors, the vision-based system is capable of taking remote measurements of the cable dynamic response while offering flexible sensing capability. Since cable detection is a substantial step in displacement extraction, a comprehensive study on the feasibility of the adopted feature detector is conducted under various testing scenarios. The performance of the feature detector is quantified by developing evaluation parameters. Enhancement methods for the feature detector in cable detection are also investigated under complex testing environments. Threshold-dependent image matching approaches, which optimize the functionality of the feature-based video image processing technology, are proposed for noise-free and noisy background scenarios. The vision-based system is validated through experimental studies of free vibration tests on a single undamped cable in laboratory settings. The maximum percentage difference of the identified cable fundamental frequency is found to be 0.74% compared with accelerometer readings, while the maximum percentage difference of the estimated cable tensile force is 4.64% compared with direct measurement by a load cell.
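    For reference, the standard taut-string approximation relates an identified natural frequency to cable tension; whether the thesis applies corrections for sag or bending stiffness is not stated here, so the sketch below uses the basic relation only, with illustrative numbers.

        # Hedged sketch: cable tension from an identified natural frequency,
        # using the taut-string relation T = 4 * m * L^2 * (f_n / n)^2.
        def cable_tension(freq_hz, mode_n, length_m, mass_per_length):
            """freq_hz: identified frequency of mode n; length_m: cable length (m);
            mass_per_length: cable mass per unit length (kg/m)."""
            return 4.0 * mass_per_length * length_m**2 * (freq_hz / mode_n)**2

        # Illustrative values: 10 m cable, 1.2 kg/m, fundamental frequency 6.1 Hz.
        print(cable_tension(6.1, 1, 10.0, 1.2))   # ~17.9 kN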

    Computer Vision Based Robotic Polishing Using Artificial Neural Networks

    Polishing is a highly skilled manufacturing process with many constraints and extensive interaction with the environment. In general, the purpose of polishing is to achieve a uniform surface roughness distributed evenly over the part's surface. In order to reduce polishing time and cope with the shortage of skilled workers, robotic polishing technology has been investigated. This paper studies a vision system for measuring surface defects that have been characterized to some level of surface roughness. The surface defect data are learned using artificial neural networks to produce a decision for moving the actuator of the robot arm. Force and rotation time are chosen as the output parameters of the artificial neural network. Results show that, although both parameter values acquired from vision data differ considerably from the real data, it is still possible to obtain surface defect characterization using a vision sensor to a certain limit of accuracy. The overall results of this research should encourage further developments in this area to achieve robust computer-vision-based surface measurement systems for industrial robotics, especially in polishing processes.
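    A hedged sketch of the kind of small feed-forward network implied by the abstract is shown below, mapping vision-derived surface-defect features to the two stated outputs, force and rotation time; the feature set, layer sizes, and values are illustrative assumptions.

        # Hedged sketch: defect features in, polishing force and rotation time out.
        import torch
        import torch.nn as nn

        model = nn.Sequential(
            nn.Linear(4, 16), nn.ReLU(),     # inputs: illustrative defect features
            nn.Linear(16, 16), nn.ReLU(),
            nn.Linear(16, 2))                # outputs: [force, rotation time]

        features = torch.tensor([[0.8, 0.35, 0.12, 0.6]])   # hypothetical features
        force, rotation_time = model(features)[0]           # untrained example output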