
    Fingertip Proximity Sensor with Realtime Visual-based Calibration

    Proximity and distance-estimation sensors are broadly used in robotic hands to improve grasp quality during grasp planning, grasp correction, and in-hand manipulation. This paper presents a fiber-optic proximity sensor integrated with the tactile-sensing fingertip of a mobile robot's hand. The distance estimates of proximity sensors are typically influenced by the reflective properties of the object, such as its color or surface roughness. With the approach proposed in this paper, the accuracy of the proximity sensor is enhanced using information collected by the robot's vision system: a camera captures the RGB values of the object to be grasped, and this color information is then used to select the correct calibration for the proximity sensor. Experimental evidence shows that our approach can effectively reduce the distance estimation error.
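    As a toy illustration of the idea (not the paper's implementation), a color-matched calibration could be selected by nearest reference RGB; the calibration table, the linear distance model, and all values below are assumptions:

        import numpy as np

        # Hypothetical calibration table: reference object color (RGB) ->
        # (gain, offset) for a linear distance model d = gain * raw + offset.
        CALIBRATIONS = {
            (250, 250, 250): (0.80, 1.5),   # white, highly reflective
            (30, 30, 30):    (1.35, 4.0),   # black, weakly reflective
            (180, 40, 40):   (1.05, 2.2),   # red
        }

        def select_calibration(rgb):
            """Pick the calibration whose reference color is closest to the observed RGB."""
            rgb = np.asarray(rgb, dtype=float)
            key = min(CALIBRATIONS, key=lambda c: np.linalg.norm(rgb - np.asarray(c)))
            return CALIBRATIONS[key]

        def estimate_distance(raw_reading, rgb):
            """Convert a raw proximity reading to a distance with the color-matched model."""
            gain, offset = select_calibration(rgb)
            return gain * raw_reading + offset

        print(estimate_distance(12.0, (40, 35, 38)))  # dark object -> larger correction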

    Rule Of Thumb: Deep derotation for improved fingertip detection

    We investigate a novel global orientation regression approach for articulated objects using a deep convolutional neural network. This is integrated with an in-plane image derotation scheme, DeROT, to tackle the problem of per-frame fingertip detection in depth images. The method reduces the complexity of learning in the space of articulated poses, which we demonstrate by applying two distinct state-of-the-art learning-based hand pose estimation methods to fingertip detection. Significant classification improvements are shown over the baseline implementation. Our framework involves no tracking, kinematic constraints, or explicit prior model of the articulated object in hand. To support our approach, we also describe a new pipeline for high-accuracy magnetic annotation and labeling of objects imaged by a depth camera. Comment: To be published in the proceedings of BMVC 2015.
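    A minimal sketch of the derotation idea follows; predict_angle_deg and detect_fingertips are hypothetical stand-ins for the paper's CNN regressor and fingertip detector, and the rotation sign convention depends on the image library:

        import numpy as np
        from scipy.ndimage import rotate

        def derotated_detection(depth_img, predict_angle_deg, detect_fingertips):
            """Detect fingertips in a canonical (derotated) frame, then map them back."""
            theta = predict_angle_deg(depth_img)               # global in-plane angle (deg)
            canonical = rotate(depth_img, -theta, reshape=False, order=1)
            tips = detect_fingertips(canonical)                # (N, 2) pixel coordinates
            # Rotate detections back into the original image frame about the center.
            c = np.array(depth_img.shape[::-1], dtype=float) / 2.0
            rad = np.deg2rad(theta)
            R = np.array([[np.cos(rad), -np.sin(rad)],
                          [np.sin(rad),  np.cos(rad)]])
            return (np.asarray(tips) - c) @ R.T + c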

    Optical Proximity Sensing for Pose Estimation During In-Hand Manipulation

    During in-hand manipulation, robots must be able to continuously estimate the pose of the object in order to generate appropriate control actions. The performance of pose estimation algorithms hinges on the robot's sensors being able to detect discriminative geometric object features, but previous sensing modalities are unable to make such measurements robustly: the robot's fingers can occlude the view of environment- or robot-mounted image sensors, and tactile sensors can only measure at the local areas of contact. Motivated by fingertip-embedded proximity sensors' robustness to occlusion and their ability to measure beyond the local areas of contact, we present the first evaluation of proximity-sensor-based pose estimation for in-hand manipulation. We develop a novel two-fingered hand with fingertip-embedded optical time-of-flight proximity sensors as a testbed for pose estimation during planar in-hand manipulation. Here, the in-hand manipulation task consists of the robot moving a cylindrical object from one end of its workspace to the other. We demonstrate, with statistical significance, that proximity-sensor-based pose estimation via particle filtering during in-hand manipulation: a) exhibits 50% lower average pose error than a tactile-sensor-based baseline; b) empowers a model predictive controller to achieve 30% lower final positioning error compared to when using tactile-sensor-based pose estimates. Comment: 8 pages, 6 figures.
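    As a hedged sketch of how such a measurement update might look (this is not the paper's code), the snippet below reweights planar pose particles by a Gaussian likelihood of a single time-of-flight range reading against a cylinder cross-section; the sensor geometry, noise level, and ray-circle model are assumptions:

        import numpy as np

        def expected_range(particle_xy, origin, direction, radius=0.02):
            """Range along a unit ray from origin to a circle centered at particle_xy."""
            oc = origin - particle_xy
            b = np.dot(oc, direction)
            disc = b * b - (np.dot(oc, oc) - radius ** 2)
            if disc < 0:
                return np.inf                       # ray misses the object
            t = -b - np.sqrt(disc)
            return t if t > 0 else np.inf

        def measurement_update(particles, weights, z, origin, direction, sigma=0.003):
            """Reweight particles by the likelihood of measured range z (meters)."""
            pred = np.array([expected_range(p, origin, direction) for p in particles])
            lik = np.exp(-0.5 * ((pred - z) / sigma) ** 2)
            lik[~np.isfinite(pred)] = 1e-12         # near-zero weight for misses
            weights = weights * lik
            return weights / weights.sum()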

    Design and Development of SpO2, BPM, and Body Temperature for Monitoring Patient Conditions in IoT-Based Special Isolation Rooms

    The use of a battery as the main power source in a portable system has several drawbacks, including the need to monitor the remaining battery charge so that the system stays active. An analysis of battery power efficiency is therefore needed to determine the endurance of portable systems. This study builds a portable system for monitoring the condition of patients with infectious diseases in a special isolation room, measuring heart rate, body temperature, and oxygen saturation. The device uses a 2200 mAh battery to power a TTGO ESP32 board, which manages the data and displays the measurement results, a MAX30102 sensor to measure oxygen saturation and heart rate, and an MCP9808 sensor to measure body temperature. The device was tested on respondents aged 25-40 years by placing the sensor on the fingertip and comparing the readings against a calibrated reference device. The results show that the device is fit for use, with a measurement error of ±5%. Battery power efficiency was tested in normal mode and in save mode: in normal mode the device draws 154.9 mA, while in save mode, with the TTGO ESP32's LCD disabled, it draws 126.7 mA. The analysis shows that the battery can run the device for about 14 hours in normal mode and about 17 hours in save mode. This method is useful for measuring power efficiency across device modes and lets the user anticipate the battery recharging interval.
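    The runtime figures follow directly from capacity over draw; a quick check of the abstract's numbers:

        # Battery life in hours = capacity (mAh) / average current draw (mA).
        CAPACITY_MAH = 2200.0

        for mode, current_ma in [("normal", 154.9), ("save", 126.7)]:
            print(f"{mode} mode: {CAPACITY_MAH / current_ma:.1f} h")  # ~14.2 h, ~17.4 h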

    Automatic Fracture Characterization Using Tactile and Proximity Optical Sensing

    This paper demonstrates how tactile and proximity sensing can be used to perform automatic detection of mechanical fractures (surface cracks). For this purpose, a custom-designed integrated tactile and proximity sensor has been implemented. Using fiber optics, the sensor measures the deformation of its body when interacting with the physical environment, as well as the distance to objects in the environment. The sensor is slid across different surfaces, recording data that are then analyzed to detect and classify fractures and other mechanical features. The proposed method implements machine learning techniques (handcrafted features and state-of-the-art classification algorithms). An average crack detection accuracy of ~94% and a width classification accuracy of ~80% are achieved. Kruskal-Wallis results (p < 0.001) indicate statistically significant differences among the results obtained when analysing only integrated deformation measurements, only proximity measurements, and both deformation and proximity data. A real-time classification method has been implemented for online classification of explored surfaces. In contrast to previous techniques, which mainly rely on the visual modality, the proposed approach based on optical fibers may be more suitable for operation in extreme environments (such as nuclear facilities) where radiation can damage the electronic components of commonly employed sensing devices, such as standard strain-gauge force sensors and video cameras.
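    A hedged sketch of the handcrafted-features-plus-classifier stage is given below; the window statistics and the random-forest model are illustrative stand-ins, not the paper's actual feature set or classifier:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def window_features(deformation, proximity):
            """Simple per-window statistics over both sensing channels."""
            feats = []
            for sig in (deformation, proximity):
                feats += [sig.mean(), sig.std(), sig.min(), sig.max(), np.ptp(sig)]
            return np.array(feats)

        def fit_crack_classifier(windows, labels):
            """windows: (deformation, proximity) array pairs; labels: class per window."""
            X = np.stack([window_features(d, p) for d, p in windows])
            clf = RandomForestClassifier(n_estimators=200, random_state=0)
            clf.fit(X, labels)
            return clf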

    Capturing Hands in Action using Discriminative Salient Points and Physics Simulation

    Hand motion capture is a popular research field, recently gaining more attention due to the ubiquity of RGB-D sensors. However, even the most recent approaches focus on the case of a single isolated hand. In this work, we focus on hands that interact with other hands or objects and present a framework that successfully captures motion in such interaction scenarios for both rigid and articulated objects. Our framework combines a generative model with discriminatively trained salient points to achieve a low tracking error, and with collision detection and physics simulation to achieve physically plausible estimates even in the case of occlusions and missing visual data. Since all components are unified in a single objective function which is almost everywhere differentiable, it can be optimized with standard optimization techniques. Our approach works for monocular RGB-D sequences as well as setups with multiple synchronized RGB cameras. For a qualitative and quantitative evaluation, we captured 29 sequences with a large variety of interactions and up to 150 degrees of freedom. Comment: Accepted for publication by the International Journal of Computer Vision (IJCV) on 16.02.2016 (submitted on 17.10.14). This is a combination into a single framework of an ECCV'12 multi-camera RGB hand-tracking paper and a monocular RGB-D GCPR'14 hand-tracking paper, with several extensions, additional experiments, and details.
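    Conceptually, the unified objective is a weighted sum of energy terms over the pose vector, minimized with an off-the-shelf solver; the toy terms and weights below are placeholders standing in for the paper's generative, salient-point, and collision energies:

        import numpy as np
        from scipy.optimize import minimize

        def total_energy(theta, data_term, salient_term, collision_term,
                         w_data=1.0, w_sal=0.5, w_col=0.1):
            return (w_data * data_term(theta)
                    + w_sal * salient_term(theta)
                    + w_col * collision_term(theta))

        def fit_pose(theta0, terms):
            """Minimize the almost-everywhere-differentiable objective."""
            return minimize(total_energy, theta0, args=terms, method="L-BFGS-B").x

        # Toy smooth placeholders for the three energy terms:
        theta = fit_pose(np.zeros(3), (lambda t: np.sum(t ** 2),
                                       lambda t: np.sum((t - 1.0) ** 2),
                                       lambda t: 0.0))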

    ThirdLight: low-cost and high-speed 3D interaction using photosensor markers

    We present a low-cost 3D tracking system for virtual reality, gesture modeling, and robot manipulation applications which require fast and precise localization of headsets, data gloves, props, or controllers. Our system removes the need for cameras or projectors for sensing, and instead uses cheap LEDs and printed masks for illumination, together with low-cost photosensitive markers. The illumination device transmits a spatiotemporal pattern as a series of binary Gray-code patterns. Multiple illumination devices can be combined to localize each marker in 3D at high speed (333 Hz). Compared with conventional techniques, our method offers strengths in accuracy, speed, cost, performance under ambient light, working space (1-5 m), and robustness to noise. We compare with a state-of-the-art instrumented glove and vision-based systems to demonstrate the accuracy, scalability, and robustness of our approach. We propose a fast and accurate method for hand gesture modeling using an inverse kinematics approach with six photosensitive markers. We additionally propose a passive-marker system and demonstrate various interaction scenarios as practical applications.
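    The Gray-code decoding at the heart of this localization scheme is simple enough to sketch; the snippet assumes frame synchronization and per-frame bit thresholding are handled elsewhere, and shows only how a marker's recorded bit string maps back to a stripe index:

        def gray_encode(n):
            return n ^ (n >> 1)

        def gray_decode(bits):
            """bits: most-significant-first 0/1 values read by the photosensor."""
            n = 0
            for b in bits:
                n = (n << 1) | b
            mask = n >> 1
            while mask:                 # convert Gray code back to binary
                n ^= mask
                mask >>= 1
            return n

        # A marker that observed the 10-bit pattern sequence for stripe 345:
        bits = [(gray_encode(345) >> i) & 1 for i in range(9, -1, -1)]
        print(gray_decode(bits))        # -> 345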

    Tracking objects with point clouds from vision and touch

    We present an object-tracking framework that fuses point cloud information from an RGB-D camera with tactile information from a GelSight contact sensor. GelSight can be treated as a source of dense local geometric information, which we incorporate directly into a conventional point-cloud-based articulated object tracker built on signed-distance functions. Our implementation runs at 12 Hz using an online depth reconstruction algorithm for GelSight and a modified second-order update for the tracking algorithm. We present data from hardware experiments demonstrating that the addition of contact-based geometric information significantly improves pose accuracy during contact and provides robustness to occlusions of small objects by the robot's end effector.
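    One plausible reading of the fusion step, sketched under assumptions: the weighting scheme and translation-only pose below are illustrative, and sdf is a hypothetical signed-distance lookup for the tracked object, not the paper's implementation:

        import numpy as np

        def fuse_clouds(camera_pts, gelsight_pts, w_cam=1.0, w_gel=4.0):
            """Stack both sources; up-weight contact points, which are few but precise."""
            pts = np.vstack([camera_pts, gelsight_pts])
            w = np.concatenate([np.full(len(camera_pts), w_cam),
                                np.full(len(gelsight_pts), w_gel)])
            return pts, w

        def sdf_cost(pose, pts, w, sdf):
            """Weighted sum of squared signed distances of points under the pose."""
            transformed = pts - pose[:3]            # translation-only placeholder
            d = np.array([sdf(p) for p in transformed])
            return np.sum(w * d ** 2)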