13 research outputs found

    Building a Library of Tactile Skills Based on FingerVision

    Camera-based tactile sensors are emerging as a promising, inexpensive solution for tactile-enhanced manipulation tasks. The recently introduced FingerVision sensor was shown to generate reliable signals for force estimation, object pose estimation, and slip detection. In this paper, we build upon the FingerVision design, improving existing control algorithms and, more importantly, expanding its range of applicability to more challenging tasks by utilizing raw skin deformation data for control. In contrast to previous approaches that rely on the average deformation of the whole sensor surface, we directly employ the local deviations of each spherical marker embedded in the silicone body of the sensor for feedback control and as input to learning tasks. We show that with such input, substances of varying texture and viscosity can be distinguished on the basis of the tactile sensations evoked while stirring them. As another application, we learn a mapping between skin deformation and the force applied to an object. To demonstrate the full range of capabilities of the proposed controllers, we deploy them in a challenging architectural assembly task that involves inserting a load-bearing element underneath a bendable plate at the point of maximum load.
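
    As a rough illustration of the per-marker signal described above, the sketch below tracks blob centers between a rest frame and the current frame and returns one local deviation vector per marker. The blob-detector settings and nearest-neighbor matching scheme are illustrative assumptions, not the paper's exact pipeline.

    # Sketch: per-marker deviation signals from a camera-based tactile sensor.
    # Detector parameters and matching are assumptions for illustration only.
    import cv2
    import numpy as np

    detector = cv2.SimpleBlobDetector_create()  # default params; tune for a real sensor

    def marker_centers(frame):
        """Detect the spherical markers and return their (x, y) centers."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        keypoints = detector.detect(gray)
        return np.array([kp.pt for kp in keypoints])

    def marker_deviations(rest_centers, current_centers):
        """Match each rest-state marker to its nearest current detection and
        return the local deviation vectors, one per marker (no averaging)."""
        deviations = np.zeros_like(rest_centers)
        for i, p in enumerate(rest_centers):
            j = np.argmin(np.linalg.norm(current_centers - p, axis=1))
            deviations[i] = current_centers[j] - p
        return deviations  # per-marker feedback signal for control or learning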

    Visual Tactile Sensor Based Force Estimation for Position-Force Teleoperation

    Vision-based tactile sensors have gained extensive attention in the robotics community. These sensors are expected to extract contact information, i.e., haptic information, during in-hand manipulation, which makes them a natural match for haptic feedback applications. In this paper, we propose a contact force estimation method using the vision-based tactile sensor DIGIT and apply it to a position-force teleoperation architecture for force feedback. The force is estimated by building a depth map of the DIGIT gel surface to measure its deformation, then applying a regression algorithm to the estimated depth data and ground-truth force data to obtain the depth-force relationship. The experiment is performed by constructing a grasping force feedback system with a haptic device as the leader robot and a parallel robot gripper as the follower robot, where the DIGIT sensor is attached to the tip of the gripper to estimate the contact force. The preliminary results show the capability of using this low-cost vision-based sensor for force feedback applications.
    Comment: IEEE CBS 202
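
    A minimal sketch of the depth-to-force regression step, assuming each depth map is summarized by a few deformation statistics and fitted with ridge regression; the feature set and model choice are assumptions, not the paper's exact method.

    # Sketch: regress contact force from gel-surface depth maps.
    # Features and regressor are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import Ridge

    def depth_features(depth_map, rest_depth):
        """Scalar deformation features from one depth map (meters)."""
        d = np.clip(rest_depth - depth_map, 0.0, None)  # indentation vs. rest state
        return np.array([d.sum(), d.max(), (d > 1e-4).sum()])

    def fit_depth_force_model(depth_maps, rest_depth, forces):
        """depth_maps: list of 2-D arrays; forces: ground-truth normal force (N)."""
        X = np.stack([depth_features(m, rest_depth) for m in depth_maps])
        return Ridge(alpha=1.0).fit(X, forces)  # model.predict(...) estimates force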

    A bio-inspired multi-functional tendon-driven tactile sensor and application in obstacle avoidance using reinforcement learning

    This paper presents a new bio-inspired tactile sensor that is multi-functional and has contact areas of different sensitivity. The TacTop area is highly sensitive and is used for object classification on direct contact, while the less sensitive TacSide area is used to localize side contacts. By connecting tendons from the TacSide area to the TacTop area, the sensor can perform multiple detection functions using the same expression region. To process the mixed contact signals collected from the expression region, with its numerous markers and pins, we build a modified DenseNet121 network that removes all fully connected layers and keeps the rest as a sub-network. The proposed model also contains a global average pooling layer with two branching networks to handle the different functions and provide accurate spatial localization of the extracted features. The experimental results demonstrate a high prediction accuracy of 98% for object perception and localization. Furthermore, the new tactile sensor is applied to obstacle avoidance: action skills are extracted from human demonstrations, and an action dataset is then generated for reinforcement learning to guide robots towards correct responses after contact detection. To evaluate the effectiveness of the proposed framework, several simulations are performed in the MuJoCo environment.
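
    The sketch below shows one plausible reading of the described architecture in PyTorch: a DenseNet121 backbone with its fully connected classifier removed, global average pooling, and two branch heads. The head dimensions and the localization target are assumptions introduced for the example.

    # Sketch: two-branch DenseNet121 for tactile classification + localization.
    # num_classes and num_regions are illustrative assumptions.
    import torch
    import torch.nn as nn
    from torchvision.models import densenet121

    class TwoHeadTactileNet(nn.Module):
        def __init__(self, num_classes=10, num_regions=4):
            super().__init__()
            self.backbone = densenet121(weights=None).features  # FC classifier dropped
            self.pool = nn.AdaptiveAvgPool2d(1)                 # global average pooling
            self.classify = nn.Linear(1024, num_classes)        # object-class branch
            self.localize = nn.Linear(1024, num_regions)        # contact-region branch

        def forward(self, x):
            f = self.pool(torch.relu(self.backbone(x))).flatten(1)
            return self.classify(f), self.localize(f)

    cls_logits, loc_logits = TwoHeadTactileNet()(torch.randn(1, 3, 224, 224))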

    Active Tactile Sensing for Texture Perception in Robotic Systems

    This thesis presents a comprehensive study of tactile sensing, focusing on the problem of active texture perception. It includes a brief introduction to tactile sensing technology and the neural basis of tactile perception, followed by a literature review of texture perception with tactile sensing. I propose a decoding and perception pipeline to tackle fine-texture classification/identification problems via active touching. Experiments are conducted using a 7-DOF robotic arm with a finger-shaped tactile sensor mounted on the end-effector to perform sliding/rubbing movements on multiple fabrics. Low-dimensional frequency features are extracted from the raw signals to form a perceptive feature space, in which tactile signals are mapped and segregated into fabric classes. Fabric classes can be parameterized and simplified in the feature space using elliptical equations. Results from experiments with varied control parameters are compared and visualized to show that different exploratory movements have an apparent impact on the perceived tactile information. This implies the possibility of optimising the robotic movements to improve texture classification/identification performance.
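
    A minimal sketch of the kind of low-dimensional frequency features the abstract mentions: log band powers of a tactile signal recorded during sliding, usable as coordinates in a perceptive feature space. The sample rate and band edges are illustrative assumptions.

    # Sketch: low-dimensional spectral features from a raw tactile signal.
    # fs and band boundaries are assumptions; tune to the sensor at hand.
    import numpy as np

    def band_powers(signal, fs=1000.0, bands=((5, 50), (50, 150), (150, 400))):
        """Return log spectral power in each frequency band (Hz)."""
        spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        return np.array([
            np.log(spectrum[(freqs >= lo) & (freqs < hi)].sum() + 1e-12)
            for lo, hi in bands
        ])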

    Tactile Perception And Visuotactile Integration For Robotic Exploration

    As the close perceptual sibling of vision, the sense of touch has historically received less than deserved attention in both human psychology and robotics. In robotics, this may be attributed to at least two reasons. First, it suffers from the vicious cycle of immature sensor technology, which keeps industry demand low, leaving even less incentive to make the sensors in research labs easy to manufacture and marketable. Second, the situation stems from a fear of making contact with the environment, which is avoided in every way so that visually perceived states do not change before a carefully estimated and ballistically executed physical interaction. Fortunately, the latter viewpoint is starting to change. Work in interactive perception and contact-rich manipulation is on the rise. Good reasons are steering the manipulation and locomotion communities' attention towards deliberate physical interaction with the environment prior to, during, and after a task. We approach the problem of perception prior to manipulation, using the sense of touch, for the purpose of understanding the surroundings of an autonomous robot. The overwhelming majority of work in perception for manipulation is based on vision. While vision is a fast and global modality, it is insufficient as the sole modality, especially in environments where the ambient light or the objects therein do not lend themselves to vision, such as in darkness, smoky or dusty rooms in search and rescue, underwater, transparent and reflective objects, and retrieving items inside a bag. Even in normal lighting conditions, during a manipulation task, the target object and fingers are usually occluded from view by the gripper. Moreover, vision-based grasp planners, typically trained in simulation, often make errors that cannot be foreseen until contact. As a step towards addressing these problems, we first present a global shape-based feature descriptor for object recognition using non-prehensile tactile probing alone. Then, we investigate making the tactile modality, local and slow by nature, more efficient for the task by predicting the most cost-effective moves using active exploration. To combine the local and physical advantages of touch and the fast and global advantages of vision, we propose and evaluate a learning-based method for visuotactile integration for grasping.
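
    As a hedged illustration of the active-exploration idea above, the sketch below scores candidate probe locations by the expected reduction in class-belief entropy per unit movement cost; the discrete observation model and cost vector are assumptions introduced for the example, not the thesis's exact formulation.

    # Sketch: next-best-touch selection by expected information gain per cost.
    # obs_model and costs are illustrative assumptions.
    import numpy as np

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    def next_best_touch(belief, obs_model, costs):
        """belief: (C,) class probabilities; obs_model: (L, C, O) = p(o | c, loc);
        costs: (L,) movement cost per candidate location. Returns best location."""
        gains = np.empty(len(costs))
        for loc in range(len(costs)):
            joint = belief[:, None] * obs_model[loc]        # (C, O) joint p(c, o)
            p_obs = joint.sum(axis=0)                       # predictive p(o)
            post_H = sum(p_obs[o] * entropy(joint[:, o] / p_obs[o])
                         for o in range(len(p_obs)) if p_obs[o] > 0)
            gains[loc] = entropy(belief) - post_H           # expected information gain
        return int(np.argmax(gains / costs))                # gain per unit cost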

    On Optimal Behavior Under Uncertainty in Humans and Robots

    Despite significant progress in robotics and automation in recent decades, there remains a noticeable gap in performance compared to humans. Although computational capabilities grow every year, and are even projected to exceed the capacities of biological systems, the behaviors generated using current computational paradigms are arguably not catching up with the available resources. Why is that? It appears that we still lack some fundamental understanding of how living organisms make decisions, and therefore we are unable to replicate intelligent behavior in artificial systems. In this thesis, we therefore develop a framework for modeling human and robot behavior based on statistical decision theory. Different features of this approach, such as risk sensitivity, exploration, learning, and control, were investigated in a number of publications. First, we considered the problem of learning new skills and developed a framework of entropic regularization of Markov decision processes (MDP). Utilizing a generalized concept of entropy, we were able to realize the trade-off between exploration and exploitation via the choice of a single scalar parameter determining the divergence function. Second, building on the theory of partially observable Markov decision processes (POMDP), we proposed and validated a model of human ball-catching behavior. Crucially, information-seeking behavior was identified as a key feature enabling the modeling of observed human catches. Thus, entropy reduction was seen to play an important role in skillful human behavior. Third, having extracted the modeling principles from human behavior and having developed an information-theoretic framework for reinforcement learning, we studied real-robot applications of the learning-based controllers in tactile-rich manipulation tasks. We investigated vision-based tactile sensors and the capability of learning algorithms to autonomously extract task-relevant features for manipulation tasks. The specific feature of tactile-based control, namely that perception and action are tightly connected at the point of contact, enabled us to gather insights into the strengths and limitations of the statistical learning approach to real-time robotic manipulation. In conclusion, this thesis presents a series of investigations into the applicability of the statistical decision theory paradigm to modeling the behavior of humans and synthesizing the behavior of robots. We conclude that a number of important features related to information processing can be represented and utilized in artificial systems for generating more intelligent behaviors. Nevertheless, these are only the first steps, and we acknowledge that the road towards artificial general intelligence and skillful robotic applications will require more innovations and potentially a transcendence of the probabilistic modeling paradigm.
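
    A minimal sketch of entropy-regularized value iteration on a finite MDP, assuming Shannon entropy (the thesis considers a generalized entropy family): the temperature tau interpolates between exploratory soft policies and the standard Bellman max as tau approaches zero.

    # Sketch: soft (entropy-regularized) value iteration on a tabular MDP.
    # P, R, gamma, tau are assumptions for illustration.
    import numpy as np

    def soft_value_iteration(P, R, gamma=0.95, tau=0.1, iters=500):
        """P: (S, A, S) transition tensor; R: (S, A) rewards.
        Returns soft values V and the Boltzmann policy."""
        S, A = R.shape
        V = np.zeros(S)
        for _ in range(iters):
            Q = R + gamma * (P @ V)                           # (S, A) soft Q-values
            m = Q.max(axis=1, keepdims=True)                  # stabilize log-sum-exp
            V = m[:, 0] + tau * np.log(np.exp((Q - m) / tau).sum(axis=1))
        policy = np.exp((Q - V[:, None]) / tau)               # softmax over actions
        return V, policy / policy.sum(axis=1, keepdims=True)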