
    Attention for Robot Touch: Tactile Saliency Prediction for Robust Sim-to-Real Tactile Control

    High-resolution tactile sensing can provide accurate information about local contact in contact-rich robotic tasks. However, the deployment of such tasks in unstructured environments remains under-investigated. To improve the robustness of tactile robot control in unstructured environments, we propose and study a new concept: tactile saliency for robot touch, inspired by the human touch attention mechanism from neuroscience and the visual saliency prediction problem from computer vision. In analogy to visual saliency, this concept involves identifying key information in tactile images captured by a tactile sensor. While visual saliency datasets are commonly annotated by humans, manually labelling tactile images is challenging due to their counterintuitive patterns. To address this challenge, we propose a novel approach comprising three interrelated networks: 1) a Contact Depth Network (ConDepNet), which generates a contact depth map to localize deformation in a real tactile image that contains target and noise features; 2) a Tactile Saliency Network (TacSalNet), which predicts a tactile saliency map describing the target areas for an input contact depth map; and 3) a Tactile Noise Generator (TacNGen), which generates noise features to train the TacSalNet. Experimental results on contact pose estimation and edge-following in the presence of distractors showcase the accurate prediction of target features from real tactile images. Overall, our tactile saliency prediction approach gives robust sim-to-real tactile control in environments with unknown distractors. Project page: https://sites.google.com/view/tactile-saliency/. Comment: Accepted by IROS 202
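
    Below is a minimal, illustrative sketch (in PyTorch) of the two-stage inference pipeline implied by the abstract: ConDepNet maps a raw tactile image to a contact depth map, and TacSalNet maps that depth map to a tactile saliency map. The network names come from the abstract; the layer choices, tensor shapes, and the toy encoder-decoder structure are assumptions for illustration, not the authors' implementation. TacNGen appears only in a comment, since it is a training-time noise generator.

    ```python
    import torch
    import torch.nn as nn

    class TinyImageToImageNet(nn.Module):
        """Toy stand-in for ConDepNet / TacSalNet (illustrative architecture only)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # per-pixel map in [0, 1]
            )

        def forward(self, x):
            return self.net(x)

    # Stage 1: ConDepNet, real tactile image -> contact depth map (localizes deformation).
    con_dep_net = TinyImageToImageNet()
    # Stage 2: TacSalNet, contact depth map -> tactile saliency map (keeps target, suppresses noise).
    tac_sal_net = TinyImageToImageNet()
    # TacNGen (not shown) would synthesize noise features to augment TacSalNet's training data.

    tactile_image = torch.rand(1, 1, 128, 128)   # placeholder sensor frame
    contact_depth = con_dep_net(tactile_image)
    saliency_map = tac_sal_net(contact_depth)    # used downstream for robust tactile control
    ```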

    Examining the Influence of Saliency in Mobile Interface Displays

    Designers spend more resources to develop better mobile experiences today than ever before. Researchers commonly use visual search efficiency as a usability measure to determine the time or effort it takes someone to perform a task. Previous research has shown that a computational visual saliency model can predict attentional deployment in stationary desktop displays. Designers can use this salience awareness to co-locate important task information with higher-salience regions. Research has shown that placing targets in higher-salience regions in this way improves interface efficiency. However, researchers have not tested the model in key mobile technology design dimensions such as small displays and touch screens. In two studies, we examined the influence of saliency in a mobile application interface. In the first study, we explored a saliency model’s ability to predict fixations in small mobile interfaces at three different display sizes under free-viewing conditions. In the second study, we examined the influence that visual saliency had on search efficiency while participants completed a directed search for an interface element associated with either high or low salience. We recorded the reaction time to touch the targeted element on the tablet. We experimentally blocked high- and low-saliency interactions and subjectively measured cognitive workload. We found that the saliency model predicted fixations. In the search task, participants found highly salient targets about 900 milliseconds faster than low-salience targets. Interestingly, participants did not perceive a lighter cognitive workload associated with the increase in search efficiency.
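
    The studies above rely on a computational visual saliency model to predict where attention lands on an interface. Since the abstract does not name the model used, the snippet below is only a hypothetical stand-in: it computes a saliency map for a UI screenshot with OpenCV's spectral-residual static saliency (requires opencv-contrib-python); the file names are placeholders.

    ```python
    import cv2

    # Placeholder screenshot of a mobile interface; any BGR image will do.
    screenshot = cv2.imread("mobile_ui_screenshot.png")

    # Spectral-residual static saliency from opencv-contrib-python.
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = saliency.computeSaliency(screenshot)

    if ok:
        # Values lie in [0, 1]; brighter regions are predicted to attract fixations,
        # so important targets could be co-located with them.
        cv2.imwrite("saliency_map.png", (saliency_map * 255).astype("uint8"))
    ```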

    Tactile mesh saliency

    While the concept of visual saliency has been previously explored in the areas of mesh and image processing, saliency detection also applies to other sensory stimuli. In this paper, we explore the problem of tactile mesh saliency, where we define salient points on a virtual mesh as those that a human is more likely to grasp, press, or touch if the mesh were a real-world object. We solve the problem of taking as input a 3D mesh and computing the relative tactile saliency of every mesh vertex. Since it is difficult to manually define a tactile saliency measure, we introduce a crowdsourcing and learning framework. It is typically easier for humans to provide relative rankings of saliency between vertices than absolute values. We therefore collect crowdsourced data of such relative rankings and take a learning-to-rank approach. We develop a new formulation to combine deep learning and learning-to-rank methods to compute a tactile saliency measure. We demonstrate our framework with a variety of 3D meshes and various applications, including material suggestion for rendering and fabrication.
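
    As a rough sketch of the learning-to-rank idea described above, the snippet below trains a scoring network from pairwise preferences ("vertex A is more tactilely salient than vertex B") with a RankNet-style logistic loss. The per-vertex feature size, network depth, and random data are illustrative assumptions; the paper's actual formulation and features are not reproduced here.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Scores a per-vertex feature vector (assumed 32-D here) as a scalar saliency value.
    scorer = nn.Sequential(
        nn.Linear(32, 64), nn.ReLU(),
        nn.Linear(64, 1),
    )
    optimizer = torch.optim.Adam(scorer.parameters(), lr=1e-3)

    # Toy batch of crowdsourced pairs: features of the preferred (more salient)
    # vertex and the less salient vertex in each pair.
    feat_more = torch.randn(128, 32)
    feat_less = torch.randn(128, 32)

    score_diff = scorer(feat_more) - scorer(feat_less)
    # Pairwise logistic (RankNet-style) loss: log(1 + exp(-(s_more - s_less))).
    loss = F.softplus(-score_diff).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ```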

    Tactile Mesh Saliency:a brief synopsis

    This work has previously been published [LDS 16] and this extended abstract provides a synopsis for further discussion at the UK CGVC 2016 conference. We introduce the concept of tactile mesh saliency, where tactile salient points on a virtual mesh are those that a human is more likely to grasp, press, or touch if the mesh were a real-world object. We solve the problem of taking as input a 3D mesh and computing the tactile saliency of every mesh vertex. The key to solving this problem is a new formulation that combines deep learning and learning-to-rank methods to compute a tactile saliency measure. Finally, we discuss possibilities for future work.