
    Complexity, rate, and scale in sliding friction dynamics between a finger and textured surface.

    Sliding friction between the skin and a touched surface is highly complex, but lies at the heart of our ability to discriminate surface texture through touch. Prior research has elucidated neural mechanisms of tactile texture perception, but our understanding of the nonlinear dynamics of frictional sliding between the finger and textured surfaces, from which the neural signals that encode texture originate, is incomplete. To address this, we compared measurements from human fingertips sliding against textured counter surfaces with predictions of numerical simulations of a model finger that resembled a real finger, with similar geometry, tissue heterogeneity, hyperelasticity, and interfacial adhesion. Modeled and measured forces exhibited similar complex, nonlinear sliding friction dynamics, force fluctuations, and prominent regularities related to the surface geometry. We comparatively analysed measured and simulated force patterns in matched conditions using linear and nonlinear methods, including recurrence analysis. The model had the greatest predictive power for faster sliding and for surface textures with length scales greater than about one millimeter. This could be attributed to the tendency of sliding at slower speeds, or on finer surfaces, to complexly engage fine features of the skin or surface, such as fingerprints or surface asperities. The results elucidate the dynamical forces felt during tactile exploration and highlight the challenges involved in the biological perception of surface texture via touch.
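
    As an illustration of the recurrence analysis named in this abstract, the following is a minimal sketch, assuming a scalar friction-force time series; the embedding dimension, delay, and threshold are illustrative choices, not parameters from the study.

        import numpy as np

        def recurrence_plot(force, dim=3, delay=2, threshold=0.1):
            """Binary recurrence matrix of a scalar force signal.

            dim and delay set the time-delay embedding; threshold is the
            recurrence radius as a fraction of the signal's range.
            """
            n = len(force) - (dim - 1) * delay
            # Time-delay embedding: each row is one reconstructed state vector.
            states = np.column_stack(
                [force[i * delay : i * delay + n] for i in range(dim)])
            # Pairwise distances between all state vectors.
            dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
            eps = threshold * (force.max() - force.min())
            return (dists <= eps).astype(int)

        # Toy usage: a noisy periodic force trace, standing in for a recording
        # of sliding friction. The recurrence rate is one simple RQA measure.
        t = np.linspace(0, 10, 500)
        force = np.sin(2 * np.pi * t) + 0.1 * np.random.randn(t.size)
        print("recurrence rate:", recurrence_plot(force).mean())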

    About the nature of Kansei information, from abstract to concrete

    Designers’ expertise relates to the scientific fields of emotional design and kansei information. This paper aims to address a major scientific issue: how to formalize designers’ knowledge, rules, and skills into kansei information systems. Kansei can be considered a psycho-physiological, perceptive, cognitive, and affective process arising through a particular experience. Kansei-oriented methods include various approaches which deal with semantics and emotions and show their correlation with design properties. Kansei words may include semantic, sensory, and emotional descriptors, as well as object names and product attributes. Kansei levels of information can be placed on an axis running from abstract to concrete dimensions; sociological value is the most abstract information positioned on this axis. Previous studies demonstrate that the values people aspire to drive their emotional reactions to particular semantics, which means that the value dimension should be considered in kansei studies. Through a value-function-product-attribute chain, it is possible to enrich design generation and design evaluation processes. This paper describes knowledge structures and formalisms we established according to this chain, which can be further used for implementing computer-aided design tools dedicated to early design. These structures open onto new formalisms that enable design information to be integrated in a non-hierarchical way. The foreseen algorithmic implementation may be based on the association of ontologies and bag-of-words.
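
    The closing sentence suggests how such a system might be sketched in code. Below is a minimal illustration of associating an ontology with a bag-of-words; the mini-ontology, its levels, and the sample brief are invented for illustration, not the authors' data.

        from collections import Counter

        # Hypothetical mini-ontology: kansei words keyed by their level on the
        # abstract-to-concrete axis (value -> semantics -> product attribute).
        ONTOLOGY = {
            "value": {"freedom", "status", "security"},
            "semantic": {"sporty", "elegant", "robust"},
            "attribute": {"chrome", "leather", "matte"},
        }

        def kansei_profile(text):
            """Count ontology hits per level in a free-text design brief."""
            bag = Counter(text.lower().split())  # the bag-of-words
            return {level: {w: bag[w] for w in words if bag[w] > 0}
                    for level, words in ONTOLOGY.items()}

        brief = "an elegant yet sporty interior with leather seats and matte trim"
        print(kansei_profile(brief))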

    Pictures in Your Mind: Using Interactive Gesture-Controlled Reliefs to Explore Art

    Tactile reliefs offer many benefits over the more classic raised line drawings or tactile diagrams, as depth, 3D shape, and surface textures are directly perceivable. Although often created for blind and visually impaired (BVI) people, a wider range of people may benefit from such multimodal material. However, some reliefs are still difficult to understand without proper guidance or accompanying verbal descriptions, hindering autonomous exploration. In this work, we present a gesture-controlled interactive audio guide (IAG) based on recent low-cost depth cameras that can be operated directly with the hands on relief surfaces during tactile exploration. The interactively explorable, location-dependent verbal and captioned descriptions promise rapid tactile accessibility to 2.5D spatial information in a home or education setting, to online resources, or as a kiosk installation at public places. We present a working prototype, discuss design decisions, and present the results of two evaluation studies: the first with 13 BVI test users and the second, a follow-up study, with 14 test users drawn from a wide range of people with differences and difficulties associated with perception, memory, cognition, and communication. The participant-led research method of this latter study prompted new, significant, and innovative developments.
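
    The core of such a guide is a location-dependent lookup from a tracked fingertip to a caption. A minimal sketch follows, assuming the depth camera already yields fingertip coordinates normalized to the relief; the regions and captions are invented placeholders, not the prototype's content.

        from dataclasses import dataclass

        @dataclass
        class Region:
            name: str
            caption: str
            x0: float  # axis-aligned bounding box in normalized relief coords
            y0: float
            x1: float
            y1: float

            def contains(self, x, y):
                return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

        REGIONS = [
            Region("sky", "The upper third shows a swirling night sky.",
                   0.0, 0.0, 1.0, 0.33),
            Region("village", "Below the sky lies a small village.",
                   0.0, 0.33, 1.0, 1.0),
        ]

        def describe(x, y, regions=REGIONS):
            """Return the caption under the fingertip, or None off the relief."""
            for r in regions:
                if r.contains(x, y):
                    return f"{r.name}: {r.caption}"
            return None  # stay silent between annotated regions

        print(describe(0.5, 0.2))  # -> the sky caption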

    Whisking with robots from rat vibrissae to biomimetic technology for active touch

    This article summarizes some of the key features of the rat vibrissal system, including the actively controlled sweeping movements of the vibrissae known as whisking, and reviews past and ongoing research aimed at replicating some of this functionality in biomimetic robots.

    Limits of Kansei – Kansei unlimited

    This article discusses current limitations of Kansei Engineering methods, for example the focus on evaluating colour and form factors, as well as the highly time-consuming creation of questionnaires. To overcome these limits, we first suggest integrating word lists from related research fields, such as sociology and cognitive psychology on product emotions, into Kansei questionnaires. We then present a study on the wide range of Kansei attributes treated in an industrial setting: concept words used by designers are collected through word maps and categorized into attributes. In a third step, we introduce a user-product interaction schema in which the Kansei attributes from the study are positioned. This schema unfolds potential expansion points for future applications of Kansei Engineering beyond its current limits.

    Human Inspired Multi-Modal Robot Touch


    More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch

    For humans, the process of grasping an object relies heavily on rich tactile feedback. Most recent robotic grasping work, however, has been based only on visual input, and thus cannot easily benefit from feedback after initiating contact. In this paper, we investigate how a robot can learn to use tactile information to iteratively and efficiently adjust its grasp. To this end, we propose an end-to-end action-conditional model that learns regrasping policies from raw visuo-tactile data. This model -- a deep, multimodal convolutional network -- predicts the outcome of a candidate grasp adjustment, and then executes a grasp by iteratively selecting the most promising actions. Our approach requires neither calibration of the tactile sensors nor any analytical modeling of contact forces, thus reducing the engineering effort required to obtain efficient grasping policies. We train our model with data from about 6,450 grasping trials on a two-finger gripper equipped with GelSight high-resolution tactile sensors on each finger. Across extensive experiments, our approach outperforms a variety of baselines at (i) estimating grasp adjustment outcomes, (ii) selecting efficient grasp adjustments for quick grasping, and (iii) reducing the amount of force applied at the fingers, while maintaining competitive performance. Finally, we study the choices made by our model and show that it has successfully acquired useful and interpretable grasping behaviors.
    Comment: 8 pages. Published in IEEE Robotics and Automation Letters (RAL). Website: https://sites.google.com/view/more-than-a-feelin
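
    The iterative selection this abstract describes can be summarized in a short control-loop sketch. This is illustrative only: predict_success() stands in for the paper's deep multimodal network, and the candidate adjustments, step budget, and confidence threshold are assumptions.

        import random

        # Candidate grasp adjustments (dx, dy, dtheta) -- an illustrative set.
        ACTIONS = [
            (0.0, 0.0, 0.0),  # keep the current grasp
            (0.01, 0.0, 0.0), (-0.01, 0.0, 0.0),
            (0.0, 0.01, 0.0), (0.0, -0.01, 0.0),
            (0.0, 0.0, 0.1), (0.0, 0.0, -0.1),
        ]

        def predict_success(vision, tactile, action):
            """Stub for the action-conditional model: P(success | obs, action)."""
            return random.random()  # a real model scores raw visuo-tactile input

        def regrasp(vision, tactile, max_steps=5, confidence=0.9):
            best = ACTIONS[0]
            for step in range(max_steps):
                # Score every candidate adjustment; pick the most promising one.
                scores = {a: predict_success(vision, tactile, a) for a in ACTIONS}
                best, score = max(scores.items(), key=lambda kv: kv[1])
                print(f"step {step}: apply {best}, predicted success {score:.2f}")
                if score >= confidence:
                    break  # confident enough: lift with the current grasp
                # On hardware we would execute `best` here and re-sense.
            return best

        regrasp(vision=None, tactile=None)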