33 research outputs found

    Tactile Roughness Perception of Virtual Gratings by Electrovibration

    Realistic display of tactile textures on touch screens is a major step toward bringing haptic technology to the wide range of consumers who use electronic devices daily. Since texture topography cannot be rendered explicitly by electrovibration on touch screens, it is important to understand how we perceive the virtual textures displayed by friction modulation via electrovibration. We investigated the roughness perception of real gratings made of plexiglass and of virtual gratings displayed by electrovibration through a touch screen for comparison. In particular, we conducted two psychophysical experiments with 10 participants to investigate the effect of spatial period and of the normal force applied by the finger on the roughness perception of real and virtual gratings at the macro scale. We also recorded the contact forces acting on the participants' fingers during the experiments. The results showed that the roughness perception of real and virtual gratings differs. We argue that this difference can be explained by the amount of fingerpad penetration into the gratings. For real gratings, penetration increased the tangential forces acting on the finger, whereas for virtual gratings, where skin penetration is absent, tangential forces decreased with spatial period. Supporting our claim, we also found that increasing the normal force increases the perceived roughness of real gratings while it has the opposite effect for virtual gratings. These results are consistent with the tangential force profiles recorded for both real and virtual gratings. In particular, the rate of change in tangential force (dF_t/dt) as a function of spatial period and normal force followed trends similar to those obtained for the roughness estimates of real and virtual gratings, suggesting that it is a better indicator of perceived roughness than the tangential force magnitude. Comment: Manuscript received June 25, 2019; revised November 15, 2019; accepted December 11, 2019.
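    The abstract's proposed roughness indicator, the rate of change of tangential force, can be estimated numerically from sampled force recordings. Below is a minimal sketch of such an estimate, assuming uniformly sampled data; the function name, sampling rate, and synthetic signal are illustrative and not taken from the paper.

    ```python
    # Minimal sketch: estimating the rate of change of tangential force (dF_t/dt)
    # from sampled force recordings. Assumes uniform sampling; names and the
    # sampling rate are illustrative assumptions.
    import numpy as np

    def tangential_force_rate(f_tangential, sample_rate_hz=1000.0):
        """Return the per-sample time derivative of the tangential force signal."""
        f_tangential = np.asarray(f_tangential, dtype=float)
        dt = 1.0 / sample_rate_hz
        # Central differences in the interior, one-sided at the edges.
        return np.gradient(f_tangential, dt)

    # Example: a synthetic tangential force trace while stroking a grating.
    t = np.linspace(0.0, 2.0, 2000)
    f_t = 0.5 + 0.1 * np.sin(2 * np.pi * 4 * t)   # force in newtons
    dft_dt = tangential_force_rate(f_t, sample_rate_hz=1000.0)
    print(dft_dt.max())
    ```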

    Recognition of Haptic Interaction Patterns in Dyadic Joint Object Manipulation

    The development of robots that can physically cooperate with humans has attracted interest in recent decades. Obviously, this effort requires a deep understanding of the intrinsic properties of interaction. Up to now, many researchers have focused on inferring human intent in terms of intermediate or terminal goals in physical tasks. On the other hand, to work side by side with people, an autonomous robot additionally needs in-depth information about the underlying haptic interaction patterns that are typically encountered during human-human cooperation. However, to our knowledge, no study has yet focused on characterizing such detailed information. In this sense, this work is a pioneering effort to gain a deeper understanding of interaction patterns involving two or more humans in a physical task. We present a labeled human-human-interaction dataset, which captures the interaction of two humans who collaboratively transport an object in a haptics-enabled virtual environment. In light of the information gained by studying this dataset, we propose that the actions of cooperating partners can be examined under three interaction types: in any cooperative task, the interacting humans either 1) work in harmony, 2) cope with conflicts, or 3) remain passive during interaction. In line with this conception, we present a taxonomy of human interaction patterns and then propose five different feature sets, comprising force-, velocity-, and power-related information, for the classification of these patterns. Our evaluation shows that, using a multi-class support vector machine (SVM) classifier, we can achieve a correct classification rate of 86 percent for the identification of interaction patterns, an accuracy obtained by fusing a selected set of the most informative features with the Minimum Redundancy Maximum Relevance (mRMR) feature selection method.
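    A pipeline of the kind described (feature selection followed by a multi-class SVM over haptic features) can be sketched as below. This is not the authors' implementation: mRMR is not available in scikit-learn, so mutual-information ranking is used here as a simpler stand-in, and the feature matrix and labels are placeholders.

    ```python
    # Sketch of a multi-class SVM over haptic interaction features with a
    # feature-selection step. Mutual-information ranking stands in for mRMR;
    # X (force-, velocity-, and power-related features per window) and y
    # (interaction-pattern labels) are synthetic placeholders.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 40))        # placeholder feature windows
    y = rng.integers(0, 6, size=300)      # placeholder pattern labels

    clf = make_pipeline(
        StandardScaler(),
        SelectKBest(mutual_info_classif, k=15),  # stand-in for mRMR selection
        SVC(kernel="rbf", decision_function_shape="ovo"),
    )
    print(cross_val_score(clf, X, y, cv=5).mean())
    ```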

    Intention recognition for dynamic role exchange in haptic collaboration

    In human-computer collaboration involving haptics, a key open issue is establishing intuitive communication between the partners. Even though computers are widely used to aid human operators in teleoperation, guidance, and training, their ability to improve efficiency and effectiveness in dynamic tasks is limited because they lack the adaptability, versatility, and awareness of a human. We suggest that the communication between a human and a computer can be improved if it involves a decision-making process in which the computer is programmed to infer the intentions of the human operator and dynamically adjust the control levels of the interacting parties to facilitate a more intuitive interaction. In this paper, we investigate the utility of such a dynamic role exchange mechanism, where partners negotiate through the haptic channel to trade their control levels on a collaborative task. We examine the energy consumption, the work done on the manipulated object, and the joint efficiency in addition to the task performance. We show that, when compared to an equal-control condition, a role exchange mechanism improves task performance and the joint efficiency of the partners. We also show that augmenting the system with additional informative visual and vibrotactile cues, which are used to display the state of interaction, allows the users to become aware of the underlying role exchange mechanism and utilize it in favor of the task. These cues also improve the users' sense of interaction and reinforce their belief that the computer aids with the execution of the task. © 2013 IEEE
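    The core idea of trading control levels can be illustrated schematically: the commanded motion is a blend of human and computer inputs, and the blending weight shifts toward whichever party asserts itself. The update rule below is an illustrative assumption, not the negotiation scheme used in the paper.

    ```python
    # Schematic sketch of dynamic role exchange: commands are blended by a
    # control-share weight alpha, which is nudged toward the human when they
    # push against the computer's plan. The update rule is illustrative only.

    def blend_commands(u_human, u_computer, alpha):
        """Combine partner commands; alpha in [0, 1] is the human's control share."""
        return alpha * u_human + (1.0 - alpha) * u_computer

    def update_control_share(alpha, human_force, disagreement, gain=0.01):
        """Raise the human's share when they apply force against the computer's plan."""
        alpha += gain * human_force * disagreement
        return min(1.0, max(0.0, alpha))

    alpha = 0.5
    for human_force, disagreement in [(2.0, 1.0), (0.5, -1.0), (3.0, 1.0)]:
        alpha = update_control_share(alpha, human_force, disagreement)
        print(round(alpha, 3), blend_commands(1.0, -1.0, alpha))
    ```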

    Resolving conflicts during human-robot co-manipulation

    This paper proposes a machine learning (ML) approach to detect and resolve motion conflicts that occur between a human and a proactive robot during the execution of a physically collaborative task. We train a random forest classifier to distinguish between harmonious and conflicting human-robot interaction behaviors during object co-manipulation. Kinesthetic information generated through the teamwork is used to describe the interactive quality of the collaboration. As such, we demonstrate that features derived from haptic (force/torque) data are sufficient to classify whether the human and the robot manipulate the object harmoniously or face a conflict. A conflict resolution strategy is implemented so that the robotic partner proactively contributes to the task via online trajectory planning whenever interactive motion patterns are harmonious, and follows the human lead when a conflict is detected. An admittance controller regulates the physical interaction between the human and the robot during the task; this enables the robot to follow the human passively when there is a conflict. An artificial potential field is used to proactively control the robot motion when the partners work in harmony. An experimental study is designed to create scenarios involving harmonious and conflicting interactions during collaborative manipulation of an object, and to create a dataset to train and test the random forest classifier. The results of the study show that ML can successfully detect conflicts and that the proposed conflict resolution mechanism reduces human force and effort significantly compared to a passive robot that always follows the human partner and a proactive robot that cannot resolve conflicts. This work is partially funded by UKRI and CHIST-ERA (HEAP: EP/S033718/2; Horizon: EP/T022493/1; TAS Hub: EP/V00784X). © 2023 Copyright is held by the owner/author(s).
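    The admittance behavior described above, yielding to human forces and switching between proactive and passive modes, follows the standard law M·a + B·v = F_ext. Below is a minimal one-step sketch of that law with a conflict flag gating the robot's own guidance force; the gains, time step, and function names are assumptions, not the paper's parameters.

    ```python
    # Minimal sketch of an admittance step (M * a + B * v = F_ext) with a
    # conflict flag: on conflict the robot drops its guidance force and follows
    # the human passively. Gains and the guidance term are illustrative.
    import numpy as np

    def admittance_step(v, f_ext, f_guidance, conflict, mass=5.0, damping=20.0, dt=0.002):
        """Integrate the admittance law for one time step and return the new velocity."""
        f_total = f_ext + (0.0 if conflict else f_guidance)
        a = (f_total - damping * v) / mass
        return v + a * dt

    v = np.zeros(3)
    f_human = np.array([4.0, 0.0, 0.0])    # measured human force (N)
    f_field = np.array([1.0, 0.5, 0.0])    # proactive guidance, e.g. potential field
    for _ in range(5):
        v = admittance_step(v, f_human, f_field, conflict=False)
    print(v)
    ```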

    An intuitive tangible game controller

    This paper outlines the development of a sensory feedback device providing a low-cost, versatile, and intuitive interface for controlling digital environments, in this example a flight simulator. Gesture-based input allows for a more immersive experience: rather than making the user feel like they are controlling an aircraft, the intuitive interface allows the user to become the aircraft, which is controlled by the movements of the user's hand. The movements are designed to feel intuitive and allow for a sense of immersion that would be difficult to achieve with an alternative interface. In this example the user's hand can become the aircraft, much the same way that a child would imagine it.
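    A toy sketch of this kind of hand-to-aircraft mapping is given below: hand tilt angles from a motion sensor are mapped onto normalized roll and pitch commands. The angle ranges, scaling, and simulator interface are assumptions for illustration and are not described in the paper.

    ```python
    # Toy sketch: map hand orientation onto aircraft control axes. Angle ranges
    # and axis scaling are illustrative assumptions.

    def hand_to_aircraft(hand_roll_deg, hand_pitch_deg, max_tilt_deg=45.0):
        """Map hand tilt angles to normalized aileron/elevator commands in [-1, 1]."""
        clamp = lambda x: max(-1.0, min(1.0, x))
        aileron = clamp(hand_roll_deg / max_tilt_deg)
        elevator = clamp(hand_pitch_deg / max_tilt_deg)
        return aileron, elevator

    print(hand_to_aircraft(20.0, -10.0))   # gentle right bank, slight nose down
    ```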

    A Novel Untethered Hand Wearable with Fine-Grained Cutaneous Haptic Feedback

    During open surgery, a surgeon relies not only on a detailed view of the organ being operated on and on being able to feel its fine details, but also heavily on the combination of these two senses. In laparoscopic surgery, haptic feedback provides surgeons with information about the interaction forces between instrument and tissue. There have been many studies to date that mimic haptic feedback in laparoscopy-related telerobotics. However, cutaneous feedback is mostly restricted or limited in haptic-feedback-based minimally invasive studies. We argue that fine-grained information about the instrument's tip is needed in laparoscopic surgeries and that it can be conveyed via cutaneous feedback. We propose an exoskeleton haptic hand wearable consisting of five 4 × 4 miniaturized fingertip actuator arrays, 80 actuators in total, to convey cutaneous feedback. The wearable is modular, lightweight, Bluetooth- and WiFi-enabled, and has a maximum power consumption of 830 mW. Software was developed to demonstrate rapid tactile actuation of edges, allowing the user to feel contours through cutaneous feedback. Initial tests were carried out in 2D, with the object displayed on a flat monitor. In the second phase, the wearable exoskeleton glove was further developed to let the user feel 3D virtual objects in a virtual reality (VR) environment presented through a VR headset. Both 2D and 3D objects were tested with our novel untethered haptic hand wearable. Our results show that users recognize cutaneous actuation from just a single tap with 92.22% accuracy. The wearable has an average latency of 46.5 ms, well below the 600 ms delay considered tolerable by a surgeon in teleoperation. We therefore suggest that our untethered hand wearable can enhance multimodal perception in minimally invasive surgeries by letting surgeons naturally feel the immediate environment of their instruments.
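    Rendering an edge on one fingertip's 4 × 4 actuator grid can be sketched as activating the actuators that lie close to the edge's line segment. The grid pitch, distance threshold, and on/off activation below are illustrative assumptions, not the wearable's actual firmware.

    ```python
    # Sketch of edge rendering on a single fingertip's 4 x 4 actuator grid:
    # actuators near a straight edge segment are switched on so the user feels
    # the contour. Geometry and threshold are illustrative assumptions.
    import numpy as np

    def edge_activation(p0, p1, grid_size=4, pitch_mm=3.0, threshold_mm=1.5):
        """Return a grid_size x grid_size boolean mask of actuators near edge p0->p1."""
        ys, xs = np.mgrid[0:grid_size, 0:grid_size]
        pts = np.stack([xs, ys], axis=-1) * pitch_mm          # actuator positions (mm)
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        d = p1 - p0
        t = np.clip(((pts - p0) @ d) / (d @ d), 0.0, 1.0)     # projection onto segment
        dist = np.linalg.norm(pts - (p0 + t[..., None] * d), axis=-1)
        return dist <= threshold_mm

    mask = edge_activation(p0=(0.0, 0.0), p1=(9.0, 9.0))      # diagonal edge
    print(mask.astype(int))
    ```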

    Eye gaze correlates of motor impairment in VR observation of motor actions

    Introduction: This article is part of the Focus Theme of Methods of Information in Medicine on “Methodologies, Models and Algorithms for Patients Rehabilitation”. Objective: Identify eye gaze correlates of motor impairment in a virtual reality motor observation task in a study with healthy participants and stroke patients. Methods: Participants consisted of a group of healthy subjects (N = 20) and a group of stroke survivors (N = 10). Both groups were required to observe a simple reach-and-grab and place-and-release task in a virtual environment. Additionally, healthy subjects were required to observe the task in a normal condition and in a constrained-movement condition. Eye movements were recorded during the observation task for later analysis.

    Dervish Sound Dress; An Investigation of Wearable Technology Using Computer Music and Haptic Mechanisms for Live Performance

    The realm of this thesis combines the areas of computer music, fashion design, digital art, smart clothing, biometrics, cultural traditions, and performance. Dervish Sound Dress is a wearable piece of technology: a garment inspired by the sacred ‘turning’ experience of the Whirling Dervishes of the Mevlevi Sufi order in Turkey, known as the sema. It utilizes the fundamental aspects of the sema, such as music, performance, and body movement through spiritual elation, to create a unique and interactive experience. Wearable technology is a burgeoning field of research. Fashion designers who are using smart textiles or integrating fashion and technology in some way require collaboration with electrical engineers and programming professionals. The garment functions as a body instrument and can be manipulated by the wearer. The cultural traditions of the Mevlevi Sufis and their metaphysical experience during the turning ritual of the sema performance are the inspiration behind the creation of a garment that produces sounds through body movement. Dervish Sound Dress is outfitted with sensors that trigger musical sounds when the wearer touches the bodice interface or changes gesture or movement. The wearer is alerted to the sounds through haptics that are sensed on the body. The sensation is similar to when a musician plays an instrument that reverberates, resulting in an immersive relationship that goes beyond the auditory. The aim is to develop garments that will inspire the creation of musical sounds controlled by an intuitive interface in clothing. It is a study that uses technology and performance, taking a sacred experience and creating artistic expression. Dervish Sound Dress seeks to examine how technology can be integrated into a garment as an expressive body instrument to augment contemporary sonic performance.