392 research outputs found

    Haptography: Capturing and Recreating the Rich Feel of Real Surfaces

    Haptic interfaces, which allow a user to touch virtual and remote environments through a hand-held tool, have opened up exciting new possibilities for applications such as computer-aided design and robot-assisted surgery. Unfortunately, the haptic renderings produced by these systems seldom feel like authentic re-creations of the richly varied surfaces one encounters in the real world. We have thus envisioned the new approach of haptography, or haptic photography, in which an individual quickly records a physical interaction with a real surface and then recreates that experience for a user at a different time and/or place. This paper presents an overview of the goals and methods of haptography, emphasizing the importance of accurately capturing and recreating the high-frequency accelerations that occur during tool-mediated interactions. In the capturing domain, we introduce a new texture modeling and synthesis method based on linear prediction applied to acceleration signals recorded from real tool interactions. For recreating, we show a new haptography handle prototype that enables the user of a Phantom Omni to feel fine surface features and textures.
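The capture-side method lends itself to a compact sketch. The code below is an illustrative reconstruction, not the authors' implementation: the model order, the Yule-Walker fitting procedure, and the white-noise excitation used for synthesis are all assumptions.

```python
import numpy as np

def fit_lpc(accel, order=8):
    """Fit linear-prediction coefficients to a recorded tool-acceleration
    signal using the autocorrelation (Yule-Walker) method."""
    n = len(accel)
    # Biased autocorrelation estimates up to the model order
    r = np.array([accel[:n - k] @ accel[k:] for k in range(order + 1)]) / n
    # Solve the Toeplitz normal equations R a = r[1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])
    var_e = r[0] - a @ r[1:]  # variance of the prediction residual
    return a, var_e

def synthesize_texture(a, var_e, n_samples, seed=0):
    """Generate a new acceleration signal with the same spectral character
    by driving the fitted all-pole filter with white Gaussian noise."""
    rng = np.random.default_rng(seed)
    order = len(a)
    x = np.zeros(n_samples + order)
    e = rng.normal(0.0, np.sqrt(max(var_e, 0.0)), n_samples)
    for t in range(n_samples):
        past = x[t:t + order][::-1]  # x[t-1], ..., x[t-order]
        x[t + order] = a @ past + e[t]
    return x[order:]
```

Fitting on a recorded segment and resynthesizing with fresh noise yields an arbitrarily long texture signal that matches the recording's spectral envelope without replaying it verbatim.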

    Improving Telerobotic Touch Via High-Frequency Acceleration Matching

    Humans rely on information-laden high-frequency accelerations in addition to quasi-static forces when interacting with objects via a handheld tool. Telerobotic systems have traditionally struggled to portray such contact transients due to closed-loop bandwidth and stability limitations, leaving remote objects feeling soft and undefined. This work seeks to maximize the user’s feel for the environment through the approach of acceleration matching; high-frequency fingertip accelerations are combined with standard low-frequency position feedback without requiring a secondary actuator on the master device. In this method, the natural dynamics of the master are identified offline using frequency-domain techniques, estimating the relationship between commanded motor current and handle acceleration while a user holds the device. During subsequent telerobotic interactions, a high-bandwidth sensor measures accelerations at the slave’s end effector, and the real-time controller re-creates these important signals at the master handle by inverting the identified model. The details of this approach are explored herein, and its ability to render hard and rough surfaces is demonstrated on a standard master-slave system. Combining high-frequency acceleration matching with position-error-based feedback of quasi-static forces creates a hybrid signal that closely corresponds to human sensing capabilities, instilling telerobotics with a more realistic sense of remote touch.
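The offline-identification and model-inversion steps described above can be sketched as follows. This is a simplified illustration, not the paper's controller: the Welch-style frequency-response estimate, the regularized frequency-domain inversion, and the block lengths are all assumptions.

```python
import numpy as np

def identify_frf(u, y, n_fft=256):
    """Offline step: estimate the frequency response from commanded motor
    current u to measured handle acceleration y by averaging windowed
    cross- and auto-spectra (Welch-style)."""
    hop = n_fft // 2
    win = np.hanning(n_fft)
    Suu = np.zeros(n_fft // 2 + 1)
    Syu = np.zeros(n_fft // 2 + 1, dtype=complex)
    for start in range(0, len(u) - n_fft + 1, hop):
        U = np.fft.rfft(win * u[start:start + n_fft])
        Y = np.fft.rfft(win * y[start:start + n_fft])
        Suu += (U * U.conj()).real
        Syu += Y * U.conj()
    return Syu / np.maximum(Suu, 1e-12)  # H(f) = Syu / Suu

def matching_current(a_slave, H, n_fft=256, reg=1e-3):
    """Online step: invert the identified model to find the motor-current
    block whose predicted handle acceleration reproduces the measured
    slave acceleration."""
    A = np.fft.rfft(a_slave, n_fft)
    # Regularized inverse avoids amplifying bands where |H| is small
    U = A * H.conj() / (np.abs(H) ** 2 + reg)
    return np.fft.irfft(U, n_fft)[:len(a_slave)]
```

In a real controller the inversion would run sample-by-sample with a causal filter rather than block-wise, but the principle is the same: command the current that the identified dynamics map to the desired handle acceleration.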

    Wearable haptic systems for the fingertip and the hand: taxonomy, review and perspectives

    In the last decade, we have witnessed a drastic change in the form factor of audio and vision technologies, from heavy and grounded machines to lightweight devices that naturally fit our bodies. However, only recently have haptic systems started to be designed with wearability in mind. The wearability of haptic systems enables novel forms of communication, cooperation, and integration between humans and machines. Wearable haptic interfaces are capable of communicating with their human wearers during interaction with the environment they share, in a natural and yet private way. This paper presents a taxonomy and review of wearable haptic systems for the fingertip and the hand, focusing on those systems directly addressing wearability challenges. The paper also discusses the main technological and design challenges for the development of wearable haptic interfaces, and it reports on the future perspectives of the field. Finally, the paper includes two tables summarizing the characteristics and features of the most representative wearable haptic systems for the fingertip and the hand.

    Immersive VR for upper-extremity rehabilitation in patients with neurological disorders: a scoping review

    Background: Neurological disorders, such as stroke and chronic pain syndromes, profoundly impact independence and quality of life, especially when affecting upper extremity (UE) function. While conventional physical therapy has shown effectiveness in providing some neural recovery in affected individuals, there remains a need for improved interventions. Virtual reality (VR) has emerged as a promising technology-based approach for neurorehabilitation that makes the patient experience more enjoyable. Among VR-based rehabilitation paradigms, those based on fully immersive systems with headsets have gained significant attention due to their potential to enhance patients’ engagement. Methods: This scoping review aims to investigate the current state of research on the use of immersive VR for UE rehabilitation in individuals with neurological diseases, highlighting benefits and limitations. We identified thirteen relevant studies through comprehensive searches in the Scopus, PubMed, and IEEE Xplore databases. Eligible studies incorporated immersive VR for UE rehabilitation in patients with neurological disorders and evaluated participants’ neurological and motor functions before and after the intervention using clinical assessments. Results: Most of the included studies reported improvements in the participants’ rehabilitation outcomes, suggesting that immersive VR represents a valuable tool for UE rehabilitation in individuals with neurological disorders. In addition, immersive VR-based interventions hold the potential for personalized and intensive training within a telerehabilitation framework. However, further, better-designed studies are needed for a true comparison with traditional therapy. Also, the potential side effects associated with VR head-mounted displays, such as dizziness and nausea, warrant careful consideration in the development and implementation of VR-based rehabilitation programs.
Conclusion: This review provides valuable insights into the application of immersive VR in UE rehabilitation, offering a foundation for future research and clinical practice. By leveraging immersive VR’s potential, researchers and rehabilitation specialists can design more tailored and patient-centric rehabilitation strategies, ultimately improving functional outcomes and enhancing the quality of life of individuals with neurological diseases.

    Sensor-based artificial intelligence to support people with cognitive and physical disorders

    A substantial portion of the world's population deals with disability. Many disabled people do not have equal access to healthcare, education, and employment opportunities, do not receive specific disability-related services, and experience exclusion from everyday life activities. One way to face these issues is through the use of healthcare technologies. Unfortunately, disabilities are numerous, diverse, and heterogeneous, and they require ad-hoc and personalized solutions. Moreover, the design and implementation of effective and efficient technologies is a complex and expensive process involving challenging issues, including usability and acceptability. The work presented in this thesis aims to improve the current state of technologies available to support people with disorders affecting the mind or the motor system by proposing the use of sensors coupled with signal processing methods and artificial intelligence algorithms. The first part of the thesis focused on mental state monitoring. We investigated the application of a low-cost portable electroencephalography sensor and supervised learning methods to evaluate a person's attention. Indeed, the analysis of attention has several purposes, including the diagnosis and rehabilitation of children with attention-deficit/hyperactivity disorder. A novel dataset was collected from volunteers during an image annotation task, and used for the experimental evaluation with different machine learning techniques. Then, in the second part of the thesis, we focused on addressing limitations related to motor disability. We introduced the use of graph neural networks to process high-density electromyography data for movement/grasping intention recognition in upper-limb amputees, enabling the use of robotic prostheses.
High-density electromyography sensors can simultaneously acquire electromyography signals from different parts of the muscle, providing a large amount of spatio-temporal information that needs to be properly exploited to improve recognition accuracy. The investigation of the approach was conducted using a recent real-world dataset consisting of electromyography signals collected from 20 volunteers while performing 65 different gestures. In the final part of the thesis, we developed a prototype of a versatile interactive system that can be useful to people with different types of disabilities. The system can maintain a food diary for frail people with nutrition problems, such as people with neurocognitive diseases or frail elderly people, who may have difficulties due to forgetfulness or physical issues. The novel architecture automatically recognizes the preparation of food at home, in a privacy-preserving and unobtrusive way, exploiting air quality data acquired from a commercial sensor, statistical feature extraction, and a deep neural network. A robotic system prototype is used to simplify the interaction with the inhabitant. For this work, a large dataset of annotated sensor data acquired over a period of 8 months from different individuals in different homes was collected. Overall, the results achieved in the thesis are promising, and pave the way for several real-world implementations and future research directions.
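The attention-monitoring pipeline (sensor features plus supervised learning) can be illustrated with a minimal sketch. The band definitions, single-channel input, and nearest-centroid classifier below are assumptions for illustration, not the thesis's actual method.

```python
import numpy as np

def band_powers(eeg, fs, bands=((4, 8), (8, 13), (13, 30))):
    """Feature extraction: mean spectral power in the theta, alpha,
    and beta bands of one single-channel EEG window."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg - eeg.mean())) ** 2
    return np.array([psd[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in bands])

class NearestCentroid:
    """Minimal supervised classifier: assign each window to the class
    whose mean feature vector (centroid) is closest."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0)
                                    for c in self.classes_])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[np.argmin(d, axis=1)]
```

In practice one would window the continuous EEG stream, label each window from the annotation task, and substitute any standard supervised learner for the toy classifier shown here.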

    Increasing Transparency and Presence of Teleoperation Systems Through Human-Centered Design

    Teleoperation allows a human to control a robot to perform dexterous tasks in remote, dangerous, or unreachable environments. A perfect teleoperation system would enable the operator to complete such tasks at least as easily as if he or she were to complete them by hand. This ideal teleoperator must be perceptually transparent, meaning that the interface appears to be nearly nonexistent to the operator, allowing him or her to focus solely on the task environment, rather than on the teleoperation system itself. Furthermore, the ideal teleoperation system must give the operator a high sense of presence, meaning that the operator feels as though he or she is physically immersed in the remote task environment. This dissertation seeks to improve the transparency and presence of robot-arm-based teleoperation systems through a human-centered design approach, specifically by leveraging scientific knowledge about the human motor and sensory systems. First, this dissertation aims to improve the forward (efferent) teleoperation control channel, which carries information from the human operator to the robot. The traditional method of calculating the desired position of the robot’s hand simply scales the measured position of the human’s hand. This commonly used motion mapping erroneously assumes that the human’s produced motion identically matches his or her intended movement. Given that humans make systematic directional errors when moving the hand under conditions similar to those imposed by teleoperation, I propose a new paradigm of data-driven human-robot motion mappings for teleoperation. The mappings are determined by having the human operator mimic the target robot as it autonomously moves its arm through a variety of trajectories in the horizontal plane. Three data-driven motion mapping models are described and evaluated for their ability to correct for the systematic motion errors made in the mimicking task.
Individually-fit and population-fit versions of the most promising motion mapping model are then tested in a teleoperation system that allows the operator to control a virtual robot. Results of a user study involving nine subjects indicate that the newly developed motion mapping model significantly increases the transparency of the teleoperation system. Second, this dissertation seeks to improve the feedback (afferent) teleoperation control channel, which carries information from the robot to the human operator. We aim to improve the teleoperation system by providing the operator with multiple novel modalities of haptic (touch-based) feedback. We describe the design and control of a wearable haptic device that provides kinesthetic grip-force feedback through a geared DC motor and tactile fingertip-contact-and-pressure and high-frequency acceleration feedback through a pair of voice-coil actuators mounted at the tips of the thumb and index finger. Each included haptic feedback modality is known to be fundamental to direct task completion and can be implemented without great cost or complexity. A user study involving thirty subjects investigated how these three modalities of haptic feedback affect an operator’s ability to control a real remote robot in a teleoperated pick-and-place task. This study’s results strongly support the utility of grip-force and high-frequency acceleration feedback in teleoperation systems and show more mixed effects of fingertip-contact-and-pressure feedback.
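One way to make the idea of a data-driven motion mapping concrete is a linear map with offset, fit by least squares to paired intended and produced hand positions from the mimicking task. This is a hypothetical sketch under that linear assumption; the dissertation's three mapping models are not specified in this abstract.

```python
import numpy as np

def fit_motion_mapping(p_intended, p_produced):
    """Fit a linear map (with constant offset) that predicts the intended
    position from the human's produced hand position, using least squares
    on data collected while the operator mimicked the robot."""
    # Augment with a ones column so the fit includes a constant offset
    X = np.hstack([p_produced, np.ones((len(p_produced), 1))])
    M, *_ = np.linalg.lstsq(X, p_intended, rcond=None)
    return M

def apply_mapping(M, p_hand):
    """Map a measured human hand position to the commanded robot position,
    correcting the operator's systematic directional error."""
    return np.append(p_hand, 1.0) @ M
```

During teleoperation, `apply_mapping` would replace the traditional pure scaling of the measured hand position, compensating for the systematic errors identified in the mimicking data.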

    Augmented Reality and Robotics: A Survey and Taxonomy for AR-enhanced Human-Robot Interaction and Robotic Interfaces

    This paper contributes a taxonomy of augmented reality and robotics based on a survey of 460 research papers. Augmented and mixed reality (AR/MR) have emerged as a new way to enhance human-robot interaction (HRI) and robotic interfaces (e.g., actuated and shape-changing interfaces). Recently, an increasing number of studies in HCI, HRI, and robotics have demonstrated how AR enables better interactions between people and robots. However, research often remains focused on individual explorations, and key design strategies and research questions are rarely analyzed systematically. In this paper, we synthesize and categorize this research field along the following dimensions: 1) approaches to augmenting reality; 2) characteristics of robots; 3) purposes and benefits; 4) classification of presented information; 5) design components and strategies for visual augmentation; 6) interaction techniques and modalities; 7) application domains; and 8) evaluation strategies. We formulate key challenges and opportunities to guide and inform future research in AR and robotics.