11 research outputs found

    A Low-cost Open Source 3D-Printable Dexterous Anthropomorphic Robotic Hand with a Parallel Spherical Joint Wrist for Sign Languages Reproduction

    We present a novel open-source 3D-printable dexterous anthropomorphic robotic hand specifically designed to reproduce Sign Language hand poses for deaf and deaf-blind users. We improved the InMoov hand, enhancing dexterity by adding abduction/adduction degrees of freedom to three fingers (thumb, index and middle) and a three-degree-of-freedom parallel spherical joint wrist. A systematic kinematic analysis is provided. The proposed robotic hand is validated in the framework of the PARLOMA project, which aims at developing a telecommunication system for deaf-blind people, enabling the remote transmission of signs from tactile Sign Languages. Both hardware and software are provided online to promote further improvements from the community.
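    The orientation of a three-degree-of-freedom spherical wrist can be described as a composition of elementary rotations. The sketch below illustrates this with a ZYX (yaw-pitch-roll) convention in plain Python; the convention and function names are illustrative assumptions, not taken from the paper's kinematic analysis.

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def mat_mul(A, B):
    # 3x3 matrix product, kept dependency-free on purpose
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def wrist_orientation(roll, pitch, yaw):
    """Hand-plate orientation of a 3-DoF spherical wrist (ZYX convention)."""
    return mat_mul(rot_z(yaw), mat_mul(rot_y(pitch), rot_x(roll)))
```

    Composing the three joint angles this way gives the full hand orientation used when reproducing a sign's wrist pose.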

    Haptic Glove and Platform with Gestural Control For Neuromorphic Tactile Sensory Feedback In Medical Telepresence

    Advancements in the study of the human sense of touch are fueling the field of haptics. This is paving the way for augmenting sensory perception during object palpation in tele-surgery and reproducing the sensed information through tactile feedback. Here, we present a novel tele-palpation apparatus that enables the user to detect nodules of various distinct stiffnesses buried in an ad-hoc polymeric phantom. The contact force measured by the platform was encoded using a neuromorphic model and reproduced on the index fingertip of a remote user through a haptic glove embedding a piezoelectric disk. We assessed the effectiveness of this feedback in allowing nodule identification under two experimental conditions of real-time telepresence: In Line of Sight (ILS), where the platform was placed in the visible range of the user; and the more demanding Not In Line of Sight (NILS), with the platform and the user being 50 km apart. We found that the percentage of identification was higher for stiffer inclusions than for softer ones (average of 74% within the duration of the task), in both telepresence conditions evaluated. These promising results call for further exploration of tactile augmentation technology for telepresence in medical interventions.
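    The core idea of neuromorphic force encoding is to convert a continuous contact-force trace into a spike train whose rate grows with the force. A minimal sketch using a leaky integrate-and-fire neuron is shown below; all parameter values (time constant, gain, threshold) are illustrative assumptions, not the model of the paper.

```python
def lif_encode(force_samples, dt=0.001, tau=0.02, gain=50.0, threshold=0.1):
    """Encode a force trace (sampled every dt seconds) into spike times
    with a leaky integrate-and-fire neuron: stiffer contact produces a
    larger force, hence a higher spike rate. Parameters are illustrative."""
    v, spikes = 0.0, []
    for i, f in enumerate(force_samples):
        v += dt * (-v / tau + gain * f)   # leaky integration of the input
        if v >= threshold:
            spikes.append(i * dt)          # record spike time
            v = 0.0                        # reset membrane after each spike
    return spikes
```

    Feeding the spike times to a piezoelectric actuator as short vibration pulses would then reproduce the stiffness cue on the fingertip.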

    A Novel Architectural Pattern to Support the Development of Human-Robot Interaction (HRI) Systems Integrating Haptic Interfaces and Gesture Recognition Algorithms

    No full text
    Haptic and robotic interfaces are gaining momentum and being pervasively integrated into modern everyday life. In fact, they can be employed in several different fields, ranging from the manipulation of small and dangerous objects to rehabilitation, assistive and service technologies, and they are also integrated in mission-critical systems. Modern research is rapidly shifting to investigate novel and more intuitive ways of controlling these interfaces. In particular, gesture-based control is one of the most interesting scenarios for Human-Robot Interaction (HRI), since we humans perceive gestures as a natural way of interacting with the external world. In this work we present a novel architectural pattern, entirely based on the Robot Operating System (ROS), to support the development of applications and systems where computer vision techniques are applied to control robotic interfaces. As a case study, the presented pattern is used to develop and assess the overall PARLOMA system. The PARLOMA project aims at developing a system to enable remote communication between deaf-blind subjects. The system is designed to send, remotely and in real time, messages in tactile Sign Language from a sender to a deaf-blind recipient (or many recipients) by integrating hand tracking and gesture recognition algorithms coupled with a bio-mimetic haptic interface.
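    The architectural pattern rests on ROS-style topic-based decoupling: perception nodes publish messages, actuation nodes subscribe, and neither knows about the other. The plain-Python sketch below illustrates that publish/subscribe idea without depending on ROS; the class, topic name, and message fields are all illustrative, not the actual PARLOMA API.

```python
class TopicBus:
    """Minimal publish/subscribe bus mimicking ROS topics: gesture
    recognition publishes, the haptic-hand driver subscribes. A plain
    Python sketch of the decoupling idea, not actual ROS code."""

    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, callback):
        # register a callback for a topic, creating the topic on demand
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # deliver the message to every registered subscriber
        for cb in self._subscribers.get(topic, []):
            cb(message)

# A vision node publishes a recognized sign; an actuation node reacts.
bus = TopicBus()
received = []
bus.subscribe("/parloma/gesture", received.append)
bus.publish("/parloma/gesture", {"sign": "A", "confidence": 0.93})
```

    Because producers and consumers share only the topic name and message schema, either side can be replaced (e.g., a different tracking algorithm) without touching the other, which is the point of the pattern.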

    ABLUR: An FPGA-based adaptive deblurring core for real-time applications

    No full text
    If a camera moves while taking a picture, motion blur is induced. There exist mechanical techniques to prevent this effect from occurring, but they are cumbersome and expensive. Consider, for example, an Unmanned Aerial Vehicle (UAV) engaged in a search and rescue mission, where recording frames of the scene is required to identify people and animals to rescue. In such cases, the weight of the equipment is of absolute importance, and no extra hardware can be used; vibrations are unavoidably transmitted to the camera, and the recorded frames are affected by blur. It is then necessary to deblur every frame in real time to allow post-processing algorithms to extract the largest possible amount of information from them. For more than 40 years, numerous researchers have developed theories and algorithms for this purpose, which work quite well but very often require multiple different versions of the input image, a huge amount of computational resources, large execution times or intensive parameter tuning.
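    Uniform motion blur is commonly modeled as convolution of the sharp image with a box point spread function (PSF); deblurring then amounts to inverting that convolution. A minimal one-dimensional sketch of the forward (blurring) model is given below; the function name and edge handling are illustrative choices, not the paper's formulation.

```python
def motion_blur_1d(row, blur_len):
    """Simulate uniform horizontal motion blur on one image row:
    each output pixel is the mean of up to `blur_len` neighbours
    (a box PSF). Windows are clipped at the row boundary."""
    out = []
    for i in range(len(row)):
        window = row[i:i + blur_len]
        out.append(sum(window) / len(window))
    return out
```

    A real-time deblurring core such as ABLUR works in the opposite direction, estimating the PSF and undoing this averaging on every incoming frame.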

    Blurring Prediction in Monocular SLAM

    No full text
    This paper presents a method aimed at improving the reliability of Simultaneous Localization And Mapping (SLAM) approaches based on vision systems. Classical SLAM approaches treat the camera capturing time as negligible, and the recorded frames as sharp and well-defined, but this hypothesis does not hold when the camera is moving too fast. In such cases, in fact, frames may be severely degraded by motion blur, making the feature-matching task difficult. The method presented here is based on a novel approach that combines the benefits of a fully probabilistic SLAM algorithm with the basic ideas behind modern motion blur handling algorithms. Using the Kalman filter, the new approach predicts the best possible blur Point Spread Function (PSF) for each feature and performs matching also using this information.
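    The key step is that the Kalman filter's prediction of a feature's image-plane velocity directly yields an expected blur extent: velocity times exposure time. The sketch below shows a Kalman prediction step for a 1D constant-velocity feature model in plain Python; the state layout, noise values, and frame/exposure times are illustrative assumptions, not those of the paper.

```python
def kalman_predict(x, P, F, Q):
    """Kalman prediction step for a 1D constant-velocity feature model:
    x = [position, velocity] (pixels, pixels/s), P its 2x2 covariance.
    Returns the predicted state and covariance (x' = Fx, P' = FPF^T + Q)."""
    x_pred = [F[0][0] * x[0] + F[0][1] * x[1],
              F[1][0] * x[0] + F[1][1] * x[1]]
    FP = [[sum(F[i][k] * P[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    P_pred = [[sum(FP[i][k] * F[j][k] for k in range(2)) + Q[i][j]
               for j in range(2)] for i in range(2)]
    return x_pred, P_pred

dt = 1 / 30                            # inter-frame time (illustrative)
F = [[1, dt], [0, 1]]                  # constant-velocity transition
Q = [[1e-4, 0], [0, 1e-4]]            # process noise (illustrative)
x, P = [10.0, 60.0], [[1, 0], [0, 1]]  # feature at px 10, moving 60 px/s
x_pred, P_pred = kalman_predict(x, P, F, Q)

exposure = 1 / 100                     # shutter time (illustrative)
psf_length = abs(x_pred[1]) * exposure  # predicted blur extent in pixels
```

    Matching can then compare a blurred template of the expected extent instead of the sharp one, which is what makes the approach robust to fast motion.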

    Neuromorphic haptic glove and platform with gestural control for tactile sensory feedback in medical telepresence applications

    No full text
    This paper presents a tactile telepresence system employed for the localization of stiff inclusions embedded in a soft matrix. The system delivers a neuromorphic spike-based haptic feedback, encoding object stiffness, to the human fingertip. For the evaluation of the developed system, a customized silicone phantom was fabricated by inserting 12 inclusions made of 4 different polymers (3 replicas for each material). The inclusions, all of the same shape, were encapsulated in a softer silicone matrix in randomized positions. The experimental setup comprised two main blocks. The first sub-setup included an optical sensor for tracking human hand movements and a piezoelectric disk, inserted into a glove at the level of the index fingertip, to deliver tactile feedback. The second sub-setup was a 3-axis Cartesian motorized sensing platform which explored the silicone phantom through a spherical indenter mechanically linked to a load cell. The movements of the platform were based on the acquired hand gestures of the user. The normal force exerted during active sliding was converted into temporal patterns of spikes through a neuronal model and delivered to the fingertip via the vibrotactile glove. Inclusions were detected through modulation of these spike patterns generated during the experimental trials. Results suggest that the presented system allows recognition of the stiffness variation between the encapsulated inclusions and the surrounding matrix. As expected, stiffer inclusions were more frequently discriminated than softer ones, with about 70% of stiffer inclusions being identified in the proposed task. Future works will address the investigation of a larger set of materials in order to evaluate a finer distribution of stiffness values.
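    Detection here relies on the modulation of the spike pattern: when the indenter crosses a stiff inclusion, the spike rate rises above the level produced by the soft matrix. A simple stand-in for that cue is a running-baseline threshold detector, sketched below; the window size and factor are illustrative assumptions, not the paper's criterion.

```python
def detect_inclusion(spike_rates, baseline_window=5, factor=1.5):
    """Flag samples whose spike rate exceeds `factor` times the running
    baseline of the preceding samples (the soft-matrix level). A simple
    stand-in for the rate-modulation cue; parameters are illustrative."""
    flags = []
    for i, r in enumerate(spike_rates):
        start = max(0, i - baseline_window)
        # baseline = mean rate over the preceding window (self at i == 0)
        base = sum(spike_rates[start:i]) / (i - start) if i else r
        flags.append(r > factor * base)
    return flags
```

    On a rate trace that jumps while sliding over a stiff inclusion, only the elevated samples are flagged, mirroring how subjects localized inclusions from the stronger vibrotactile bursts.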

    A cloud robotics system for telepresence enabling mobility impaired people to enjoy the whole museum experience

    No full text
    We present a novel robotic telepresence platform composed of a semi-autonomous mobile robot based on a cloud robotics framework, developed with the aim of enabling mobility-impaired people to enjoy museums and archaeological sites that would otherwise be inaccessible. Such places, in fact, very often are not equipped to provide access for mobility-impaired people, in particular because such aids require dedicated infrastructure, which may not fit within the environment, as well as large investments. For this reason, people affected by mobility impairments are often unable to enjoy part or even the entire museum experience. Existing solutions for mobility-impaired people are often based on recorded tours, and thus do not allow active participation of the user. On the contrary, the presented platform is intended to allow users to fully enjoy the museum tour. A robot equipped with a camera is placed within the museum, and users can control it in order to follow predefined tours or freely explore the museum. Our solution ensures that users see exactly what the robot is seeing in real time. The cloud robotics platform controls both navigation capabilities and teleoperation. Navigation tasks are intended to let the robot reliably follow pre-defined tours, while the main concern of teleoperation tasks is to ensure robot safety (e.g., by means of dynamic obstacle detection and avoidance software). The proposed platform has been optimized to maximize user experience.
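    A common way to keep a teleoperated robot safe is to put a safety gate between the remote user's velocity command and the motors: stop on close obstacles, slow down proportionally in an intermediate band. The sketch below illustrates that idea in plain Python; the function name and the distance thresholds are illustrative assumptions, not the platform's actual software.

```python
def safe_velocity(cmd_linear, ranges, stop_dist=0.4, slow_dist=1.0):
    """Gate a remote user's linear velocity command (m/s) against range
    sensor readings (m): hard stop inside `stop_dist`, linear slow-down
    inside `slow_dist`, pass-through otherwise. Thresholds illustrative."""
    nearest = min(ranges)
    if nearest <= stop_dist:
        return 0.0                          # obstacle too close: hard stop
    if nearest < slow_dist:
        scale = (nearest - stop_dist) / (slow_dist - stop_dist)
        return cmd_linear * scale           # proportional slow-down
    return cmd_linear                       # free space: obey the user
```

    Running such a gate on the robot (rather than in the cloud) keeps the safety loop local, so network latency between the remote user and the museum cannot defeat obstacle avoidance.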