13 research outputs found

    PARLOMA – A Novel Human-Robot Interaction System for Deaf-blind Remote Communication

    Get PDF
    Deaf-blindness forces people to live in isolation. To date, no technological solution enables two (or more) Deaf-blind persons to communicate remotely with each other in tactile Sign Language (t-SL). When resorting to t-SL, Deaf-blind persons can communicate only with people physically present in the same place, because they must reciprocally explore each other's hands to exchange messages. We present a preliminary version of PARLOMA, a novel system that enables remote communication between Deaf-blind persons. It is composed of a low-cost depth sensor as the only input device, paired with a robotic hand as the output device. Essentially, any user can perform handshapes in front of the depth sensor. The system recognizes a set of handshapes, which are sent over the web and reproduced by an anthropomorphic robotic hand. PARLOMA can work as a "telephone" for Deaf-blind people and will therefore dramatically improve their quality of life. PARLOMA has been designed in close collaboration with the main Italian Deaf-blind associations, in order to include end-users in the design phase.
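
    The abstract describes a pipeline in which recognized handshapes are sent over the web and reproduced by a robotic hand. The paper does not publish a wire format, so the message schema, the handshape labels, and the label-to-joint mapping in the Python sketch below are illustrative assumptions only.

        # Hypothetical sketch of the "handshape over the web" step (assumed schema, not PARLOMA's actual protocol).
        import json
        import time

        # Assumed lookup table: recognized handshape label -> per-finger flexion targets (0 = open, 1 = closed)
        HANDSHAPE_TO_JOINTS = {
            "open_hand": {"thumb": 0.0, "index": 0.0, "middle": 0.0, "ring": 0.0, "little": 0.0},
            "fist":      {"thumb": 1.0, "index": 1.0, "middle": 1.0, "ring": 1.0, "little": 1.0},
            "point":     {"thumb": 1.0, "index": 0.0, "middle": 1.0, "ring": 1.0, "little": 1.0},
        }

        def encode_handshape(label: str) -> bytes:
            """Serialize a recognized handshape as a timestamped JSON message."""
            return json.dumps({"t": time.time(), "handshape": label}).encode("utf-8")

        def decode_to_joint_targets(message: bytes) -> dict:
            """On the receiving side, map the handshape label to robotic-hand joint targets."""
            label = json.loads(message.decode("utf-8"))["handshape"]
            return HANDSHAPE_TO_JOINTS[label]

        if __name__ == "__main__":
            wire = encode_handshape("point")          # produced by the depth-sensor recognizer
            print(decode_to_joint_targets(wire))      # consumed by the robotic-hand controller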

    Vision-Based Pose Estimation for Robot-Mediated Hand Telerehabilitation

    No full text
    Vision-based Pose Estimation (VPE) represents a non-invasive solution for smooth and natural interaction between a human user and a robotic system, without requiring complex calibration procedures. Moreover, VPE interfaces are gaining momentum as they are highly intuitive, so they can be used by untrained personnel (e.g., a generic caregiver) even in delicate tasks such as rehabilitation exercises. In this paper, we present a novel master–slave setup for hand telerehabilitation with an intuitive and simple interface for remote control of a wearable hand exoskeleton, named HX. While performing rehabilitative exercises, the master unit evaluates the 3D positions of the human operator's hand joints in real time using only an RGB-D camera and remotely commands the slave exoskeleton. Within the slave unit, the exoskeleton replicates hand movements, and an external grip sensor records interaction forces, which are fed back to the operator-therapist, allowing a direct real-time assessment of the rehabilitative task. Experimental data collected with an operator and six volunteers are provided to show the feasibility of the proposed system and its performance. The results demonstrate that, leveraging our system, the operator was able to directly control the volunteers' hand movements.
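
    As a minimal sketch of one step the master unit might perform, the snippet below turns three estimated 3D joint positions (e.g., MCP, PIP and fingertip from an RGB-D pose estimator) into a flexion angle that could be sent to an exoskeleton. The joint names and the use of a single angle per finger are assumptions, not the paper's actual command format.

        import numpy as np

        def flexion_angle(p_mcp, p_pip, p_tip) -> float:
            """Angle (radians) between the proximal and distal bone vectors at the PIP joint."""
            u = np.asarray(p_mcp, float) - np.asarray(p_pip, float)
            v = np.asarray(p_tip, float) - np.asarray(p_pip, float)
            cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            return float(np.arccos(np.clip(cos_t, -1.0, 1.0)))

        # A fully extended finger gives anti-parallel bone vectors, i.e. ~180 degrees (no flexion).
        print(np.degrees(flexion_angle([0, 0, 0], [0, 4, 0], [0, 8, 0])))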

    Design and development of methodologies, technologies, and tools to support people with disabilities

    No full text
    Assistive Technologies (ATs) is an umbrella term that covers, on the one hand, assistive, adaptive, and rehabilitative devices for people with disabilities and, on the other hand, the process needed to select, locate, and use them. ATs promote greater independence by enabling people to perform tasks that they were formerly unable to accomplish (or had great difficulty accomplishing), by providing enhancements to, or changing methods of interacting with, the technology needed to accomplish such tasks. Research on ATs focuses both on the individuals, the users, and on the design and subsequent development of any kind of technology that can ease, or even improve, the everyday life of people with disabilities, elderly people, and people following rehabilitation programs. This dissertation spans ATs that, starting from a common root in Information Technology, have been applied and deployed for several groups of individuals with disabilities. Starting from the issue of detecting hand poses, gestures, and signs to enable novel paradigms for human-machine interaction, three approaches for hand tracking and gesture recognition from a single markerless observation have been developed. The first approach combines machine learning techniques with optimized features to boost performance. The second relies on a 3D model of the human hand and optimization techniques. The third applies machine learning and statistical techniques on top of technology specifically designed for tracking human hands. Building on these results, hand gesture recognition has then been proposed to enable new interaction paradigms, suitable for individuals with disabilities, in the field of Human-Robot collaboration. A reliable real-time protocol to remotely control anthropomorphic robotic actuators has been implemented. This protocol allows the user to send commands to one (or many) robotic actuators by simply moving his/her hand; it has been designed, modeled, and formally validated following a knowledge-driven agile approach. This dissertation proposes two use cases enabled by the outcomes of the research activities. The former is a remote communication system for deafblind individuals based on Sign Languages (SLs) with tactile feedback. With the support of SL experts, I have identified a list of fundamental hand movements and gestures to be recognized accurately. The developed algorithms were successfully tested with 80+ volunteers (both proficient and not proficient in SLs). This communication system is ready to be used concurrently by many people, allowing 1-to-many communication. In addition, it supports different input (cameras and sensors for non-invasive markerless hand tracking) and output (upper-limb anthropomorphic robotic interfaces) systems. The latter is a telerehabilitation setup for upper-limb post-stroke rehabilitation, comprising a vision-based input and a hand exoskeleton. Knowledge derived from the research activities has also been applied to two projects, whose outcomes are discussed in this dissertation. The former lies in the realm of character recognition and aims at improving the accessibility of mathematical and scientific documents for blind and deafblind individuals. The latter aims at developing inclusive interfaces for a web platform under development for preserving and disseminating the cultural heritage of deaf and deafblind communities. All the research activities presented in this dissertation have involved close and direct contact with end-user associations and with the persons who benefit from the results of the research itself, and have been widely discussed and tested with them.
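
    To make the first approach mentioned above (machine learning on optimized hand features) concrete, here is a hedged sketch of a feature-plus-classifier pipeline. The 21-landmark layout, the fingertip-distance features, and the SVM classifier are illustrative assumptions, not the dissertation's exact method.

        import numpy as np
        from sklearn.svm import SVC

        FINGERTIPS = [4, 8, 12, 16, 20]   # assumed indices of thumb..little fingertips
        WRIST, MIDDLE_MCP = 0, 9          # assumed indices used to normalize for hand size

        def hand_features(landmarks: np.ndarray) -> np.ndarray:
            """Scale-invariant features: pairwise fingertip distances divided by palm length (21x3 input)."""
            palm = np.linalg.norm(landmarks[MIDDLE_MCP] - landmarks[WRIST])
            tips = landmarks[FINGERTIPS]
            dists = [np.linalg.norm(tips[i] - tips[j])
                     for i in range(len(tips)) for j in range(i + 1, len(tips))]
            return np.array(dists) / palm

        # Train on labelled examples, then predict a handshape for a new observation.
        rng = np.random.default_rng(0)
        X = np.stack([hand_features(rng.random((21, 3))) for _ in range(40)])
        y = rng.integers(0, 3, size=40)            # placeholder labels standing in for real signs
        clf = SVC(kernel="rbf").fit(X, y)
        print(clf.predict(X[:1]))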

    Model driven design of secure properties for vision-based applications: A case study

    No full text
    In this paper we discuss an approach to overcome difficulties and gaps that are typically encountered when dealing with security-oriented model-driven approaches. In particular, we argue that state-of-the-art MDS approaches are not suitable for modern companies and industry in general, and address security only at a late stage of development, often causing large delays and re-engineering costs due to extensive rework. Instead, we propose to adopt in the SEcube platform an OTA-based XMDD approach to integrate security ab initio. In addition, since our approach is based on a set of reusable SIBs organized within dedicated palettes in DIME, we decouple the issue of guaranteeing that the SIBs are correct and secure from the issue of analyzing the applications, which can be greatly simplified by knowing the characterization of each SIB in advance. We apply our approach to the concrete realm of computer vision for steering robotics, present the safety and security properties elicited for the specific case study, and discuss the ways they can be enforced.
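
    The paper's SIBs live in DIME palettes; purely as a loose Python analogy (an assumption, not the SEcube/DIME tooling), the sketch below shows the decoupling idea: each reusable block carries its own declared, pre-verified security property, so an application composed from such blocks does not need to re-analyze them.

        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class SIB:
            """A reusable block bundling its implementation with a declared security property."""
            name: str
            run: Callable[[bytes], bytes]
            property_checked: str          # e.g. "input size bounded", "payload encrypted"

        def sanitize(frame: bytes) -> bytes:
            # Illustrative stand-in for a vision pre-processing step with a vetted property.
            return frame[:1024]            # bound the accepted payload size

        acquire_and_sanitize = SIB("AcquireFrame", sanitize, "input size bounded")

        # An application pipeline only composes pre-verified blocks:
        pipeline = [acquire_and_sanitize]
        frame = bytes(2048)
        for block in pipeline:
            frame = block.run(frame)
        print(len(frame), [b.property_checked for b in pipeline])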

    Evaluation of image deblurring algorithms for real-time applications

    No full text
    Camera shake is a well-known source of degradation in digital images, as it introduces motion blur. Taking satisfactory photos under dim lighting conditions or with a hand-held camera is challenging. The same problems arise when the camera is mounted on mechanical equipment that transfers vibrations to the camera itself. For decades, many different theories and algorithms have been proposed with the aim of retrieving latent images from blurry inputs; most of them work quite well, but very often incur large execution times. There are cases in which images have to be analyzed to extract features; in these cases, it may be useful to treat deblurring as a pre-processing stage that should not affect the throughput of the whole image-processing architecture. In this paper, an extensive survey of the deblurring algorithms developed over the last 40 years is provided. The aim of this paper is to highlight software approaches that are able to quickly process input images and obtain good-quality outcomes, analyzing the possibility of a hardware implementation to meet real-time requirements.
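
    As a minimal sketch of one fast, classical deblurring step of the kind covered by such surveys, the snippet below applies frequency-domain Wiener deconvolution with a known blur kernel. The kernel, the noise level K, and the plain NumPy FFT implementation are assumptions for illustration, not the paper's evaluated algorithms.

        import numpy as np

        def wiener_deconvolve(blurred: np.ndarray, kernel: np.ndarray, K: float = 0.01) -> np.ndarray:
            """Restore an image given the blur kernel, assuming roughly white noise of relative power K."""
            H = np.fft.fft2(kernel, s=blurred.shape)          # kernel spectrum, zero-padded to image size
            G = np.fft.fft2(blurred)
            W = np.conj(H) / (np.abs(H) ** 2 + K)             # Wiener filter
            return np.real(np.fft.ifft2(W * G))

        # Usage: blur a synthetic image with a 1x5 horizontal box kernel, then restore it.
        img = np.zeros((64, 64)); img[28:36, 28:36] = 1.0
        kernel = np.ones((1, 5)) / 5.0
        blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel, s=img.shape)))
        restored = wiener_deconvolve(blurred, kernel)
        print(float(np.abs(restored - img).mean()) < float(np.abs(blurred - img).mean()))  # expect True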

    Pressure-Induced Deformation of Pillar-Type Profiled Membranes and Its Effects on Flow and Mass Transfer

    No full text
    In electro-membrane processes, a pressure difference may arise between solutions flowing in alternate channels. This transmembrane pressure (TMP) causes a deformation of the membranes and of the fluid compartments. This, in turn, affects pressure losses and mass transfer rates with respect to undeformed conditions and may result in uneven flow rate and mass flux distributions. These phenomena were analyzed here for round pillar-type profiled membranes by integrated mechanical and fluid dynamics simulations. The analysis involved three steps: (1) a conservatively large value of TMP was imposed, and mechanical simulations were performed to identify the geometry with the minimum pillar density still able to withstand this TMP without collapsing (i.e., without exhibiting contact between opposite membranes); (2) the geometry thus identified was subjected to expansion and compression conditions in a TMP interval including the values expected in practical applications, and for each TMP the corresponding deformed configuration was predicted; and (3) for each computed deformed configuration, flow and mass transfer were predicted by computational fluid dynamics. Membrane deformation was found to have important effects: friction coefficients generally increased in compressed channels and decreased in expanded channels, while a more complex behavior was obtained for mass transfer coefficients.
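
    As a small sketch of step (3), the snippet below turns typical CFD outputs into the friction and mass transfer coefficients compared across deformed and undeformed channels. The hydraulic-diameter-based definitions are common conventions, and the numerical values are made up; neither is taken from the paper.

        def friction_factor(dp: float, length: float, d_h: float, rho: float, u: float) -> float:
            """Darcy friction factor f = 2*dp*d_h / (rho*u^2*L) from the simulated pressure drop."""
            return 2.0 * dp * d_h / (rho * u ** 2 * length)

        def sherwood(k_mt: float, d_h: float, diff: float) -> float:
            """Sherwood number Sh = k*d_h/D from the simulated mass transfer coefficient k."""
            return k_mt * d_h / diff

        # Example with made-up channel values:
        print(friction_factor(dp=50.0, length=0.1, d_h=6e-4, rho=1000.0, u=0.02),
              sherwood(k_mt=2e-5, d_h=6e-4, diff=1.5e-9))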

    Membrane Deformation and Its Effects on Flow and Mass Transfer in the Electromembrane Processes

    Get PDF
    In membrane processes, a trans-membrane pressure (TMP) may arise due to design features or operating conditions. In most applications, stacks for electrodialysis (ED) or reverse electrodialysis (RED) operate at low TMP (<0.1 bar); however, large stacks with non-parallel flow patterns and/or asymmetric configurations can exhibit higher TMP values, causing membrane deformations and changes in fluid dynamics and transport phenomena. In this work, integrated mechanical and fluid dynamics simulations were performed to investigate the TMP effects on deformation, flow and mass transfer for a profiled membrane-fluid channel system with geometrical and mechanical features and fluid velocities representative of ED/RED conditions. First, a conservatively high value of TMP was assumed, and mechanical simulations were conducted to identify the geometry with the largest pitch-to-height ratio still able to bear this load without exhibiting contact between opposite membranes. The selected geometry was then investigated under expansion and compression conditions in a TMP range encompassing most practical applications. Finally, friction and mass transfer coefficients in the deformed channel were predicted by computational fluid dynamics. Significant effects of membrane deformation were observed: friction and mass transfer coefficients increased in the compressed channel, while they decreased (though to a lesser extent) in the expanded channel.
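
    The geometry screening above relies on detailed mechanical FEM simulations. Purely as a back-of-the-envelope illustration of the screening idea (explicitly not the paper's model), the sketch below ranks candidate pitches with the clamped circular plate formula w_max = q*a^4 / (64*D), D = E*t^3 / (12*(1 - nu^2)), keeping the largest pitch whose estimated deflection stays below the channel height so that opposite membranes do not touch. All numerical values are assumptions.

        def max_deflection(tmp_pa: float, half_span_m: float, thickness_m: float,
                           young_pa: float, poisson: float) -> float:
            """Centre deflection of a clamped circular plate of radius half_span_m under uniform pressure."""
            D = young_pa * thickness_m ** 3 / (12.0 * (1.0 - poisson ** 2))
            return tmp_pa * half_span_m ** 4 / (64.0 * D)

        channel_height = 5e-4                      # 0.5 mm channel (assumed)
        tmp = 2e4                                  # conservatively high TMP of 0.2 bar (assumed)
        candidates_mm = [1.0, 2.0, 3.0, 4.0]       # candidate pitches between pillars (assumed)
        feasible = [p for p in candidates_mm
                    if max_deflection(tmp, p * 1e-3 / 2, 1.2e-4, 4e8, 0.45) < channel_height]
        print(max(feasible) if feasible else "none feasible")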

    Neuromorphic haptic glove and platform with gestural control for tactile sensory feedback in medical telepresence applications

    No full text
    This paper presents a tactile telepresence system employed for the localization of stiff inclusions embedded in a soft matrix. The system delivers neuromorphic spike-based haptic feedback, encoding object stiffness, to the human fingertip. To evaluate the developed system, a customized silicone phantom was fabricated in this study by inserting 12 inclusions made of 4 different polymers (3 replicas for each material). These inclusions, all having the same shape, were encapsulated in a softer silicone matrix in randomized positions. The experimental setup comprised two main blocks. The first sub-setup included an optical sensor for tracking human hand movements and a piezoelectric disk, inserted into a glove at the level of the index fingertip, to deliver tactile feedback. The second sub-setup was a 3-axis Cartesian motorized sensing platform that explored the silicone phantom through a spherical indenter mechanically linked to a load cell. The movements of the platform were driven by the acquired hand gestures of the user. The normal force exerted during the active sliding was converted into temporal patterns of spikes through a neuronal model and delivered to the fingertip via the vibrotactile glove. Inclusions were detected through modulations of the aforementioned spike patterns generated during the experimental trials. Results suggest that the presented system allows the recognition of the stiffness variation between the encapsulated inclusions and the surrounding matrix. As expected, stiffer inclusions were more frequently discriminated than softer ones, with about 70% of stiffer inclusions being identified in the proposed task. Future work will address the investigation of a larger set of materials in order to evaluate a finer distribution of stiffness values.
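
    The force-to-spike step can be sketched as follows: a hedged, leaky integrate-and-fire stand-in for the neuronal model that converts the indenter's normal force into temporal spike patterns for the vibrotactile glove. The neuron model choice and all parameter values below are assumptions for illustration, not the authors' actual encoder.

        import numpy as np

        def force_to_spikes(force_n: np.ndarray, dt: float = 1e-3,
                            tau: float = 0.02, gain: float = 120.0, v_th: float = 1.0) -> np.ndarray:
            """Leaky integrate-and-fire encoding: higher force -> stronger input current -> denser spikes."""
            v, spikes = 0.0, np.zeros_like(force_n)
            for i, f in enumerate(force_n):
                v += dt * (-v / tau + gain * f)     # leaky integration of the force-driven current
                if v >= v_th:                        # threshold crossing emits a spike and resets
                    spikes[i], v = 1.0, 0.0
            return spikes

        # A stiffer inclusion under the indenter yields a larger force and a denser spike train:
        t = np.arange(0.0, 1.0, 1e-3)
        soft, stiff = 0.5 * np.ones_like(t), 2.0 * np.ones_like(t)
        print(int(force_to_spikes(soft).sum()), "<", int(force_to_spikes(stiff).sum()))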

    [18F]FDG PET/CT: Lung Nodule Evaluation in Patients Affected by Renal Cell Carcinoma

    No full text
    Renal Cell Carcinoma (RCC) is generally characterized by low FDG avidity, and [18F]FDG-PET/CT is not recommended to stage the primary tumor. However, its role in assessing metastases is still unclear. The aim of this study was to evaluate the diagnostic accuracy of [18F]FDG-PET/CT in correctly identifying RCC lung metastases, using histology as the standard of truth. The records of 350 patients affected by RCC were retrospectively analyzed. The inclusion criteria were: (a) biopsy- or histologically proven RCC; (b) Computed Tomography (CT) evidence of at least one lung nodule; (c) [18F]FDG-PET/CT performed prior to lung surgery; (d) lung surgery with histological analysis of surgical specimens; (e) complete follow-up available. A per-lesion analysis was performed, and diagnostic accuracy was reported as sensitivity and specificity, using histology as the standard of truth. [18F]FDG-PET/CT semiquantitative parameters (Standardized Uptake Value [SUVmax], Metabolic Tumor Volume [MTV] and Total Lesion Glycolysis [TLG]) were collected for each lesion. Sixty-seven patients with a total of 107 lesions were included: lung metastases from RCC were detected in 57 cases (53.3%), while 50 lesions (46.7%) were related to other lung malignancies. Applying a cut-off of SUVmax ≥ 2, the sensitivity and specificity of [18F]FDG-PET/CT in detecting RCC lung metastases were 33.3% (95% CI: 21.4–47.1%) and 26% (95% CI: 14.6–40.3%), respectively. Although the analysis demonstrated a suboptimal diagnostic accuracy of [18F]FDG-PET/CT in discriminating between lung metastases from RCC and other malignancies, a semiquantitative analysis that also includes volumetric parameters (MTV and TLG) could support the correct interpretation of [18F]FDG-PET/CT images.
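
    The per-lesion analysis above amounts to dichotomizing each lesion by a SUVmax cut-off and comparing against histology. The small sketch below shows that computation; the six example lesions and their SUVmax values are invented for illustration only.

        def sens_spec(suv_max: list[float], is_rcc_met: list[bool], cutoff: float = 2.0):
            """Sensitivity and specificity of the rule 'RCC metastasis if SUVmax >= cutoff'."""
            tp = sum(s >= cutoff and m for s, m in zip(suv_max, is_rcc_met))
            fn = sum(s < cutoff and m for s, m in zip(suv_max, is_rcc_met))
            tn = sum(s < cutoff and not m for s, m in zip(suv_max, is_rcc_met))
            fp = sum(s >= cutoff and not m for s, m in zip(suv_max, is_rcc_met))
            return tp / (tp + fn), tn / (tn + fp)

        suv = [1.2, 2.5, 3.1, 1.8, 2.2, 0.9]             # per-lesion SUVmax (made up)
        truth = [True, True, False, True, False, False]  # histology: RCC lung metastasis?
        sensitivity, specificity = sens_spec(suv, truth)
        print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")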