
    A flexible sensor technology for the distributed measurement of interaction pressure

    We present a sensor technology for the measurement of physical human-robot interaction pressure, developed over recent years at Scuola Superiore Sant'Anna. The system is composed of flexible matrices of opto-electronic sensors covered by a soft silicone layer. This sensory system is completely modular and scalable, allowing one to cover areas of any size and shape and to measure different pressure ranges. In this work we present the main application areas for this technology. A first generation of the system was used to monitor human-robot interaction in upper-limb (NEUROExos; Scuola Superiore Sant'Anna) and lower-limb (LOPES; University of Twente) exoskeletons for rehabilitation. A second generation, with increased resolution and a wireless connection, was used to develop a pressure-sensitive foot insole and an improved human-robot interaction measurement system. The experimental characterization of the latter system, along with its validation on three healthy subjects, is presented here for the first time. A perspective on future uses and developments of the technology is finally drafted.
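The processing chain implied by such a pressure-sensing matrix can be sketched in a few lines. This is a hypothetical illustration, not the published design: the per-cell linear calibration model, the gain and offset values, and the cell area are all assumptions.

```python
# Hypothetical sketch: converting raw readings from a flexible opto-electronic
# sensor matrix into pressure values via an assumed per-cell linear calibration,
# then summing cells to estimate total interaction force.

def raw_to_pressure(raw_matrix, gain, offset):
    """Apply an assumed per-cell linear calibration p = gain * raw + offset (kPa)."""
    return [
        [gain[r][c] * raw_matrix[r][c] + offset[r][c]
         for c in range(len(raw_matrix[0]))]
        for r in range(len(raw_matrix))
    ]

def total_force(pressure_kpa, cell_area_m2):
    """Estimate total interaction force (N) by summing pressure over all cells.
    1 kPa acting on 1 m^2 equals 1000 N."""
    return sum(p * 1000.0 * cell_area_m2 for row in pressure_kpa for p in row)

# Toy 2x2 matrix with identical calibration for every cell (illustrative values)
raw = [[10, 20], [30, 40]]
gain = [[0.5, 0.5], [0.5, 0.5]]
offset = [[0.0, 0.0], [0.0, 0.0]]
p = raw_to_pressure(raw, gain, offset)   # [[5.0, 10.0], [15.0, 20.0]] kPa
f = total_force(p, 1e-4)                 # 1 cm^2 cells -> about 5.0 N
```

The modularity described in the abstract would correspond here to simply changing the matrix dimensions and the per-cell gains for a different pressure range.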

    Feature Analysis for Classification of Physical Actions using surface EMG Data

    Based on recent health statistics, several thousand people live with limb disability and gait disorders that require medical assistance. Robot-assisted rehabilitation therapy can help them recover and return to a normal life. In this scenario, a successful methodology is to use EMG-signal-based information to control the supporting robotics. For this mechanism to function properly, the EMG signal from the muscles has to be sensed, the underlying motor intention has to be decoded, and the resulting information has to be communicated to the controller of the robot. Accurate detection of the motor intention requires pattern-recognition-based categorical identification. Hence, in this paper, we propose an improved classification framework built on the identification of the relevant features that drive the pattern recognition algorithm. Major contributions include a set of modified spectral-moment-based features and a relevant inter-channel correlation feature, both of which contribute to improved classification performance. Next, we conducted a sensitivity analysis of the classification algorithm with respect to different EMG channels. Finally, the classifier performance is compared to that of other state-of-the-art algorithms.
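The two feature families named in the abstract can be made concrete with simplified textbook definitions. Note these are stand-ins: the paper's *modified* spectral-moment features and its specific correlation feature are not given here, so the plain spectral moment and Pearson correlation below are assumptions.

```python
# Illustrative sketch (not the paper's exact features): plain spectral moments
# of an EMG power spectrum, and the Pearson correlation between two channels.
import math

def spectral_moment(freqs, power, order):
    """n-th spectral moment: sum_k f_k^n * P_k over the spectrum bins."""
    return sum((f ** order) * p for f, p in zip(freqs, power))

def mean_frequency(freqs, power):
    """Mean frequency = M1 / M0, a classic EMG spectral descriptor."""
    return spectral_moment(freqs, power, 1) / spectral_moment(freqs, power, 0)

def pearson_corr(x, y):
    """Inter-channel correlation coefficient between two EMG windows."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy 3-bin spectrum and two perfectly correlated channels
freqs = [50.0, 100.0, 150.0]
power = [0.2, 0.5, 0.3]
mnf = mean_frequency(freqs, power)             # (10 + 50 + 45) / 1.0 = 105 Hz
r = pearson_corr([1, 2, 3, 4], [2, 4, 6, 8])   # linearly related -> 1.0
```

Feature vectors built from such per-channel descriptors, plus cross-channel correlations, are then what the classifier in the paper would consume.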

    Intelligent upper-limb exoskeleton using deep learning to predict human intention for sensory-feedback augmentation

    The age- and stroke-associated decline in musculoskeletal strength degrades the ability to perform daily tasks using the upper extremities. Although a few exoskeletons exist, they require manual operation because they lack sensor feedback and do not predict movement intention. Here, we introduce an intelligent upper-limb exoskeleton system that uses cloud-based deep learning to predict human intention for strength augmentation. The embedded soft wearable sensors provide sensory feedback by collecting real-time muscle signals, which are simultaneously computed to determine the user's intended movement. The cloud-based deep learning predicts four upper-limb joint motions with an average accuracy of 96.2% at a 200-250 millisecond response rate, suggesting that the exoskeleton operates by human intention alone. In addition, an array of soft pneumatics assists the intended movements by providing up to 897 newtons of force and 78.7 millimeters of displacement. Collectively, the intent-driven exoskeleton can augment human strength by 5.15 times on average compared to the unassisted exoskeleton. This report demonstrates an exoskeleton robot that augments upper-limb joint movements according to human intention, based on machine-learning cloud computing and sensory feedback.

    Biosignal-based human–machine interfaces for assistance and rehabilitation: a survey

    By definition, a Human–Machine Interface (HMI) enables a person to interact with a device. Starting from elementary equipment, the recent development of novel techniques and unobtrusive devices for biosignal monitoring paved the way for a new class of HMIs, which take such biosignals as inputs to control various applications. The current survey reviews the large literature of the last two decades on biosignal-based HMIs for assistance and rehabilitation, in order to outline the state of the art and identify emerging technologies and potential future research trends. PubMed and other databases were surveyed using specific keywords. The retrieved studies were screened at three levels (title, abstract, full text), and eventually 144 journal papers and 37 conference papers were included. Four macro-categories were considered to classify the different biosignals used for HMI control: biopotential, muscle mechanical motion, body motion, and their combinations (hybrid systems). The HMIs were also classified according to their target application by considering six categories: prosthetic control, robotic control, virtual reality control, gesture recognition, communication, and smart environment control. An ever-growing number of publications has been observed over recent years. Most of the studies (about 67%) pertain to the assistive field, while 20% relate to rehabilitation and 13% to both assistance and rehabilitation. A moderate increase can be observed in studies focusing on robotic control, prosthetic control, and gesture recognition over the last decade, whereas studies on the other targets experienced only a small increase. Biopotentials are no longer the leading control signals, and the use of muscle mechanical motion signals has risen considerably, especially in prosthetic control. Hybrid technologies are promising, as they could lead to higher performance. However, they also increase HMIs' complexity, so their usefulness should be carefully evaluated for the specific application.

    Single Lead EMG signal to Control an Upper Limb Exoskeleton Using Embedded Machine Learning on Raspberry Pi

    A stroke can cause partial or complete paralysis of the human limbs, and delayed rehabilitation in post-stroke patients can cause muscle atrophy and limb stiffness, so post-stroke patients require an upper-limb exoskeleton device for the rehabilitation process. Several previous studies used more than one electrode lead to control the exoskeleton, but using many electrode leads increases complexity in terms of both hardware and software. Therefore, this research aims to develop single-lead EMG pattern recognition to control an upper-limb exoskeleton. The main contribution of this research is that the robotic upper-limb exoskeleton device can be controlled using a single EMG lead. EMG signals were tapped at the biceps with a sampling frequency of 2000 Hz. A Raspberry Pi 3B+ was used to embed data acquisition, feature extraction, classification, and motor control using a multithreaded algorithm. The exoskeleton arm frame was made with 3D printing technology and driven by a high-torque servo motor. The control process extracts EMG features (mean absolute value, root mean square, variance), and the extracted features are used to train machine-learning models: decision tree (DT), linear regression (LR), polynomial regression (PR), and random forest (RF). The results show that the decision tree and random forest produce the highest accuracy among the classifiers, at 96.36±0.54% and 95.67±0.76%, respectively. Combining the EMG features shows no significant difference in accuracy (p-value > 0.05). A single-lead EMG electrode can thus control the upper-limb exoskeleton robot well.
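The three time-domain features named in the abstract have standard definitions, which a minimal sketch can make concrete. The toy window length and the feature ordering are illustrative assumptions, not the paper's implementation.

```python
# Standard time-domain EMG features (as named in the abstract), computed over
# one analysis window; the window values here are an illustrative toy signal.
import math

def mav(window):
    """Mean absolute value: mean of |x|."""
    return sum(abs(x) for x in window) / len(window)

def rms(window):
    """Root mean square: sqrt(mean of x^2)."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def variance(window):
    """Sample variance about the mean (n - 1 denominator)."""
    n = len(window)
    mean = sum(window) / n
    return sum((x - mean) ** 2 for x in window) / (n - 1)

def feature_vector(window):
    """Concatenate the features for one window -> classifier input row."""
    return [mav(window), rms(window), variance(window)]

emg = [0.1, -0.2, 0.3, -0.4]   # toy 4-sample window
fv = feature_vector(emg)       # mav = 0.25, rms = sqrt(0.075) ≈ 0.274
```

At a 2000 Hz sampling rate, such features would typically be recomputed on short sliding windows so the classifier can keep up with the motor-control loop.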

    Analysis of the human interaction with a wearable lower-limb exoskeleton

    The design of a wearable robotic exoskeleton needs to consider the interaction, both physical and cognitive, between the human user and the robotic device. This paper presents a method to analyse the interaction between the human user and a unilateral, wearable lower-limb exoskeleton whose function was to compensate for muscle weakness around the knee joint. It is shown that the cognitive interaction is bidirectional: on the one hand, the robot gathered information from its sensors to detect human actions, such as the gait phases; on the other, the subjects modified their gait patterns to obtain the desired responses from the exoskeleton. The results of a two-phase evaluation of learning with healthy subjects, and of experiments with a patient case, are presented, with the interaction assessed in terms of kinematics, kinetics and/or muscle recruitment. The human-driven response of the exoskeleton after training revealed improvements in the use of the device, while particular modifications of motion patterns were observed in healthy subjects. In addition, mechanical endurance tests provided the criteria to perform experiments with one post-polio patient. The results with this patient demonstrate the feasibility of providing gait compensation by means of the presented wearable exoskeleton, designed with a testing procedure that involves the human users to assess the human-robot interaction.

    Analysis of derived features for the motion classification of a passive lower limb exoskeleton

    The recognition of human motion intentions is a fundamental requirement for efficiently controlling an exoskeleton system: if the current intended motion is known, the exoskeleton control can be enhanced or subsequent motions can be predicted. At H2T, research has been carried out on a classification system based on Hidden Markov Models (HMMs) to classify the multi-modal sensor data acquired from a unilateral passive lower-limb exoskeleton. The training data consists of force vectors, linear accelerations and Euler angles provided by seven 3D force sensors and three IMUs. The recordings comprise data from 10 subjects performing 14 different types of daily activities, each carried out 10 times. This master's thesis attempts to improve the motion classification by using physically meaningful features derived from the aforementioned raw data. The derived features considered were the knee moment vector and the knee and ankle joint angles, which respectively give a dynamic and a kinematic description of a motion. First, these new features are analysed to study their patterns, and the resemblance of the data among different subjects is quantified in order to check their consistency. Afterwards, the derived features are evaluated in the motion classification system to check their performance. Various configurations of the classifier were tested, including different data preprocessors and different HMM structures used to represent each motion. Some setups combining derived features and raw data led to good results (e.g. the norm of the moment vector plus the IMUs achieved 89.39% accuracy), but did not improve on the best results of previous works (e.g. 2 IMUs and 1 force sensor achieved 90.73% accuracy).
    Although the classification results were not improved, it is shown that these derived features are a good representation of their primary features and a suitable option if a dimensionality reduction of the data is pursued. Finally, possible directions of improvement are suggested for the motion classification, based on the results obtained throughout the thesis.
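A joint moment of the kind used as a derived feature here is conventionally computed as a cross product of a lever arm and a measured force. The following is a minimal sketch under assumed simplifications: a single force sensor, an illustrative coordinate frame, and made-up lever-arm and force values.

```python
# Hypothetical sketch of a knee-moment derived feature: M = r x F from one
# 3D force-sensor reading F and its lever arm r relative to the knee joint,
# plus the vector norm used as a scalar feature. Values are illustrative.
import math

def cross(r, f):
    """3D cross product r x F."""
    return [r[1] * f[2] - r[2] * f[1],
            r[2] * f[0] - r[0] * f[2],
            r[0] * f[1] - r[1] * f[0]]

def moment_norm(r, f):
    """Euclidean norm of the moment vector (N·m), usable as a 1D feature."""
    m = cross(r, f)
    return math.sqrt(sum(c * c for c in m))

r = [0.0, -0.3, 0.0]   # assumed lever arm: sensor 0.3 m below the knee
f = [20.0, 0.0, 0.0]   # assumed 20 N anterior force at the sensor
m = cross(r, f)        # [0.0, 0.0, 6.0] N·m about the vertical axis
```

Replacing a full force vector with the scalar norm in this way is exactly the kind of dimensionality reduction the thesis evaluates: one value per sensor instead of three, at the cost of directional information.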