44 research outputs found
Upper limb soft robotic wearable devices: a systematic review
Introduction: Soft robotic wearable devices, referred to as exosuits, can be a valid alternative to rigid exoskeletons for daily upper limb support. Indeed, their inherent flexibility improves comfort, usability, and portability while not constraining the user’s natural degrees of freedom. This review is meant to guide the reader through the current approaches across all design and production steps that might be exploited when developing an upper limb robotic exosuit. Methods: The literature search for such devices was conducted in PubMed, Scopus, and Web of Science. The investigated features are the intended scenario, type of actuation, supported degrees of freedom, low-level control, high-level control with a focus on intention detection, technology readiness level, and type of experiments conducted to evaluate the device. Results: A total of 105 articles were collected, describing 69 different devices. Devices were grouped according to their actuation type. More than 80% of devices are meant for rehabilitation, assistance, or both. The most exploited actuation types are pneumatic (52%) and DC motors with cable transmission (29%). Most devices actuate 1 (56%) or 2 (28%) degrees of freedom, and the most targeted joints are the elbow and the shoulder. Intention detection strategies are implemented in 33% of the suits and include the use of switches and buttons, IMUs, stretch and bending sensors, and EMG and EEG measurements. Most devices (75%) score a technology readiness level of 4 or 5. Conclusion: Although few devices can be considered ready to reach the market, exosuits show very high potential for the assistance of daily activities. Clinical trials exploiting shared evaluation metrics are needed to assess the effectiveness of upper limb exosuits on target users.
Investigation of the stresses exerted by an exosuit of a human arm
While a wheelchair may be a better solution than an exoskeleton for the mobility of people suffering from neuromuscular diseases, a soft wearable exoskeleton (or exosuit) remains relevant for assisting the upper limbs in daily tasks such as having a drink or picking up a pencil. It is imperative to limit the stresses the exosuit generates on the human body. Numerical tests are proposed to investigate the possible technology choices for designing the exoskeleton so as to limit these stresses. These tests are based on the inverse dynamic model of the human arm and its exosuit: a hand trajectory is defined, and the cable tension required to track this trajectory is deduced. Two decoupled planes are considered for the numerical tests, the sagittal plane, where flexion of the forearm with respect to the upper arm occurs, and the frontal plane, where abduction and adduction movements are possible. We assume that the human arm cannot provide any effort. The results show that the position of the anchor points at the shoulder and the orientation of the cable for the abduction movement influence the resulting stresses. However, these stresses remain significant at the shoulder.
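The inverse-dynamics step described above can be sketched for the sagittal-plane case (elbow flexion): the joint torque along the trajectory is computed and converted into cable tension through the cable's moment arm about the elbow. This is a minimal single-link sketch; all parameter names and values are illustrative assumptions, not taken from the paper.

```python
import math

# Hypothetical parameters for a forearm modelled as a rigid link
# (illustrative assumptions, not values from the paper).
M_FOREARM = 1.5      # forearm + hand mass [kg]
L_COM = 0.18         # distance from elbow to centre of mass [m]
I_ELBOW = 0.06       # moment of inertia about the elbow [kg*m^2]
R_CABLE = 0.04       # cable moment arm about the elbow [m]
G = 9.81             # gravity [m/s^2]

def cable_tension(theta, theta_ddot):
    """Tension [N] needed to track an elbow-flexion trajectory when
    the human arm provides no effort (single-link inverse dynamics).

    theta: flexion angle from the vertical hanging position [rad]
    theta_ddot: angular acceleration along the trajectory [rad/s^2]
    """
    # Joint torque = inertial term + gravity term
    tau = I_ELBOW * theta_ddot + M_FOREARM * G * L_COM * math.sin(theta)
    # The cable produces that torque through its moment arm
    return tau / R_CABLE

# Static hold at 90 degrees of flexion: pure gravity compensation
print(round(cable_tension(math.pi / 2, 0.0), 1))  # → 66.2
```

Tracking an accelerating trajectory simply adds the inertial term; the frontal-plane (abduction/adduction) case follows the same pattern with the shoulder's own parameters.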
Physical Diagnosis and Rehabilitation Technologies
The book focuses on the diagnosis, evaluation, and assistance of gait disorders; all the papers have been contributed by research groups working on assistive robotics, instrumentation, and augmentative devices.
Biomechatronics: Harmonizing Mechatronic Systems with Human Beings
This eBook provides a comprehensive treatise on modern biomechatronic systems
centred around human applications. A particular emphasis is given to exoskeleton
designs for assistance and training with advanced interfaces in human-machine
interaction. Some of these designs are validated with experimental results which
the reader will find very informative as building-blocks for designing such systems.
This eBook will be ideally suited to those researching in the biomechatronics area with
bio-feedback applications or those involved in high-end research on man-machine
interfaces. It may also serve as a textbook for biomechatronic design at
post-graduate level.
A Study on Active/Passive Pneumatic Actuators for Assistive Systems
The need for intelligent assistive devices is growing. Due to advances in medicine, people are living longer and able to recover from severe neurological incidents, resulting in an increased population with neuromuscular weakness. In workplaces such as assembly lines, there is a high possibility of work-related fatigue or injury, such as when workers squat down or lift their arms during their work tasks. Assistive devices could help remedy loss of strength in the extremities as well as keep the work environment safe and productive, allowing these growing segments of the population in need of the devices to live more self-sufficient, productive, and higher-quality lives. In the design of assistive systems, an important design goal is prolonged operational time, which requires the minimum usage of energy. Energy consumption can be reduced by modifying the mechanical characteristics of assistive systems according to the dynamic characteristics of the human body, which vary considerably between tasks. This dissertation investigates 1) the design of actuators with adjustable mechanical impedance, 2) control strategies to search for, and adjust to, a suitable mechanical impedance for assistance, and 3) sensing technologies for classifying the tasks in which the human engages. The first part of this dissertation characterizes a pneumatic variable stiffness actuator named an Active/Passive Pneumatic Actuator (AP2A). The actuator consists of an air cylinder and an array of solenoid valves. These valves and the corresponding switching algorithms tune the chamber pressures and make the AP2A function as a mechanical spring with desired stiffness. The actuator has a low mechanical impedance compared to geared motors, which enables it to achieve efficient interaction. Control strategies of an assistive system with the AP2A are discussed in the second part.
This control framework utilizes the characteristics of the AP2A to provide assistance when necessary and to operate transparently (i.e., neither to assist nor to disturb the users) otherwise. Energy consumed by the AP2A and the assisted system is minimized by solving an optimal control problem. Finally, an estimator is introduced to detect assistive timing for the assistive system with the AP2A. This estimator utilizes physiological signals such as the surface electromyogram and prior knowledge of a muscular model, classifying whether the user is in the specified condition to be assisted by the AP2A. Experiments demonstrate that the user's effort can be saved while also reducing the number of procedures needed to collect training data for the estimator before using the assistive system. The performance of the actuator, the controller, and the estimator proposed in this dissertation is verified through experiments. In sum, this dissertation contributes the AP2A, which provides assistance and reduces the energy usage of assistive systems by acting as a mechanical spring with stiffness optimized for effective interaction under specific conditions. This actuator supports assistive devices that can be deployed in the real world, properly assisting users when needed.
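The gas-spring behaviour that lets a sealed pneumatic cylinder act as a mechanical spring can be sketched with a standard linearised polytropic model: with both chambers closed, the small-displacement stiffness scales with the trapped chamber pressures, which is what valve switching can tune. The formula below is a textbook gas-spring approximation, not the dissertation's own model, and all names and values are illustrative assumptions.

```python
# Linearised polytropic model of a double-acting pneumatic cylinder
# with both chambers sealed: the piston sits on two "air springs"
# whose stiffness depends on the trapped pressures.
N_POLY = 1.4          # polytropic index for air (adiabatic assumption)
AREA = 5e-4           # piston area, assumed equal on both sides [m^2]

def gas_spring_stiffness(p1, v1, p2, v2):
    """Small-displacement stiffness [N/m] of the sealed cylinder.

    p1, p2: absolute chamber pressures [Pa]
    v1, v2: chamber volumes at the operating point [m^3]
    """
    # Each sealed chamber contributes n * P * A^2 / V
    return N_POLY * AREA**2 * (p1 / v1 + p2 / v2)

# Doubling the trapped pressure doubles the stiffness,
# which is how a valve array could select a desired spring rate.
soft = gas_spring_stiffness(2e5, 1e-4, 2e5, 1e-4)
stiff = gas_spring_stiffness(4e5, 1e-4, 4e5, 1e-4)
print(soft, stiff)  # → 1400.0 2800.0
```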
Whole-Body Teleoperation of Humanoid Robots
This thesis aims to investigate systems and tools for teleoperating a humanoid robot. Robot teleoperation is crucial for sending and controlling robots in environments that are dangerous or inaccessible for humans (e.g., disaster-response scenarios, contaminated environments, or extraterrestrial sites). The term teleoperation most commonly refers to direct and continuous control of a robot. In this case, the human operator guides the motion of the robot with her/his own physical motion or through some physical input device. One of the main challenges is to control the robot in a way that guarantees its dynamical balance while trying to follow the human references. In addition, the human operator needs feedback about the state of the robot and its work site through remote sensors in order to comprehend the situation or feel physically present at the site, producing effective robot behaviors. Complications arise when the communication network is non-ideal. In this case, the commands from human to robot, together with the feedback from robot to human, can be delayed. These delays can be very disturbing for the human operator, who cannot teleoperate their robot avatar in an effective way. Another crucial point to consider when setting up a teleoperation system is the large number of parameters that have to be tuned to effectively control the teleoperated robots. Machine learning approaches and stochastic optimizers can be used to automate the learning of some of the parameters. In this thesis, we proposed a teleoperation system that has been tested on the humanoid robot iCub. We used an inertial-technology-based motion-capture suit as the input device to control the humanoid and a virtual reality headset connected to the robot cameras to get visual feedback. We first translated the human movements into equivalent robot ones by developing a motion-retargeting approach that achieves human-likeness while trying to ensure the feasibility of the transferred motion. We then implemented a whole-body controller to enable the robot to track the retargeted human motion. The controller was later optimized in simulation to achieve good tracking of the whole-body reference movements, by recurring to a multi-objective stochastic optimizer, which allowed us to find robust solutions working on the real robot in few trials. To teleoperate walking motions, we implemented a higher-level teleoperation mode in which the user can use a joystick to send reference commands to the robot. We integrated this setting into the teleoperation system, which allows the user to switch between the two different modes. A major problem preventing the deployment of such systems in real applications is the presence of communication delays between the human input and the feedback from the robot: even a few hundred milliseconds of delay can irremediably disturb the operator, let alone a few seconds. To overcome these delays, we introduced a system in which a humanoid robot executes commands before it actually receives them, so that the visual feedback appears synchronized to the operator, whereas the robot executed the commands in the past. To do so, the robot continuously predicts future commands by querying a machine learning model that is trained on past trajectories and conditioned on the last received commands.
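The delay-compensation idea in the last paragraph can be sketched as follows: the robot executes predicted commands so the visual feedback looks synchronised even though real commands arrive several control steps late. A constant-velocity extrapolator stands in here for the learned prediction model the thesis describes; this substitution and every name and value below are illustrative assumptions.

```python
from collections import deque

def predict_ahead(history, steps):
    """Extrapolate the next `steps` commands from the last two
    received ones (constant-velocity stand-in for a learned model
    conditioned on the last received commands)."""
    prev, last = history[-2], history[-1]
    velocity = last - prev
    return [last + velocity * (k + 1) for k in range(steps)]

# Last commands that actually reached the robot over the network
received = deque([0.0, 0.25], maxlen=10)
DELAY = 3  # assumed network delay, in control steps

# Commands the robot executes *now*, bridging the communication delay;
# when a real command finally arrives, it is appended to `received`
# and the prediction is re-run from the updated history.
print(predict_ahead(received, DELAY))  # → [0.5, 0.75, 1.0]
```

In the thesis's setting the stand-in extrapolator would be replaced by the machine learning model trained on past trajectories, but the control loop structure (predict ahead by the delay, correct on arrival) is the same.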