
    Biosignal‐based human–machine interfaces for assistance and rehabilitation: a survey

    By definition, a Human–Machine Interface (HMI) enables a person to interact with a device. Starting from elementary equipment, the recent development of novel techniques and unobtrusive devices for biosignal monitoring has paved the way for a new class of HMIs, which take such biosignals as inputs to control various applications. The current survey aims to review the large literature of the last two decades regarding biosignal‐based HMIs for assistance and rehabilitation, to outline the state of the art and to identify emerging technologies and potential future research trends. PubMed and other databases were surveyed using specific keywords. The retrieved studies were screened at three levels (title, abstract, full text), and 144 journal papers and 37 conference papers were ultimately included. Four macrocategories were considered to classify the different biosignals used for HMI control: biopotential, muscle mechanical motion, body motion, and their combinations (hybrid systems). The HMIs were also classified according to their target application by considering six categories: prosthetic control, robotic control, virtual reality control, gesture recognition, communication, and smart environment control. An ever‐growing number of publications has been observed in recent years. Most of the studies (about 67%) pertain to the assistive field, while 20% relate to rehabilitation and 13% to both assistance and rehabilitation. A moderate increase can be observed in studies focusing on robotic control, prosthetic control, and gesture recognition in the last decade, whereas studies on the other targets experienced only a small increase. Biopotentials are no longer the leading control signals, and the use of muscle mechanical motion signals has risen considerably, especially in prosthetic control. Hybrid technologies are promising, as they could lead to higher performance. However, they also increase HMIs' complexity, so their usefulness should be carefully evaluated for the specific application.

    Design and Assessment of Control Maps for Multi-Channel sEMG-Driven Prostheses and Supernumerary Limbs

    Proportional and simultaneous control algorithms are considered one of the most effective ways of mapping electromyographic signals to an artificial device. However, the applicability of these methods is limited by the high number of electromyographic features they require to operate, typically twice as many as the actuators to be controlled. Indeed, extracting many independent electromyographic signals is challenging for a number of reasons, ranging from technological to anatomical. Conversely, the number of actively moving parts in classic prostheses or extra limbs is often high. This paper addresses this issue by proposing and experimentally assessing a set of algorithms capable of proportionally and simultaneously controlling as many actuators as there are independent electromyographic signals available. Two sets of solutions are considered: the first uses electromyographic signals only as input, while the second adds postural measurements as a further source of information. First, all the proposed algorithms are experimentally tested in terms of precision, efficiency, and usability on twelve able-bodied subjects in a virtual environment, with a state-of-the-art controller using twice the number of electromyographic input signals adopted as a benchmark. We then performed qualitative tests in which the maps are used to control a prototype upper limb prosthesis composed of a robotic hand and a wrist implementing active prono-supination. Eight able-bodied subjects participated in this second round of testing. Finally, the proposed strategies were tested in exploratory experiments involving two subjects with limb loss. The evaluations in virtual and realistic settings yielded encouraging results and suggest the effectiveness of the proposed approach.
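
    As a rough illustration of the idea, the sketch below implements a minimal proportional, simultaneous mapping in Python: EMG envelopes are extracted by rectification and low-pass filtering, and a square, per-user-calibrated gain matrix maps each independent channel to one actuator, so all actuators are driven at once at speeds proportional to their mapped activations. All names, gains, and envelope parameters are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def emg_envelope(raw, alpha=0.05):
    """Rectify and low-pass filter raw EMG into smooth activation envelopes."""
    env = np.zeros(raw.shape)
    for t in range(1, raw.shape[0]):
        env[t] = (1 - alpha) * env[t - 1] + alpha * np.abs(raw[t])
    return env

# Hypothetical calibration: a square map, since the paper controls as many
# actuators as there are independent electromyographic signals.
n_channels = 4                    # independent EMG sources available
W = np.eye(n_channels)            # channel-to-actuator gains, tuned per user
rest = np.full(n_channels, 0.02)  # resting-level offsets from calibration

def actuator_velocities(envelopes):
    """Proportional, simultaneous command: every actuator moves at once,
    each at a speed proportional to its activation above resting level."""
    activation = np.clip(envelopes - rest, 0.0, None)
    return W @ activation
```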

    A review on manipulation skill acquisition through teleoperation-based learning from demonstration

    Manipulation skill learning and generalization have gained increasing attention due to the wide application of robot manipulators and the rapid growth of robot learning techniques. In particular, the learning from demonstration method has been widely and successfully exploited in the robotics community and is regarded as a promising direction for realizing manipulation skill learning and generalization. In addition to the learning techniques, immersive teleoperation enables a human to operate a remote robot through an intuitive interface and achieve telepresence. Combining learning methods with teleoperation, and adapting the learned skills to different tasks in new situations, is thus a promising way to transfer manipulation skills from humans to robots. This review therefore aims to provide an overview of immersive teleoperation for skill learning and generalization in complex manipulation tasks. To this end, the key technologies, e.g. manipulation skill learning, multimodal interfacing for teleoperation, and telerobotic control, are introduced. An overview is then given of the most important applications of immersive teleoperation platforms for robot skill learning. Finally, this survey discusses the remaining open challenges and promising research topics.

    Fused mechanomyography and inertial measurement for human-robot interface

    Human-Machine Interfaces (HMIs) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator as the supervisor with a machine as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators may function with complete autonomy, and therefore some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain activity sensors, electromyographic (EMG) muscular activity sensors, and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments due to issues such as a lack of portability and robustness and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for capturing human intent/commands. EMG-based gesture recognition systems in particular have received significant attention in the recent literature. As wearable, pervasive devices, they offer benefits over camera-based or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement). As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors that cause signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time. This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography (MMG) sensors. The modular system permits numerous configurations of IMUs to derive body kinematics in real time and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe the muscular contractions in the forearm generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several pattern recognition methods were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6% respectively were achieved for 12-gesture classification. In real-time tests, an accuracy of 95.6% was achieved for 5-gesture classification.
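
    As a hedged sketch of this kind of pattern recognition stage, the Python snippet below classifies windowed mechanomyographic data with Linear Discriminant Analysis and a Support Vector Machine via scikit-learn. The RMS feature, window length, and placeholder data are assumptions for illustration; the thesis's actual features and protocol are not reproduced here.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def window_features(mmg, win=200):
    """Root-mean-square per channel over non-overlapping windows
    (an illustrative feature choice, not necessarily the thesis's)."""
    n_win = mmg.shape[0] // win
    trimmed = mmg[: n_win * win].reshape(n_win, win, mmg.shape[1])
    return np.sqrt((trimmed ** 2).mean(axis=1))

# Placeholder data standing in for the six-sensor forearm band recordings.
rng = np.random.default_rng(0)
X = window_features(rng.standard_normal((12000, 6)))
y = rng.integers(0, 12, size=len(X))  # one of 12 gesture labels per window

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f} cross-validated accuracy")
```
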
It has previously been noted that MMG sensors are susceptible to motion-induced interference. This thesis established that arm pose also changes the measured signal, and it introduces a new method of fusing IMU and MMG data to provide classification that is robust to both of these sources of interference. Additionally, an improvement in orientation estimation and a new orientation estimation algorithm are proposed. These improvements to the robustness of the system provide the first solution able to reliably track both motion and muscle activity for extended periods of time for HMI use outside a clinical environment. Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting muscular activity in the residual limb that is naturally indicative of intent to perform a specific hand pose, and of triggering this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; mechanomyography sensors, however, are unaffected by such issues. There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent and the desire for a simple, universal interface increases. Such systems have the potential to significantly improve the quality of life of prosthetic users and others.
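
A basic complementary filter illustrates the kind of gyroscope/accelerometer fusion that IMU orientation estimators build on. The sketch below is a generic textbook formulation, not the new algorithm proposed in the thesis, and the blending gain k is an assumed tuning constant.

```python
import numpy as np

def complementary_filter(gyro, accel, dt, k=0.98):
    """Fuse gyro integration (accurate short-term, drifts over time) with
    accelerometer tilt sensing (noisy but drift-free) for roll and pitch.
    gyro: (T, 3) rad/s; accel: (T, 3) m/s^2; returns (T, 2) angles in rad."""
    angles = np.zeros((len(gyro), 2))
    for t in range(1, len(gyro)):
        # Prediction: integrate angular rates about x (roll) and y (pitch).
        pred = angles[t - 1] + gyro[t, :2] * dt
        # Correction: tilt angles recovered from the gravity direction.
        ax, ay, az = accel[t]
        roll_acc = np.arctan2(ay, az)
        pitch_acc = np.arctan2(-ax, np.hypot(ay, az))
        angles[t] = k * pred + (1 - k) * np.array([roll_acc, pitch_acc])
    return angles
```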

    iMOVE: Development of a hybrid control interface based on sEMG and movement signals for an assistive robotic manipulator

    For many people with upper limb disabilities, simple activities of daily living such as drinking, opening a door, or pushing an elevator button require the assistance of a caregiver, which reduces the independence of the individual. Assistive robotic systems controlled via a human-robot interface could enable these people to perform such tasks autonomously again and thereby increase their independence and quality of life. Moreover, such an interface could encourage rehabilitation of motor functions, because the individual would need to use their remaining body movements and muscle activity to provide control signals. This project aims at developing a novel hybrid control interface that combines remaining movements and muscle activity of the upper body to control the position and impedance of a robotic manipulator. This thesis presents a Cartesian position control system for the KINOVA Gen3 robotic arm, which implements a proportional-derivative control law based on the Jacobian transpose method and therefore does not require inverse kinematics. A second controller is proposed to change the robot's rigidity in real time based on measurements of muscle activity (sEMG), allowing the user to modulate the robot's impedance while performing a task. Moreover, the thesis presents a body-machine interface that maps the motions of the upper body (head and shoulders) to the space of robot control signals, using the principal component analysis algorithm for dimensionality reduction. The results demonstrate that, by combining the three methods presented above, the user can control the robot's position with head and shoulder movements while also adapting the robot's impedance through their muscle activation. In future work, the performance of this system will be tested with patients with severe movement impairments.
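
    The Jacobian transpose law lends itself to a compact worked example: a task-space spring-damper force is mapped through J^T to joint torques, with no inverse kinematics or matrix inversion required. The sketch below uses illustrative gains and shapes only; as a hedged extension in the spirit of the abstract, scaling Kp online from the sEMG amplitude would realize the impedance modulation described above.

```python
import numpy as np

def jacobian_transpose_pd(x_des, x, xdot, J, Kp, Kd):
    """Cartesian PD control via the Jacobian transpose: a virtual
    spring-damper wrench in task space is mapped to joint torques,
    avoiding inverse kinematics entirely.
    x_des, x: (6,) desired/current pose; xdot: (6,) task-space velocity;
    J: (6, n) geometric Jacobian; returns (n,) joint torques."""
    f_task = Kp @ (x_des - x) - Kd @ xdot
    return J.T @ f_task

# Illustrative diagonal gains (translation first, then rotation); making Kp
# proportional to the measured sEMG envelope would stiffen the arm as the
# user's muscles contract.
Kp = np.diag([200.0, 200.0, 200.0, 20.0, 20.0, 20.0])
Kd = np.diag([20.0, 20.0, 20.0, 2.0, 2.0, 2.0])
```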

    Human skill capturing and modelling using wearable devices

    Industrial robots are delivering more and more manipulation services in manufacturing. However, when the task is complex, it is difficult to programme a robot to fulfil all the requirements, because even a relatively simple task such as a peg-in-hole insertion contains many uncertainties, e.g. clearance, initial grasping position, and insertion path. Humans, on the other hand, can deal with these variations using their vision and haptic feedback. Although humans can adapt to uncertainties easily, the skill-based performance that relates to their tacit knowledge usually cannot be easily articulated. Even though an automation solution need not fully imitate human motion, since some of it is unnecessary, it would be useful if the skill-based performance of a human could first be interpreted and modelled, allowing it to be transferred to the robot. This thesis aims to reduce robot programming efforts significantly by developing a methodology to capture, model, and transfer manual manufacturing skills from a human demonstrator to the robot. Recently, Learning from Demonstration (LfD) has gained interest as a framework for transferring skills from a human teacher to a robot, using probabilistic encoding approaches to model observations and state transition uncertainties. In close or actual contact manipulation tasks, it is difficult to reliably record state-action examples without interfering with the human senses and activities. Therefore, wearable sensors are investigated as a promising means of recording state-action examples without restricting human experts during the skilled execution of their tasks. Firstly, to track human motions accurately and reliably in a defined three-dimensional workspace, a hybrid system of Vicon and IMUs is proposed to compensate for the known limitations of each individual system. The data fusion method was able to overcome the occlusion and frame-flipping problems of the two-camera Vicon setup and the drifting problem associated with the IMUs. The results indicated that the occlusion and frame-flipping problems associated with Vicon can be mitigated by using the IMU measurements. Furthermore, the proposed method improves the Mean Square Error (MSE) tracking accuracy by 0.8° to 6.4° compared with the IMU-only method. Secondly, to record haptic feedback from a teacher without physically obstructing their interactions with the workpiece, wearable surface electromyography (sEMG) armbands were used as an indirect means of indicating contact feedback during manual manipulations. A muscle-force model using a Time Delayed Neural Network (TDNN) was built to map the sEMG signals to the known contact force. The results indicated that the model was capable of estimating force from the sEMG armbands in the applications of interest, namely peg-in-hole and beater winding tasks, with MSEs of 2.75 N and 0.18 N respectively. Finally, given the force estimates and the motion trajectories, a Hidden Markov Model (HMM) based approach was utilised as a state recognition method to encode and generalise the spatial and temporal information of the skilled executions; this allows a more representative control policy to be derived. A modified Gaussian Mixture Regression (GMR) method was then applied to reproduce motions using the learned state-action policy.
To simplify the validation procedure, instead of using the robot, additional demonstrations from the teacher were used to verify the reproduction performance of the policy, under the assumption that the human teacher and robot learner are physically identical systems. The results confirmed the generalisation capability of the HMM model across a number of demonstrations from different subjects, and the motions reproduced by GMR were acceptable in these additional tests. The proposed methodology provides a framework for producing a state-action model from skilled demonstrations that can be translated into robot kinematics and joint states for the robot to execute. The implication for industry is reduced effort and time in programming robots for applications where skilled human performance is required to cope robustly with various uncertainties during task execution.
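
A plain Gaussian Mixture Regression pass, sketched below with a scikit-learn mixture model, illustrates the reproduction step: a GMM is fitted jointly over [time, state] from the demonstrations, then conditioned on time to recover the expected trajectory. This is the standard GMR formulation, not the modified variant of the thesis, and the demonstration data are placeholders.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def fit_joint_gmm(demos, n_states=5):
    """Stack demonstrations as rows of [t, x, y, ...] and fit a joint GMM."""
    return GaussianMixture(n_components=n_states,
                           covariance_type="full").fit(np.vstack(demos))

def gmr(gmm, t_query, d=1):
    """Condition the joint model on the time input (first d dimensions)
    to get the expected state trajectory for motion reproduction."""
    Y = np.zeros((len(t_query), gmm.means_.shape[1] - d))
    for i, t in enumerate(t_query):
        # Responsibility of each Gaussian component at this time step.
        h = np.array([w * multivariate_normal.pdf(t, m[:d], c[:d, :d])
                      for w, m, c in zip(gmm.weights_, gmm.means_,
                                         gmm.covariances_)])
        h /= h.sum()
        # Blend the per-component conditional means.
        for h_k, m, c in zip(h, gmm.means_, gmm.covariances_):
            Y[i] += h_k * (m[d:] + c[d:, :d]
                           @ np.linalg.solve(c[:d, :d], t - m[:d]))
    return Y
```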

    A Narrative Review on Wearable Inertial Sensors for Human Motion Tracking in Industrial Scenarios

    Industry 4.0 has promoted the concept of automation, supporting workers with robots while maintaining their central role in the factory. To guarantee the safety of operators and improve the effectiveness of human-robot interaction, it is important to detect the movements of the workers. Wearable inertial sensors represent a suitable technology for pursuing this goal because of their portability, low cost, and minimal invasiveness. The aim of this narrative review was to analyze the state-of-the-art literature exploiting inertial sensors to track human motion in different industrial scenarios. The Scopus database was queried, and 54 articles were selected. Several important aspects were identified: (i) the number of publications per year; (ii) the aim of the studies; (iii) the body district involved in the motion tracking; (iv) the number of inertial sensors adopted; (v) the presence or absence of a technology combined with the inertial sensors; (vi) whether the analysis was performed in real time; (vii) the inclusion or exclusion of the magnetometer in the sensor fusion process. Each of these aspects was then analyzed and discussed.