
    Review of real brain-controlled wheelchairs

    This paper presents a review of the state of the art regarding wheelchairs driven by a brain-computer interface (BCI). Using a brain-controlled wheelchair (BCW), disabled users can drive a wheelchair through their brain activity, granting them the autonomy to move through an experimental environment. A classification is established based on the characteristics of the BCW, such as the type of electroencephalographic (EEG) signal used, the navigation system employed by the wheelchair, the task set for the participants, and the metrics used to evaluate performance. Furthermore, these factors are compared according to the type of signal used, in order to clarify the differences among them. Finally, the trend of current research in this field is discussed, as well as the challenges that should be solved in the future.

    Causes of Performance Degradation in Non-invasive Electromyographic Pattern Recognition in Upper Limb Prostheses

    Surface electromyography (EMG)-based pattern recognition methods have been investigated over the past years as a means of controlling upper limb prostheses. Despite the very good reported performance of myoelectrically controlled prosthetic hands in lab conditions, real-time performance in everyday life conditions is not as robust and reliable, explaining the limited clinical use of pattern recognition control. The main reason behind the instability of myoelectric pattern recognition control is that EMG signals are non-stationary in real-life environments and present considerable variability over time and across subjects, affecting the system's performance. This can be the result of one or many combined changes, such as muscle fatigue, electrode displacement, differences in arm posture, user adaptation to the device over time, and inter-subject singularity. In this paper an extensive literature review is performed to present the causes of the drift of EMG signals, ways of detecting them, and possible techniques to counteract their effects in the application of upper limb prostheses. The suggested techniques are organized in a table that can be used to recognize possible problems in the clinical application of EMG-based pattern recognition methods for upper limb prostheses, together with state-of-the-art methods to deal with such problems.
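As a toy illustration of the non-stationarity discussed above (not code from the paper), drift can be quantified by tracking a simple time-domain EMG feature between a calibration session and later use. The feature choices, window data, and the drift score are illustrative assumptions:

```python
import math

def mav(window):
    """Mean Absolute Value, a common time-domain EMG feature."""
    return sum(abs(x) for x in window) / len(window)

def rms(window):
    """Root Mean Square amplitude of the window."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def feature_drift(baseline_windows, current_windows, feature=mav):
    """Relative change of a feature's mean between a calibration session
    and current use; a large value signals non-stationarity (e.g. fatigue
    or electrode shift) that may degrade the classifier."""
    base = sum(feature(w) for w in baseline_windows) / len(baseline_windows)
    cur = sum(feature(w) for w in current_windows) / len(current_windows)
    return abs(cur - base) / base

# Toy example: the 'current' signal has twice the baseline amplitude.
baseline = [[0.1, -0.2, 0.15, -0.1]] * 5
current = [[0.2, -0.4, 0.3, -0.2]] * 5
print(round(feature_drift(baseline, current), 2))  # → 1.0
```

In practice a threshold on such a score could trigger recalibration; the review surveys far more principled detectors.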

    Real-time EMG based pattern recognition control for hand prostheses: a review on existing methods, challenges and future implementation

    Upper limb amputation is a condition that significantly restricts amputees from performing their daily activities. The myoelectric prosthesis, using signals from residual stump muscles, is aimed at restoring the function of such lost limbs seamlessly. Unfortunately, the acquisition and use of such myosignals are cumbersome and complicated. Furthermore, once acquired, they usually require heavy computational power to be turned into a user control signal. The transition to a practical prosthesis solution is still challenged by various factors, particularly those related to the fact that each amputee has different mobility, muscle contraction forces, limb positional variations and electrode placements. Thus, a solution that can adapt or otherwise tailor itself to each individual is required for maximum utility across amputees. Modified machine learning schemes for pattern recognition have the potential to significantly reduce the factors (user movement and muscle contraction) affecting traditional electromyography (EMG)-pattern recognition methods. Although recent developments in intelligent pattern recognition techniques can discriminate multiple degrees of freedom with high accuracy, their efficiency has seldom been demonstrated in real-world (amputee) applications. This review paper examined the suitability of upper limb prosthesis (ULP) inventions in the healthcare sector from a technical control perspective. More focus was given to the review of real-world applications and the use of pattern recognition control on amputees. We first reviewed the overall structure of pattern recognition schemes for myo-control prosthetic systems and then discussed their real-time use on amputee upper limbs. Finally, we concluded the paper with a discussion of the existing challenges and future research recommendations.
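The overall pattern recognition structure reviewed here (windowed feature extraction followed by classification) can be sketched minimally. A nearest-centroid classifier stands in for the LDA/SVM-style classifiers actually surveyed, and all data, labels, and features below are toy assumptions:

```python
import math

def extract_features(window):
    """Two simple time-domain features for one channel window:
    mean absolute value and waveform length."""
    mav = sum(abs(x) for x in window) / len(window)
    wl = sum(abs(window[i] - window[i - 1]) for i in range(1, len(window)))
    return [mav, wl]

def train_centroids(labelled_windows):
    """Average feature vector per motion class (nearest-centroid,
    a stand-in for the classifiers surveyed in the paper)."""
    sums, counts = {}, {}
    for label, window in labelled_windows:
        f = extract_features(window)
        acc = sums.setdefault(label, [0.0] * len(f))
        for i, v in enumerate(f):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def classify(window, centroids):
    """Assign the class whose centroid is nearest in feature space."""
    f = extract_features(window)
    return min(centroids, key=lambda lbl: math.dist(f, centroids[lbl]))

# Toy data: 'rest' is low amplitude, 'grip' is high amplitude.
train = [("rest", [0.01, -0.02, 0.01, -0.01]), ("grip", [0.5, -0.6, 0.55, -0.5])]
model = train_centroids(train)
print(classify([0.4, -0.5, 0.45, -0.4], model))  # → grip
```

Real systems use many channels, overlapping windows, and richer feature sets, but the train-then-classify skeleton is the same.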

    Human skill capturing and modelling using wearable devices

    Industrial robots are delivering more and more manipulation services in manufacturing. However, when the task is complex, it is difficult to programme a robot to fulfil all the requirements, because even a relatively simple task such as a peg-in-hole insertion contains many uncertainties, e.g. clearance, initial grasping position and insertion path. Humans, on the other hand, can deal with these variations using their vision and haptic feedback. Although humans can adapt to uncertainties easily, most of the time the skill-based performances that relate to their tacit knowledge cannot be easily articulated. Even though an automation solution need not fully imitate human motion, since some of it is not necessary, it would be useful if the skill-based performance of a human could first be interpreted and modelled, allowing it to then be transferred to the robot. This thesis aims to reduce robot programming efforts significantly by developing a methodology to capture, model and transfer manual manufacturing skills from a human demonstrator to the robot. Recently, Learning from Demonstration (LfD) has been gaining interest as a framework for transferring skills from human teacher to robot, using probability encoding approaches to model observations and state transition uncertainties. In close or actual contact manipulation tasks, it is difficult to reliably record state-action examples without interfering with the human senses and activities. Therefore, wearable sensors are investigated as a promising means of recording state-action examples without restricting the human experts during the skilled execution of their tasks. Firstly, to track human motions accurately and reliably in a defined 3-dimensional workspace, a hybrid system of Vicon and IMUs is proposed to compensate for the known limitations of each individual system.
The data fusion method was able to overcome the occlusion and frame-flipping problems in the two-camera Vicon setup and the drifting problem associated with the IMUs. The results indicated that the occlusion and frame-flipping problems associated with Vicon can be mitigated by using the IMU measurements. Furthermore, the proposed method improves the Mean Square Error (MSE) tracking accuracy by 0.8° to 6.4° compared with the IMU-only method. Secondly, to record haptic feedback from a teacher without physically obstructing their interactions with the workpiece, wearable surface electromyography (sEMG) armbands were used as an indirect method to indicate contact feedback during manual manipulations. A muscle-force model using a Time Delayed Neural Network (TDNN) was built to map the sEMG signals to the known contact force. The results indicated that the model was capable of estimating the force from the sEMG armbands in the applications of interest, namely peg-in-hole and beater winding tasks, with MSEs of 2.75 N and 0.18 N respectively. Finally, given the force estimation and the motion trajectories, a Hidden Markov Model (HMM) based approach was utilised as a state recognition method to encode and generalise the spatial and temporal information of the skilled executions. This method allows a more representative control policy to be derived. A modified Gaussian Mixture Regression (GMR) method was then applied to enable motion reproduction using the learned state-action policy. To simplify the validation procedure, instead of using the robot, additional demonstrations from the teacher were used to verify the reproduction performance of the policy, by assuming that the human teacher and the robot learner are physically identical systems. The results confirmed the generalisation capability of the HMM model across a number of demonstrations from different subjects, and the motions reproduced by GMR were acceptable in these additional tests.
The proposed methodology provides a framework for producing a state-action model from skilled demonstrations that can be translated into robot kinematics and joint states for the robot to execute. The implication for industry is reduced effort and time in programming robots for applications where human skilled performance is required to cope robustly with various uncertainties during task execution.
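The Vicon/IMU fusion idea can be sketched as a simple corrective filter: integrate the IMU between camera frames and pull the estimate toward the camera angle whenever the marker is visible. This is an illustrative complementary-filter-style sketch, not the thesis's actual fusion algorithm; the gain, sample rate, and data are assumptions:

```python
def fuse_tracks(camera, gyro_rate, dt=0.01, gain=0.2):
    """Fuse a camera angle track (degrees; None while the marker is
    occluded) with gyro angular rates (deg/s). The gyro prediction
    drifts on its own; the camera correction removes that drift
    whenever a measurement is available."""
    est = camera[0] if camera[0] is not None else 0.0
    out = []
    for cam, rate in zip(camera, gyro_rate):
        est += rate * dt                # IMU prediction between frames
        if cam is not None:             # camera correction when visible
            est += gain * (cam - est)
        out.append(est)
    return out

# Constant 10 deg/s rotation; camera occluded for samples 2-3.
camera = [0.0, 0.1, None, None, 0.4]
rates = [10.0] * 5
print([round(a, 3) for a in fuse_tracks(camera, rates)])
# → [0.08, 0.164, 0.264, 0.364, 0.451]
```

During the occlusion the estimate keeps advancing from the gyro alone, then snaps back toward the camera, which is the qualitative behaviour the hybrid system exploits.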

    Cooperative Human-Machine Interaction in Industrial Environments

    To date, there have been few advances in the relationship between the shop-floor operator in an industrial environment and the machines executing the manufacturing processes. Normally, the semi-automatic processes for collaborative assembly in industry are composed of human and non-human elements. From the human perspective, one or more persons can be working in the same cell, directly or indirectly, with a non-human entity. A cell may contain several machines, normally robotic arms, that perform very specific collaborative tasks with the operators. However, the latest advances are mostly related to safety issues and regulations, like immediately stopping the machine if a human touches it, and much less to operative issues like adjusting the process velocity (within a certain window of cycle time) or giving preference to some tasks over others at the beginning of the shift, in order to improve the operator's working conditions. Therefore, a step forward should be taken towards a more advanced interaction between machine and operator, and a more adaptive and rich symbiosis. The main goal of the present Dissertation is to explore the relation between the shop-floor operator and the machine in a cyber-physical system. For that purpose, biometric sensors will be used (ECG, EMG, EDA, PZT, wearables and others) to monitor the operators' physiology during operative times and, based on that, explore how a collaborative process can be adapted to minimize the operator's stress and fatigue. First, the correct set of sensors should be explored to understand how stress and fatigue metrics can be calculated. Secondly, optimization techniques need to be studied in order to, e.g., find the machine process parameterization that, on the one hand, minimizes the operator's fatigue and stress and, on the other, does not jeopardize the process requirements in terms of timing and quality. Therefore, this can be stated as a multivariate optimization problem.
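A toy, single-variable version of the optimization stated above can make the idea concrete: choose a process cycle time inside the allowed window that minimises an operator-cost estimate. The function names and the fatigue model are illustrative assumptions, not metrics from the Dissertation:

```python
def pick_cycle_time(candidates, fatigue_cost, t_min, t_max):
    """Pick the candidate cycle time (seconds) inside the allowed
    process window [t_min, t_max] that minimises an operator
    fatigue/stress cost estimated from biometric data.
    fatigue_cost is a placeholder model, not a validated metric."""
    feasible = [t for t in candidates if t_min <= t <= t_max]
    if not feasible:
        raise ValueError("no cycle time satisfies the process window")
    return min(feasible, key=fatigue_cost)

# Assumed model: fast cycles (small t) raise fatigue sharply, while
# very slow cycles raise it mildly through monotony.
cost = lambda t: 4.0 / t + 0.1 * t
print(pick_cycle_time([5, 8, 10, 12, 15], cost, t_min=6, t_max=14))  # → 8
```

The real problem is multivariate (many process parameters, stress and fatigue jointly), so a grid scan would give way to a proper constrained optimizer, but the structure of objective plus process-window constraint is the same.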

    Towards electrodeless EMG linear envelope signal recording for myo-activated prostheses control

    After amputation, the residual muscles of the limb may function in a normal way, enabling the electromyogram (EMG) signals recorded from them to be used to drive a replacement limb. These replacement limbs are called myoelectric prostheses. Prostheses that use EMG have always been the first choice for both clinicians and engineers. Unfortunately, due to the many drawbacks of EMG (e.g. skin preparation, electromagnetic interference, high sampling rate), researchers have sought suitable alternatives. This work proposes a dry-contact, low-cost sensor based on a force-sensitive resistor (FSR) as a valid alternative which, instead of detecting electrical events, detects the mechanical events of the muscle. The FSR is placed on the skin through a hard, circular base to sense the muscle contraction and acquire the signal. Additionally, to reduce the output (resistance) drift caused by the FSR edges (creep) and to maintain the FSR sensitivity over a wide input force range, signal conditioning (voltage output proportional to force) is implemented. The signal acquired with the FSR can be used directly to replace the EMG linear envelope (an important control signal in prosthetics applications). To find the best FSR position(s) to replace a single EMG lead, simultaneous recording of the EMG and FSR outputs is performed. Three FSRs are placed directly over the EMG electrodes, in the middle of the targeted muscle, and then the individual sensors (FSR1, FSR2 and FSR3) and combinations of them (e.g. FSR1+FSR2, FSR2-FSR3) are evaluated. The experiment is performed on a small sample of five volunteer subjects. The results show a high correlation (up to 0.94) between the FSR output and the EMG linear envelope. Consequently, using the best FSR sensor position demonstrates the ability of the electrodeless FSR linear envelope (FSR-LE) to proportionally control a prosthesis (3-D claw).
    Furthermore, the FSR can be used to develop a universal programmable muscle-signal sensor suitable for controlling myo-activated prostheses.
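The EMG linear envelope that the FSR output is compared against, and the correlation used for that comparison, can be sketched as follows. The signal values, filter constant, and function names are illustrative assumptions; the paper's exact processing may differ:

```python
import math

def linear_envelope(emg, alpha=0.1):
    """Full-wave rectification followed by a first-order low-pass,
    a common way to form the EMG linear envelope."""
    env, y = [], 0.0
    for x in emg:
        y += alpha * (abs(x) - y)
        env.append(y)
    return env

def pearson(a, b):
    """Pearson correlation between two equal-length signals, e.g.
    the FSR output and the EMG envelope (the paper reports values
    up to 0.94 across five subjects)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

# Toy burst of EMG-like activity and a roughly proportional FSR trace.
emg = [0.0, 0.1, -0.5, 0.8, -0.9, 0.7, -0.3, 0.1, 0.0, 0.0]
fsr = [0.0, 0.05, 0.2, 0.45, 0.7, 0.8, 0.7, 0.55, 0.4, 0.3]
print(round(pearson(linear_envelope(emg), fsr), 2))
```

A high correlation on such traces is what justifies swapping the conditioned FSR voltage in for the envelope as a proportional control signal.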

    Dyadic behavior in co-manipulation: from humans to robots

    To both decrease the physical toll on a human worker and increase a robot's environment perception, a human-robot dyad may be used to co-manipulate a shared object. From the premise that humans are efficient when working together, this work's approach is to investigate human-human dyads co-manipulating an object. The co-manipulation is evaluated from motion capture data, surface electromyography (EMG) sensors, and custom contact sensors for qualitative performance analysis. A human-human dyadic co-manipulation experiment is designed in which each human is instructed to behave as a leader, as a follower, or neither, acting as naturally as possible. The experiment data analysis revealed that humans modulate their arm mechanical impedance depending on their role during the co-manipulation. In order to emulate the human behavior during a co-manipulation task, an admittance controller with varying stiffness is presented. The desired stiffness is continuously varied based on a smooth scalar function that assigns a degree of leadership to the robot. Furthermore, the controller is analyzed through simulations, and its stability is analyzed via Lyapunov theory. The resulting object trajectories greatly resemble the patterns seen in the human-human dyad experiment.
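The varying-stiffness admittance idea can be sketched in one dimension as follows. The mass, damping, and stiffness values and the linear leadership-to-stiffness schedule are assumptions for illustration, not the controller and stability proof derived in the work:

```python
def admittance_step(x, v, f_ext, x_des, leadership, dt=0.01,
                    m=1.0, d=20.0, k_min=10.0, k_max=300.0):
    """One integration step of a 1-D admittance law
    m*a + d*v + k*(x - x_des) = f_ext, where stiffness k is scaled
    by the leadership degree in [0, 1]: 0 = compliant follower,
    1 = stiff leader. Returns the updated (position, velocity)."""
    k = k_min + leadership * (k_max - k_min)
    a = (f_ext - d * v - k * (x - x_des)) / m
    v += a * dt      # semi-implicit Euler: update velocity first
    x += v * dt      # then position, for better numerical stability
    return x, v

# A follower (low stiffness) yields more to the same external push.
x_f = v_f = x_l = v_l = 0.0
for _ in range(200):
    x_f, v_f = admittance_step(x_f, v_f, f_ext=5.0, x_des=0.0, leadership=0.0)
    x_l, v_l = admittance_step(x_l, v_l, f_ext=5.0, x_des=0.0, leadership=1.0)
print(x_f > x_l)  # → True
```

This reproduces the qualitative finding: the partner assigned leadership holds the trajectory stiffly, while the follower is displaced by the partner's force.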

    Development of Digital Control Systems for Wearable Mechatronic Devices: Applications in Musculoskeletal Rehabilitation of the Upper Limb

    The potential for wearable mechatronic systems to assist with musculoskeletal rehabilitation of the upper limb has grown with the technology. One limiting factor to realizing the benefits of these devices as motion therapy tools lies in the development of digital control solutions. Despite many device prototypes and research efforts in the surrounding fields, there is a lack of requirements, details, assessments, and comparisons of control system characteristics, components, and architectures in the literature. Pairing this with the complexity of humans, the devices, and their interactions makes it a difficult task for control system developers to determine the best solution for their desired applications. The objective of this thesis is to develop, evaluate, and compare control system solutions that are capable of tracking motion through the control of wearable mechatronic devices. Due to the immaturity of these devices, the design, implementation, and testing processes for the control systems are not well established. In order to improve the efficiency and effectiveness of these processes, control system development and evaluation tools have been proposed. The Wearable Mechatronics-Enabled Control Software framework was developed to enable the implementation and comparison of different control software solutions presented in the literature. This framework reduces the amount of restructuring and modification required to complete these development tasks. An integration testing protocol was developed to isolate different aspects of the control systems during testing. A metric suite is proposed that expands on the existing literature and allows for the measurement of more control characteristics. Together, these tools were used to develop, evaluate, and compare control system solutions. Using the developed control systems, a series of experiments was performed that involved tracking elbow motion using wearable mechatronic elbow devices.
The accuracy and repeatability of the motion tracking performances, the adaptability of the control models, and the resource utilization of the digital systems were measured during these experiments. Statistical analysis was performed on these metrics to compare between experimental factors. The tracking results show some of the highest accuracies reported for elbow motion tracking with these devices. The statistical analysis revealed many factors that significantly impact tracking performance, such as visual feedback, motion training, constrained motion, motion models, motion inputs, actuation components, and control outputs. Furthermore, the completion of the experiments resulted in three first-time studies, including the comparison of muscle activation models and the quantification of control system task timing and data storage needs. The successes of these experiments highlight that accurate motion tracking using the biological signals of the user is possible, but that many more efforts are needed to obtain control solutions that are robust to variations in the motion and characteristics of the user. To guide the future development of these control systems, a national survey of therapists was conducted regarding their patient data collection and analysis methods. From the results of this survey, a series of requirements was collected for software systems that allow therapists to interact with the control systems of these devices. Increasing the participation of therapists in the development processes of wearable assistive devices will help to produce better requirements for developers. This will allow the customization of control systems for specific therapies and patient characteristics, which will increase the benefit and adoption rate of these devices within musculoskeletal rehabilitation programs.
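One of the simplest accuracy measures such tracking experiments rely on is root-mean-square tracking error. A minimal sketch follows (the angle values are made up, and the thesis's metric suite is considerably broader than this single number):

```python
import math

def rmse(reference, tracked):
    """Root-mean-square error between a desired elbow-angle
    trajectory and the angles the device actually tracked."""
    n = len(reference)
    return math.sqrt(sum((r - t) ** 2 for r, t in zip(reference, tracked)) / n)

# Desired vs. measured elbow angles (degrees) over a short flexion.
ref = [0, 10, 25, 45, 70, 90]
meas = [0, 12, 24, 47, 68, 91]
print(round(rmse(ref, meas), 2))  # → 1.53
```

Repeatability can then be assessed by computing this metric across repeated trials and examining its spread, which is essentially what the statistical comparison between experimental factors does at scale.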