637 research outputs found

    Gait sonification for rehabilitation: adjusting gait patterns by acoustic transformation of kinematic data

    To enhance motor learning in both sport and rehabilitation, auditory feedback has emerged as an effective tool. Since it demands less attention than visual feedback and hardly interferes with the visually dominated orientation in space, it can be used safely and effectively during natural locomotion such as walking. One method for generating acoustic movement feedback is the direct mapping of kinematic data to sound (movement sonification). Using this method in orthopedic gait rehabilitation could make an important contribution to the prevention of falls and secondary diseases, reducing not only the individual suffering of patients but also medical treatment costs. To determine the possible applications of movement sonification in gait rehabilitation, a new gait sonification method based on inertial sensor technology was developed for this work. Against the background of current scientific findings on sensorimotor function, feedback methods, and gait analysis, three studies published in scientific journals are presented in this thesis. The first study shows the applicability and acceptance of the feedback method in patients undergoing inpatient rehabilitation after unilateral total hip arthroplasty; it also reveals the direct effect of gait sonification on the patients' gait pattern during ten gait training sessions. The second study examines the immediate follow-up effect of gait sonification on the kinematics of the same patient group at four measurement points after gait training. A significant influence of sonification on the patients' gait pattern was found, which, however, did not match the expected effects. In view of this finding, a third study analyzed the effect of a specific sound parameter of gait sonification, loudness, on the gait of healthy persons, and detected an impact of asymmetric loudness on ground contact time. Accounting for this cause-effect relationship can be one component in improving gait sonification in rehabilitation. Overall, the feasibility and effectiveness of movement sonification in the gait rehabilitation of patients after unilateral hip arthroplasty becomes evident. The findings thus illustrate the potential of the method to efficiently support orthopedic gait rehabilitation in the future. Based on the results presented, this potential can be exploited in particular through an adequate mapping of movement to sound, a systematic modification of selected sound parameters, and a target-group-specific selection of the gait sonification mode. In addition to a detailed investigation of these three factors, an optimization and refinement of gait analysis in patients after arthroplasty using inertial sensor technology will be beneficial in the future.
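
    The core mechanism here, direct mapping of kinematic data to sound, can be illustrated with a minimal sketch. The mapping below (gyroscope angular velocity driving the pitch of a tone) and the sample rates are assumptions for illustration; the abstract does not specify the thesis' actual sonification design.

        # Minimal parameter-mapping sonification: angular velocity -> pitch,
        # so each swing phase becomes an audible rising/falling tone.
        import numpy as np

        FS_AUDIO = 44100   # audio sample rate (Hz)
        FS_IMU = 100       # assumed inertial sensor rate (Hz)

        def sonify(angular_velocity_dps, f_min=220.0, f_max=880.0):
            """Map an angular velocity trace (deg/s) to a frequency-modulated tone."""
            w = np.abs(np.asarray(angular_velocity_dps, dtype=float))
            w_norm = w / max(w.max(), 1e-9)                 # scale to 0..1
            ctrl = np.repeat(w_norm, FS_AUDIO // FS_IMU)    # hold to audio rate
            freq = f_min + (f_max - f_min) * ctrl           # Hz per audio sample
            phase = 2 * np.pi * np.cumsum(freq) / FS_AUDIO  # integrate frequency
            return 0.5 * np.sin(phase)                      # mono audio signal

        # Example: one synthetic swing phase (a smooth angular velocity bump).
        t = np.linspace(0, 1, FS_IMU)
        audio = sonify(300 * np.sin(np.pi * t) ** 2)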

    Human movement modifications induced by different levels of transparency of an active upper limb exoskeleton

    Active upper limb exoskeletons are a potentially powerful tool for neuromotor rehabilitation. This potential depends on several basic control modes, one of them being transparency. In this control mode, the exoskeleton must follow the human movement without altering it, which theoretically implies null interaction efforts. Reaching high, albeit imperfect, levels of transparency requires both an adequate control method and an in-depth evaluation of the exoskeleton's impact on human movement. The present paper introduces such an evaluation for three different "transparent" controllers, based either on an identification of the dynamics of the exoskeleton, on force feedback control, or on their combination; by design, these controllers are likely to induce clearly different levels of transparency. The conducted investigations help clarify how humans adapt to transparent controllers, which are necessarily imperfect. Fourteen participants were subjected to these three controllers while performing reaching movements in a parasagittal plane. The subsequent analyses were conducted in terms of interaction efforts, kinematics, electromyographic signals, and ergonomic feedback questionnaires. Results showed that, under the less performant transparent controllers, participants' strategies tended to induce relatively high interaction efforts and higher muscle activity, while kinematic metrics showed little sensitivity. In other words, very different residual interaction efforts do not necessarily induce very different movement kinematics. Such behavior could be explained by a natural human tendency to expend effort to preserve preferred kinematics, which should be taken into account in future evaluations of transparent controllers.
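
    The three controller variants can be summarised in one schematic control law. This is an illustration of the general idea (model-based compensation, force feedback, and their sum), not the paper's implementation; the model terms and gains below are invented.

        # Schematic "transparent" control law for one exoskeleton joint.
        import numpy as np

        def identified_dynamics(q, dq, ddq):
            """Hypothetical identified model: inertia, viscous friction, gravity."""
            I_hat, b_hat, g_hat = 0.05, 0.02, 1.2   # placeholder identified parameters
            return I_hat * ddq + b_hat * dq + g_hat * np.sin(q)

        def transparent_torque(q, dq, ddq, f_int, mode="combined", k_f=5.0):
            tau_model = identified_dynamics(q, dq, ddq)  # feedforward compensation
            tau_force = -k_f * f_int                     # drive interaction effort to zero
            if mode == "identification":
                return tau_model
            if mode == "force_feedback":
                return tau_force
            return tau_model + tau_force                 # combination of both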

    Quantitative Upper Limb Impairment Assessment for Stroke Rehabilitation: A Review

    With the number of people surviving a stroke soaring, automated upper limb impairment assessment has been extensively investigated in the past decades, since it lays the foundation for personalised precision rehabilitation. Recent advances in sensor systems, such as high-precision and real-time data transmission, have made it possible to quantify the kinematic and physiological parameters of stroke patients. In this paper, we review the development of sensor-based quantitative upper limb impairment assessment, concentrating on systems capable of comprehensively and accurately detecting motion parameters and measuring physiological indicators to achieve objective and rapid quantification of stroke severity. The paper discusses the features used by different sensors, detectable actions, their utilization techniques, and the effects of sensor placement on system accuracy and stability. In addition, the advantages and disadvantages of both model-based and model-free algorithms are reviewed. Furthermore, open challenges are discussed, encompassing comprehensive assessment against medical scales, assessment of neurological deficits, random movement detection, and the effects of sensor placement and of the number of sensors.
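
    As a concrete example of the kinematic quantification this review surveys, one widely used sensor-derived feature in this literature is movement smoothness. The metric below (log dimensionless jerk) is an illustrative choice, not necessarily one singled out by the review.

        # Log dimensionless jerk (LDLJ): higher (less negative) = smoother reaching.
        import numpy as np

        def log_dimensionless_jerk(velocity, fs):
            """velocity: (N, 3) end-effector velocity in m/s; fs: sample rate in Hz."""
            v = np.asarray(velocity, dtype=float)
            dt = 1.0 / fs
            jerk = np.gradient(np.gradient(v, dt, axis=0), dt, axis=0)
            duration = len(v) * dt
            v_peak = np.linalg.norm(v, axis=1).max()
            integral = np.sum(np.sum(jerk ** 2, axis=1)) * dt
            return -np.log(integral * duration ** 3 / v_peak ** 2)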

    Une méthode de mesure du mouvement humain pour la programmation par démonstration

    Programming by demonstration (PbD) is an intuitive approach to impart a task to a robot from one or several demonstrations by a human teacher. Acquiring the demonstrations involves solving the correspondence problem when the teacher and the learner differ in sensing and actuation. Kinesthetic guidance is widely used to perform demonstrations: the robot is manipulated by the teacher and the demonstrations are recorded by the robot's encoders. In this way, the correspondence problem is trivial, but the teacher's dexterity is impaired, which may affect the PbD process. Methods that are more practical for the teacher usually require identifying mappings to solve the correspondence problem. The demonstration acquisition method is therefore a compromise between the difficulty of identifying these mappings, the level of accuracy of the recorded elements, and the user-friendliness and convenience for the teacher. This thesis proposes an inertial human motion tracking method based on inertial measurement units (IMUs) for PbD of pick-and-place tasks. Compared to kinesthetic guidance, IMUs are convenient and easy to use, but their accuracy can be limited; their potential for PbD applications is investigated here. To estimate the trajectory of the teacher's hand, three IMUs are placed on the arm segments (upper arm, forearm, and hand) to estimate their orientations. A specific method is proposed to partially compensate the well-known drift of the sensor orientation estimate around the gravity direction by exploiting the particular configuration of the demonstration. This method, called heading reset, is based on the assumption that the sensor passes through its original heading, with stationary phases, several times during the demonstration. The heading reset is implemented in an integration and vector observation algorithm, and several experiments illustrate its advantages. A comprehensive inertial human hand motion tracking (IHMT) method for PbD is then developed. It includes an initialization procedure to estimate the orientation of each sensor with respect to its arm segment and the initial orientation of the sensors with respect to the teacher-attached frame. The procedure involves a rotation and a static position of the extended arm, making the measurement system robust to the positioning of the sensors on the segments. A procedure for estimating the position of the human teacher relative to the robot and a calibration procedure for the parameters of the method are also proposed. The error of the human hand trajectory is measured experimentally and found to lie between 28.5 mm and 61.8 mm. The mappings needed to solve the correspondence problem are identified; however, this level of accuracy is not sufficient for a PbD process. To reach the necessary accuracy, a method is proposed to correct the hand trajectory obtained by IHMT using vision data, since a vision system is complementary to inertial sensors. For simplicity and robustness, the vision system tracks only the objects, not the teacher. The correction is based on so-called Positions Of Interest (POIs) and involves three steps: identifying the POIs in the inertial and vision data, pairing hand POIs with object POIs that correspond to the same action in the task, and finally correcting the hand trajectory based on the POI pairs.
    The complete method for demonstration acquisition is experimentally evaluated in a full PbD process. This experiment demonstrates the advantages of the proposed method over kinesthetic guidance in the context of this work.
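
    The heading reset can be sketched in a few lines. This is a reconstruction of the idea as stated in the abstract (cancel accumulated yaw drift whenever the arm is stationary, assuming the sensor is then back at its initial heading); the threshold and the loop structure are assumptions, not the thesis' actual algorithm.

        # Heading reset sketch: remove yaw drift at detected stationary phases.
        import numpy as np

        GYRO_STATIONARY_THRESH = 0.05   # rad/s, hypothetical stillness threshold

        def heading_reset(yaw_estimate, gyro_norm, yaw_initial=0.0):
            """yaw_estimate, gyro_norm: per-sample arrays; returns corrected yaw."""
            yaw = np.asarray(yaw_estimate, dtype=float).copy()
            offset = 0.0
            for i in range(len(yaw)):
                yaw[i] -= offset
                if gyro_norm[i] < GYRO_STATIONARY_THRESH:
                    # Stationary: attribute the residual heading to drift.
                    offset += yaw[i] - yaw_initial
                    yaw[i] = yaw_initial
            return yaw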

    On the Utility of Representation Learning Algorithms for Myoelectric Interfacing

    Electrical activity produced by muscles during voluntary movement is a reflection of the firing patterns of the relevant motor neurons and, by extension, of the latent motor intent driving the movement. Once transduced via electromyography (EMG) and converted into digital form, this activity can be processed to provide an estimate of the original motor intent and is as such a feasible basis for non-invasive efferent neural interfacing. EMG-based motor intent decoding has so far received the most attention in the field of upper-limb prosthetics, where alternative means of interfacing are scarce and the utility of better control is apparent. Whereas myoelectric prostheses have been available since the 1960s, available EMG control interfaces still lag behind the mechanical capabilities of the artificial limbs they are intended to steer, a gap at least partially due to limitations in current methods for translating EMG into appropriate motion commands. As the relationship between EMG signals and concurrent effector kinematics is highly non-linear and apparently stochastic, finding ways to accurately extract and combine relevant information from across electrode sites is still an active area of inquiry. This dissertation comprises an introduction and eight papers that explore issues afflicting the status quo of myoelectric decoding and possible solutions, all related through their use of learning algorithms and deep Artificial Neural Network (ANN) models. Paper I presents a Convolutional Neural Network (CNN) for multi-label movement decoding of high-density surface EMG (HD-sEMG) signals. Inspired by the successful use of CNNs in Paper I and the work of others, Paper II presents a method for automatic design of CNN architectures for use in myocontrol. Paper III introduces an ANN architecture with an accompanying training framework from which simultaneous and proportional control emerges. Paper IV introduces a dataset of HD-sEMG signals for use with learning algorithms. Paper V applies a Recurrent Neural Network (RNN) model to decode finger forces from intramuscular EMG. Paper VI introduces a Transformer model for myoelectric interfacing that does not need additional training data to function with previously unseen users. Paper VII compares the performance of a Long Short-Term Memory (LSTM) network to that of classical pattern recognition algorithms. Lastly, Paper VIII describes a framework for synthesizing EMG from multi-articulate gestures, intended to reduce the training burden.
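
    For concreteness, a multi-label HD-sEMG decoder of the kind Paper I describes might look as follows. The architecture, electrode count, and window length are placeholders rather than the dissertation's actual model; the key point is the sigmoid multi-label output, which lets several movements be active at once.

        # Minimal multi-label CNN over an HD-sEMG window (PyTorch).
        import torch
        import torch.nn as nn

        class HDsEMGNet(nn.Module):
            def __init__(self, n_movements=16):
                super().__init__()
                # Treat the window as a 1-channel image: (electrodes, time).
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(4),
                )
                self.head = nn.Linear(32 * 4 * 4, n_movements)

            def forward(self, x):              # x: (batch, 1, electrodes, time)
                return self.head(self.features(x).flatten(1))

        window = torch.randn(8, 1, 64, 200)    # 8 windows, 64 electrodes, 200 samples
        logits = HDsEMGNet()(window)           # train with BCEWithLogitsLoss
        probs = torch.sigmoid(logits)          # independent per-movement probabilities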

    The feasibility of measuring rehabilitation-induced changes in upper limb movement and cognition using robotic kinematics in chronic stroke

    Background: Robotic measurement of kinematics is a potential method to detect precise rehabilitation-induced changes in upper limb movement and cognition post-stroke. To what degree robot-derived data align with other gold-standard upper limb measurement tools has yet to be described; such comparisons would be important for translating these tools to research and clinical practice. Methods: Using the Kinesiological Instrument for Normal and Altered Reaching Movement (Kinarm), we compared robot-derived values with gold-standard clinical tests of upper limb performance and cognitive function before and after a rehabilitation intervention in patients with chronic stroke. The intervention involved 10 sessions pairing aerobic exercise with skilled motor and cognitive practice. Participants underwent motor performance and cognitive function assessments using the Kinarm endpoint robot and standardized measurement scales at baseline, after the 10 intervention sessions, and 30 days later. Results: Ten participants with chronic upper limb impairment due to stroke (69.4 ± 12.9 years old; 7 males, 3 females) completed the intervention sessions. There were no significant improvements in upper limb recovery when measured using the clinical gold-standard tests. However, robotic kinematic variables showed significant changes in motor performance at follow-up. There were no significant changes in cognitive measures pre- and post-intervention. Conclusion: Rehabilitation-induced changes in upper limb performance and cognition may be effectively detected and quantified using robotic kinematic measures.

    Co-simulation of human digital twins and wearable inertial sensors to analyse gait event estimation

    We propose a co-simulation framework comprising biomechanical human body models and wearable inertial sensor models to analyse gait events dynamically, depending on inertial sensor type, sensor positioning, and processing algorithms. A total of 960 inertial sensors were virtually attached to the lower extremities of a validated biomechanical model and shoe model. Walking of hemiparetic patients was simulated using motion capture data (kinematic simulation), and accelerations and angular velocities were synthesised according to the inertial sensor models. A comprehensive error analysis of detected gait events versus reference gait events was performed for each simulated sensor position across all segments, considering 1-, 2-, and 4-phase gait models for event detection. For the hemiparetic patients, fusing angular velocity and acceleration data gave superior gait event estimation, with lower nMAEs (9%) across all sensor positions than estimation from acceleration data alone. Depending on algorithm choice and parameterisation, gait event detection performance increased by up to 65%. Our results suggest that personalisation of IMU placement to the user should be the first priority for gait phase detection, with sensor position variation as a secondary adaptation target. When comparing rotatory and translatory error components per body segment, larger interquartile ranges of rotatory errors were observed for all phase models; i.e., repositioning the sensor around the body segment axis was more harmful to gait phase detection than repositioning along the limb axis. The proposed co-simulation framework is suitable for evaluating different sensor modalities as well as gait event detection algorithms for different gait phase models. The results of our analysis open a new path for utilising biomechanical human digital twins in wearable system design and performance estimation before physical device prototypes are deployed.
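
    As a toy version of the detection-and-scoring loop such a framework automates, the sketch below finds initial contacts from a simulated shank gyroscope trace and scores their timing with an nMAE. The peak heuristic and the normalisation are illustrative assumptions, not the paper's algorithms.

        # Detect initial contacts from sagittal shank angular velocity, then
        # score timing error against reference events, normalised by stride time.
        import numpy as np
        from scipy.signal import find_peaks

        def detect_initial_contacts(gyro_sagittal, fs):
            """Return candidate initial-contact times (s) from gyro data (rad/s)."""
            # Initial contact roughly coincides with a sharp negative gyro peak.
            idx, _ = find_peaks(-np.asarray(gyro_sagittal), distance=int(0.5 * fs))
            return idx / fs

        def nmae(detected, reference, stride_time):
            """Mean absolute timing error, normalised by mean stride time."""
            n = min(len(detected), len(reference))
            err = np.abs(np.asarray(detected[:n]) - np.asarray(reference[:n]))
            return err.mean() / stride_time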

    Analysis of ANN and Fuzzy Logic Dynamic Modelling to Control the Wrist Exoskeleton

    Human intention has long been a primary emphasis in the field of electromyography (EMG) research; with it, the movement of an exoskeleton hand can be predicted from the user's preferences. The EMG is a nonlinear signal formed by muscle contractions as the human hand moves, and it easily picks up noise from its surroundings. Given this, this study aims to estimate the desired wrist velocity from EMG signals using ANN and FL mapping methods, with EMG signals and wrist position as inputs directly proportional to the desired wrist velocity output. Ten male subjects, ranging in age from 21 to 40, supplied the EMG data sets used for estimating the output in single- and double-muscle experiments. To validate performance, a physical model of an exoskeleton hand was created using the SimMechanics program tool. The ANN used Levenberg-Marquardt training with one hidden layer of 10 neurons, while the FL used triangular membership functions to represent muscle contraction signal amplitudes at different MVC levels for each wrist position. A PID controller was added to compensate for fluctuations in the mapping outputs, yielding a smoother signal and improving the estimation of the desired wrist velocity. In conclusion, the ANN compensates for complex nonlinear input when estimating the output, but it works best with large data sets; FL allows designers to encode rules based on their knowledge, but the system struggles as the number of inputs grows. Based on the results achieved, FL showed a more distinct separation of desired wrist velocities across hand movements than the ANN on similar testing datasets, owing to decision making based on the rules set up by the designer.
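
    The fuzzy-logic side of such a mapping can be sketched as below: triangular memberships over EMG amplitude (normalised to MVC) defuzzified to a wrist velocity. The rule base and breakpoints are invented for illustration; the paper's actual rules are not given in the abstract.

        # Triangular-membership fuzzy mapping from normalised EMG to velocity.
        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership function with feet a, c and peak b."""
            return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

        def wrist_velocity(emg_norm):
            """emg_norm: EMG amplitude normalised to MVC, in [0, 1]."""
            mu = {"low": tri(emg_norm, -0.4, 0.0, 0.4),
                  "mid": tri(emg_norm, 0.1, 0.5, 0.9),
                  "high": tri(emg_norm, 0.6, 1.0, 1.4)}
            out = {"low": 0.0, "mid": 0.5, "high": 1.0}   # rule consequents (rad/s)
            den = sum(mu.values())                        # weighted-average defuzzification
            return sum(mu[k] * out[k] for k in mu) / den if den > 0 else 0.0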

    Multikernel convolutional neural network for sEMG based hand gesture classification

    Hand gesture recognition is a widely discussed topic in the literature, where different techniques are analyzed in terms of both input signal types and algorithms. Among the most widely used signals are electromyographic (sEMG) signals, which are already widely exploited in human-machine interaction (HMI) applications. Determining how to decode the information contained in EMG signals robustly and accurately is a key open problem. Recently, many EMG pattern recognition tasks have been addressed using deep learning methods. Despite their high performance, their generalization capabilities are often limited by the high heterogeneity among subjects, skin impedance, sensor placement, etc. In addition, because this project focuses on the real-time operation of prostheses, there are tight constraints on system response times, which limit model complexity. In this thesis, a multi-kernel convolutional neural network was tested on several public datasets to verify its generalizability, and the model's ability to overcome inter-subject and inter-session variability across different days was analyzed while preserving the constraints associated with an embedded system. The results confirm the difficulties of extracting information from sEMG signals; however, they demonstrate that good performance can be achieved for robust use of prosthetic hands, and that better performance can be obtained by customizing the model with transfer learning and domain-adaptation techniques.
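
    The "multi-kernel" idea from the title can be illustrated with a single block: parallel 1-D convolutions with different kernel lengths over the sEMG window, concatenated so later layers see several temporal scales at once. Widths and kernel sizes below are placeholders, not the thesis' configuration.

        # Multi-kernel 1-D convolutional block for sEMG windows (PyTorch).
        import torch
        import torch.nn as nn

        class MultiKernelBlock(nn.Module):
            def __init__(self, in_ch=8, out_per_branch=16, kernel_sizes=(3, 9, 27)):
                super().__init__()
                self.branches = nn.ModuleList(
                    nn.Conv1d(in_ch, out_per_branch, k, padding=k // 2)
                    for k in kernel_sizes
                )

            def forward(self, x):          # x: (batch, channels, time)
                return torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)

        x = torch.randn(4, 8, 400)         # 4 windows, 8 sEMG channels, 400 samples
        y = MultiKernelBlock()(x)          # -> (4, 48, 400): three scales concatenated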

    Evaluating EEG–EMG Fusion-Based Classification as a Method for Improving Control of Wearable Robotic Devices for Upper-Limb Rehabilitation

    Musculoskeletal disorders are the biggest cause of disability worldwide, and wearable mechatronic rehabilitation devices have been proposed for treatment. However, before widespread adoption, improvements in user control and system adaptability are required. User intention should be detected intuitively, and user-induced changes in system dynamics should be unobtrusively identified and corrected. Developments often focus on model-dependent nonlinear control theory, which is challenging to implement for wearable devices. One alternative is to incorporate bioelectrical-signal-based machine learning into the system, allowing simpler controller designs to be augmented by supplemental brain (electroencephalography/EEG) and muscle (electromyography/EMG) information. To better extract user intention, sensor fusion techniques have been proposed to combine EEG and EMG; however, further development is required to extend the capabilities of EEG–EMG fusion beyond basic motion classification. To this end, the goals of this thesis were to investigate expanded methods of EEG–EMG fusion and to develop a novel control system based on EEG–EMG fusion classifiers. A dataset of EEG and EMG signals was collected during dynamic elbow flexion–extension motions and used to develop EEG–EMG fusion models to classify task weight as well as motion intention. A variety of fusion methods were investigated, such as Weighted Average decision-level fusion (83.01 ± 6.04% accuracy) and Convolutional Neural Network-based input-level fusion (81.57 ± 7.11% accuracy), demonstrating that EEG–EMG fusion can classify more indirect tasks. A novel control system, referred to as a Task Weight Selective Controller (TWSC), was implemented using a Gain-Scheduling-based approach dictated by external load estimates from an EEG–EMG fusion classifier. To improve system stability, classifier prediction debouncing was also proposed to filter out misclassifications. Performance of the TWSC was evaluated using a purpose-built upper-limb brace simulator. Due to simulator limitations, no significant difference in error was observed between the TWSC and PID control; however, the results did demonstrate the feasibility of prediction debouncing, which provided smoother device motion. Continued development of the TWSC and of EEG–EMG fusion techniques will ultimately result in wearable devices that adapt to changing loads more effectively, improving the user experience during operation.
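
    Prediction debouncing, as described here, is simple to sketch: a new class label reaches the controller only after it has persisted for several consecutive windows, filtering out transient misclassifications. The hold length and interface below are assumptions for illustration, not the thesis' implementation.

        # Debounce classifier outputs before they drive the controller.
        from collections import deque

        class Debouncer:
            def __init__(self, hold=5):
                self.hold = hold                  # consecutive windows required
                self.history = deque(maxlen=hold)
                self.active = None                # label currently in effect

            def update(self, label):
                self.history.append(label)
                if len(self.history) == self.hold and len(set(self.history)) == 1:
                    self.active = label           # stable long enough: commit
                return self.active

        deb = Debouncer(hold=3)
        for pred in ["light", "heavy", "light", "light", "light"]:
            stable = deb.update(pred)             # None, None, None, None, "light"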