
    A Physics-Informed Low-Shot Learning For sEMG-Based Estimation of Muscle Force and Joint Kinematics

    Estimation of muscle force and joint kinematics from surface electromyography (sEMG) is essential for real-time biomechanical analysis of the dynamic interplay among neural muscle stimulation, muscle dynamics, and kinetics. Recent advances in deep neural networks (DNNs) have shown the potential to improve biomechanical analysis in a fully automated and reproducible manner. However, the small sample sizes and the demand for physical interpretability in biomechanical analysis limit the application of DNNs. This paper presents a novel physics-informed low-shot learning method for sEMG-based estimation of muscle force and joint kinematics. The method seamlessly integrates Lagrange's equation of motion and an inverse dynamic muscle model into the generative adversarial network (GAN) framework, enabling structured feature decoding and extrapolated estimation from small sample data. Specifically, Lagrange's equation of motion is introduced into the generative model to constrain the structured decoding of high-level features to follow the laws of physics, and a physics-informed policy gradient is designed to improve adversarial learning efficiency by rewarding consistency between the physical representations of the extrapolated estimations and the physical references. Experimental validations are conducted in two scenarios (walking trials and wrist motion trials). Results indicate that the estimated muscle forces and joint kinematics are unbiased with respect to physics-based inverse dynamics, and that the method outperforms the selected benchmarks, including a physics-informed convolutional neural network (PI-CNN), a vanilla generative adversarial network (GAN), and a multi-layer extreme learning machine (ML-ELM). (17 pages, 8 figures)
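    As a rough illustration of the physics-informed idea described above, the sketch below (PyTorch) adds a Lagrangian-style penalty to a GAN generator loss. The single-joint equation of motion, the finite-difference residual, and the weight `lambda_phys` are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming a toy single-joint equation of motion
# I*theta'' + b*theta' - r*F = 0 as the physics constraint.
import torch

def lagrangian_residual(theta, force, dt=0.01, inertia=1.0,
                        damping=0.1, moment_arm=0.05):
    """Finite-difference residual of the toy equation of motion.
    theta, force: tensors of shape (batch, time)."""
    dtheta = (theta[:, 1:] - theta[:, :-1]) / dt        # angular velocity
    ddtheta = (dtheta[:, 1:] - dtheta[:, :-1]) / dt     # angular acceleration
    torque = moment_arm * force[:, 1:-1]                # muscle torque
    return inertia * ddtheta + damping * dtheta[:, :-1] - torque

def generator_loss(disc_scores, theta_pred, force_pred, lambda_phys=1.0):
    """Standard adversarial term plus a physics-consistency penalty."""
    adv = -torch.log(disc_scores + 1e-8).mean()         # fool the discriminator
    phys = lagrangian_residual(theta_pred, force_pred).pow(2).mean()
    return adv + lambda_phys * phys
```

    In this sketch the physics term plays the role of the structured decoding constraint: predictions that violate the assumed equation of motion are penalised even when they fool the discriminator.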

    Prediction and control in human neuromusculoskeletal models

    Computational neuromusculoskeletal modelling enables the generation and testing of hypotheses about human movement on a large scale, in silico. Humanoid models, which increasingly aim to replicate the full complexity of the human nervous and musculoskeletal systems, are built on extensive prior knowledge, extracted from anatomical imaging and kinematic and kinetic measurement, and codified as a model description. Where inverse dynamic analysis is applied, its basis is in Newton's laws of motion, and solving for muscular redundancy requires invoking knowledge of central nervous motor strategy. This epistemological approach contrasts strongly with the models of machine learning, which are generally over-parameterised and largely data-driven. Even as spectacular performance has been delivered by these models in a number of discrete domains of artificial intelligence, work towards general human-level intelligence has faltered, leading many to wonder whether the data-driven approach is fundamentally limited, and spurring efforts to combine machine learning with knowledge-based modelling. Through a series of five studies, this thesis explores the combination of neuromusculoskeletal modelling with machine learning to enhance the core tasks of prediction and control. Several principles for the development of clinically useful artificially intelligent systems emerge: stability, computational efficiency, and incorporation of prior knowledge.

    The first study concerns the use of neural network function approximators for the prediction of internal forces during human movement, an important task with many clinical applications, but one for which the standard modelling tools are slow and cumbersome. By training on a large dataset of motions and their corresponding forces, state-of-the-art performance is demonstrated, with many-fold increases in inference speed enabling the deployment of trained models in a real-time biofeedback system. Neural networks trained in this way, to imitate an optimal controller, encode a mapping from high-level movement descriptors to actuator commands, and may thus be deployed in simulation as policies to control the actions of humanoid models. Unfortunately, the high complexity of realistic simulation makes stable control a challenging task, beyond the capabilities of such naively trained models.

    The objective of the second study was to improve the performance and stability of policy-based controllers for humanoid models in simulation. A novel technique was developed, borrowing from established unsupervised adversarial methods in computer vision. This technique enabled significant gains in performance relative to a neural network baseline, without requiring additional access to the optimal controller.

    For the third study, increases in the capabilities of these policy-based controllers were sought. Reinforcement learning is widely considered the most powerful means of optimising such policies, but it is computationally inefficient, and this inefficiency limits its clinical utility. To mitigate this problem, a novel framework was developed that makes use of domain-specific knowledge present in motion data and in an inverse model of the biomechanical system. Training on simple desktop hardware, this framework enabled rapid initialisation of humanoid models able to move naturally through a three-dimensional simulated environment, with 900-fold improvements in sample efficiency relative to a related technique based on pure reinforcement learning. After training with subject-specific anatomical parameters and motion data, the learned policies represent personalised models of motor control that may be further interrogated to test hypotheses about movement.

    For the fourth study, subject-specific controllers were used as the substrate for transfer learning: kinematic constraints were removed and the controllers were optimised with respect to the magnitude of the medial knee joint reaction force, an important biomechanical variable in osteoarthritis of the knee. Models learned new kinematic strategies for reducing this biomarker, which were subsequently validated by their use, in the real world, to construct subject-specific routines for real-time gait retraining. Six of eight subjects were able to reduce medial knee joint loading by pursuing the personalised kinematic targets found in simulation.

    Personalisation of assistive devices, such as limb prostheses, is another area of growing interest, and one for which computational frameworks promise cost-effective solutions. Reinforcement learning provides powerful techniques for this task, but expanding the scope of optimisation to include previously static elements of a prosthesis is problematic because of the added complexity and the resulting sample inefficiency. The fifth and final study demonstrates a new algorithm that leverages the methods described in the previous studies, together with additional techniques for variance control, to surmount this problem, improving sample efficiency while, through the use of prior knowledge encoded in motion data, providing a rational means of determining optimality in the prosthesis. Trained models were able to jointly optimise motor control and prosthesis design to improve performance in a walking task, and optimised designs were robust to both random seed and reward specification. This algorithm could be used to speed the design and production of real personalised prostheses, representing a potent realisation of the potential benefits of combining reinforcement learning with realistic neuromusculoskeletal modelling.
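    To make the first study's approach concrete, the sketch below (PyTorch) shows the kind of supervised surrogate it describes: a feed-forward network trained to map movement descriptors (e.g. joint angles and velocities) to internal forces computed offline by a conventional musculoskeletal pipeline. The layer sizes, input/output dimensions, and training loop are assumptions for illustration, not the thesis code.

```python
# Minimal sketch of a supervised force-prediction surrogate.
import torch
import torch.nn as nn

class ForcePredictor(nn.Module):
    def __init__(self, n_kinematics=20, n_forces=10, hidden=256):
        super().__init__()
        # maps a vector of movement descriptors to predicted internal forces
        self.net = nn.Sequential(
            nn.Linear(n_kinematics, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_forces),
        )

    def forward(self, kinematics):
        return self.net(kinematics)

def train_step(model, optimiser, kinematics, target_forces):
    """One regression step against forces produced by a slow,
    conventional inverse-dynamics analysis."""
    optimiser.zero_grad()
    loss = nn.functional.mse_loss(model(kinematics), target_forces)
    loss.backward()
    optimiser.step()
    return loss.item()
```

    Once trained, inference is a single forward pass, which is what makes the many-fold speed-up and real-time biofeedback deployment described above plausible.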

    Endoscopic Ultrasound Image Synthesis Using a Cycle-Consistent Adversarial Network

    Endoscopic ultrasound (EUS) is a challenging procedure that requires skill in both endoscopy and ultrasound image interpretation. Classification of key anatomical landmarks visible on EUS images can assist the gastroenterologist during navigation. Current applications of deep learning have shown the ability to classify ultrasound images automatically with high accuracy. However, these techniques require a large amount of labelled data, which is time-consuming to obtain and, in the case of EUS, is also difficult to produce retrospectively due to the lack of 3D context. In this paper, we propose the use of an image-to-image translation method to create synthetic EUS (sEUS) images from CT data, which can serve as a data augmentation strategy when EUS data is scarce. We train a cycle-consistent adversarial network on unpaired EUS images and CT slices, extracted so that they mimic plausible EUS views, to generate sEUS images of the pancreas, aorta, and liver. We quantitatively evaluate the use of sEUS images in a classification sub-task and assess the Fréchet Inception Distance. We show that synthetic data obtained from CT data imposes only a minor classification accuracy penalty and may help generalization to new, unseen patients. The code and a dataset containing generated sEUS images are available at: https://ebonmati.github.io
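    For context, the sketch below (PyTorch) shows the cycle-consistency objective used by CycleGAN-style translators of the kind described here. The generator and discriminator names (G_ct2eus, G_eus2ct, D_eus, D_ct) and the weight `lambda_cyc` are illustrative placeholders, not the authors' code.

```python
# Minimal sketch of a CycleGAN-style generator objective for unpaired
# CT-to-EUS translation.
import torch
import torch.nn.functional as F

def generator_objective(G_ct2eus, G_eus2ct, D_eus, D_ct,
                        real_ct, real_eus, lambda_cyc=10.0):
    # translate in both directions
    fake_eus = G_ct2eus(real_ct)
    fake_ct = G_eus2ct(real_eus)

    # least-squares adversarial terms: each generator tries to fool its discriminator
    d_fake_eus = D_eus(fake_eus)
    d_fake_ct = D_ct(fake_ct)
    adv = F.mse_loss(d_fake_eus, torch.ones_like(d_fake_eus)) + \
          F.mse_loss(d_fake_ct, torch.ones_like(d_fake_ct))

    # cycle-consistency: translating there and back should recover the input
    cyc = F.l1_loss(G_eus2ct(fake_eus), real_ct) + \
          F.l1_loss(G_ct2eus(fake_ct), real_eus)

    return adv + lambda_cyc * cyc
```

    The cycle term is what allows training on unpaired EUS images and CT slices: no pixel-aligned ground truth is needed, only the requirement that a round trip through both generators reconstructs the input.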