
    Merging of Healthy Motor Modules Predicts Reduced Locomotor Performance and Muscle Coordination Complexity Post-Stroke

    Evidence suggests that the nervous system controls motor tasks using a low-dimensional modular organization of muscle activation. However, it is not clear whether such an organization applies to the coordination of human walking, nor how nervous system injury may alter the organization of motor modules and their biomechanical outputs. We first tested the hypothesis that muscle activation patterns during walking are produced through the variable activation of a small set of motor modules. In 20 healthy control subjects, EMG signals from eight leg muscles were measured across a range of walking speeds. Four motor modules identified through nonnegative matrix factorization were sufficient to account for the variability of muscle activation from step to step and across speeds. Next, consistent with the clinical notion of abnormal limb flexion-extension synergies post-stroke, we tested the hypothesis that subjects with post-stroke hemiparesis would have altered motor modules, leading to impaired walking performance. Post-stroke subjects (n = 55) exhibited a less complex coordination pattern: fewer modules were needed to account for muscle activation during walking at preferred speed compared with controls. The reduced number of modules resulted from a merging of the modules observed in healthy controls, suggesting reduced independence of neural control signals. The number of modules was correlated with preferred walking speed, speed modulation, step length asymmetry, and propulsive asymmetry. Our results suggest a common modular organization of muscle coordination underlying walking in both healthy and post-stroke subjects. Identification of motor modules may lead to new insight into impaired locomotor coordination and the underlying neural systems.
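    A minimal sketch of the module-extraction step described above, assuming EMG envelopes arranged as a nonnegative (muscles x samples) matrix and an illustrative 90% variance-accounted-for (VAF) cutoff for choosing the number of modules (the paper's exact criterion may differ); the function name and synthetic data are hypothetical:

```python
import numpy as np
from sklearn.decomposition import NMF

def extract_motor_modules(emg, max_modules=8, vaf_threshold=0.90):
    """Return the smallest module set whose NMF reconstruction reaches the VAF threshold.

    emg: (n_muscles, n_samples) array of nonnegative EMG envelopes.
    """
    for n in range(1, max_modules + 1):
        model = NMF(n_components=n, init="nndsvda", max_iter=1000)
        weights = model.fit_transform(emg)       # (n_muscles, n) module compositions
        activations = model.components_          # (n, n_samples) activation timecourses
        recon = weights @ activations
        vaf = 1.0 - np.sum((emg - recon) ** 2) / np.sum(emg ** 2)
        if vaf >= vaf_threshold:
            break
    return n, weights, activations, vaf

# Toy example: 8 "muscles" driven by 4 ground-truth modules, each bursting in a
# different phase of a gait-like cycle, plus a small noise floor.
rng = np.random.default_rng(0)
phase = np.linspace(0, 10, 2000) % 1.0
bursts = np.stack([np.exp(-((phase - c) ** 2) / 0.005) for c in (0.1, 0.35, 0.6, 0.85)])
emg = rng.uniform(size=(8, 4)) @ bursts + 0.02 * rng.uniform(size=(8, 2000))
n_modules, W, H, vaf = extract_motor_modules(emg)
print(f"selected {n_modules} modules (VAF = {vaf:.2f})")
```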

    Plug-and-Play Gesture Control Using Muscle and Motion Sensors

    As the capacity for machines to extend human capabilities continues to grow, the communication channels used must also expand. Allowing machines to interpret nonverbal commands such as gestures can help make interactions more similar to interacting with another person. Yet to be pervasive and effective in realistic scenarios, such interfaces should not require significant sensing infrastructure or per-user setup time. The presented work takes a step towards these goals by using wearable muscle and motion sensors to detect gestures without dedicated calibration or training procedures. An algorithm is presented for clustering unlabeled streaming data in real time, and it is applied to adaptive thresholding of muscle and motion signals acquired via electromyography (EMG) and an inertial measurement unit (IMU). This enables plug-and-play online detection of arm stiffening, fist clenching, rotation gestures, and forearm activation. It also augments a neural network pipeline, trained only on strategically chosen training data from previous users, to detect left, right, up, and down gestures. Together, these pipelines offer a plug-and-play gesture vocabulary suitable for remotely controlling a robot. Experiments with 6 subjects evaluate classifier performance and interface efficacy. Classifiers correctly identified 97.6% of 1,200 cued gestures, and a drone correctly responded to 81.6% of 1,535 unstructured gestures as subjects remotely controlled it through target hoops during 119 minutes of total flight time.
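    As a rough illustration of the calibration-free detection idea, the sketch below clusters an unlabeled one-dimensional feature stream (e.g. an EMG envelope) online into two running centroids, "rest" and "active", and flags a sample as an activation only once the clusters are clearly separated. The class name, learning rate, and separation margin are assumptions made for illustration, not the paper's algorithm:

```python
import numpy as np

class OnlineTwoClusterDetector:
    """Online two-centroid clustering of an unlabeled 1-D feature stream (illustrative sketch)."""

    def __init__(self, learning_rate=0.05, min_separation=0.2):
        self.lr = learning_rate               # how quickly centroids track new samples (assumed value)
        self.min_separation = min_separation  # rest/active gap required before reporting activation (assumed)
        self.low = None                       # running "rest" centroid
        self.high = None                      # running "active" centroid

    def update(self, x):
        """Ingest one sample; return True if it looks like an activation."""
        x = float(x)
        if self.low is None:                  # seed both centroids from the first sample
            self.low = self.high = x
            return False
        # assign the sample to the nearer centroid and nudge that centroid toward it
        if abs(x - self.high) < abs(x - self.low):
            self.high += self.lr * (x - self.high)
            near_active = True
        else:
            self.low += self.lr * (x - self.low)
            near_active = False
        # keep the centroids ordered so "high" always names the larger (active) cluster
        if self.low > self.high:
            self.low, self.high = self.high, self.low
            near_active = not near_active
        # only report activation once the two clusters are clearly separated
        return near_active and (self.high - self.low) > self.min_separation

# Synthetic stream: quiet baseline, a brief burst of muscle activity, then baseline again.
rng = np.random.default_rng(0)
stream = np.concatenate([rng.normal(0.1, 0.02, 200),
                         rng.normal(0.8, 0.05, 50),
                         rng.normal(0.1, 0.02, 200)])
detector = OnlineTwoClusterDetector()
flags = [detector.update(abs(x)) for x in stream]
print(f"flagged {sum(flags)} of {len(stream)} samples as active")
```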