    Muscle Synergies Facilitate Computational Prediction of Subject-Specific Walking Motions.

    Researchers have explored a variety of neurorehabilitation approaches to restore normal walking function following a stroke. However, there is currently no objective means for prescribing and implementing treatments that are likely to maximize recovery of walking function for any particular patient. As a first step toward optimizing neurorehabilitation effectiveness, this study develops and evaluates a patient-specific synergy-controlled neuromusculoskeletal simulation framework that can predict walking motions for an individual post-stroke. The main question we addressed was whether driving a subject-specific neuromusculoskeletal model with muscle synergy controls (5 per leg) facilitates generation of accurate walking predictions compared to a model driven by muscle activation controls (35 per leg) or joint torque controls (5 per leg). To explore this question, we developed a subject-specific neuromusculoskeletal model of a single high-functioning hemiparetic subject using instrumented treadmill walking data collected at the subject's self-selected speed of 0.5 m/s. The model included subject-specific representations of lower-body kinematic structure, foot-ground contact behavior, electromyography-driven muscle force generation, and neural control limitations and remaining capabilities. Using direct collocation optimal control and the subject-specific model, we evaluated the ability of the three control approaches to predict the subject's walking kinematics and kinetics at two speeds (0.5 and 0.8 m/s) for which experimental data were available from the subject. We also evaluated whether synergy controls could predict a physically realistic gait period at one speed (1.1 m/s) for which no experimental data were available. All three control approaches predicted the subject's walking kinematics and kinetics (including ground reaction forces) well for the model calibration speed of 0.5 m/s. However, only activation and synergy controls could predict the subject's walking kinematics and kinetics well for the faster non-calibration speed of 0.8 m/s, with synergy controls predicting the new gait period the most accurately. When used to predict how the subject would walk at 1.1 m/s, synergy controls predicted a gait period close to that estimated from the linear relationship between gait speed and stride length. These findings suggest that our neuromusculoskeletal simulation framework may be able to bridge the gap between patient-specific muscle synergy information and resulting functional capabilities and limitations.
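
    As a rough illustration of the synergy-control idea, the sketch below extracts five synergies from a placeholder EMG matrix using non-negative matrix factorization, a common technique for muscle synergy analysis, and then estimates a gait period from an assumed linear speed-stride relationship. The data, channel count, and fitted coefficients are illustrative assumptions, not the study's calibrated subject-specific model.

```python
# Minimal sketch, assuming NMF-based synergy extraction; the data and
# dimensions are placeholders, not the study's calibrated model.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_muscles, n_samples = 35, 1000                 # e.g., 35 muscle activations per leg
emg = np.abs(rng.standard_normal((n_muscles, n_samples)))  # placeholder EMG envelopes

n_synergies = 5                                 # 5 synergy controls per leg, as in the study
nmf = NMF(n_components=n_synergies, init="nndsvda", max_iter=500)
W = nmf.fit_transform(emg)                      # muscle weights, shape (35, 5)
C = nmf.components_                             # synergy time courses, shape (5, 1000)

# Variance accounted for by the low-dimensional synergy reconstruction
vaf = 1.0 - np.sum((emg - W @ C) ** 2) / np.sum(emg ** 2)
print(f"Variance accounted for by {n_synergies} synergies: {vaf:.2f}")

# Gait period from a linear speed-stride relationship (hypothetical coefficients)
speed = 1.1                                     # m/s, the speed with no experimental data
stride_length = 0.55 + 0.6 * speed              # hypothetical linear fit (m)
print(f"Estimated gait period at {speed} m/s: {stride_length / speed:.2f} s")
```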

    LCrowdV: Generating Labeled Videos for Simulation-based Crowd Behavior Learning

    We present a novel procedural framework to generate an arbitrary number of labeled crowd videos (LCrowdV). The resulting crowd video datasets are used to design accurate algorithms or training models for crowded scene understanding. Our overall approach is composed of two components: a procedural simulation framework for generating crowd movements and behaviors, and a procedural rendering framework to generate different videos or images. Each video or image is automatically labeled based on the environment, number of pedestrians, density, behavior, flow, lighting conditions, viewpoint, noise, etc. Furthermore, we can increase the realism by combining synthetically-generated behaviors with real-world background videos. We demonstrate the benefits of LCrowdV over prior labeled crowd datasets by improving the accuracy of pedestrian detection and crowd behavior classification algorithms. LCrowdV will be released on the WWW.
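
    To make the labeling scheme concrete, the sketch below procedurally samples the per-clip label parameters the abstract lists (environment, pedestrian count, density, behavior, lighting, viewpoint, noise). The field names and value ranges are hypothetical, not the released dataset's schema.

```python
# Minimal sketch of procedurally sampling crowd-video labels; the schema
# and value ranges here are assumptions for illustration only.
import random
from dataclasses import dataclass, asdict

@dataclass
class CrowdVideoLabel:
    environment: str
    num_pedestrians: int
    density: float               # pedestrians per square meter
    behavior: str
    lighting: str
    viewpoint_deg: float         # camera elevation angle
    noise_level: float

def sample_label(rng: random.Random) -> CrowdVideoLabel:
    return CrowdVideoLabel(
        environment=rng.choice(["street", "mall", "stadium", "plaza"]),
        num_pedestrians=rng.randint(10, 500),
        density=round(rng.uniform(0.1, 4.0), 2),
        behavior=rng.choice(["commuting", "queueing", "gathering", "evacuating"]),
        lighting=rng.choice(["day", "dusk", "night"]),
        viewpoint_deg=round(rng.uniform(0.0, 90.0), 1),
        noise_level=round(rng.uniform(0.0, 1.0), 2),
    )

rng = random.Random(42)
for label in (sample_label(rng) for _ in range(3)):   # each label drives one rendered clip
    print(asdict(label))
```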

    Using humanoid robots to study human behavior

    Our understanding of human behavior advances as our humanoid robotics work progresses, and vice versa. This team's work focuses on trajectory formation and planning, learning from demonstration, oculomotor control, and interactive behaviors. They are programming robotic behavior based on how we humans “program” behavior in, or train, each other.

    XBATTLE: A Dynamic Distributed Model of Generalized Military Conflict

    A dynamic distributed model is presented that reproduces the dynamics of a wide range of battle scenarios with a general and abstract representation. The model illustrates the rich dynamic behavior that can be achieved with a simple generic model.