
    Design and Evaluation of the LOPES Exoskeleton Robot for Interactive Gait Rehabilitation

    This paper introduces a newly developed gait rehabilitation device. The device, called LOPES, combines a freely translatable and 2-D-actuated pelvis segment with a leg exoskeleton containing three actuated rotational joints: two at the hip and one at the knee. The joints are impedance controlled to allow bidirectional mechanical interaction between the robot and the training subject. Evaluation measurements show that the device allows both a "patient-in-charge" and a "robot-in-charge" mode, in which the robot is controlled either to follow or to guide a patient, respectively. Electromyography (EMG) measurements (one subject) on eight important leg muscles show that free walking in the device strongly resembles free treadmill walking, an indication that the device can offer task-specific gait training. The possibilities and limitations of using the device as a gait measurement tool are also shown; at the moment, position measurements are not accurate enough for inverse-dynamical gait analysis.
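    The impedance control described above can be summarized in a few lines. Below is a minimal sketch of a joint-level impedance law, assuming a simple spring-damper form; the gains and the reference trajectory are hypothetical, as the abstract does not specify the actual LOPES controller.

    ```python
    def impedance_torque(q, qd, q_ref, qd_ref, k, b):
        """Joint-level impedance law: render a virtual spring-damper between
        the measured joint state (q, qd) and a reference trajectory
        (q_ref, qd_ref).

        k [Nm/rad] and b [Nm*s/rad] set the interaction stiffness and damping:
        near-zero gains approximate a "patient-in-charge" (transparent) mode,
        high gains a "robot-in-charge" (guiding) mode.
        """
        return k * (q_ref - q) + b * (qd_ref - qd)

    # Hypothetical example for one hip joint at a single control tick.
    tau = impedance_torque(q=0.10, qd=0.5, q_ref=0.25, qd_ref=0.6, k=30.0, b=1.5)
    ```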

    How do Cross-View and Cross-Modal Alignment Affect Representations in Contrastive Learning?

    Various state-of-the-art self-supervised visual representation learning approaches take advantage of data from multiple sensors by aligning the feature representations across views and/or modalities. In this work, we investigate how aligning representations affects the visual features obtained from cross-view and cross-modal contrastive learning on images and point clouds. On five real-world datasets and on five tasks, we train and evaluate 108 models based on four pretraining variations. We find that cross-modal representation alignment discards complementary visual information, such as color and texture, and instead emphasizes redundant depth cues. The depth cues obtained from pretraining improve downstream depth prediction performance. Overall, cross-modal alignment also leads to more robust encoders than pretraining with cross-view alignment, especially on depth prediction, instance segmentation, and object detection.
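    As a concrete reference for the alignment objectives compared above, here is a minimal sketch of a symmetric InfoNCE loss aligning image and point-cloud embeddings; this is the standard contrastive formulation, offered as an assumption rather than the paper's exact pretraining loss.

    ```python
    import torch
    import torch.nn.functional as F

    def infonce_align(z_img, z_pcl, temperature=0.07):
        """Symmetric InfoNCE loss aligning image embeddings z_img with
        point-cloud embeddings z_pcl (both [N, D]; row i of each is a
        positive pair, all other rows serve as negatives)."""
        z_img = F.normalize(z_img, dim=1)
        z_pcl = F.normalize(z_pcl, dim=1)
        logits = z_img @ z_pcl.t() / temperature          # [N, N] similarities
        targets = torch.arange(z_img.size(0), device=z_img.device)
        # Align in both directions: image -> point cloud and back.
        return 0.5 * (F.cross_entropy(logits, targets)
                      + F.cross_entropy(logits.t(), targets))
    ```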

    UniBEV: Multi-modal 3D Object Detection with Uniform BEV Encoders for Robustness against Missing Sensor Modalities

    Multi-sensor object detection is an active research topic in automated driving, but the robustness of such detection models against missing sensor input (modality missing), e.g., due to a sudden sensor failure, is a critical problem which remains under-studied. In this work, we propose UniBEV, an end-to-end multi-modal 3D object detection framework designed for robustness against missing modalities: UniBEV can operate on LiDAR plus camera input, but also on LiDAR-only or camera-only input without retraining. To facilitate its detector head to handle different input combinations, UniBEV aims to create well-aligned Bird's Eye View (BEV) feature maps from each available modality. Unlike prior BEV-based multi-modal detection methods, all sensor modalities follow a uniform approach to resample features from the native sensor coordinate systems to the BEV features. We furthermore investigate the robustness of various fusion strategies w.r.t. missing modalities: the commonly used feature concatenation, but also channel-wise averaging, and a generalization to weighted averaging termed Channel Normalized Weights. To validate its effectiveness, we compare UniBEV to state-of-the-art BEVFusion and MetaBEV on nuScenes over all sensor input combinations. In this setting, UniBEV achieves 52.5% mAP on average over all input combinations, significantly improving over the baselines (43.5% mAP on average for BEVFusion, 48.7% mAP on average for MetaBEV). An ablation study shows the robustness benefits of fusing by weighted averaging over regular concatenation, and of sharing queries between the BEV encoders of each modality. Our code will be released upon paper acceptance. Comment: 6 pages, 5 figures.
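    The Channel Normalized Weights fusion can be made concrete with a short sketch: one learnable weight per modality and channel, softmax-normalized over whichever modalities are actually present. Class and parameter names are illustrative assumptions, not the authors' released implementation.

    ```python
    import torch
    import torch.nn as nn

    class ChannelNormalizedWeights(nn.Module):
        """Fuse per-modality BEV feature maps by a channel-wise weighted average.

        One learnable weight per (modality, channel); weights are softmax-
        normalized across the modalities that are available at inference
        time, so fusion degrades gracefully when a sensor drops out.
        """
        def __init__(self, num_modalities, channels):
            super().__init__()
            self.w = nn.Parameter(torch.zeros(num_modalities, channels))

        def forward(self, feats, available):
            # feats: list of [B, C, H, W] BEV maps, one per available modality.
            # available: list of modality indices matching feats.
            w = torch.softmax(self.w[available], dim=0)   # [M_avail, C]
            stacked = torch.stack(feats, dim=0)           # [M_avail, B, C, H, W]
            return (w[:, None, :, None, None] * stacked).sum(dim=0)

    # E.g. LiDAR-only input with modalities (0=LiDAR, 1=camera):
    # fuse = ChannelNormalizedWeights(2, 256); bev = fuse([lidar_bev], available=[0])
    ```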

    SliceMatch: Geometry-guided Aggregation for Cross-View Pose Estimation

    This work addresses cross-view camera pose estimation, i.e., determining the 3-Degrees-of-Freedom camera pose of a given ground-level image w.r.t. an aerial image of the local area. We propose SliceMatch, which consists of ground and aerial feature extractors, feature aggregators, and a pose predictor. The feature extractors extract dense features from the ground and aerial images. Given a set of candidate camera poses, the feature aggregators construct a single ground descriptor and a set of pose-dependent aerial descriptors. Notably, our novel aerial feature aggregator has a cross-view attention module for ground-view guided aerial feature selection and utilizes the geometric projection of the ground camera's viewing frustum on the aerial image to pool features. The efficient construction of aerial descriptors is achieved using precomputed masks. SliceMatch is trained using contrastive learning, and pose estimation is formulated as a similarity comparison between the ground descriptor and the aerial descriptors. Compared to the state-of-the-art, SliceMatch achieves a 19% lower median localization error on the VIGOR benchmark using the same VGG16 backbone at 150 frames per second, and a 50% lower error when using a ResNet50 backbone.
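    The final similarity comparison lends itself to a compact sketch: score every candidate pose by the cosine similarity between the ground descriptor and its pose-dependent aerial descriptor, then take the argmax. Descriptor construction (frustum projection, cross-view attention, precomputed masks) is abstracted away here.

    ```python
    import numpy as np

    def best_pose(ground_desc, aerial_descs, poses):
        """Pick the candidate pose whose aerial descriptor best matches the
        ground descriptor.

        ground_desc: [D] descriptor of the ground-level image.
        aerial_descs: [P, D] pose-dependent aerial descriptors.
        poses: [P, 3] candidate (x, y, yaw) camera poses.
        """
        g = ground_desc / np.linalg.norm(ground_desc)
        a = aerial_descs / np.linalg.norm(aerial_descs, axis=1, keepdims=True)
        scores = a @ g  # cosine similarity per candidate pose
        return poses[np.argmax(scores)], scores
    ```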

    Neuromechanical Model-Based Adaptive Control of Bilateral Ankle Exoskeletons: Biological Joint Torque and Electromyogram Reduction Across Walking Conditions

    To enable the broad adoption of wearable robotic exoskeletons in medical and industrial settings, it is crucial that they can adaptively support large repertoires of movements. We propose a new human-machine interface to simultaneously drive bilateral ankle exoskeletons during a range of 'unseen' walking conditions and transitions that were not used for establishing the control interface. The proposed approach used person-specific neuromechanical models to estimate biological ankle joint torques in real-time from measured electromyograms (EMGs) and joint angles. We call this 'neuromechanical model-based control' (NMBC). NMBC enabled six individuals to voluntarily control a bilateral ankle exoskeleton across six walking conditions, including all intermediate transitions, i.e., two walking speeds, each performed at three ground elevations. A single-subject case study was carried out on a dexterous locomotion task involving moonwalking. NMBC consistently reduced biological ankle torques, as well as the EMGs of eight ankle muscles, both within (22% torque; 12% EMG) and between walking conditions (24% torque; 14% EMG) when compared to non-assisted conditions. Torque and EMG reductions in novel walking conditions indicated that the exoskeleton operated symbiotically, as an exomuscle controlled by the operator's neuromuscular system. This opens new avenues for the systematic adoption of wearable robots in out-of-the-lab medical and occupational settings.
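    Below is a minimal sketch of one NMBC control tick, assuming a deliberately crude placeholder torque model (linear in the EMG difference with an angle-dependent gain); the paper itself uses calibrated, person-specific neuromechanical models, so every name and constant here is hypothetical.

    ```python
    import math

    def nmbc_step(emg_pf, emg_df, angle, support_ratio=0.3, gain=80.0):
        """One control tick of neuromechanical model-based control (NMBC).

        emg_pf, emg_df: rectified, low-pass-filtered plantarflexor and
        dorsiflexor EMG amplitudes, each normalized to [0, 1].
        angle: ankle angle [rad]. The gain and the cosine moment-arm term
        are stand-ins for the person-specific neuromechanical model.
        """
        tau_bio = gain * math.cos(angle) * (emg_pf - emg_df)  # estimated biological torque [Nm]
        tau_exo = support_ratio * tau_bio                     # exoskeleton supplies a fraction of it
        return tau_bio, tau_exo
    ```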

    Reproducibility of corticomuscular coherence: A comparison between static and perturbed tasks

    Corticomuscular coherence (CMC) is used to quantify functional corticomuscular coupling during a static motor task. Although the reproducibility of CMC characteristics such as peak strength and frequency within one session is good, reproducibility of CMC between sessions is limited (Pohja et al. 2005, NeuroImage). Reproducible CMC characteristics are required in order to assess changes in corticomuscular coupling in a longitudinal study design, for example during rehabilitation. We recently demonstrated that the presence of CMC in the population is substantially increased when position perturbations are applied during an isotonic force task. Here, we assessed the reproducibility of perturbed CMC compared to unperturbed CMC. Subjects (n=10) performed isotonic wrist flexion contractions against the handle of a wrist manipulator (WM) while EEG (64 channels) and EMG of the m. flexor carpi radialis were recorded in two experimental sessions separated by at least one week. The handle of the WM either kept a neutral angle (baseline task) or imposed a small angle perturbation (perturbed task). In the baseline task, 3 subjects had significant CMC in both the first and the second session. In the other 7 subjects, no significant CMC was found in either session. Between sessions, significant CMC was always found in overlapping frequency bands and generally on overlapping electrodes. In the subjects with CMC, a significant cross-correlation coefficient between the spectra in the two sessions was present (mean 0.57; range 0.30-0.79). In the perturbed task, CMC was present in 8 subjects in both sessions and absent in 1 subject in both sessions. One subject had CMC only in the second session. For the subjects with CMC, the correlation coefficient between the spectra of the two sessions was significantly larger than zero, with a mean of 0.68 (range 0.38-0.88). The presence and absence of CMC within subjects could be reproduced very well between sessions. This was also demonstrated by the significant correlation between the spectra in the two sessions; the degree of correlation was variable over subjects in both the baseline and the perturbed task. The reproducibility characteristics of CMC in a perturbed task are comparable to or slightly better than those of an unperturbed task. However, this comparison is limited by the small number of subjects with CMC in the baseline task. Perturbed CMC is present in more subjects, which is crucial when developing methods to track corticomuscular coupling over multiple sessions, for example during rehabilitation.
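    For reference, CMC is conventionally computed as the magnitude-squared coherence between an EEG channel and the rectified EMG; a minimal sketch with scipy follows. Significance testing and the perturbation protocol are beyond this sketch, and the preprocessing shown is a common-practice assumption rather than this study's exact pipeline.

    ```python
    import numpy as np
    from scipy.signal import coherence

    def cmc(eeg, emg, fs, nperseg=1024):
        """Magnitude-squared coherence between one EEG channel and the
        rectified EMG (e.g., of m. flexor carpi radialis).

        eeg, emg: 1-D arrays sampled at fs [Hz]. Returns frequencies and
        coherence values; CMC typically peaks in the beta band (15-30 Hz).
        """
        emg_rect = np.abs(emg - emg.mean())  # demean, then full-wave rectify
        f, cxy = coherence(eeg, emg_rect, fs=fs, nperseg=nperseg)
        return f, cxy
    ```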

    Myoelectric model-based control of a bi-lateral robotic ankle exoskeleton during even ground locomotion

    Individuals with neuromuscular injuries may fully benefit from wearable robots if a new class of wearable technologies is devised to assist complex movements seamlessly in everyday tasks. Among the most important tasks are locomotion activities. Current human-machine interfaces (HMIs) are challenged in enabling assistance across wide ranges of locomotion tasks. Electromyography (EMG) and computational modelling can be used to establish an interface with the neuromuscular system. We propose an HMI based on EMG-driven musculoskeletal modelling that estimates biological joint torques in real-time and uses a percentage of these to dynamically control exoskeleton-generated torques in a task-independent manner, i.e., with no need to classify locomotion modes. Proof-of-principle results on one subject showed that this approach could reduce EMGs during exoskeleton-assisted even ground locomotion compared to transparent mode (i.e., zero impedance). Importantly, results showed that a substantial portion of the biological ankle joint torque needed to walk was transferred from the human to the exoskeleton. That is, while the total human-exoskeleton ankle joint torque was always similar between assisted and zero-impedance modes, the ratio between exoskeleton-generated and human-generated torque varied substantially, with human-generated torques being dynamically compensated by the exoskeleton during assisted mode. This is a first step towards natural, continuous assistance in a large variety of movements.
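    The torque-sharing observation in the last paragraph reduces to simple bookkeeping: under a fixed support ratio the task-level ankle torque is unchanged while the human share drops by exactly the exoskeleton's contribution. A minimal sketch with illustrative numbers:

    ```python
    def torque_share(tau_total, support_ratio):
        """Split the task-level ankle torque between human and exoskeleton.

        In transparent (zero-impedance) mode the human supplies tau_total;
        in assisted mode the exoskeleton supplies support_ratio * tau_total
        and the human share drops accordingly, with the sum unchanged.
        """
        tau_exo = support_ratio * tau_total
        tau_human = tau_total - tau_exo
        return tau_human, tau_exo

    # Illustrative peak stance-phase ankle torque: 84 Nm human, 36 Nm exoskeleton.
    human, exo = torque_share(tau_total=120.0, support_ratio=0.3)
    ```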