
    Back To The Roots: Tree-Based Algorithms for Weakly Supervised Anomaly Detection

    Weakly supervised methods have emerged as a powerful tool for model-agnostic anomaly detection at the Large Hadron Collider (LHC). While these methods have shown remarkable performance on specific signatures such as di-jet resonances, their application in a more model-agnostic manner requires dealing with a larger number of potentially noisy input features. In this paper, we show that using boosted decision trees as classifiers in weakly supervised anomaly detection gives superior performance compared to deep neural networks. Boosted decision trees are well known for their effectiveness in tabular data analysis. Our results show that they not only offer significantly faster training and evaluation times, but are also robust to a large number of noisy input features. By using advanced gradient boosted decision trees in combination with ensembling techniques and an extended set of features, we significantly improve the performance of weakly supervised methods for anomaly detection at the LHC. This advance is a crucial step towards a more model-agnostic search for new physics. (11 pages, 9 figures)
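    The abstract does not spell out the training setup; the sketch below illustrates one common weakly supervised construction (a CWoLa-style data-vs-reference classification) using gradient boosted decision trees from scikit-learn with a small ensemble. The toy data, feature count, and ensembling scheme are assumptions made here for illustration, not the paper's exact configuration.

```python
# A minimal CWoLa-style sketch: a BDT ensemble is trained to separate "data"
# (which may contain an anomalous component) from a background-like reference
# sample, and high-scoring events are flagged as anomaly candidates.
# Toy data, feature count, and ensembling scheme are illustrative assumptions.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

rng = np.random.default_rng(0)

n_events, n_features = 20000, 12                 # several features are pure noise
reference = rng.normal(0.0, 1.0, size=(n_events, n_features))
data = rng.normal(0.0, 1.0, size=(n_events, n_features))
data[:500, :3] += 2.0                            # small hidden "signal" component

X = np.vstack([data, reference])
y = np.concatenate([np.ones(n_events), np.zeros(n_events)])   # weak labels only

# Ensemble of gradient boosted trees; averaging the scores reduces variance.
scores = np.zeros(n_events)
for seed in range(5):
    clf = HistGradientBoostingClassifier(max_iter=200, random_state=seed)
    clf.fit(X, y)
    scores += clf.predict_proba(data)[:, 1]
scores /= 5.0

# Events most "data-like" relative to the reference are anomaly candidates.
candidates = np.argsort(scores)[-100:]
print(f"mean score of flagged events: {scores[candidates].mean():.3f}")
print(f"true anomalies among flagged: {(candidates < 500).sum()} / 100")
```

    Averaging several independently trained BDTs damps the variance of single trainings, which is one reason tree ensembles tolerate many noisy input features well.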

    Cooperative Goal Generation for Reaching Tasks in Robot-Assisted Rehabilitation

    Robot-assisted neurorehabilitation requires automated generation of goal positions for reaching tasks in functional movement therapy. In state-of-the-art solutions, these positions are determined by a motivational therapy game either through constraints on the end-effector (2D or 3D games) or on individual arm joints (1D games). Consequently, these positions cannot be adapted to the patients' specific needs by the therapist, and the effectiveness of the training is reduced. We solve this issue by generating goal positions using Gaussian Mixture Models and probability density maps based on the active range of motion of the patient and the desired activities, while remaining compliant with existing game constraints. Therapists can modify the goal generation via intuitive difficulty and activity parameters. The pipeline was tested on the upper-limb exoskeleton ANYexo 2.0. We have shown that the range-of-motion exploration rate could be altered from 0.39% to 5.9% per task and that our method successfully generated a sequence of reaching tasks that matched the range of motion of the selected activity, up to an inlier accuracy of 78.9%. The results demonstrate that the responsibilities of the therapy game (i.e., motivating the patient) and the therapist (i.e., individualizing the training) can be distributed properly. We believe that with our pipeline, effective cooperation between the involved agents is achieved and the provided therapy can be improved.
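    As a rough illustration of the density-map idea, the sketch below fits a Gaussian Mixture Model to toy active range-of-motion data and picks goals whose local density matches a difficulty parameter. The mapping from difficulty to the density weighting is an assumption made here for illustration; the paper's actual rule and its handling of game constraints are not reproduced.

```python
# A rough sketch of GMM-based goal generation: fit a density map to the
# patient's active range of motion and pick goals according to a difficulty
# parameter. The difficulty-to-density mapping is an illustrative assumption.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Toy active range-of-motion data: 3D end-effector positions (m) recorded
# while the patient moves freely.
reachable = rng.normal(loc=[0.4, 0.0, 1.1], scale=[0.10, 0.15, 0.08],
                       size=(500, 3))

gmm = GaussianMixture(n_components=3, random_state=0).fit(reachable)

def generate_goal(difficulty: float, n_candidates: int = 200) -> np.ndarray:
    """Pick a goal position for a difficulty value in [0, 1].

    Low difficulty favours well-explored (high-density) regions; high
    difficulty favours the border of the active range of motion.
    """
    candidates, _ = gmm.sample(n_candidates)
    log_density = gmm.score_samples(candidates)
    # The weight flips sign at difficulty 0.5: prefer high-density (easy)
    # or low-density (hard) candidates.
    utility = (1.0 - 2.0 * difficulty) * log_density
    return candidates[np.argmax(utility)]

print("easy goal:", generate_goal(0.2))
print("hard goal:", generate_goal(0.9))
```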

    ARMStick - An Intuitive Therapist Interface for Upper-Limb Rehabilitation Robots

    Currently, therapists struggle with the interaction with rehabilitation robots due to non-intuitive interfaces, which limits their acceptance of these robots. This paper presents the development of ARMStick, a lightweight and small robotic interface in the shape of a human arm with 4 actuated and 3 unactuated joints, to facilitate the interaction between therapists and rehabilitation robots. It allows therapists to intuitively perceive joint-dependent data as recorded by rehabilitation robots, and to teach poses and trajectories to individualize the therapy to the patient. Its range of motion (RoM) covers the RoM of a healthy human. The device's measuring accuracy (95% CI < ±0.322°) and movement accuracy (70% CI < ±5.23°) lie within the confidence interval of average visual perception. A demonstration of the device to 5 therapists indicated that it could indeed alleviate efficiency and efficacy bottlenecks in current robot-assisted therapy. A comparison of ARMStick to two visual user interfaces showed a decrease in mean adaptation time from 15 s to 5 s for three arm configurations presented to the therapists.
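    The abstract reports accuracy as confidence-interval bounds; the small sketch below shows one common way such a bound can be computed from repeated joint-angle measurements against a known reference (an empirical-quantile bound). The data and procedure are illustrative assumptions, not the evaluation protocol used for ARMStick.

```python
# A small sketch of how a "95% CI < +/-0.322 deg" style accuracy bound can be
# derived from repeated joint-angle measurements against a known reference.
# The empirical-quantile bound shown here is an illustrative choice, not
# necessarily the procedure used for ARMStick.
import numpy as np

rng = np.random.default_rng(2)

reference_deg = 30.0                                   # ground-truth joint angle
measured_deg = reference_deg + rng.normal(0.0, 0.12, size=1000)

errors = np.abs(measured_deg - reference_deg)

bound_95 = np.quantile(errors, 0.95)   # 95% of errors lie within +/- this bound
bound_70 = np.quantile(errors, 0.70)
print(f"95% of errors within +/-{bound_95:.3f} deg")
print(f"70% of errors within +/-{bound_70:.3f} deg")
```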

    Polymorphic Control Framework for Automated and Individualized Robot-Assisted Rehabilitation

    Robots were introduced in the field of upper-limb neurorehabilitation to relieve therapists of physical labor and to provide high-intensity therapy to the patient. A variety of control methods have been developed that incorporate patients' physiological and biomechanical states to adapt the provided assistance automatically. Higher-level states such as the selected type of assistance, the chosen task characteristics, the defined session goals, and the given patient impairments are often neglected or modeled into tight requirements, low-dimensional study designs, and narrow inclusion criteria, so that the presented solutions cannot be transferred to other tasks, robotic devices, or target groups. In this work, we present the design of a modular high-level control framework based on invariant states covering all decision layers in therapy. We verified the functionality of our framework on the assistance and task layers by laying out the invariant states based on the characteristics of twenty examined state-of-the-art controllers. We then integrated four controllers on each layer and designed two algorithms that automatically select suitable controllers. The framework was deployed on an arm rehabilitation robot and tested on one participant acting as a patient. We observed plausible system reactions to external changes introduced by a second operator representing a therapist. We believe that this work will boost the development of novel controllers and selection algorithms for cooperative decision-making on layers other than assistance, and ease the transferability and integration of existing solutions on lower layers into arbitrary robotic systems.
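    To make the layered-selection idea concrete, the sketch below implements a minimal controller registry in which each controller declares its layer, an applicability test on an invariant therapy state, and a suitability score used by the selection algorithm. Layer names, state keys, and the selection rule are illustrative assumptions, not the framework's actual interface.

```python
# A minimal sketch of a layered control framework with automatic controller
# selection based on invariant therapy states. Layer names, state keys and
# the selection rule are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

State = Dict[str, float]   # invariant therapy state, e.g. impairment level

@dataclass
class Controller:
    name: str
    layer: str                               # e.g. "assistance" or "task"
    is_applicable: Callable[[State], bool]   # admissibility on the current state
    score: Callable[[State], float]          # suitability used for selection

@dataclass
class ControlFramework:
    controllers: List[Controller] = field(default_factory=list)

    def register(self, ctrl: Controller) -> None:
        self.controllers.append(ctrl)

    def select(self, layer: str, state: State) -> Controller:
        """Pick the most suitable applicable controller on a given layer."""
        candidates = [c for c in self.controllers
                      if c.layer == layer and c.is_applicable(state)]
        if not candidates:
            raise RuntimeError(f"no applicable controller on layer '{layer}'")
        return max(candidates, key=lambda c: c.score(state))

framework = ControlFramework()
framework.register(Controller(
    name="gravity_support", layer="assistance",
    is_applicable=lambda s: s["impairment"] > 0.5,
    score=lambda s: s["impairment"]))
framework.register(Controller(
    name="transparent_mode", layer="assistance",
    is_applicable=lambda s: s["impairment"] <= 0.5,
    score=lambda s: 1.0 - s["impairment"]))

state = {"impairment": 0.7, "fatigue": 0.2}
print(framework.select("assistance", state).name)   # -> gravity_support
```

    Keeping the applicability test and suitability score per controller lets new controllers or selection algorithms be added on any layer without touching the rest of the stack, which matches the modularity goal stated in the abstract.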

    Physical Human-Robot Interaction with Real Active Surfaces using Haptic Rendering on Point Clouds

    During robot-assisted therapy of hemiplegic patients, interaction with the patient must be intrinsically safe. Straightforward collision-avoidance solutions can meet this safety requirement with conservative margins. These margins heavily reduce the robot's workspace and make interaction with the patient's unguided body parts impossible. However, interaction with one's own body is highly beneficial from a therapeutic point of view. We tackle this problem by combining haptic rendering techniques with classical computer vision methods. Our proposed solution consists of a pipeline that builds collision objects from point clouds in real time and a controller that renders the haptic interaction. The raw sensor data is processed to overcome noise and occlusion problems. Our proposed approach is validated on the 6-DoF exoskeleton ANYexo for direct impacts, sliding scenarios, and dynamic collision surfaces. The results show that this method has the potential to successfully prevent collisions and allow haptic interaction in highly dynamic environments. We believe that this work significantly adds to the usability of current exoskeletons by enabling virtual haptic interaction with the patient's body parts in human-robot therapy.
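    A minimal sketch of the haptic-rendering step, assuming a nearest-neighbour query on the point cloud and a spring-damper contact law: the nearest cloud point serves as a local surface proxy, and a repulsive force is rendered once the end-effector enters a safety margin around it. The gains, margin, and proxy construction are illustrative; the paper's pipeline for building collision objects and handling noise and occlusion is not reproduced.

```python
# A minimal sketch of rendering a repulsive contact force from a point cloud:
# the nearest cloud point acts as a local surface proxy and a spring-damper
# law pushes the end-effector out of a safety margin around it.
# Gains, margin, and proxy construction are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)

# Toy point cloud of a body surface (e.g. the patient's other arm), metres.
cloud = rng.normal(loc=[0.3, 0.0, 1.0], scale=0.02, size=(2000, 3))
tree = cKDTree(cloud)

STIFFNESS = 800.0    # N/m, illustrative
DAMPING = 20.0       # N*s/m, illustrative
MARGIN = 0.03        # m, contact starts this far from the nearest cloud point

def contact_force(ee_pos: np.ndarray, ee_vel: np.ndarray) -> np.ndarray:
    """Spring-damper force pushing the end-effector away from the cloud."""
    dist, idx = tree.query(ee_pos)
    if dist >= MARGIN:
        return np.zeros(3)                    # outside the contact margin
    normal = (ee_pos - cloud[idx]) / max(dist, 1e-6)
    penetration = MARGIN - dist
    normal_vel = float(normal @ ee_vel)
    return STIFFNESS * penetration * normal - DAMPING * normal_vel * normal

f = contact_force(np.array([0.31, 0.0, 1.0]), np.array([-0.05, 0.0, 0.0]))
print("rendered force [N]:", f)
```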

    ANYexo 2.0: A Fully-Actuated Upper-Limb Exoskeleton for Manipulation and Joint-Oriented Training in all Stages of Rehabilitation

    We developed an exoskeleton for neurorehabilitation that covered all relevant degrees of freedom of the human arm while providing enough range of motion, speed, strength, and haptic-rendering capability for the therapy of both severely affected (e.g., mobilization) and mildly affected patients (e.g., strength and speed training). The ANYexo 2.0, uniting these capabilities, could be the vanguard of highly versatile therapeutic robots applicable to a broad target group and an extensive range of exercises, thus supporting the practical adoption of these devices in clinics. The unique kinematic structure of the robot and the bio-inspired controlled shoulder coupling allowed training for most activities of daily living. We demonstrated this capability with 15 sample activities, including interaction with real objects and the patient's own body with the robot in transparent mode. The robot's joints can reach 200%, 398%, and 354% of the speed required during activities of daily living at the shoulder, elbow, and wrist, respectively. Furthermore, the robot can provide isometric strength training. We present a detailed analysis of the kinematic properties and propose algorithms for an intuitive control implementation.

    Score rectification for online assessments in robot-assisted arm rehabilitation

    Relative comparison of clinical scores to measure the effectiveness of neurorehabilitation therapy is possible through a series of discrete measurements during the rehabilitation period within specifically designed task environments. Robots allow quantitative, continuous measurement of data. However, the resulting robotic scores are likewise only comparable within a similar context, e.g., the same type of task. We propose a method to decouple these scores from their respective context through functional orthogonalization and compensation of the confounding factors based on a data-driven sensitivity analysis of the user performance. The method was validated for the established accuracy score with variable arm weight support, provoked muscle fatigue, and different task directions on 6 participants from our arm exoskeleton group on the ANYexo robot. In the best case, the standard deviation of the assessed score under changing context could be reduced by a factor of 3.2. This paves the way to context-independent, quantitative online assessments recorded autonomously with robots.
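    The sketch below illustrates the compensation idea with a simplified linear stand-in: the sensitivity of a toy accuracy score to context factors (weight support, fatigue, task direction) is estimated by regression and then removed from the raw score. The factor names and the linear model are assumptions for illustration; the paper's functional orthogonalization is not reproduced.

```python
# A simplified sketch of compensating an assessment score for known context
# factors by regressing them out. A linear stand-in for the functional
# orthogonalization described in the abstract; factor names and the model
# form are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)

# Toy context: arm weight support level, fatigue proxy, task direction (rad).
n = 300
context = np.column_stack([
    rng.uniform(0.0, 1.0, n),          # weight support
    rng.uniform(0.0, 1.0, n),          # fatigue
    rng.uniform(-np.pi, np.pi, n),     # task direction
])
true_ability = 0.7
raw_score = (true_ability
             + 0.2 * context[:, 0]     # easier with more weight support
             - 0.3 * context[:, 1]     # worse when fatigued
             + rng.normal(0.0, 0.02, n))

# Sensitivity of the score to the context factors, estimated from data.
model = LinearRegression().fit(context, raw_score)

# Rectified score: remove the context-dependent part, keep the mean level.
rectified = raw_score - model.predict(context) + raw_score.mean()

print("std of raw score:      ", raw_score.std())
print("std of rectified score:", rectified.std())   # shrinks noticeably
```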

    Digital Guinea Pig: Merits and Methods of Human-in-the-Loop Simulation for Upper-Limb Exoskeletons

    Exoskeletons operate in continuous haptic interaction with a human limb. Thus, this interaction is a key factor to consider during the development of hardware and control policies for these devices. Physics simulations can complement real-world experiments for prototype validation, leading to higher efficiency in hardware and software development iterations as well as increased safety for participants and robot hardware. Here, we present a simulation framework of the full rigid-body dynamics of a coupled human and exoskeleton arm, built to validate the full software stack. We present a method to model the human-robot interaction dynamics as decoupled spring-damper systems based on anthropometric data. Further, we demonstrate the application of the simulation framework to predict the closed-loop haptic-rendering performance of a 9-DOF exoskeleton in interaction with a human. The simulation was capable of reproducing the closed-loop system's reaction to an impact on a haptic wall. The intrusion into the compliant walls was predicted with a relative accuracy of 6% to 13%. Admissible control gains could be predicted with an accuracy of around 14%, with indications that modeling the torque-tracking bandwidth of the actuators would further increase prediction accuracy. Hence, the simulation is valuable for validating prototype software, developing intuition, and gaining a better understanding of the complex characteristics of the coupled system dynamics, even though its quantitative predictive power is limited.
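    As an illustration of the decoupled spring-damper interaction model, the sketch below simulates a human limb segment and an exoskeleton link as two 1-DOF masses connected by a cuff spring-damper. The masses, stiffness, damping, and integrator are illustrative assumptions, not parameters taken from the paper.

```python
# A minimal sketch of a spring-damper coupling between a human limb segment
# and an exoskeleton link, simulated as two 1-DOF masses. Masses, cuff
# stiffness/damping, and the integrator are illustrative assumptions.
import numpy as np

M_HUMAN, M_ROBOT = 2.0, 5.0          # kg, effective segment/link masses
K_CUFF, D_CUFF = 3000.0, 50.0        # N/m and N*s/m at the cuff interface
DT = 1e-3                            # s, simulation step

x_h, v_h = 0.0, 0.0                  # human segment state
x_r, v_r = 0.0, 0.0                  # robot link state

log = []
for step in range(2000):             # 2 s of simulated time
    f_human = 5.0 if step * DT > 0.5 else 0.0    # human pushes after 0.5 s
    f_robot = 0.0                                # robot in transparent mode

    # Cuff interaction force, acting with opposite sign on the two bodies.
    f_cuff = K_CUFF * (x_h - x_r) + D_CUFF * (v_h - v_r)

    a_h = (f_human - f_cuff) / M_HUMAN
    a_r = (f_robot + f_cuff) / M_ROBOT

    v_h += a_h * DT; x_h += v_h * DT             # semi-implicit Euler step
    v_r += a_r * DT; x_r += v_r * DT
    log.append((step * DT, x_h, x_r, f_cuff))

t, xh, xr, fc = np.array(log).T
print(f"final positions: human {xh[-1]:.3f} m, robot {xr[-1]:.3f} m")
print(f"peak interaction force: {np.abs(fc).max():.1f} N")
```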