    IMU-based human activity recognition and payload classification for low-back exoskeletons

    Work-related musculoskeletal disorders affect a large share of the world's population, and low-back pain in particular is the leading cause of absence from work in the industrial sector. Robotic exoskeletons have great potential to improve industrial workers' health and quality of life. Nonetheless, current solutions are often limited by sub-optimal control systems: because they operate in dynamic environments, failure to adapt to the wearer and the task may be limiting exoskeleton adoption in occupational scenarios. In this context, we present a deep-learning-based approach that exploits inertial sensors to provide industrial exoskeletons with human activity recognition and adaptive payload compensation. Inertial measurement units are easily wearable or embeddable in any industrial exoskeleton. We used Long Short-Term Memory (LSTM) networks both to perform human activity recognition and to classify the weight of lifted objects up to 15 kg. We found a median F1 score of 90.80% (activity recognition) and 87.14% (payload estimation) with subject-specific models trained and tested on 12 (6M-6F) young healthy volunteers. We also evaluated the applicability of this approach with an in-lab real-time test in a simulated target scenario. These high-level algorithms may be useful to fully exploit the potential of powered exoskeletons and achieve symbiotic human-robot interaction.
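
    The abstract does not include code, but the core idea — an LSTM that classifies fixed-length windows of IMU samples into activities (or, with a second identically structured network, payload classes) — can be sketched in a few lines. The following PyTorch snippet is a minimal illustration under assumed parameters: a 6-axis IMU, 2-second windows at 100 Hz, a 64-unit hidden state, and 5 output classes are all placeholders, not values from the paper.

```python
import torch
import torch.nn as nn

class IMUActivityLSTM(nn.Module):
    """Minimal LSTM classifier over fixed-length IMU windows.

    Input:  (batch, time_steps, channels) windows of accelerometer and
            gyroscope samples from the wearable IMUs.
    Output: (batch, num_classes) logits over activity labels (or payload
            classes, in a second network of the same structure).
    All sizes below are illustrative assumptions, not the paper's values.
    """
    def __init__(self, channels=6, hidden=64, num_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(input_size=channels, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):
        out, _ = self.lstm(x)          # out: (batch, time, hidden)
        return self.head(out[:, -1])   # classify from the last time step

# Example: a batch of 8 two-second windows at 100 Hz from a 6-axis IMU.
model = IMUActivityLSTM(channels=6, num_classes=5)
logits = model(torch.randn(8, 200, 6))
print(logits.shape)  # torch.Size([8, 5])
```

    Reading only the final time step's hidden state is one common way to summarize a window for classification; pooling over all time steps is an equally plausible alternative.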

    Context-Aware Data Association for Multi-Inhabitant Sensor-Based Activity Recognition

    Recognizing the activities of daily living (ADLs) in multi-inhabitant settings is a challenging task. One of the major challenges is the so-called data association problem: how to assign to each user the environmental sensor events that he/she actually triggered? In this paper, we tackle this problem with a context-aware approach. Each user in the home wears a smartwatch, which is used to gather several pieces of high-level context information, such as the user's location in the home (thanks to a micro-localization infrastructure) and posture (e.g., sitting or standing). Context data is used to associate sensor events to the users who most likely triggered them. We show the impact of context reasoning in our framework on a dataset where up to 4 subjects perform ADLs at the same time (collaboratively or individually). We also report our experience and the lessons learned in deploying a running prototype of our method.
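
    To make the data association problem concrete, here is a small Python sketch of the kind of context matching the abstract describes: each environmental sensor event is assigned to the co-located user whose smartwatch-derived posture best matches the interaction. The data classes, field names, and scoring rule are hypothetical illustrations, not the paper's actual framework.

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    """High-level context inferred from a user's smartwatch (assumed fields)."""
    user_id: str
    location: str        # room from the micro-localization infrastructure
    posture: str         # e.g., "sitting" or "standing"

@dataclass
class SensorEvent:
    sensor_id: str
    location: str                    # room where the triggered sensor sits
    posture_hint: str | None = None  # posture the interaction implies, if any

def associate(event: SensorEvent, users: list[UserContext]) -> str | None:
    """Assign the event to the user whose context best matches it.

    The scoring is an illustrative assumption: co-location is the main
    evidence, and a matching posture hint breaks ties.
    """
    def score(u: UserContext) -> int:
        s = 0
        if u.location == event.location:
            s += 2
        if event.posture_hint and u.posture == event.posture_hint:
            s += 1
        return s

    best = max(users, key=score, default=None)
    if best is None or score(best) == 0:
        return None  # no user plausibly triggered the event
    return best.user_id

# Example: a fridge-door sensor fires in the kitchen with two users at home.
users = [UserContext("alice", "kitchen", "standing"),
         UserContext("bob", "living_room", "sitting")]
print(associate(SensorEvent("fridge_door", "kitchen", "standing"), users))
# -> "alice"
```

    In a real deployment the hard binary match would likely be replaced by probabilistic reasoning over noisy localization and posture estimates, but the structure of the decision — score each user's context against each event — stays the same.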