
    Summary of the Sussex-Huawei Locomotion-Transportation Recognition Challenge 2019


    Summary of SHL Challenge 2023: Recognizing Locomotion and Transportation Mode from GPS and Motion Sensors

    In this paper we summarize the contributions of participants to the fifth Sussex-Huawei Locomotion-Transportation (SHL) Recognition Challenge organized at the HASCA Workshop of UbiComp/ISWC 2023. The goal of this machine learning/data science challenge is to recognize eight locomotion and transportation activities (Still, Walk, Run, Bike, Bus, Car, Train, Subway) from the motion (accelerometer, gyroscope, magnetometer) and GPS (GPS location, GPS reception) sensor data of a smartphone in a user-independent manner. The training data of a “train” user is available from smartphones placed at four body positions (Hand, Torso, Bag and Hips). The testing data originates from “test” users with a smartphone placed at one, but unknown, body position. We introduce the dataset used in the challenge and the protocol of the competition. We present a meta-analysis of the contributions from 15 submissions, their approaches, the software tools used, computational cost and the achieved results. The challenge evaluates recognition performance by comparing predicted to ground-truth labels every 10 milliseconds, but puts no constraint on the maximum decision window length. Overall, five submissions achieved F1 scores above 90%, three between 80% and 90%, two between 70% and 80%, three between 50% and 70%, and two below 50%. While this year's task faces the technical challenges of sensor unavailability, irregular sampling, and sensor diversity, the overall performance based on GPS and motion sensors is better than in previous years (e.g. the best performances reported in SHL 2020, 2021 and 2023 are 88.5%, 75.4% and 96.0%, respectively). This is possibly due to the complementarity between the GPS and motion sensors, and also to the removal of constraints on the decision window length. Finally, we present a baseline implementation to help understand the contribution of each sensor modality to the recognition task.
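To make the evaluation protocol above concrete, here is a minimal sketch of per-sample scoring, assuming predicted and ground-truth labels are integer class IDs aligned at 100 Hz (one label per 10 ms); the array names and the synthetic data are illustrative assumptions, not the organizers' actual evaluation code.

```python
# Sketch of per-sample macro-F1 evaluation for the 8 SHL classes.
# Assumes one integer label (0-7) per 10 ms sample; not official code.
import numpy as np
from sklearn.metrics import f1_score

def evaluate(pred: np.ndarray, truth: np.ndarray) -> float:
    """Macro-averaged F1 over the eight activity classes."""
    assert pred.shape == truth.shape
    return f1_score(truth, pred, average="macro")

# Example: ten seconds of labels at 100 Hz, with ~10% corrupted.
rng = np.random.default_rng(0)
truth = rng.integers(0, 8, size=1000)
pred = truth.copy()
mask = rng.random(1000) < 0.1
pred[mask] = rng.integers(0, 8, size=mask.sum())
print(f"macro F1: {evaluate(pred, truth):.3f}")
```

Because scoring happens on this per-10-ms label grid, a submission is free to use any decision window internally, as long as its output is resampled to the grid.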

    Summary of the Sussex-Huawei Locomotion-Transportation Recognition Challenge

    In this paper we summarize the contributions of participants to the Sussex-Huawei Locomotion-Transportation (SHL) Recognition Challenge organized at the HASCA Workshop of UbiComp 2018. The SHL challenge is a machine learning and data science competition which aims to recognize eight transportation activities (Still, Walk, Run, Bike, Bus, Car, Train, Subway) from the inertial and pressure sensor data of a smartphone. We introduce the dataset used in the challenge and the protocol for the competition. We present a meta-analysis of the contributions from 19 submissions, their approaches, the software tools used, computational cost and the achieved results. Overall, two entries achieved F1 scores above 90%, eight achieved F1 scores between 80% and 90%, and nine between 50% and 80%.

    Summary of the Sussex-Huawei Locomotion-Transportation Recognition Challenge 2020

    In this paper we summarize the contributions of participants to the third Sussex-Huawei Locomotion-Transportation (SHL) Recognition Challenge organized at the HASCA Workshop of UbiComp/ISWC 2020. The goal of this machine learning/data science challenge is to recognize eight locomotion and transportation activities (Still, Walk, Run, Bike, Bus, Car, Train, Subway) from the inertial sensor data of a smartphone in a user-independent manner with an unknown target phone position. The training data of a “train” user is available from smartphones placed at four body positions (Hand, Torso, Bag and Hips). The testing data originates from “test” users with a smartphone placed at one, but unknown, body position. We introduce the dataset used in the challenge and the protocol of the competition. We present a meta-analysis of the contributions from 15 submissions, their approaches, the software tools used, computational cost and the achieved results. Overall, one submission achieved an F1 score above 80%, three achieved F1 scores between 70% and 80%, seven between 50% and 70%, and four below 50%, with a maximum decision latency of 5 seconds.
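To illustrate what the 5-second latency bound implies for a submission, the sketch below segments a tri-axial accelerometer stream into non-overlapping 5-second windows and classifies each one; the 100 Hz sampling rate, the feature set, and the random-forest classifier are assumptions for illustration, not any participant's pipeline.

```python
# Sketch of a windowed baseline under a 5 s decision-latency budget.
# Assumes a 100 Hz tri-axial accelerometer; features/model illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FS = 100            # assumed sampling rate (Hz)
WIN = 5 * FS        # 5 s windows -> each decision uses <= 5 s of data

def windows(acc: np.ndarray) -> np.ndarray:
    """Split an (n_samples, 3) signal into (n_windows, WIN, 3) blocks."""
    n = (len(acc) // WIN) * WIN
    return acc[:n].reshape(-1, WIN, 3)

def features(win: np.ndarray) -> np.ndarray:
    """Per-window time-domain features: axis means/stds, magnitude stats."""
    mag = np.linalg.norm(win, axis=2)
    return np.hstack([win.mean(axis=1), win.std(axis=1),
                      mag.mean(axis=1, keepdims=True),
                      mag.std(axis=1, keepdims=True)])

# Hypothetical training data: one minute of signal, one label per window.
rng = np.random.default_rng(0)
acc = rng.normal(size=(60 * FS, 3))
y = rng.integers(0, 8, size=len(windows(acc)))   # the 8 SHL classes

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(features(windows(acc)), y)
print(clf.predict(features(windows(acc)))[:5])
```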

    Improving Intelligence of Robotic Lower-Limb Prostheses to Enhance Mobility for Individuals with Limb Loss

    Wearable robotics is an emerging field that seeks to create smarter, more intuitive devices that help users improve their overall quality of life. Specifically, individuals with lower-limb amputation tend to have significantly impaired mobility and asymmetric gait patterns that result in higher energy expenditure than that of able-bodied individuals across a variety of tasks. Unfortunately, most commercial devices are passive and lack the ability to easily adapt to changing environmental contexts. Powered prostheses have shown promise in restoring the power needed to walk in common ambulatory tasks. However, there is a need to infer/detect the user's movement in order to provide seamless and natural assistance. Achieving this behavior requires a better understanding of how to add intelligence to powered prostheses. This dissertation focuses on three key research objectives: 1) developing and enhancing offline intent recognition systems for both classification and regression tasks using embedded prosthetic mechanical sensors and machine learning, 2) deploying intelligent controllers in real time to directly modulate assistive torque in a knee and ankle prosthetic device, and 3) quantifying the biomechanical and clinical effects of a powered prosthesis compared to a passive device. The findings show progress in developing powered prostheses that better enhance mobility for individuals with transfemoral amputation and mark a step forward towards clinical acceptance.
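As a rough sketch of the first objective, intent recognition can be posed as a classification problem (which ambulation mode) alongside a regression problem (a continuous gait variable such as a joint-torque target); the feature dimensions, mode set, and models below are hypothetical placeholders, not the dissertation's actual pipeline.

```python
# Illustrative only: offline intent recognition as classification
# (ambulation mode) plus regression (a continuous gait target) from
# windowed prosthesis sensor features. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 24))    # e.g. 24 features per gait window
mode = rng.integers(0, 5, 500)    # e.g. level walk, ramps, stairs
torque = rng.normal(size=500)     # e.g. a normalized torque target

Xtr, Xte, mtr, mte, ttr, tte = train_test_split(
    X, mode, torque, test_size=0.3, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(Xtr, mtr)
reg = RandomForestRegressor(random_state=0).fit(Xtr, ttr)
print("mode accuracy:", clf.score(Xte, mte))   # classification task
print("torque R^2:", reg.score(Xte, tte))      # regression task
```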

    Sensor Fusion Representation of Locomotion Biomechanics with Applications in the Control of Lower Limb Prostheses

    Free locomotion and movement in diverse environments are significant concerns for individuals with amputation, who need independence in activities of daily living. As users perform community ambulation, they face changing contexts that challenge what a typical passive prosthesis can offer. This problem creates opportunities for developing intelligent robotic systems that assist locomotion with as few interruptions for direct input during operation as possible. The use of multiple sensors to detect the user's intent and locomotion parameters is a promising technique that could give prostheses a fast and natural response. However, the use of these sensors still requires thorough investigation before they can be translated into practical settings. In addition, dynamic changes of context during locomotion should translate into adjustments in the device's response. To derive the scaling rules for this modulation, a rich biomechanics dataset of community ambulation would provide a source of quantitative criteria for generating bioinspired controllers. This dissertation produces a better understanding of the characteristics of community ambulation from two different perspectives: the biomechanics of human motion and the sensory signals that can be captured by wearable technology. By studying human locomotion in diverse environments, including walking on stairs, ramps, and level ground, this work generated a comprehensive open-source dataset containing the biomechanics and signals from wearable sensors during locomotion, evaluating the effects of changing the locomotion context within each ambulation mode. With the multimodal dataset, I developed and evaluated a combined strategy for ambulation mode classification and the estimation of locomotion parameters, including walking speed, stair height, ramp slope, and biological moment. Finally, by combining this knowledge and incorporating biomechanics insight together with machine learning-based inference in the frame of impedance control, I propose novel methods to improve the performance of lower-limb robotics with a focus on powered prostheses.
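For context on the impedance-control framing, a common formulation for powered knee/ankle joints computes torque as τ = -k(θ - θ_eq) - b·θ̇, with stiffness k, damping b, and equilibrium angle θ_eq switched or scaled according to the estimated ambulation mode and locomotion parameters; the sketch below uses hypothetical parameter values, not the tuned controllers from this work.

```python
# Sketch of a mode-modulated impedance law for a prosthetic joint:
# torque = -k*(theta - theta_eq) - b*dtheta. Parameter values are
# hypothetical placeholders, not this dissertation's tuned gains.
from dataclasses import dataclass

@dataclass
class Impedance:
    k: float         # stiffness (N*m/rad)
    b: float         # damping (N*m*s/rad)
    theta_eq: float  # equilibrium angle (rad)

# Hypothetical per-mode parameters, selected by the mode classifier.
PARAMS = {
    "level_walk":  Impedance(k=90.0,  b=1.5, theta_eq=0.05),
    "ramp_ascent": Impedance(k=110.0, b=1.8, theta_eq=0.10),
}

def joint_torque(mode: str, theta: float, dtheta: float) -> float:
    """Impedance torque for the currently estimated ambulation mode."""
    p = PARAMS[mode]
    return -p.k * (theta - p.theta_eq) - p.b * dtheta

print(joint_torque("level_walk", theta=0.12, dtheta=0.8))
```

Estimated quantities such as ramp slope or walking speed could then scale k, b, and θ_eq continuously within a mode rather than switching between discrete parameter sets.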

    Towards Real-Time Activity Recognition


    Deep Multi Temporal Scale Networks for Human Motion Analysis

    The movement of human beings appears to respond to a complex motor system that contains signals at different hierarchical levels. For example, an action such as "grasping a glass on a table" represents a high-level action, but to perform this task the body needs several motor inputs, including the activation of different joints of the body (shoulder, arm, hand, fingers, etc.). Each of these joints/muscles has a different size, responsiveness, and precision, with a complex, non-linearly stratified temporal dimension in which every muscle has its own temporal scale. Parts such as the fingers respond much faster to brain input than more voluminous body parts such as the shoulder. The cooperation of these parts when we perform an action produces smooth, effective, and expressive movement in a complex cognitive task spanning multiple temporal scales. Following this layered structure, the human body can be described as a kinematic tree consisting of connected joints. Although it is nowadays well known that human movement and its perception are characterised by multiple temporal scales, very few works in the literature focus on studying this particular property. In this thesis, we focus on the analysis of human movement using data-driven techniques. In particular, we focus on the non-verbal aspects of human movement, with an emphasis on full-body movements. Data-driven methods can interpret the information in the data by searching for rules, associations or patterns that represent the relationships between input (e.g. the human action acquired with sensors) and output (e.g. the type of action performed). Furthermore, these models may represent a new research frontier, as they can analyse large masses of data and focus on aspects that even an expert user might miss. The literature on data-driven models proposes two families of methods that can process time series and human movement. The first family, called shallow models, extracts features from the time series that can help the learning algorithm find associations in the data. These features are identified and designed by domain experts, who can identify the best ones for the problem at hand. The second family avoids this expert-driven extraction phase, since the models themselves can identify the best set of features to optimise the learning of the model. In this thesis, we provide a method that applies the multiple-temporal-scales property of the human motion domain to deep learning models, the only data-driven models that can be extended to handle this property. We ask ourselves two questions: what happens if we apply knowledge about how human movements are performed to deep learning models? Can this knowledge improve current automatic recognition standards? To prove the validity of our study, we collected data and tested our hypotheses in specially designed experiments. The results support both the proposal and the need for deep multi-scale models as a tool to better understand human movement and its multiple-time-scale nature.
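One plausible way to realise the multiple-temporal-scale idea in a deep model is to process the same joint time series with parallel temporal convolutions of increasing dilation, so that fast-responding parts (fingers) and slow ones (shoulder) are each matched by a branch with a suitable receptive field; the PyTorch sketch below illustrates this pattern only and is not the architecture proposed in the thesis.

```python
# Sketch of a multi-temporal-scale block: parallel dilated 1-D
# convolutions view the same motion sequence at different temporal
# resolutions. Illustrative pattern only, not the thesis architecture.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, in_ch: int, branch_ch: int, dilations=(1, 2, 4, 8)):
        super().__init__()
        # One branch per temporal scale; padding keeps sequence length.
        self.branches = nn.ModuleList([
            nn.Conv1d(in_ch, branch_ch, kernel_size=3, dilation=d, padding=d)
            for d in dilations
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time). Concatenating branch outputs mixes
        # short- and long-range temporal context in one representation.
        return torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)

# Example: 8 sequences, 75 channels (25 joints x 3 coords), 120 frames.
x = torch.randn(8, 75, 120)
block = MultiScaleBlock(in_ch=75, branch_ch=16)
print(block(x).shape)   # torch.Size([8, 64, 120])
```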