
    Human Gait Database for Normal Walk Collected by Smart Phone Accelerometer

    The goal of this study is to introduce a comprehensive gait database of 93 human subjects who walked between two endpoints during two separate sessions while their gait data were recorded by two smartphones, one attached to the right thigh and the other to the left side of the waist. The data were collected with the intention of being used by deep learning-based methods, which require a sufficiently large number of time points. Metadata including each individual's age, gender, smoking habits, daily exercise time, height, and weight are also recorded. The data set is publicly available.
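
    As a rough illustration of the kind of preprocessing such a deep learning-oriented gait database invites, the sketch below segments a raw accelerometer recording into fixed-length windows. The window length, stride, and 100 Hz sampling rate are illustrative assumptions, not parameters taken from the paper.

        import numpy as np

        def segment_gait_signal(accel_xyz, window_len=256, stride=128):
            """Split a (T, 3) accelerometer recording into overlapping
            fixed-length windows, a common preprocessing step before feeding
            gait data to a deep learning model."""
            windows = []
            for start in range(0, len(accel_xyz) - window_len + 1, stride):
                windows.append(accel_xyz[start:start + window_len])
            return np.stack(windows) if windows else np.empty((0, window_len, 3))

        # Example: a synthetic 10-second recording sampled at 100 Hz.
        recording = np.random.randn(1000, 3)
        print(segment_gait_signal(recording).shape)  # (6, 256, 3)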

    Deep CNN-LSTM With Self-Attention Model for Human Activity Recognition Using Wearable Sensor

    Human Activity Recognition (HAR) systems are devised for continuously observing human behavior, primarily in the fields of environmental compatibility, sports injury detection, senior care, rehabilitation, entertainment, and surveillance in intelligent home settings. Inertial sensors, e.g., accelerometer, linear acceleration, and gyroscope sensors, are frequently employed for this purpose and are now compacted into smart devices such as smartphones. Since the use of smartphones is so widespread nowadays, activity data acquisition for HAR systems is a pressing need. In this article, we conduct smartphone sensor-based raw data collection, named H-Activity, using an Android-OS-based application that logs the accelerometer, gyroscope, and linear acceleration. Furthermore, a hybrid deep learning model is proposed, coupling a convolutional neural network and a long short-term memory network (CNN-LSTM), empowered by a self-attention mechanism to enhance the predictive capabilities of the system. In addition to our collected dataset (H-Activity), the model has been evaluated on benchmark datasets, e.g., MHEALTH and UCI-HAR, to demonstrate its comparative performance. The proposed model achieves an accuracy of 99.93% on our collected H-Activity data, and 98.76% and 93.11% on the MHEALTH and UCI-HAR datasets respectively, indicating its efficacy in recognizing human activity. We hope that the developed model will be applicable in clinical settings and that the collected data will be useful for further research.
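
    The sketch below shows one plausible shape a CNN-LSTM with self-attention could take for windowed inertial data; the layer sizes, kernel widths, single attention head, and mean pooling are illustrative guesses, not the architecture reported in the article.

        import torch
        import torch.nn as nn

        class CNNLSTMSelfAttention(nn.Module):
            """Illustrative CNN-LSTM model with self-attention for HAR."""
            def __init__(self, n_channels=9, n_classes=6, hidden=64):
                super().__init__()
                # 1-D convolutions extract local motion features from the raw
                # accelerometer, gyroscope, and linear-acceleration channels.
                self.conv = nn.Sequential(
                    nn.Conv1d(n_channels, 64, kernel_size=5, padding=2),
                    nn.ReLU(),
                    nn.Conv1d(64, 64, kernel_size=5, padding=2),
                    nn.ReLU(),
                )
                # The LSTM models longer-range temporal structure in the window.
                self.lstm = nn.LSTM(64, hidden, batch_first=True)
                # Self-attention re-weights the LSTM outputs over time.
                self.attn = nn.MultiheadAttention(hidden, num_heads=1, batch_first=True)
                self.fc = nn.Linear(hidden, n_classes)

            def forward(self, x):  # x: (batch, time, channels)
                h = self.conv(x.transpose(1, 2)).transpose(1, 2)  # (batch, time, 64)
                h, _ = self.lstm(h)                               # (batch, time, hidden)
                h, _ = self.attn(h, h, h)                         # (batch, time, hidden)
                return self.fc(h.mean(dim=1))                     # (batch, n_classes)

        # Example: a batch of 8 windows, 128 time steps, 9 sensor channels.
        logits = CNNLSTMSelfAttention()(torch.randn(8, 128, 9))
        print(logits.shape)  # torch.Size([8, 6])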

    Finding Your Way Back: Comparing Path Odometry Algorithms for Assisted Return.

    We present a comparative analysis of inertial-based odometry algorithms for the purpose of assisted return. An assisted return system facilitates backtracking of a path previously taken and can be particularly useful for blind pedestrians. We present a new algorithm for path matching and test it in simulated assisted return tasks with data from WeAllWalk, the only existing data set with inertial data recorded from blind walkers. We consider two odometry systems, one based on deep learning (RoNIN) and the other based on robust turn detection and step counting. Our results show that the best path matching results are obtained using the turns/steps odometry system.
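
    For intuition, the toy sketch below reconstructs a 2-D path from detected steps and turns, in the spirit of a turns/steps odometry system; the fixed step length, the turn representation, and the starting pose are simplifying assumptions rather than the algorithm evaluated in the paper.

        import math

        def steps_turns_odometry(step_times, turn_events, step_length=0.7):
            """Rebuild a 2-D path from step times and signed turn angles."""
            x, y, heading = 0.0, 0.0, 0.0      # start at the origin, facing +x
            turns = dict(turn_events)          # step time -> turn angle (radians)
            path = [(x, y)]
            for t in step_times:
                if t in turns:                 # apply any turn detected at this step
                    heading += turns[t]
                x += step_length * math.cos(heading)
                y += step_length * math.sin(heading)
                path.append((x, y))
            return path

        # Example: 6 steps with a 90-degree left turn at the fourth step.
        print(steps_turns_odometry(range(6), [(3, math.pi / 2)]))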

    Real-Time Step Detection Using Unconstrained Smartphone

    Nowadays smartphones carry more and more sensors, among which are inertial sensors. These devices provide information about the movement of and forces acting on the device, but they can also provide information about the movement of the user. Step detection is at the core of many smartphone applications such as indoor localization, virtual reality, and health and activity monitoring, and some of these require high levels of precision. Current state-of-the-art step detection methods rely heavily on predicting the movements performed by the user and the smartphone, or on activity recognition methods for parameter tuning. These methods are limited by the number of situations the researchers can anticipate and do not consider false-positive situations that occur in daily living, such as jumps or stationary movements, which in turn contribute to lower performance. In this thesis, a novel unconstrained smartphone step detection method is proposed using Convolutional Neural Networks. The model uses data from the smartphone's accelerometer and gyroscope for step detection. For training, a data set containing step and false-step situations was built, covering a total of 4 smartphone placements, 5 step activities, and 2 false-step activities. The model was tested on data from a volunteer it had not previously seen. The proposed model achieved an overall recall of 89.87% and an overall precision of 87.90% while distinguishing step from non-step situations. The model also revealed little difference in performance across smartphone placements, indicating a strong capability for unconstrained use. The proposed solution demonstrates more versatility than state-of-the-art alternatives by presenting comparable results without the need for parameter tuning or adjustments for the smartphone use case, potentially allowing for better performance in free-living scenarios.
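
    The sketch below illustrates how such a window classifier might be applied at inference time: a fixed-length window slides over a 6-channel accelerometer-plus-gyroscope stream and each positive window is counted as a step. The window length, stride, one-step-per-positive-window rule, and the placeholder energy-based classifier are assumptions for illustration, not the method or parameters of the thesis.

        import numpy as np

        def count_steps(stream, classify_window, window_len=100, stride=50):
            """Slide a window over a (T, 6) accel+gyro stream and count the
            windows flagged as containing a step by `classify_window`."""
            steps = 0
            for start in range(0, len(stream) - window_len + 1, stride):
                window = stream[start:start + window_len]   # shape (window_len, 6)
                if classify_window(window):                 # True -> step detected
                    steps += 1
            return steps

        # Placeholder classifier: a crude accelerometer-energy threshold standing
        # in for the trained CNN.
        fake_stream = np.random.randn(1000, 6)
        energy_test = lambda w: np.var(w[:, :3]) > 1.2
        print(count_steps(fake_stream, energy_test))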