Wearable sensor-based human activity recognition using hybrid deep learning techniques
Human activity recognition (HAR) can be exploited to great benefit in many applications, including elder care, health care, rehabilitation, entertainment, and monitoring. Many existing techniques, such as deep learning, have been developed to recognize specific activities, but few address the transitions between activities. This work proposes a deep learning based scheme that can recognize both specific activities and the transitions between two different activities of short duration and low frequency for health care applications. We first build a deep convolutional neural network (CNN) to extract features from the data collected by sensors. Then, a long short-term memory (LSTM) network is used to capture long-term dependencies between two actions to further improve the HAR identification rate. By combining the CNN and LSTM, a wearable sensor based model is proposed that can accurately recognize activities and their transitions. Experimental results on the open HAPT dataset show that the proposed approach achieves a recognition rate of up to 95.87% for activities and above 80% for transitions, better than those of most existing similar models.
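The CNN-then-LSTM pipeline described above can be sketched as follows. This is a minimal illustrative PyTorch model, not the authors' implementation: the layer sizes are assumptions, and the input shape assumes windows of 6 inertial channels (3-axis accelerometer + gyroscope) over 128 samples with 12 output classes (the 6 basic activities plus 6 postural transitions in HAPT).

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Illustrative CNN-LSTM hybrid for sensor-based HAR (layer sizes assumed)."""
    def __init__(self, n_channels=6, n_classes=12):
        super().__init__()
        # 1-D convolutions extract local features from the raw signal windows
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # The LSTM captures longer-range temporal dependencies across the
        # CNN feature sequence (e.g. transitions between activities)
        self.lstm = nn.LSTM(input_size=128, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                  # x: (batch, channels, time)
        f = self.cnn(x)                    # (batch, 128, time/4)
        f = f.permute(0, 2, 1)             # (batch, time/4, 128)
        _, (h, _) = self.lstm(f)           # h: (1, batch, 64)
        return self.head(h[-1])            # (batch, n_classes)

model = CNNLSTM()
logits = model(torch.randn(4, 6, 128))     # a batch of 4 sensor windows
print(logits.shape)                        # torch.Size([4, 12])
```

In this arrangement the CNN acts as a learned feature extractor and only the LSTM's final hidden state is classified, one label per window.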
Improved Sensor-Based Human Activity Recognition Via Hybrid Convolutional and Recurrent Neural Networks
Non-intrusive sensor-based human activity recognition is used in a spectrum of applications including fitness tracking devices, gaming, health care monitoring, and smartphone applications. Deep learning models such as convolutional neural networks (CNNs) and long short-term memory (LSTM) recurrent neural networks provide a way to achieve human activity recognition accurately and effectively. This project designed and explored a variety of multi-layer hybrid deep learning architectures that aimed to improve human activity recognition performance by integrating local features with scale-invariant long-term dependencies of activities. We achieved a 94.7% activity recognition rate on the University of California, Irvine public domain dataset for human activity recognition, containing 6 activities, with a 2-layer CNN + 1-layer LSTM hybrid model. Additionally, we achieved an 88.0% activity recognition rate on the University of Texas at Dallas Multimodal Human Activity dataset, containing 27 activities, with a 4-layer CNN + 1-layer LSTM hybrid model. For both datasets, our hybrid models outperformed other deep learning models and traditional machine learning methods.
Investigating Deep Neural Network Architecture and Feature Extraction Designs for Sensor-based Human Activity Recognition
The extensive ubiquitous availability of sensors in smart devices and the Internet of Things (IoT) has opened up possibilities for implementing sensor-based activity recognition. As opposed to traditional sensor time-series processing and hand-engineered feature extraction, and in light of deep learning's proven effectiveness across various domains, numerous deep methods have been explored to tackle the challenges in activity recognition, outperforming traditional signal processing and traditional machine learning approaches. In this work, by performing extensive experimental studies on two human activity recognition datasets, we investigate the performance of common deep learning and machine learning approaches as well as different training mechanisms (such as contrastive learning) and various feature representations extracted from the sensor time-series data, and measure their effectiveness for the human activity recognition task.
Comment: Seventh International Conference on Internet of Things and Applications (IoT 2023)
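The contrast between hand-engineered feature extraction and learned representations mentioned above can be made concrete. The sketch below shows the traditional pipeline: segmenting a sensor time series into overlapping windows and computing simple per-channel statistics. The specific feature set (mean, std, min, max) and window parameters are illustrative assumptions, not the paper's exact representations.

```python
import numpy as np

def sliding_windows(signal, size=128, step=64):
    """Split a (time, channels) signal into 50%-overlapping windows."""
    return np.stack([signal[i:i + size]
                     for i in range(0, len(signal) - size + 1, step)])

def window_features(window):
    """Hand-engineered statistical features for one (time, channels) window.

    The feature set here is an illustrative example of classic HAR
    features; 4 statistics x 6 channels = 24 features per window.
    """
    return np.concatenate([
        window.mean(axis=0),
        window.std(axis=0),
        window.min(axis=0),
        window.max(axis=0),
    ])

signal = np.random.randn(640, 6)                    # 640 samples, 6 sensor channels
windows = sliding_windows(signal)                   # (9, 128, 6)
feats = np.array([window_features(w) for w in windows])
print(windows.shape, feats.shape)                   # (9, 128, 6) (9, 24)
```

Feature vectors like these feed traditional classifiers (SVMs, random forests), whereas deep models consume the raw windows directly and learn their own representations.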
Vision Based Activity Recognition Using Machine Learning and Deep Learning Architecture
Human activity recognition, with wide application in fields like video surveillance, sports, human interaction, and elderly care, has greatly influenced people's standard of living. With the constant development of new architectures and models, and increases in the computational capability of systems, the adoption of machine learning and deep learning for activity recognition has shown great improvement with high performance in recent years. My research goal in this thesis is to design and compare machine learning and deep learning models for activity recognition through videos collected from different media in the field of sports.
Human activity recognition (HAR) is mostly the automatic recognition of actions performed by a human from data collected from different sources. Based on the literature review, most data collected for analysis are either time-series data collected through different sensors or video data collected through cameras. Firstly, our research analyzes and compares different machine learning and deep learning architectures on sensor-based data collected from a smartphone accelerometer placed at different positions on the human body. Without any hand-crafted feature extraction methods, we found that deep learning architectures outperform most machine learning architectures, and that using multiple sensors yields higher accuracy than a dataset collected from a single sensor.
Secondly, as collecting data from sensors in real time is not feasible in all fields, such as sports, we study activity recognition using video datasets. For this, we used two state-of-the-art deep learning architectures previously trained on large annotated datasets, applying transfer learning for activity recognition on three publicly available sports-related datasets.
Extending the study to the different activities performed in a single sport, and to avoid the current trend of using special cameras and expensive setups around the court for data collection, we developed our own video dataset using coverage of basketball games broadcast through media outlets. A detailed analysis and experiments based on different criteria, such as the range of shots taken and scoring activities, are presented for 8 different activities using state-of-the-art deep learning architectures for video classification.