4 research outputs found

    Human Activity Recognition Using CNN and LSTM Deep Learning Algorithms

    Human Activity Recognition (HAR) identifies and classifies the activities a person performs, based on data collected from the sensors of devices such as smartwatches and smartphones. It has become easy to collect large amounts of data from the inertial sensors embedded in wearable devices; accelerometers and gyroscopes are the most commonly used inertial sensors. Among the various publicly available datasets, this paper uses the Wireless Sensor Data Mining (WISDM) dataset, which contains 1,098,207 samples of six physical activities: walking, jogging, going upstairs, going downstairs, standing, and sitting. We apply a combined Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) deep learning model to this dataset, splitting the data into training (80%) and testing (20%) sets, and use a confusion matrix to evaluate how accurately each activity is recognized and classified.
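
    As a rough illustration of the pipeline this abstract describes, the sketch below builds a CNN-LSTM classifier for windowed tri-axial accelerometer data in Keras. It is a minimal sketch under assumed settings, not the paper's implementation: the window length, layer sizes, training settings, and the synthetic stand-in data are all assumptions.

```python
# Hypothetical CNN-LSTM sketch for WISDM-style accelerometer windows.
# Window length, layer sizes, and optimizer are illustrative assumptions,
# not the configuration reported in the paper.
import numpy as np
from tensorflow.keras import layers, models
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

WINDOW = 200   # assumed samples per window (e.g. 10 s at 20 Hz)
CHANNELS = 3   # tri-axial accelerometer: x, y, z
CLASSES = 6    # walking, jogging, upstairs, downstairs, standing, sitting

model = models.Sequential([
    layers.Input(shape=(WINDOW, CHANNELS)),
    # CNN stage: extract local motion features from the raw signal window
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    # LSTM stage: model temporal dependencies across the feature sequence
    layers.LSTM(64),
    layers.Dropout(0.5),
    layers.Dense(CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic data standing in for the segmented WISDM windows.
X = np.random.randn(1000, WINDOW, CHANNELS).astype("float32")
y = np.random.randint(0, CLASSES, size=1000)

# 80/20 train/test split, as described in the abstract.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model.fit(X_train, y_train, epochs=5, batch_size=64, validation_split=0.1)

# Confusion matrix over the held-out test set.
preds = model.predict(X_test).argmax(axis=1)
print(confusion_matrix(y_test, preds))
```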

    Indian Sign Language Recognition through Hybrid ConvNet-LSTM Networks

    Dynamic hand gesture recognition is a challenging task in Human-Computer Interaction (HCI) and Computer Vision. Potential application areas of gesture recognition include sign language translation, video gaming, video surveillance, robotics, and gesture-controlled home appliances. In the proposed research, gesture recognition is applied to recognize sign language words from real-time videos. Classifying actions from video sequences requires both spatial and temporal features. The proposed system handles the former with a Convolutional Neural Network (CNN), the core of many computer vision solutions, and the latter with a Recurrent Neural Network (RNN), which is better suited to handling sequences of movements. A real-time Indian Sign Language (ISL) recognition system is thus developed using this hybrid CNN-RNN architecture and trained on the proposed CasTalk-ISL dataset. The ultimate purpose of the presented research is to deploy a real-time sign language translator that removes the hurdles in communication between hearing-impaired and hearing people. The developed system achieves 95.99% top-1 accuracy and 99.46% top-3 accuracy on the test dataset, outperforming existing approaches that use various deep models on different datasets.
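
    The hybrid architecture the abstract describes, a per-frame CNN feeding an RNN over the frame sequence, can be sketched as below in Keras. This is a minimal sketch under assumed settings: the clip length, frame resolution, vocabulary size, and layer sizes are illustrative, not the CasTalk-ISL configuration; the top-3 metric mirrors the accuracy the abstract reports.

```python
# Hypothetical hybrid CNN-RNN sketch for video sign recognition: a small
# per-frame CNN applied via TimeDistributed, followed by an LSTM over the
# frame sequence. All sizes below are assumptions, not the paper's values.
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.metrics import SparseTopKCategoricalAccuracy

FRAMES, H, W = 16, 64, 64   # assumed clip length and frame resolution
NUM_SIGNS = 50              # assumed size of the sign vocabulary

# Spatial stream: a small CNN that embeds one RGB frame into a feature vector.
frame_cnn = models.Sequential([
    layers.Input(shape=(H, W, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),
])

# Temporal stream: the LSTM consumes the sequence of per-frame embeddings.
model = models.Sequential([
    layers.Input(shape=(FRAMES, H, W, 3)),
    layers.TimeDistributed(frame_cnn),   # spatial features, frame by frame
    layers.LSTM(128),                    # temporal modelling across frames
    layers.Dense(NUM_SIGNS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy",
                       SparseTopKCategoricalAccuracy(k=3, name="top3")])

# Synthetic clip batch standing in for preprocessed sign videos.
clips = np.random.rand(8, FRAMES, H, W, 3).astype("float32")
labels = np.random.randint(0, NUM_SIGNS, size=8)
model.fit(clips, labels, epochs=1)
```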

    A Survey on Different Deep Learning Model for Human Activity Recognition based on Application

    The field of human activity recognition (HAR) seeks to identify and classify an individual's unique movements or activities. Recognizing human activity from video, however, is a challenging task that requires careful attention to individuals, their behaviors, and the relevant body parts. Multimodal activity recognition systems are needed for many applications, including video surveillance systems, human-computer interfaces, and robots that analyze human behavior. This study provides a comprehensive analysis of recent breakthroughs in human activity classification, covering different approaches, methodologies, applications, and limitations, and identifies several challenges that require further investigation and improvement. The specifications of an ideal human activity recognition dataset are also discussed, along with a thorough examination of the publicly available human activity classification datasets.

    Deep Learning Class Discrimination Based on Prior Probability for Human Activity Recognition

    No full text available