
    HActivityNet: A Deep Convolutional Neural Network for Human Activity Recognition

    Human Activity Recognition (HAR), a vast area of computer vision research, has gained prominence in recent years due to its applications in various fields. Because human activity is diverse in action and interaction, and recognition demands large amounts of data and powerful computational resources, recognizing human activities from an image is very difficult. To address the computational cost and the vanishing-gradient problem, in this work we propose a revised, simple convolutional neural network (CNN) model named Human Activity Recognition Network (HActivityNet) that automatically extracts and learns features and recognizes activities in a rapid, precise, and consistent manner. To address the imbalance between positive and negative data, we created two datasets: HARDataset1, built from image frames extracted from the KTH dataset, and HARDataset2, prepared from activity video frames performed by us. Comprehensive experiments show that our model outperforms present state-of-the-art models, attaining an accuracy of 99.5% on HARDataset1 and almost 100% on HARDataset2. The proposed model also performs well on real data.
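    The abstract does not specify HActivityNet's exact layer configuration, so the following is only a minimal sketch of a compact frame-level CNN for activity recognition, written in PyTorch. The layer sizes, the 64x64 input resolution, and the 6-class output (matching the six KTH actions) are illustrative assumptions, not the paper's reported architecture.

    ```python
    # Minimal sketch of a compact frame-level CNN for activity recognition,
    # in the spirit of the HActivityNet abstract above. Layer sizes and the
    # 6-class head are assumptions; the paper's exact design is not given here.
    import torch
    import torch.nn as nn

    class SmallActivityCNN(nn.Module):
        def __init__(self, num_classes: int = 6):  # e.g. the six KTH actions
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),                      # 64x64 -> 32x32
                nn.Conv2d(32, 64, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),                      # 32x32 -> 16x16
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * 16 * 16, 128),
                nn.ReLU(inplace=True),
                nn.Linear(128, num_classes),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x))

    # One RGB frame (batch of 1), assuming 64x64 input.
    logits = SmallActivityCNN()(torch.randn(1, 3, 64, 64))
    print(logits.shape)  # torch.Size([1, 6])
    ```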

    Fully convolutional networks for action recognition

    Human action recognition is an important and challenging topic in computer vision. Recently, convolutional neural networks (CNNs) have achieved impressive results on many image recognition tasks. However, CNNs usually contain millions of parameters and are prone to overfitting when trained on small datasets; consequently, they do not produce superior performance over traditional methods for action recognition. In this study, the authors design a novel two-stream fully convolutional network architecture for action recognition that significantly reduces parameters while maintaining performance. To exploit spatial-temporal features, a linear weighted fusion method is used to fuse the two streams' feature maps, and a video pooling method is adopted to construct video-level features. The authors also demonstrate that improved dense trajectories have a significant impact on action recognition. The method achieves state-of-the-art performance on two challenging datasets, UCF101 (93.0%) and HMDB51 (70.2%).
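    As a rough illustration of the linear weighted fusion and video pooling described above, here is a hedged sketch in PyTorch. The fixed scalar fusion weight, the feature-map shapes, and the use of average pooling are assumptions for illustration; the paper's exact fusion weights and pooling scheme are not given here.

    ```python
    # Hedged sketch: combine spatial (RGB) and temporal (optical-flow) stream
    # feature maps with a linear weight, then pool over frames to get a
    # video-level feature. Weight and shapes are illustrative assumptions.
    import torch

    def fuse_streams(spatial: torch.Tensor,
                     temporal: torch.Tensor,
                     w: float = 0.5) -> torch.Tensor:
        """Linearly combine two streams' feature maps of identical shape."""
        return w * spatial + (1.0 - w) * temporal

    def video_pool(frame_features: torch.Tensor) -> torch.Tensor:
        """Average-pool per-frame features (T, C, H, W) into one video feature."""
        return frame_features.mean(dim=(0, 2, 3))  # -> (C,)

    # Fake feature maps for a 10-frame clip: (T, C, H, W).
    spatial = torch.randn(10, 256, 7, 7)
    temporal = torch.randn(10, 256, 7, 7)
    video_feature = video_pool(fuse_streams(spatial, temporal))
    print(video_feature.shape)  # torch.Size([256])
    ```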