15 research outputs found

    Artificial Intelligence Enabled Methods for Human Action Recognition using Surveillance Videos

    Computer vision applications have been attracting researchers and academia, all the more so now that cloud computing resources make such applications practical. Analysing surveillance video has become an important research area because of its widespread use. For instance, CCTV cameras are deployed in public places to monitor situations and identify theft or other crimes. With thousands of such surveillance videos streaming simultaneously, manual analysis is a tedious and time-consuming task, so an automated approach is needed to analyse the footage and report notifications or findings to the officers concerned. Such analysis helps police and investigation agencies ascertain facts, recover evidence, and support digital forensics. In this context, this paper surveys methods of human action recognition (HAR) based on machine learning (ML) and deep learning (DL), both branches of Artificial Intelligence (AI). It also reviews work on privacy-preserving action recognition and Generative Adversarial Networks (GANs). Finally, the paper lists datasets used in human action recognition research and identifies research gaps that can guide further work in the area.
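    The DL pipelines covered by such surveys generally treat HAR as video classification: each frame is encoded, the frame features are pooled over time, and a classifier predicts the action. The sketch below is a minimal illustration of that baseline structure only; the module names (FrameEncoder, ClipClassifier), layer sizes, and the 51-class output are assumptions, not anything from the surveyed paper.

        # Minimal HAR baseline sketch (illustrative, not from the surveyed paper):
        # pool per-frame CNN features over time, then classify the clip.
        import torch
        import torch.nn as nn

        class FrameEncoder(nn.Module):
            """Tiny per-frame CNN; a real system would use a pretrained backbone."""
            def __init__(self, feat_dim=128):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(64, feat_dim),
                )

            def forward(self, x):          # x: (B*T, 3, H, W)
                return self.net(x)

        class ClipClassifier(nn.Module):
            """Average-pool frame features over time, then classify the clip."""
            def __init__(self, num_classes=51, feat_dim=128):
                super().__init__()
                self.encoder = FrameEncoder(feat_dim)
                self.head = nn.Linear(feat_dim, num_classes)

            def forward(self, clip):       # clip: (B, T, 3, H, W)
                b, t = clip.shape[:2]
                feats = self.encoder(clip.flatten(0, 1)).view(b, t, -1)
                return self.head(feats.mean(dim=1))   # temporal average pooling

        logits = ClipClassifier()(torch.randn(2, 8, 3, 112, 112))
        print(logits.shape)                # torch.Size([2, 51])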

    Video-Based Human Activity Recognition Using Deep Learning Approaches

    Due to its capacity to gather vast, high-level data about human activity from wearable or stationary sensors, human activity recognition substantially impacts people’s day-to-day lives. Multiple people and things may be seen acting in the video, dispersed throughout the frame in various places. Because of this, modeling the interactions between many entities in spatial dimensions is necessary for visual reasoning in the action recognition task. The main aim of this paper is to evaluate and map the current scenario of human actions in red, green, and blue videos, based on deep learning models. A residual network (ResNet) and a vision transformer architecture (ViT) with a semi-supervised learning approach are evaluated. The DINO (self-DIstillation with NO labels) is used to enhance the potential of the ResNet and ViT. The evaluated benchmark is the human motion database (HMDB51), which tries to better capture the richness and complexity of human actions. The obtained results for video classification with the proposed ViT are promising based on performance metrics and results from the recent literature. The results obtained using a bi-dimensional ViT with long short-term memory demonstrated great performance in human action recognition when applied to the HMDB51 dataset. The mentioned architecture presented 96.7 ± 0.35% and 41.0 ± 0.27% in terms of accuracy (mean ± standard deviation values) in the train and test phases of the HMDB51 dataset, respectively.
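    As a rough illustration of the ViT-plus-LSTM idea (a hedged sketch under assumptions, not the authors' code): per-frame embeddings, such as those a frozen DINO/ViT backbone would produce, are modelled over time with an LSTM and the last hidden state is classified into the 51 HMDB51 classes. The class name ViTLSTMClassifier, the feature and hidden dimensions, and the random placeholder features are all assumptions.

        # Hedged sketch: LSTM over per-frame ViT embeddings for clip classification.
        import torch
        import torch.nn as nn

        class ViTLSTMClassifier(nn.Module):
            def __init__(self, feat_dim=768, hidden_dim=256, num_classes=51):
                super().__init__()
                self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
                self.head = nn.Linear(hidden_dim, num_classes)

            def forward(self, frame_feats):        # frame_feats: (B, T, feat_dim)
                _, (h_n, _) = self.lstm(frame_feats)
                return self.head(h_n[-1])          # classify from the last hidden state

        # In practice frame_feats would come from a (frozen) ViT applied per frame;
        # random tensors are used here purely to show the expected shapes.
        feats = torch.randn(4, 16, 768)            # 4 clips, 16 frames, 768-d features
        print(ViTLSTMClassifier()(feats).shape)    # torch.Size([4, 51])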

    Texture-Based Input Feature Selection for Action Recognition

    The performance of video action recognition has been significantly boosted by using motion representations within a two-stream Convolutional Neural Network (CNN) architecture. However, there are a few challenging problems in action recognition in real scenarios, e.g., variations in viewpoints and poses, and changes in backgrounds. The domain discrepancy between the training data and the test data causes the performance drop. To improve model robustness, we propose a novel method to determine the task-irrelevant content in the inputs that increases the domain discrepancy. The method is based on a human parsing model (HP model) which jointly conducts dense correspondence labelling and semantic part segmentation. The predictions from the HP model are also used to re-render the human regions in each video with the same set of textures, so that human appearance is identical across all classes. A revised dataset is generated for training and testing, which makes the action recognition (AR) model invariant to the irrelevant content in the inputs. Moreover, the predictions from the HP model are used to enrich the inputs to the AR model during both training and testing. Experimental results show that our proposed model is superior to existing models for action recognition on the HMDB-51 dataset and the Penn Action dataset.
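    The texture-normalisation step can be pictured as masking out the human pixels, pasting in one fixed texture, and then appending the part-segmentation map as extra input channels. The sketch below assumes a mask and part probabilities from some parsing model are already available; the helper names (normalise_human_texture, enrich_input) and all tensors and sizes are placeholders, not the paper's implementation.

        # Hedged sketch of texture normalisation plus input enrichment.
        import torch

        def normalise_human_texture(frame, human_mask, texture):
            """frame: (3, H, W); human_mask: (1, H, W) in {0, 1}; texture: (3, H, W)."""
            return frame * (1 - human_mask) + texture * human_mask

        def enrich_input(frame, part_seg):
            """Concatenate a part-segmentation map (K, H, W) as extra channels."""
            return torch.cat([frame, part_seg], dim=0)

        frame = torch.rand(3, 224, 224)
        mask = (torch.rand(1, 224, 224) > 0.5).float()   # placeholder parsing mask
        texture = torch.rand(3, 224, 224)                 # fixed, class-agnostic texture
        parts = torch.rand(15, 224, 224)                  # placeholder part probabilities

        rendered = normalise_human_texture(frame, mask, texture)
        model_input = enrich_input(rendered, parts)
        print(model_input.shape)                          # torch.Size([18, 224, 224])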

    Two-Stream Action Recognition Network with Spatio-Temporal Attention

    Master's thesis, Seoul National University Graduate School, Department of Computer Science and Engineering, College of Engineering, August 2019. Advisor: 전화숙.
    With today's active research on deep neural networks and advances in data storage and processing technology, recognition problems on large-scale data with a temporal dimension, such as video, are drawing ever more attention alongside image recognition. Among these efforts, the two-stream network has been the mainstream architecture for video action recognition ever since learning with neural networks first outperformed hand-crafted features. This thesis extends that architecture and proposes adding spatio-temporal attention to a two-stream network whose streams were previously trained independently for action recognition in video. Cross attention induces complementary learning between the formerly independent streams and leads to improved performance. The architecture was evaluated on the standard HMDB-51 video action recognition benchmark and achieved better performance than the original architecture.
    The two-stream architecture has been mainstream since the success of [1], but its two sources of information are processed independently and do not interact until the late fusion. We investigate a different spatio-temporal attention architecture based on two separate recognition streams (spatial and temporal), which interact with each other through cross attention. The spatial stream performs action recognition from still video frames, whilst the temporal stream is trained to recognise actions from motion in the form of dense optical flow. Both streams convey their learned knowledge to the other stream in the form of attention maps. Cross attention allows us to exploit the availability of supplemental information and enhances the learning of both streams. To demonstrate the benefits of the proposed cross-stream spatio-temporal attention architecture, it has been evaluated on two standard action recognition benchmarks, where it improves on the previous performance.
    Contents: Abstract; Chapter 1 Introduction; Chapter 2 Related Work (2.1 Two-Stream Networks in Action Recognition, 2.2 Attention in Action Recognition); Chapter 3 Two-Stream Action Recognition Network with Spatio-Temporal Attention (3.1 Effective Attention Extraction, 3.2 Action Pattern Learning); Chapter 4 Experiments (4.1 Datasets and Implementation Details, 4.2 Performance Comparison); Chapter 5 Conclusion; Abstract (English).
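    A minimal sketch of the cross-stream attention idea under stated assumptions (the module name CrossStreamAttention, the dimensions, and the token counts are hypothetical, not the thesis code): tokens from the spatial (RGB) stream and the temporal (optical-flow) stream attend to each other with multi-head cross attention, and each stream keeps its own features plus the other's cues via a residual connection.

        # Hedged sketch: mutual cross attention between two recognition streams.
        import torch
        import torch.nn as nn

        class CrossStreamAttention(nn.Module):
            def __init__(self, dim=256, heads=4):
                super().__init__()
                self.spat_from_temp = nn.MultiheadAttention(dim, heads, batch_first=True)
                self.temp_from_spat = nn.MultiheadAttention(dim, heads, batch_first=True)

            def forward(self, spatial_tokens, temporal_tokens):
                # each input: (B, N, dim) tokens, e.g. flattened feature-map locations
                s_att, _ = self.spat_from_temp(spatial_tokens, temporal_tokens, temporal_tokens)
                t_att, _ = self.temp_from_spat(temporal_tokens, spatial_tokens, spatial_tokens)
                # residual fusion: each stream keeps its own features plus the other's cues
                return spatial_tokens + s_att, temporal_tokens + t_att

        spat = torch.randn(2, 49, 256)   # RGB-stream tokens (e.g. a 7x7 grid)
        temp = torch.randn(2, 49, 256)   # optical-flow-stream tokens
        s_out, t_out = CrossStreamAttention()(spat, temp)
        print(s_out.shape, t_out.shape)  # torch.Size([2, 49, 256]) for both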