
    Artificial Intelligence Enabled Methods for Human Action Recognition using Surveillance Videos

    Computer vision applications have long attracted researchers in academia, all the more so now that cloud computing resources can support them at scale. Analysing video surveillance footage has become an important research area owing to its widespread applications; for instance, CCTV cameras are used in public places to monitor situations and identify instances of theft or crime. With thousands of such surveillance videos streaming simultaneously, manual analysis is a tedious and time-consuming task, so automated approaches are needed to analyse the footage and deliver notifications or findings to the officers concerned. Such automation helps police and investigation agencies ascertain facts, recover evidence, and support digital forensics. In this context, this paper surveys methods for human action recognition (HAR) based on machine learning (ML) and deep learning (DL), both of which fall under Artificial Intelligence (AI). It also reviews methods for privacy-preserving action recognition and for Generative Adversarial Networks (GANs), catalogues the datasets used in HAR research, and gives an account of research gaps that can guide further work in the area.
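    As a hedged illustration of the kind of automated analysis the survey motivates, the sketch below runs an off-the-shelf pretrained 3D-CNN action classifier over a surveillance clip and raises a notification when a watched action is detected with sufficient confidence. It assumes PyTorch and torchvision; the watchlist and threshold are illustrative placeholders, not anything prescribed by the paper.

```python
import torch
from torchvision.models.video import r3d_18, R3D_18_Weights

# Kinetics-400 pretrained 3D CNN from torchvision (an off-the-shelf choice,
# not a model from the surveyed paper).
weights = R3D_18_Weights.KINETICS400_V1
model = r3d_18(weights=weights).eval()
preprocess = weights.transforms()           # resizing + normalisation preset
labels = weights.meta["categories"]

# Hypothetical watchlist and confidence threshold for raising notifications.
ALERT_ACTIONS = {"punching person (boxing)", "wrestling"}
THRESHOLD = 0.5

@torch.no_grad()
def scan_clip(frames: torch.Tensor):
    """frames: uint8 tensor (T, H, W, C) sampled from a CCTV stream."""
    clip = preprocess(frames.permute(0, 3, 1, 2))   # -> (C, T, H, W)
    probs = model(clip.unsqueeze(0)).softmax(dim=1)[0]
    conf, idx = probs.max(dim=0)
    action, conf = labels[idx.item()], conf.item()
    if action in ALERT_ACTIONS and conf >= THRESHOLD:
        print(f"ALERT: '{action}' detected ({conf:.2f}), notify operator")
    return action, conf
```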

    μ‹œκ³΅κ°„ μ£Όμ˜μ§‘μ€‘μ„ κ°–λŠ” 이쀑 흐름 행동인식 신경망

    Master's thesis (MS), Seoul National University Graduate School, College of Engineering, Department of Computer Science and Engineering, August 2019. Advisor: μ „ν™”μˆ™.
    Today, active research on deep neural networks and advances in data storage and processing technology have drawn growing attention to recognition problems on large-scale data with a temporal dimension, such as video, rather than images alone. In particular, the two-stream network has been the mainstream architecture for video action recognition ever since it first showed that features learned by neural networks can outperform conventional hand-crafted features. This thesis extends that architecture, proposing a design that adds spatio-temporal attention to the two independently trained streams used for action recognition in video. Through cross attention, the previously independent networks learn complementarily from each other, leading to improved performance. The proposed architecture was evaluated on the standard HMDB-51 video action recognition benchmark, where it achieved better performance than the existing architecture.
    The two-stream architecture has been mainstream since the success of [1], but its two streams of information are processed independently and do not interact until the late fusion. We investigate a different spatio-temporal attention architecture based on two separate recognition streams (spatial and temporal) that interact with each other through cross attention. The spatial stream performs action recognition from still video frames, whilst the temporal stream is trained to recognise actions from motion in the form of dense optical flow. Each stream conveys its learned knowledge to the other in the form of attention maps. Cross attention allows us to exploit the availability of supplemental information and enhances the learning of both streams. To demonstrate the benefits of our proposed cross-stream spatio-temporal attention architecture, we evaluate it on two standard action recognition benchmarks, where it improves on previous performance.
    Contents: Abstract; 1. Introduction; 2. Related Work (2.1 Two-Stream Networks in Action Recognition, 2.2 Attention in Action Recognition); 3. Two-Stream Action Recognition Network with Spatio-Temporal Attention (3.1 Effective Attention Extraction, 3.2 Learning Action Patterns); 4. Experiments (4.1 Datasets and Implementation Details, 4.2 Performance Comparison); 5. Conclusion; Abstract (English).
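    The cross-attention exchange described above can be illustrated with a short sketch: each stream condenses its own feature map into a spatial attention map and hands it to the other stream, so the RGB stream is reweighted by motion saliency and the flow stream by appearance saliency. The module below is a hedged illustration of that idea, with hypothetical layer names and a simple residual gating; it is not the thesis's implementation.

```python
import torch
import torch.nn as nn

class CrossStreamAttention(nn.Module):
    """Exchange attention maps between an RGB stream and a flow stream."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions collapse each stream's feature map into a
        # single-channel spatial attention map.
        self.spatial_gate = nn.Conv2d(channels, 1, kernel_size=1)
        self.temporal_gate = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, f_spatial: torch.Tensor, f_temporal: torch.Tensor):
        # f_spatial:  (B, C, H, W) features from still RGB frames
        # f_temporal: (B, C, H, W) features from stacked dense optical flow
        att_from_spatial = torch.sigmoid(self.spatial_gate(f_spatial))
        att_from_temporal = torch.sigmoid(self.temporal_gate(f_temporal))
        # Each stream is modulated by the *other* stream's attention map;
        # the residual (1 + att) form preserves the original features when
        # the counterpart's map is uninformative.
        out_spatial = f_spatial * (1.0 + att_from_temporal)
        out_temporal = f_temporal * (1.0 + att_from_spatial)
        return out_spatial, out_temporal

# Usage: insert between matching stages of the two backbone CNNs, e.g.
xa = torch.randn(2, 256, 14, 14)   # spatial-stream features
xb = torch.randn(2, 256, 14, 14)   # temporal-stream features
xa, xb = CrossStreamAttention(256)(xa, xb)
```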

    Discriminative Video Representation Learning

    Representation learning is a fundamental research problem in machine learning: refining raw data to discover the representations needed for various applications. Real-world data, particularly video, is neither mathematically nor computationally convenient to process because of its semantic redundancy and complexity. Unlike images, video carries temporal correlation and motion dynamics, yet its ground-truth labels are normally limited to category labels, which makes video representation learning a challenging problem. To this end, this thesis addresses discriminative video representation learning, which focuses on capturing useful data distributions and reliable feature representations that improve the performance of varied downstream tasks. We argue that neither all frames in a video nor all dimensions of a feature vector are equally useful, and that they should not be treated equally during video representation learning. Based on this argument, the thesis investigates several novel algorithms across multiple application scenarios, such as action recognition, action detection, and one-class video anomaly detection. The proposed methods produce discriminative video features in both deep and non-deep learning setups. Specifically, they take the form of: 1) an early fusion layer that adopts a temporal ranking SVM formulation, agglomerating several optical flow images from consecutive frames into a novel compact representation named dynamic optical flow images; 2) an intermediate feature aggregation layer that applies weakly-supervised contrastive learning, learning discriminative video representations by contrasting positive and negative samples from a sequence; and 3) a new formulation for one-class feature learning that learns a set of discriminative subspaces with orthonormal hyperplanes to flexibly bound the one-class data distribution using Riemannian optimisation methods. We provide extensive experiments to build intuition into why the learned representations are discriminative and useful. All methods proposed in this thesis are evaluated on standard, publicly available benchmarks and demonstrate state-of-the-art performance.
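    Of the three contributions, the first is the most self-contained to sketch. Using the common closed-form approximation to rank pooling (weighting frame t by alpha_t = 2t - T - 1 instead of solving the ranking SVM itself), a stack of optical-flow frames collapses into a single dynamic optical flow image, as below. This NumPy sketch illustrates the general idea, not the thesis's exact formulation.

```python
import numpy as np

def approx_rank_pool(flow_frames: np.ndarray) -> np.ndarray:
    """Collapse a (T, H, W, 2) stack of dense optical flow fields into a
    single (H, W, 2) dynamic optical flow image whose pixel values encode
    the temporal ordering of the sequence."""
    T = flow_frames.shape[0]
    t = np.arange(1, T + 1, dtype=np.float64)
    alpha = 2.0 * t - T - 1.0        # later frames receive larger weights
    dynamic = np.tensordot(alpha, flow_frames.astype(np.float64), axes=(0, 0))
    # Rescale to a comparable range before feeding a downstream classifier.
    return dynamic / (np.abs(dynamic).max() + 1e-8)
```

    The appeal of such a collapsed representation is that the resulting single image can be fed to an ordinary 2D CNN, sidestepping explicit per-frame temporal modelling.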