
    Recent Developments in Video Surveillance

    With surveillance cameras installed everywhere and continuously streaming thousands of hours of video, how can that huge amount of data be analyzed or even be useful? Is it possible to search those countless hours of video for subjects or events of interest? Shouldn't the presence of a car stopped at a railroad crossing trigger an alarm system to prevent a potential accident? In the chapters selected for this book, experts in video surveillance provide answers to these questions and to other interesting problems, skillfully blending research experience with practical real-life applications. Academic researchers will find a reliable compilation of relevant literature in addition to pointers to current advances in the field. Industry practitioners will find useful hints about state-of-the-art applications. The book also provides directions for open problems where further advances can be pursued.

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
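    As a rough illustration of the event data format just described, the Python sketch below (the field names and the accumulation scheme are illustrative assumptions, not any particular camera SDK) collects a stream of (timestamp, x, y, polarity) events over a time window into a frame-like image.

        import numpy as np

        # Each event records when and where the brightness changed and in which
        # direction. The structured dtype below is an illustrative assumption.
        events = np.array(
            [(10, 5, 3, 1), (12, 5, 4, -1), (15, 6, 3, 1)],   # (t_us, x, y, polarity)
            dtype=[("t", np.int64), ("x", np.int32), ("y", np.int32), ("p", np.int8)],
        )

        def accumulate(events, width, height, t_start, t_end):
            """Sum event polarities per pixel over a time window to form an image."""
            img = np.zeros((height, width), dtype=np.int32)
            win = events[(events["t"] >= t_start) & (events["t"] < t_end)]
            np.add.at(img, (win["y"], win["x"]), win["p"])
            return img

        frame = accumulate(events, width=8, height=8, t_start=0, t_end=20)
        print(frame)

    Accumulating events into frames is only one of many possible representations; as the abstract notes, the survey also covers learning-based techniques and spiking neural networks for processing such streams.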

    Globally-Coordinated Locally-Linear Modeling of Multi-Dimensional Data

    This thesis considers the problem of modeling and analysis of continuous, locally-linear, multi-dimensional spatio-temporal data. Our work extends the previously reported theoretical work on the global coordination model to temporal analysis of continuous, multi-dimensional data. We have developed algorithms for time-varying data analysis and used them in full-scale, real-world applications. The applications demonstrated in this thesis include tracking, synthesis, recognition and retrieval of dynamic objects based on their shape, appearance and motion. The proposed approach has advantages over existing approaches to analyzing complex spatio-temporal data. Experiments show that the new modeling features of our approach improve the performance of existing approaches in many applications. In object tracking, our approach is the first to track nonlinear appearance variations by using a low-dimensional representation of the appearance change in globally-coordinated linear subspaces. In dynamic texture synthesis, we are able to model non-stationary dynamic textures, which cannot be handled by any of the existing approaches. In human motion synthesis, we show that realistic synthesis can be performed without using specific transition points or key frames.
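    To make the locally-linear modeling idea concrete, here is a minimal Python sketch under simplifying assumptions: it partitions the data and fits one linear (PCA) subspace per local region, and it deliberately omits the global coordination step that aligns the local coordinate systems, which is the part the thesis builds on.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 10))         # placeholder high-dimensional data

        # Partition the data and fit one local linear (PCA) model per region.
        k, d = 5, 2                            # number of local models, latent dimension
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        local_models = [PCA(n_components=d).fit(X[labels == c]) for c in range(k)]

        # Each point gets low-dimensional coordinates in its own local subspace.
        # A globally-coordinated model would additionally align these local
        # coordinate systems into a single consistent global space.
        Z = np.vstack([local_models[labels[i]].transform(X[i:i + 1]) for i in range(len(X))])
        print(Z.shape)                         # (500, 2)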

    Memory Based Online Learning of Deep Representations from Video Streams

    We present a novel online unsupervised method for face identity learning from video streams. The method exploits deep face descriptors together with a memory-based learning mechanism that takes advantage of the temporal coherence of visual data. Specifically, we introduce a discriminative feature matching solution based on Reverse Nearest Neighbour and a feature forgetting strategy that detects redundant features and discards them appropriately as time progresses. It is shown that the proposed learning procedure is asymptotically stable and can be effectively used in relevant applications like multiple face identification and tracking from unconstrained video streams. Experimental results show that the proposed method achieves results comparable with offline approaches exploiting future information in the task of multiple face tracking, and better performance in face identification. Code will be publicly available.
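    The sketch below illustrates one way a Reverse Nearest Neighbour matching rule can be implemented (an illustrative Python reconstruction, not the authors' code): each stored memory descriptor votes for its nearest neighbour among the descriptors of the current frame, and an incoming descriptor is matched to exactly those memory items for which it is the reverse nearest neighbour.

        import numpy as np

        def reverse_nn_matches(memory, incoming):
            """For each incoming descriptor, return the indices of memory descriptors
            whose nearest neighbour within the incoming batch is that descriptor
            (its 'reverse nearest neighbours'). Plain Euclidean distance is used
            here as a simplification."""
            # pairwise distances: memory (M, D) vs incoming (Q, D) -> (M, Q)
            d = np.linalg.norm(memory[:, None, :] - incoming[None, :, :], axis=-1)
            nn_of_memory = d.argmin(axis=1)    # nearest incoming item per memory item
            return {q: np.flatnonzero(nn_of_memory == q) for q in range(len(incoming))}

        memory = np.random.rand(100, 128)      # stored face descriptors (illustrative)
        incoming = np.random.rand(8, 128)      # descriptors from the current frame
        matches = reverse_nn_matches(memory, incoming)
        # An incoming face backed by many reverse nearest neighbours in memory is a
        # confident match; one with none may correspond to a new identity.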

    Histogram of Oriented Principal Components for Cross-View Action Recognition

    Existing techniques for 3D action recognition are sensitive to viewpoint variations because they extract features from depth images, which are viewpoint dependent. In contrast, we directly process pointclouds for cross-view action recognition from unknown and unseen views. We propose the Histogram of Oriented Principal Components (HOPC) descriptor, which is robust to noise, viewpoint, scale and action speed variations. At a 3D point, HOPC is computed by projecting the three scaled eigenvectors of the pointcloud within its local spatio-temporal support volume onto the vertices of a regular dodecahedron. HOPC is also used for the detection of Spatio-Temporal Keypoints (STK) in 3D pointcloud sequences, so that view-invariant STK descriptors (or Local HOPC descriptors) at only these key locations are used for action recognition. We also propose a global descriptor computed from the normalized spatio-temporal distribution of STKs in 4-D, which we refer to as STK-D. We have evaluated the performance of our proposed descriptors against nine existing techniques on two cross-view and three single-view human action recognition datasets. Experimental results show that our techniques provide significant improvement over state-of-the-art methods.
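    A rough Python sketch of the projection step described above (a simplified reconstruction under assumptions, not the authors' implementation): the covariance of a local point neighbourhood is eigen-decomposed, each eigenvector is scaled by its eigenvalue, and the scaled axes are projected onto the 20 vertex directions of a regular dodecahedron to form the descriptor bins.

        import numpy as np

        def dodecahedron_vertices():
            """Return the 20 unit-length vertex directions of a regular dodecahedron."""
            phi = (1 + np.sqrt(5)) / 2
            verts = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
            verts += [(0, s / phi, t * phi) for s in (-1, 1) for t in (-1, 1)]
            verts += [(s / phi, t * phi, 0) for s in (-1, 1) for t in (-1, 1)]
            verts += [(s * phi, 0, t / phi) for s in (-1, 1) for t in (-1, 1)]
            verts = np.array(verts, dtype=float)
            return verts / np.linalg.norm(verts, axis=1, keepdims=True)

        def hopc_descriptor(points):
            """Simplified HOPC-style descriptor for one local neighbourhood of 3D points:
            project the eigenvalue-scaled principal axes of the neighbourhood onto the
            dodecahedron vertex directions and keep only the positive responses."""
            cov = np.cov(points.T)                      # 3x3 covariance of the neighbourhood
            eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
            verts = dodecahedron_vertices()             # (20, 3)
            parts = []
            for val, vec in zip(eigvals[::-1], eigvecs[:, ::-1].T):  # largest axis first
                proj = verts @ (val * vec)              # projection onto each vertex direction
                parts.append(np.maximum(proj, 0))       # rectification, as a simplification
            return np.concatenate(parts)                # 60-D descriptor (20 bins per axis)

        pts = np.random.rand(200, 3)                    # illustrative local point neighbourhood
        print(hopc_descriptor(pts).shape)               # (60,)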

    A Background-Centric Approach for Moving Object Detection in a Moving Camera

    Doctoral dissertation, Seoul National University Graduate School, Department of Electrical and Computer Engineering, February 2017. Advisor: 최진영. A number of surveillance cameras have been installed for safety and security in real environments. To achieve human-level visual intelligence via cameras, much effort has gone into developing computer vision algorithms that realize various visual functions, from low level to high level. Among them, moving object detection is a fundamental function, because attention to a moving object is essential to understanding its high-level behavior. Most moving object detection algorithms for a fixed camera adopt the background-centric modeling approach. However, the background-centric approach does not work well with a moving camera, because modeling the moving background in an online way is challenging. Until now, most algorithms for object detection in a moving camera have relied on the object-centric approach using appearance-based recognition schemes, but the object-centric approach suffers from heavy computational complexity. In this thesis, we propose an efficient and robust scheme based on the background-centric approach to detect moving objects in dynamic background environments using moving cameras. To tackle the challenges arising from the dynamic background, we deal with four problems: false positives from inaccurate camera motion estimation, sudden scene changes such as illumination variation, objects moving slowly relative to the camera movement, and the limitation of the motion model in dashcam videos.

To address the false positives caused by motion estimation error, we propose a new scheme that improves the robustness of moving object detection in a moving camera. To lessen the influence of background motion, we adopt a dual-mode kernel model that builds two background models using grid-based modeling. In addition, to reduce false detections and the missing of true objects, we introduce an attentional sampling scheme based on the spatio-temporal properties of moving objects. From these properties, we build a foreground probability map and generate a sampling map that selects the candidate pixels in which to find the actual objects. We apply background subtraction and model updates only to the selected pixels.

To resolve sudden scene changes and the slow-moving-object problem, we propose a situation-aware background learning method that handles dynamic scenes for moving object detection in a moving camera. We introduce new modules that utilize situation variables and build a background model adaptively. Our method compensates for camera movement and updates the background model according to the situation variables. The situation-aware scheme enables the algorithm to build a clean background model without contamination by the foreground.

To overcome the limitation of the motion model in dashcam videos, we propose a prior-based attentional update scheme to handle dynamic scene changes. Motivated by the center-focused and structure-focused tendencies of human attention, we extend the compensation-based method so that it focuses on changes at the center and neglects minor changes on the important scene structure. The center-focused tendency is implemented by increasing the learning rate of the boundary region through multiplication of the attention map and the age model. The structure-focused tendency is used to build a robust background model through model selection after the road and sky regions are estimated.

In experiments, the proposed framework shows its efficiency and robustness through qualitative and quantitative comparison with the state of the art. The first scheme takes only 4.8 ms to process one frame, without parallel processing. The second scheme adapts to rapidly changing scenes while maintaining performance and speed. The third scheme, targeting the driving situation, achieves successful background modeling and moving object detection in dashcam videos.

Table of contents (page numbers omitted): 1 Introduction (Background; Related works; Contributions; Contents of Thesis). 2 Problem Statements (Background-centric approach for a fixed camera; Problem statements for a moving camera). 3 Dual Modeling with Attentional Sampling (Dual-mode modeling for a moving camera: age model for adaptive learning rate, grid-based modeling, dual-mode kernel modeling, motion compensation by mixing models; Dual-mode modeling with attentional sampling: foreground probability map based on occurrence, sampling map generation, model update with sampling map, probabilistic foreground decision; Benefits). 4 Situation-aware Background Learning (Situation variable estimation: background motion estimation, foreground motion estimation, illumination change estimation; Situation-aware background learning: situation-aware warping of the background model, situation-aware update of the background model; Foreground decision; Benefits). 5 Prior-based Attentional Update for Dashcam Video (Camera motion estimation; Road and sky region estimation; Background learning; Foreground result combining; Benefits). 6 Experiments (Qualitative comparisons: dual modeling with attentional sampling, situation-aware background learning, prior-based attentional update; Quantitative comparisons: dual modeling with attentional sampling, situation-aware background learning, prior-based attentional update (PBAU), runtime evaluation, unified framework; Application: combining with a recognition algorithm; Discussion: issues, strength, limitation). 7 Concluding Remarks and Future Works. Bibliography. Abstract (in Korean).
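    As a rough illustration of the grid-based, age-weighted background modeling summarized above, the Python sketch below keeps a running mean, variance and age per grid cell, decreases the learning rate as the age grows, and flags cells that deviate strongly from the model as foreground. It is a deliberately simplified single-mode stand-in; the dual-mode kernel, motion compensation, attentional sampling and situation variables of the thesis are omitted.

        import numpy as np

        class GridBackgroundModel:
            """Toy grid-based background model with an age-dependent learning rate."""

            def __init__(self, grid_h, grid_w, var_init=400.0):
                self.mean = np.zeros((grid_h, grid_w))
                self.var = np.full((grid_h, grid_w), var_init)
                self.age = np.zeros((grid_h, grid_w))

            def update(self, cell_means):
                """cell_means: per-grid-cell mean intensity of the current frame."""
                self.age += 1.0
                alpha = 1.0 / self.age             # adaptive learning rate from the age model
                diff = cell_means - self.mean
                self.mean += alpha * diff
                self.var = (1 - alpha) * self.var + alpha * diff ** 2

            def foreground(self, cell_means, k=2.5):
                """A cell is foreground if it deviates k standard deviations from the model."""
                return (cell_means - self.mean) ** 2 > (k ** 2) * self.var

        model = GridBackgroundModel(grid_h=30, grid_w=40)
        for frame_cells in np.random.rand(10, 30, 40) * 255:   # illustrative frame stream
            fg = model.foreground(frame_cells)
            model.update(frame_cells)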

    Weakly Labeled Action Recognition and Detection

    Research in human action recognition strives to develop increasingly generalized methods that are robust to intra-class variability and inter-class ambiguity. Recent years have seen tremendous strides in improving recognition accuracy on ever larger and more complex benchmark datasets comprising realistic actions in videos captured in the wild. Unfortunately, the all-encompassing, dense, global representations that bring about such improvements often benefit from inherent characteristics, specific to datasets and classes, that do not necessarily reflect knowledge about the entity to be recognized. This results in specific models that perform well within datasets but generalize poorly. Furthermore, training supervised action recognition and detection methods requires many precise spatio-temporal manual annotations to achieve good recognition and detection accuracy. For instance, current deep learning architectures require millions of accurately annotated videos to learn robust action classifiers. However, these annotations are quite difficult to obtain.

In the first part of this dissertation, we explore the reasons for poor classifier performance when tested on novel datasets, and quantify the effect of scene backgrounds on action representations and recognition. We attempt to address the problem of recognizing human actions while training and testing on distinct datasets, when test videos are neither labeled nor available during training. In this scenario, learning of a joint vocabulary or domain transfer techniques are not applicable. We perform different types of partitioning of the GIST feature space for several datasets and compute measures of background scene complexity, as well as of the extent to which scenes are helpful in action classification. We then propose a new process to obtain a measure of confidence in each pixel of the video being a foreground region, using motion, appearance, and saliency together in a 3D Markov Random Field (MRF) based framework. We also propose multiple ways to exploit the foreground confidence: to improve the bag-of-words vocabulary, the histogram representation of a video, and a novel histogram-decomposition based representation and kernel.

The above-mentioned work provides the probability of each pixel belonging to the actor; however, it does not give the precise spatio-temporal location of the actor. Furthermore, the above framework would require precise spatio-temporal manual annotations to train an action detector, and manual annotations in videos are laborious, require several annotators and contain human biases. Therefore, in the second part of this dissertation, we propose a weakly labeled approach to automatically obtain spatio-temporal annotations of actors in action videos. We first obtain a large number of action proposals in each video. To capture the few most representative action proposals in each video and avoid processing thousands of them, we rank them using optical flow and saliency in a 3D-MRF based framework and select a few proposals using a MAP-based proposal subset selection method. We demonstrate that this ranking preserves the high-quality action proposals. Several such proposals are generated for each video of the same action. Our next challenge is to iteratively select one proposal from each video so that all proposals are globally consistent. We formulate this as a Generalized Maximum Clique Problem (GMCP) using shape, global and fine-grained similarity of proposals across the videos. The output of our method is the most action-representative proposal from each video. Our method can also annotate multiple instances of the same action in a video. Moreover, action detection experiments using annotations obtained by our method and several baselines demonstrate the superiority of our approach.

The above-mentioned annotation method uses multiple videos of the same action. Therefore, in the third part of this dissertation, we tackle the problem of spatio-temporal action localization in a video without assuming the availability of multiple videos or any prior annotations. The action is localized by employing images downloaded from the Internet using the action label. Given web images, we first dampen image noise using a random walk and avoid distracting backgrounds within images using image action proposals. Then, given a video, we generate multiple spatio-temporal action proposals. We suppress camera- and background-generated proposals by exploiting optical flow gradients within proposals. To obtain the most action-representative proposals, we propose to reconstruct action proposals in the video by leveraging the action proposals in images. Moreover, we preserve the temporal smoothness of the video and reconstruct all proposal bounding boxes jointly, using constraints that push the coefficients for each bounding box toward a common consensus, thus enforcing coefficient similarity across multiple frames. We solve this optimization problem using a variant of the two-metric projection algorithm. Finally, the video proposal that has the lowest reconstruction cost and is motion salient is used to localize the action. Our method is not only applicable to trimmed videos, but can also be used for action localization in untrimmed videos, which is a very challenging problem.

Finally, in the last part of this dissertation, we propose a novel approach to generate a few properly ranked action proposals from a large number of noisy proposals. The proposed approach begins with dividing each proposal into sub-proposals. We assume that the quality of a proposal remains the same within each sub-proposal. We then employ a graph optimization method to recombine the sub-proposals from all action proposals in a single video in order to optimally build new action proposals and rank them by the combined node and edge scores. For an untrimmed video, we first divide the video into shots and then build the above-mentioned graph within each shot. Our method generates a few ranked proposals that can be better than all the existing underlying proposals. Our experimental results validate that properly ranked action proposals can significantly boost action detection results. Our extensive experimental results on different challenging and realistic action datasets, comparisons with several competitive baselines, and detailed analysis of each step of the proposed methods validate the proposed ideas and frameworks.
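    A minimal sketch of the sub-proposal recombination idea in the last paragraph (an illustrative Python reconstruction under assumptions, not the authors' formulation): proposals are split into temporally aligned sub-proposals, and a best chain through the temporal segments is found by dynamic programming on combined node scores (quality of a sub-proposal) and edge scores (compatibility between consecutive sub-proposals, e.g. spatial overlap).

        import numpy as np

        def recombine_subproposals(node_scores, edge_score):
            """Dynamic programming over temporally ordered sub-proposals.
            node_scores: list over time segments; element t is an array of scores,
                         one per candidate sub-proposal in segment t.
            edge_score(t, i, j): compatibility of sub-proposal i in segment t with
                         sub-proposal j in segment t + 1.
            Returns the best-scoring chain of sub-proposal indices and its score."""
            T = len(node_scores)
            best = [np.asarray(node_scores[0], dtype=float)]
            back = []
            for t in range(1, T):
                cur = np.asarray(node_scores[t], dtype=float)
                prev = best[-1]
                # total[i, j] = score of ending at j in segment t, coming from i in t - 1
                total = prev[:, None] + np.array(
                    [[edge_score(t - 1, i, j) for j in range(len(cur))] for i in range(len(prev))]
                ) + cur[None, :]
                back.append(total.argmax(axis=0))
                best.append(total.max(axis=0))
            # backtrack the highest-scoring chain
            path = [int(best[-1].argmax())]
            for t in range(T - 2, -1, -1):
                path.append(int(back[t][path[-1]]))
            path.reverse()
            return path, float(best[-1].max())

        # Illustrative use: 4 time segments, 3 candidate sub-proposals each.
        rng = np.random.default_rng(0)
        scores = [rng.random(3) for _ in range(4)]
        overlap = lambda t, i, j: 1.0 if i == j else 0.2   # toy spatial-consistency score
        print(recombine_subproposals(scores, overlap))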