18 research outputs found

    Autonomous real-time surveillance system with distributed IP cameras

    An autonomous Internet Protocol (IP) camera-based object tracking and behaviour identification system, capable of running in real time on an embedded system with limited memory and processing power, is presented in this paper. The main contribution of this work is the integration of processor-intensive image processing algorithms on an embedded platform capable of running in real time for monitoring the behaviour of pedestrians. The Algorithm Based Object Recognition and Tracking (ABORAT) system architecture presented here was developed on an Intel PXA270-based development board clocked at 520 MHz. The platform was connected to a commercial stationary IP-based camera in a remote monitoring station for intelligent image processing. The system is capable of detecting moving objects and their shadows in a complex environment with varying lighting intensity and moving foliage. Objects moving close to each other are also detected, and their trajectories are extracted and fed into an unsupervised neural network for autonomous classification. The intelligent video system presented is also capable of performing simple analytic functions such as tracking and generating alerts when objects enter/leave regions or cross tripwires superimposed on the live video by the operator.
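
    As a concrete illustration of the region/tripwire-alert function described above, the following is a minimal sketch built on OpenCV background subtraction; it is not the ABORAT implementation, and the stream URL, tripwire position, and area threshold are placeholder assumptions.

```python
# Minimal sketch of the tripwire-alert idea described above (not the ABORAT
# implementation). The stream URL and tripwire coordinates are placeholders.
import cv2

STREAM_URL = "rtsp://camera.example/stream"   # hypothetical IP-camera stream
TRIPWIRE_Y = 240                              # horizontal tripwire (pixel row), assumption

cap = cv2.VideoCapture(STREAM_URL)
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)  # shadow-aware background model

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)
    # MOG2 marks shadows as 127; keep only confident foreground (255).
    _, fg = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 500:          # ignore small blobs (noise, foliage); assumed threshold
            continue
        x, y, w, h = cv2.boundingRect(c)
        cy = y + h // 2
        if abs(cy - TRIPWIRE_Y) < 5:          # object centre is on the tripwire
            print("ALERT: object crossing tripwire at x =", x)

cap.release()
```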

    Security System Based on Suspicious Behavior Detection

    In recent years, the demand for image analysis applications in video surveillance has grown rapidly. The latest advances in video surveillance have aimed at automating the monitoring itself, so that it is a computer (not the security personnel) that observes the images and detects suspicious behavior or events. In this context, we present a system for the automatic detection of suspicious behavior in public buildings that obtains a high-resolution image of the individual or individuals who have activated the alarm in the system.

    Implementation of Closed-circuit Television (CCTV) Using Wireless Internet Protocol (IP) Camera

    This paper presents three techniques for configuring, interfacing and networking a wireless IP-based camera for the design of real-time security surveillance systems. The three real-time implementation techniques proposed are: (1) accessing the IP-based camera using the WANSCAM or XXCAM vendor software, (2) accessing the IP-based camera via the Firefox® web browser, and (3) accessing the IP camera via MATLAB with SIMULINK on an internet-ready system. The live streaming video obtained with the proposed techniques can be adapted for image detection, recognition and tracking in real-time intelligent security surveillance system design. The paper also presents a thorough comparative analysis of the three methods of achieving video streaming from the output of the IP-based cameras. The analysis shows that the WANSCAM or XXCAM software displays the best video animation from the IP-based cameras when compared with the performance of the other methods.
    Keywords: Closed-circuit television, Internet protocol, Security surveillance, IP-based cameras, Wireless networking, animation
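
    For readers who want a programmatic starting point analogous to the MATLAB/Simulink route above, the following is a minimal sketch that opens a generic wireless IP camera stream with OpenCV in Python; the MJPEG URL and credentials are placeholder assumptions, not vendor-specific values from the paper.

```python
# Illustrative sketch of programmatic access to a wireless IP camera's video stream.
# The URL, user name and password are placeholders for a real camera endpoint.
import cv2

URL = "http://192.168.1.100/videostream.cgi?user=admin&pwd=admin"  # hypothetical MJPEG endpoint

cap = cv2.VideoCapture(URL)
if not cap.isOpened():
    raise RuntimeError("Could not open IP-camera stream")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("IP camera", frame)           # live preview window
    if cv2.waitKey(1) & 0xFF == ord("q"):    # press 'q' to stop
        break

cap.release()
cv2.destroyAllWindows()
```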

    Enhancing camera surveillance using computer vision: a research note

    Purpose - The growth of police-operated surveillance cameras has outpaced the ability of humans to monitor them effectively. Computer vision is a possible solution. An ongoing research project on the application of computer vision within a municipal police department is described. The paper aims to discuss these issues. Design/methodology/approach - Following a demystification of computer vision technology, its potential for police agencies is developed with a focus on computer vision as a solution for two common surveillance camera tasks (live monitoring of multiple surveillance cameras and summarizing archived video files). Three unaddressed research questions (can specialized computer vision applications for law enforcement be developed at this time, how will computer vision be utilized within existing public safety camera monitoring rooms, and what are the system-wide impacts of a computer vision capability on local criminal justice systems) are considered. Findings - Despite computer vision becoming accessible to law enforcement agencies, its impact has not been discussed or adequately researched. There is little knowledge of computer vision or its potential in the field. Originality/value - This paper introduces and discusses computer vision from a law enforcement perspective and will be valuable to police personnel tasked with monitoring large camera networks and considering computer vision as a system upgrade.

    Applicability Study of a Slope Motion Monitor Using Video Motion Detection Technology

    This study primarily investigates the applicability of video motion detection (VMD) technology for detecting side-slope movement. The technology uses an economical high-resolution camera to instantly record activities such as side-slope sliding, toppling, and flow, while sum-of-absolute-differences (SAD) analysis combined with a threshold value is used to assess side-slope surface movement. Physical-model detection results show that VMD technology instantly detects side-slope tension crack development, rock deformation, and the location of collapsing surfaces, thereby improving the effectiveness of alarms before and during disasters. Analysis of an actual landslide case shows that dip-slope sliding appears as slanted, elongated red blocks that gradually expand, followed by an abrupt enlargement of the red region once the failed mass spreads over the ground surface. The VMD-based monitoring of debris-flow speed and movement can likewise serve as a reference for disaster prevention and for evacuating people living in downstream areas. Furthermore, this study summarizes the limitations of VMD technology, which can serve as a reference for future slope surface movement monitoring and related studies.
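
    The block-wise sum-of-absolute-differences test with a threshold described above can be sketched as follows; the block size, threshold, and synthetic frames are illustrative assumptions rather than the values used in the study.

```python
# Sketch of the sum-of-absolute-differences (SAD) motion test described above,
# applied per block between consecutive greyscale frames.
import numpy as np

BLOCK = 16          # block size in pixels (assumption)
THRESHOLD = 2000    # per-block SAD value that counts as "moving" (assumption)

def moving_blocks(prev_gray: np.ndarray, curr_gray: np.ndarray) -> np.ndarray:
    """Return a boolean grid marking blocks whose SAD exceeds the threshold."""
    diff = np.abs(curr_gray.astype(np.int32) - prev_gray.astype(np.int32))
    h, w = diff.shape
    h, w = h - h % BLOCK, w - w % BLOCK              # crop to a whole number of blocks
    blocks = diff[:h, :w].reshape(h // BLOCK, BLOCK, w // BLOCK, BLOCK)
    sad = blocks.sum(axis=(1, 3))                    # per-block sum of absolute differences
    return sad > THRESHOLD

# Example: two synthetic frames with a bright patch that appears only in the second one.
prev = np.zeros((120, 160), dtype=np.uint8)
curr = prev.copy()
curr[40:80, 60:100] = 200
print(moving_blocks(prev, curr).sum(), "blocks flagged as moving")
```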

    Bayesian inference application to burglary detection

    Real-time motion tracking is very important for video analytics, but very little research has been done on identifying the top-level plans behind the atomic activities evident in surveillance footage [61]. Surveillance videos can contain high-level plans in the form of complex activities [61]. These complex activities are usually a combination of articulated activities, like breaking a windshield or digging, and non-articulated activities, like walking or running. We have developed a Bayesian framework for recognizing complex activities such as burglary. This framework (a belief network) is based on an expectation propagation algorithm [8] for approximate Bayesian inference. We provide experimental results showing the application of our framework to automatically detecting burglary from surveillance videos in real time.
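
    The following toy sketch illustrates, in greatly simplified form, how evidence for atomic activities can be combined into a belief about a top-level plan such as burglary; it uses a naive Bayes update with made-up priors and likelihoods, not the belief network with expectation-propagation inference developed in the paper.

```python
# Simplified illustration of combining noisy detections of atomic activities into a
# posterior belief that a burglary plan is under way. All numbers are assumptions.

PRIOR_BURGLARY = 0.01

# (P(activity observed | burglary), P(activity observed | no burglary)) -- assumed values.
LIKELIHOODS = {
    "breaking_windshield": (0.70, 0.01),
    "digging":             (0.40, 0.05),
    "running":             (0.60, 0.30),
}

def posterior_burglary(observed_activities):
    """Naive Bayes update of the burglary probability given observed atomic activities."""
    odds = PRIOR_BURGLARY / (1.0 - PRIOR_BURGLARY)
    for activity in observed_activities:
        p_given_b, p_given_not_b = LIKELIHOODS[activity]
        odds *= p_given_b / p_given_not_b
    return odds / (1.0 + odds)

print(posterior_burglary(["breaking_windshield", "running"]))  # ~0.59 with these numbers
```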

    Occlusion handling in multiple people tracking

    Object tracking with occlusion handling is a challenging problem in automated video surveillance. Occlusion handling and tracking have traditionally been treated as separate modules. We have proposed an automated video surveillance system which automatically detects occlusions and performs occlusion handling, while the tracker continues to track the resulting separated objects. A new approach based on sub-blobbing is presented for tracking objects accurately and steadily when the target encounters occlusion in video sequences. We have used a feature-based framework for tracking, which involves feature extraction and feature matching.
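
    The feature-extraction and feature-matching step of such a feature-based tracker can be sketched as follows using ORB keypoints and brute-force matching in OpenCV; this is an illustrative stand-in, not the authors' sub-blobbing tracker.

```python
# Minimal sketch of the feature-extraction / feature-matching step of a feature-based
# tracker: ORB keypoints from the previous object patch are matched against the
# current frame to re-locate the object.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_object(prev_patch: np.ndarray, curr_frame: np.ndarray):
    """Return matched keypoint pairs (prev_pt, curr_pt) between the object patch and the frame."""
    kp1, des1 = orb.detectAndCompute(prev_patch, None)
    kp2, des2 = orb.detectAndCompute(curr_frame, None)
    if des1 is None or des2 is None:
        return []
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches[:20]]

# Example usage (with real greyscale images): pairs = match_object(prev_patch, curr_frame)
```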

    Symbolic and Deep Learning Based Data Representation Methods for Activity Recognition and Image Understanding at Pixel Level

    Efficient representation of large amounts of data, particularly images and video, helps in the analysis, processing and overall understanding of the data. In this work, we present two frameworks that encapsulate the information present in such data. First, we present an automated symbolic framework to recognize particular activities in real time from videos. The framework uses regular expressions for symbolically representing (possibly infinite) sets of motion characteristics obtained from a video. It is a uniform framework that handles trajectory-based and periodic articulated activities and provides polynomial-time graph algorithms for fast recognition. The regular expressions representing motion characteristics can either be provided manually or learnt automatically from positive and negative examples of strings (that describe dynamic behavior) using offline automata learning frameworks. Confidence measures are associated with recognitions using the Levenshtein distance between a string representing a motion signature and the regular expression describing an activity. We have used our framework to recognize trajectory-based activities like vehicle turns (U-turns, left and right turns, and K-turns), vehicle start and stop, person running and walking, and periodic articulated activities like digging, waving, boxing, and clapping in videos from the VIRAT public dataset, the KTH dataset, and a set of videos obtained from YouTube.

    Next, we present a core sampling framework that uses activation maps from several layers of a Convolutional Neural Network (CNN) as features to another neural network, using transfer learning to provide an understanding of an input image. The intermediate map responses of a CNN contain information about an image that can be used to extract contextual knowledge about it. Our framework creates a representation that combines features from the test data with the contextual knowledge gained from the responses of a pretrained network, processes it and feeds it to a separate Deep Belief Network. We use this representation to extract more information from an image at the pixel level, hence gaining understanding of the whole image. We experimentally demonstrate the usefulness of our framework using a pretrained VGG-16 model to perform segmentation on the BAERI dataset of Synthetic Aperture Radar (SAR) imagery and the CAMVID dataset.

    Using this framework, we also reconstruct images by removing noise from noisy character images. The reconstructed images are encoded using quadtrees, which can be an efficient representation when learning from sparse features. Handwritten character images are quite susceptible to noise, so preprocessing stages that make the raw data cleaner can improve the efficacy of their use. We improve upon the efficiency of probabilistic quadtrees by using a pixel-level classifier to extract the character pixels and remove noise from the images. The pixel-level denoiser uses a pretrained CNN trained on a large image dataset and uses transfer learning to aid the reconstruction of characters. In this work, we primarily deal with classification of noisy characters: we create noisy versions of the handwritten Bangla Numeral and Basic Character datasets and use them, together with the Noisy MNIST dataset, to demonstrate the usefulness of our approach.
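
    The symbolic-recognition idea, matching a quantised motion string against a regular expression for an activity and attaching a Levenshtein-based confidence, can be sketched as follows; the direction alphabet, the U-turn pattern, and the example strings are illustrative assumptions, not taken from the paper.

```python
# Toy sketch of the symbolic-recognition idea: a trajectory quantised into direction
# symbols is matched against a regular expression for an activity, and an edit-distance
# based confidence is reported. Alphabet, pattern and examples are illustrative.
import re

# Alphabet (assumption): N, E, S, W for coarse heading of consecutive trajectory segments.
U_TURN = re.compile(r"N+(E|W)?S+")     # heading north, brief turn, then heading south

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def recognise(motion_string: str, pattern: re.Pattern, canonical: str):
    """Match the motion string and report a confidence based on edit distance to a canonical example."""
    matched = pattern.fullmatch(motion_string) is not None
    dist = levenshtein(motion_string, canonical)
    confidence = 1.0 - dist / max(len(motion_string), len(canonical))
    return matched, confidence

print(recognise("NNNNWSSSS", U_TURN, "NNNNNSSSSS"))   # (True, 0.8)
```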