
    Advanced Video-Based Surveillance

    Over the past decade, we have witnessed tremendous growth in the demand for personal security and the defense of vital infrastructure throughout the world. At the same time, rapid advances in video-based surveillance have emerged, offering a strategic technology to address the demands imposed by security applications. These developments have led to a massive research effort devoted to effective and reliable surveillance systems endowed with intelligent video-processing capabilities. As a result, advanced video-based surveillance systems have been developed by research groups from academia and industry alike. In broad terms, advanced video-based surveillance can be described as intelligent video processing designed to assist security personnel by providing reliable real-time alerts and to support efficient video analysis for forensic investigations.

    PhD Forum: Investigating the performance of a multi-modal approach to unusual event detection

    In this paper, we investigate the parameters underpinning our previously presented system for detecting unusual events in surveillance applications [1]. The system identifies anomalous events using an unsupervised, data-driven approach. During a training period, typical activities within a surveilled environment are modeled using multi-modal sensor readings. Significant deviations from the established model of regular activity can then be flagged as anomalous at run-time. Using this approach, the system can be deployed and automatically adapt for use in any environment without manual adjustment. Experiments were carried out on two days of audio-visual data and evaluated against a manually annotated ground-truth. We investigate sensor fusion and quantitatively evaluate the performance gains over single-modality models. We also investigate different formulations of our cluster-based model of usual scenes, as well as the impact of dynamic thresholding on identifying anomalous events. Experimental results are promising, even when modeling is performed using very simple audio and visual features.
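    The cluster-based formulation described above lends itself to a short sketch. The snippet below is an illustrative outline only, not the paper's exact model or features: it clusters joint audio-visual feature vectors gathered during a training period, then flags test frames whose distance to the nearest cluster centre exceeds a simple dynamic threshold. The feature dimensionality, cluster count, and threshold rule are all assumptions.

    import numpy as np
    from sklearn.cluster import KMeans

    def fit_usual_model(train_feats, n_clusters=8):
        # Model "usual" activity as a set of cluster centres over the joint
        # audio-visual feature vectors collected during the training period.
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(train_feats)
        return km.cluster_centers_

    def detect_unusual(test_feats, centres, k_sigma=3.0, warmup=20):
        # Flag a frame when its nearest-centre distance exceeds a dynamic
        # threshold (mean + k_sigma * std of the distances seen so far).
        flags, history = [], []
        for x in test_feats:
            d = np.min(np.linalg.norm(centres - x, axis=1))
            if len(history) >= warmup:
                thr = np.mean(history) + k_sigma * np.std(history)
                flags.append(d > thr)
            else:
                flags.append(False)
            history.append(d)
        return np.array(flags)

    # Toy usage: 500 training frames and 100 test frames of 6-D features,
    # with a handful of injected outliers at the end of the test sequence.
    rng = np.random.default_rng(0)
    train = rng.normal(size=(500, 6))
    test = np.vstack([rng.normal(size=(95, 6)), rng.normal(6.0, 1.0, size=(5, 6))])
    flags = detect_unusual(test, fit_usual_model(train))
    print(np.nonzero(flags)[0])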

    Integrating multiple sensor modalities for environmental monitoring of marine locations

    In this paper we present preliminary work on integrating visual sensing with the more traditional sensing modalities for marine locations. We have deployed visual sensing at one of the Smart Coast WSN sites in Ireland and have built a software platform for gathering and synchronizing all sensed data. We describe how the analysis of a range of different sensor modalities can reinforce readings from a given noisy, unreliable sensor.
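    As a hypothetical illustration of how one modality can reinforce another, the sketch below predicts a noisy sensor's readings from the other synchronized modalities with a least-squares fit and flags readings whose residual is unusually large. The sensor names, the linear model, and the threshold are assumptions, not the deployed system.

    import numpy as np

    def flag_suspect_readings(target, others, z_thresh=3.0):
        # Predict `target` from the other modalities (plus a bias term) and
        # flag readings whose residual z-score is large, i.e. readings that
        # disagree with what the remaining sensors suggest.
        A = np.column_stack([others, np.ones(len(target))])
        coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
        residual = target - A @ coeffs
        z = (residual - residual.mean()) / residual.std()
        return np.abs(z) > z_thresh

    # Toy usage: water temperature cross-checked against air temperature and
    # turbidity, with one faulty reading injected at index 50.
    rng = np.random.default_rng(1)
    air = rng.normal(15.0, 3.0, 200)
    turbidity = rng.normal(5.0, 1.0, 200)
    water = 0.8 * air + 0.1 * turbidity + rng.normal(0.0, 0.2, 200)
    water[50] += 8.0
    print(np.nonzero(flag_suspect_readings(water, np.column_stack([air, turbidity])))[0])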

    Advance Intelligent Video Surveillance System (AIVSS): A Future Aspect

    Over the last few decades, remarkable growth in security-related infrastructure has been observed throughout the world. With this increased demand for security, video-based surveillance has become an important area of research. An intelligent video surveillance system essentially monitors behaviour, activities, or other changing information, usually concerning human beings, vehicles, or other objects, from a distance by means of electronic equipment (usually a digital camera). The goals of prevention, detection, and intervention have driven the development of real-time, consistent video surveillance systems capable of intelligent video processing. In broad terms, advanced video-based surveillance can be described as an intelligent video processing technique designed to assist security personnel by providing reliable real-time alerts and to support efficient video analysis for forensic investigations. This chapter deals with the various requirements for designing a robust and reliable video surveillance system. It also discusses the different types of cameras required in different environmental conditions, such as indoor and outdoor surveillance, and the different modelling schemes required for designing an efficient surveillance system under various illumination conditions.

    Multispectral object segmentation and retrieval in surveillance video

    This paper describes a system for object segmentation and feature extraction for surveillance video. Segmentation is performed by a dynamic vision system that fuses information from thermal infrared video with standard CCTV video in order to detect and track objects. Separate background modelling in each modality and dynamic mutual information based thresholding are used to provide initial foreground candidates for tracking. The belief in the validity of these candidates is ascertained using knowledge of foreground pixels and temporal linking of candidates. The transferable belief model is used to combine these sources of information and segment objects. Extracted objects are subsequently tracked using adaptive thermo-visual appearance models. In order to facilitate search and classification of objects in large archives, retrieval features from both modalities are extracted for tracked objects. Overall system performance is demonstrated in a simple retrieval scenario.
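    As a rough sketch of the front end only (assuming the two streams are already registered frame-to-frame), the snippet below maintains an independent OpenCV background model per modality and naively combines the resulting foreground masks. The paper's mutual-information-based thresholding and transferable belief model fusion are not reproduced here, and the file names are placeholders.

    import cv2

    bg_visible = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
    bg_thermal = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

    cap_visible = cv2.VideoCapture("visible.avi")   # placeholder input streams
    cap_thermal = cv2.VideoCapture("thermal.avi")
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

    while True:
        ok_v, frame_v = cap_visible.read()
        ok_t, frame_t = cap_thermal.read()
        if not (ok_v and ok_t):
            break
        mask_v = bg_visible.apply(frame_v)          # foreground candidates, visible
        mask_t = bg_thermal.apply(frame_t)          # foreground candidates, thermal
        # Naive combination: keep pixels flagged in either modality, then
        # clean up with a morphological opening before any tracking stage.
        fused = cv2.morphologyEx(cv2.bitwise_or(mask_v, mask_t), cv2.MORPH_OPEN, kernel)
        cv2.imshow("fused foreground candidates", fused)
        if cv2.waitKey(1) == 27:                    # Esc to stop
            break
    cap_visible.release()
    cap_thermal.release()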

    Sensor node localisation using a stereo camera rig

    In this paper, we use stereo vision processing techniques to detect and localise sensors used for monitoring simulated environmental events within an experimental sensor network testbed. Our sensor nodes communicate with the camera through patterns emitted by light emitting diodes (LEDs). Ultimately, we envisage the use of very low-cost, low-power, compact microcontroller-based sensing nodes that employ LED communication rather than power-hungry RF to transmit data, gathered via existing CCTV infrastructure. To facilitate our research, we have constructed a controlled environment where nodes and cameras can be deployed and potentially hazardous chemical or physical plumes can be introduced to simulate environmental pollution events in a controlled manner. In this paper we show how 3D spatial localisation of sensors becomes a straightforward task when a stereo camera rig is used rather than a more usual 2D CCTV camera.
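    For a rectified stereo rig, recovering the 3D position of a detected LED reduces to depth-from-disparity followed by back-projection. The sketch below illustrates only this geometric step, not the LED detection or decoding; the focal length, baseline, and principal point are invented calibration values.

    import numpy as np

    def localise_led(x_left, y_left, x_right,
                     focal_px=700.0, baseline_m=0.12, cx=320.0, cy=240.0):
        # Rectified stereo geometry: depth Z = f * B / disparity, then
        # back-project the left-image pixel into the left-camera frame.
        disparity = x_left - x_right
        if disparity <= 0:
            raise ValueError("non-positive disparity; check the matched detections")
        z = focal_px * baseline_m / disparity
        x = (x_left - cx) * z / focal_px
        y = (y_left - cy) * z / focal_px
        return np.array([x, y, z])   # metres, left-camera coordinates

    # Toy usage: the same blinking LED detected at (400, 250) in the left image
    # and (360, 250) in the right image of a 640x480 rectified pair.
    print(localise_led(400.0, 250.0, 360.0))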

    Flame Detection for Video-based Early Fire Warning Systems and 3D Visualization of Fire Propagation

    Early and accurate detection and localization of flame is an essential requirement of modern early fire warning systems. Video-based systems can be used for this purpose; however, flame detection remains a challenging issue because many natural objects have characteristics similar to fire. In this paper, we present a new algorithm for video-based flame detection, which employs various spatio-temporal features such as colour probability, contour irregularity, spatial energy, flickering, and spatio-temporal energy. Various background subtraction algorithms are tested and comparative results in terms of computational efficiency and accuracy are presented. Experimental results with two classification methods show that the proposed methodology provides high fire detection rates with a reasonable false alarm ratio. Finally, a 3D visualization tool for the estimation of fire propagation is outlined and simulation results are presented and discussed.
    The original article was published by ACTAPRESS and is available here: http://www.actapress.com/Content_of_Proceeding.aspx?proceedingid=73
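    To make the feature combination concrete, the toy snippet below fuses just two of the cues named above, colour probability approximated by an HSV range and flickering approximated by temporal intensity variance, on a per-pixel basis. The colour bounds and variance threshold are illustrative guesses rather than the values used in the paper.

    import cv2
    import numpy as np

    def flame_candidates(frames, var_thresh=200.0):
        # frames: short list of consecutive BGR frames (e.g. the last 10).
        # Colour cue: roughly red/orange/yellow hues with high saturation/value.
        hsv = cv2.cvtColor(frames[-1], cv2.COLOR_BGR2HSV)
        colour_mask = cv2.inRange(hsv, (0, 80, 150), (35, 255, 255))
        # Flicker cue: pixels whose grey-level intensity varies strongly
        # over the short temporal window.
        grey = np.stack([cv2.cvtColor(f, cv2.COLOR_BGR2GRAY).astype(np.float32)
                         for f in frames])
        flicker_mask = (grey.var(axis=0) > var_thresh).astype(np.uint8) * 255
        # A pixel is a flame candidate only when both cues agree.
        return cv2.bitwise_and(colour_mask, flicker_mask)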

    Scene modelling using an adaptive mixture of Gaussians in colour and space

    We present an integrated pixel segmentation and region tracking algorithm, designed for indoor environments. Visual monitoring systems often use frame differencing techniques to independently classify each image pixel as either foreground or background. Typically, this level of processing does not take account of the global image structure, resulting in frequent misclassification. We use an adaptive Gaussian mixture model in colour and space to represent background and foreground regions of the scene. This model is used to probabilistically classify observed pixel values, incorporating the global scene structure into pixel-level segmentation. We evaluate our system over four sequences and show that it successfully segments foreground pixels and tracks major foreground regions as they move through the scene.
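    A much-simplified sketch of the joint colour-and-space representation is given below: each pixel is described by its image coordinates and colour, one Gaussian mixture is fitted to background pixels and another to foreground pixels, and a new pixel is labelled by comparing the two likelihoods. The paper adapts its mixture online per frame; this offline scikit-learn fit only illustrates the representation, and the component counts are arbitrary.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def to_features(frame):
        # Describe each pixel by (x, y, b, g, r) so the mixture jointly
        # models where a region is and what colour it has.
        h, w, _ = frame.shape
        ys, xs = np.mgrid[0:h, 0:w]
        return np.column_stack([xs.ravel(), ys.ravel(),
                                frame.reshape(-1, 3)]).astype(np.float64)

    def fit_mixture(frames, n_components=10):
        feats = np.vstack([to_features(f) for f in frames])
        return GaussianMixture(n_components=n_components,
                               covariance_type="full", random_state=0).fit(feats)

    def segment(frame, bg_model, fg_model):
        # Label a pixel foreground when the foreground mixture explains it
        # better than the background mixture (per-pixel log-likelihood test).
        feats = to_features(frame)
        fg = fg_model.score_samples(feats) > bg_model.score_samples(feats)
        return fg.reshape(frame.shape[:2])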