    Feature extraction techniques for abandoned object classification in video surveillance

    We address the problem of abandoned object classification in video surveillance. Our aim is to determine (i) which feature extraction technique proves more useful for accurate object classification in a video surveillance context (scale-invariant feature transform (SIFT) keypoints vs. geometric primitive features), and (ii) how the resulting features affect classification accuracy and false positive rates for the different classification schemes used. Objects are classified into four categories: bag(s), person(s), trolley(s), and group(s) of people. Our experimental results show that the highest recognition accuracy and the lowest false alarm rate are achieved by building a classifier based on our proposed set of statistics of geometric primitive features. Moreover, classification performance based on this feature set proves to be more invariant across different learning algorithms. © 2008 IEEE
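
    A minimal sketch of the two feature types being compared, assuming OpenCV is available; the helper names and the particular shape statistics are illustrative assumptions, not the paper's exact feature set:

```python
# Sketch only: contrasts SIFT keypoint descriptors with simple statistics of
# geometric primitives for a segmented foreground object.
import cv2
import numpy as np

def sift_features(gray_patch):
    """Keypoint count plus the mean SIFT descriptor of the object patch."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray_patch, None)
    if descriptors is None:
        return np.zeros(129)
    return np.concatenate(([len(keypoints)], descriptors.mean(axis=0)))

def geometric_primitive_stats(binary_mask):
    """Area, aspect ratio, extent and solidity of the largest foreground blob."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.zeros(4)
    c = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(c)
    x, y, w, h = cv2.boundingRect(c)
    hull_area = cv2.contourArea(cv2.convexHull(c))
    solidity = area / hull_area if hull_area > 0 else 0.0
    return np.array([area, w / float(h), area / float(w * h), solidity])
```

    Either feature vector would then be fed to a standard classifier (for example an SVM or a decision tree) trained on the four object categories.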

    Comparative evaluation of stationary foreground object detection algorithms based on background subtraction techniques

    Á. Bayona, J. C. SanMiguel, and J. M. Martínez, "Comparative evaluation of stationary foreground object detection algorithms based on background subtraction techniques," in Sixth IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS 2009), pp. 25-30. In several video surveillance applications, such as the detection of abandoned/stolen objects or parked vehicles, the detection of stationary foreground objects is a critical task. Many algorithms dealing with the detection of stationary foreground objects have been proposed in the literature, the majority of them based on background subtraction techniques. In this paper we discuss various stationary object detection approaches and compare them in typical surveillance scenarios extracted from standard datasets. Firstly, the existing background-subtraction-based approaches are organized into categories. Then, a representative technique of each category is selected and described. Finally, a comparative evaluation using objective and subjective criteria is performed on video surveillance sequences selected from the PETS 2006 and i-LIDS for AVSS 2007 datasets, analyzing the advantages and drawbacks of each selected approach. This work has been partially supported by the Cátedra UAM-Infoglobal ("Nuevas tecnologías de vídeo aplicadas a sistemas de video-seguridad"), the Spanish Administration agency CDTI (CENIT-VISION 2007-1007), the Spanish Government (TEC2007-65400 SemanticVideo), the Comunidad de Madrid (S-050/TIC-0223 ProMultiDis), the Consejería de Educación of the Comunidad de Madrid, and the European Social Fund.
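
    As an illustration of one common category surveyed here, the following is a minimal sketch of a dual-background scheme, assuming OpenCV's MOG2 subtractor; the learning rates, evidence gains and trigger threshold are assumptions, not values from the paper:

```python
# Sketch only: a fast-adapting and a slow-adapting background model are compared
# to flag foreground that has stopped moving (stationary foreground).
import cv2
import numpy as np

short_term = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
long_term = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def stationary_mask(frame, evidence, decay=1, gain=5, trigger=100):
    """Flag pixels the slow model still sees as foreground but the fast model
    has already absorbed into the background, i.e. stopped objects."""
    fg_fast = short_term.apply(frame, learningRate=0.01)    # adapts quickly
    fg_slow = long_term.apply(frame, learningRate=0.0001)   # adapts slowly
    candidate = (fg_slow > 0) & (fg_fast == 0)
    # Accumulate per-pixel temporal evidence and threshold it.
    evidence[candidate] = np.minimum(evidence[candidate] + gain, 2 * trigger)
    evidence[~candidate] = np.maximum(evidence[~candidate] - decay, 0)
    return (evidence >= trigger).astype(np.uint8) * 255, evidence
```

    A caller would initialize evidence = np.zeros(frame.shape[:2], np.int32) and feed frames in order; pixels that stay in the candidate state long enough are reported as stationary foreground.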

    Robust unattended and stolen object detection by fusing simple algorithms

    J. C. San Miguel and J. M. Martínez, "Robust unattended and stolen object detection by fusing simple algorithms," in IEEE Fifth International Conference on Advanced Video and Signal Based Surveillance (AVSS '08), 2008, pp. 18-25. In this paper a new approach for detecting unattended or stolen objects in surveillance video is proposed. It is based on the fusion of evidence provided by three simple detectors. As a first step, the moving regions in the scene are detected and tracked. Then, these regions are classified as static or dynamic and as human or non-human. Finally, objects detected as static and non-human are analyzed by each detector. Data from these detectors are fused to select the best detection hypotheses. Experimental results show that the fusion-based approach increases detection reliability compared to the individual detectors and performs well across a variety of scenarios while operating in real time. This work is supported by the Cátedra Infoglobal-UAM for "Nuevas Tecnologías de vídeo aplicadas a la seguridad", the Spanish Government (TEC2007-65400 SemanticVideo), the Comunidad de Madrid (S-050/TIC-0223 ProMultiDis-CM), the Consejería de Educación of the Comunidad de Madrid, and the European Social Fund.
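
    A hedged sketch of the fusion step only, with the three detectors left as placeholders (the paper's actual detectors are not reproduced here):

```python
# Sketch only: each simple detector returns a score in [0, 1] indicating
# "unattended" (vs. "stolen") for a static, non-human region; the scores are
# averaged to pick the best hypothesis. All names are illustrative.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Region:
    bbox: tuple              # (x, y, w, h) of the static, non-human region
    current_patch: object    # image patch from the current frame
    background_patch: object # corresponding patch from the background model

def fuse_hypotheses(region: Region,
                    detectors: List[Callable[[Region], float]],
                    threshold: float = 0.5) -> str:
    """Average the detectors' evidence and decide the hypothesis."""
    scores = [d(region) for d in detectors]
    unattended_evidence = sum(scores) / len(scores)
    return "unattended" if unattended_evidence >= threshold else "stolen"
```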

    Detection and localization of stationary objects using a pair of PTZ cameras

    In this paper, we propose an original approach for detecting and localizing stationary objects over a wide scene using a pair of PTZ cameras. We make two main contributions. First, we present a method for detecting and segmenting stationary objects, based on the re-identification of foreground descriptors and the segmentation of these blobs into objects using Markov random fields. The second contribution concerns the matching, between the two PTZ cameras, of the object silhouettes detected in each image.
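
    A minimal sketch of the cross-camera association step, assuming colour-histogram descriptors and Hungarian assignment as stand-ins for the paper's foreground descriptors and matching scheme:

```python
# Sketch only: silhouettes detected in each PTZ view are matched by appearance
# similarity. Descriptor choice and the 0.5 distance cut-off are assumptions.
import cv2
import numpy as np
from scipy.optimize import linear_sum_assignment

def hsv_histogram(patch):
    """Normalized hue/saturation histogram of a silhouette patch (BGR input)."""
    hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def match_silhouettes(patches_cam1, patches_cam2):
    """Return index pairs (i, j) of silhouettes matched across the two views."""
    h1 = [hsv_histogram(p) for p in patches_cam1]
    h2 = [hsv_histogram(p) for p in patches_cam2]
    cost = np.array([[cv2.compareHist(a, b, cv2.HISTCMP_BHATTACHARYYA)
                      for b in h2] for a in h1])
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < 0.5]
```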

    Automatic classification of abandoned objects for surveillance of public premises

    One of the core components of any visual surveillance system is object classification, where detected objects are classified into different categories of interest. Although in airports or train stations abandoned objects are mainly luggage or trolleys, none of the existing works in the literature have attempted to classify or recognize trolleys. In this paper, we analyzed and classified images of trolley(s), bag(s), single person(s), and group(s) of people using various shape features on a number of uncluttered and cluttered images, and applied multi-frame integration to overcome partial occlusions and obtain better recognition results. We also tested the proposed techniques on data extracted from a well-recognized and recent data set, the PETS 2007 benchmark data set [16]. Our experimental results show that the extracted features are invariant to the data set and the classification scheme chosen. For our four-class object recognition problem, we achieved an average recognition accuracy of 70%. © 2008 IEEE
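
    A hedged sketch of the multi-frame integration idea, assuming Hu-moment shape features and a majority vote over the frames of a track; the feature set and the externally trained classifier are assumptions, not the paper's exact choices:

```python
# Sketch only: classify each frame's blob independently, then majority-vote
# across the track to smooth over partial occlusions.
from collections import Counter
import cv2
import numpy as np

def shape_features(mask):
    """Hu moments plus aspect ratio of the largest blob, or None if empty."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(c)
    hu = cv2.HuMoments(cv2.moments(c)).flatten()
    return np.concatenate((hu, [w / float(h)]))

def classify_track(masks, classifier):
    """Per-frame prediction followed by a majority vote over the track."""
    feats = [f for f in (shape_features(m) for m in masks) if f is not None]
    votes = [classifier.predict([f])[0] for f in feats]
    return Counter(votes).most_common(1)[0][0]
```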

    Autonomous Mobile Vision System for Terrorist Attack Prevention

    This paper presents research into an autonomous mobile vision system with the ability to detect and help prevent terrorist attacks. The system runs computer vision software that can recognize common objects through a camera. A highly mobile spider-like platform was designed to move the camera through a crowded area, and multiple path-planning algorithms are tested.

    Three dimensional information estimation and tracking for moving objects detection using two cameras framework

    Calibration, matching and tracking are the major concerns in obtaining 3D information consisting of depth, direction and velocity. In finding depth, camera parameters and matched points are the two necessary inputs. Depth, direction and matched points can be obtained accurately if the cameras are well calibrated using traditional manual calibration. However, most traditional manual calibration methods are inconvenient to use because markers or the real size of an object in the real world must be provided or known. Self-calibration overcomes this limitation, but not for depth and matched points. Other approaches attempt to match corresponding objects using 2D visual information without calibration, but they suffer from low matching accuracy under large perspective distortion. This research focuses on obtaining 3D information using a self-calibrated tracking system, in which matching and tracking are performed under self-calibrated conditions. Three contributions are introduced to achieve these objectives. Firstly, orientation correction is introduced to obtain better relationship matrices for matching during tracking. Secondly, once the relationship matrices are available, a post-processing step, status-based matching, is introduced to improve the object matching result; the proposed matching algorithm achieves a matching rate of almost 90%. Depth is estimated after the status-based matching. Thirdly, tracking is performed based on x-y coordinates and the estimated depth under self-calibrated conditions. Results show that the proposed self-calibrated tracking system successfully differentiates the locations of objects even under occlusion in the field of view, and is able to determine the direction and velocity of multiple moving objects.
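
    A minimal sketch of the depth-estimation step once a matched point pair and the two camera projection matrices (produced by the calibration stage) are available; the function and variable names are illustrative:

```python
# Sketch only: triangulate one matched point pair from a two-camera setup.
import cv2
import numpy as np

def estimate_depth(P1, P2, pt_cam1, pt_cam2):
    """Triangulate a matched point pair and return its 3D position (X, Y, Z).

    P1, P2 are the 3x4 projection matrices of the two cameras; pt_cam1 and
    pt_cam2 are the matched (x, y) pixel coordinates in each view."""
    x1 = np.array(pt_cam1, dtype=np.float64).reshape(2, 1)
    x2 = np.array(pt_cam2, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, x1, x2)   # homogeneous 4x1 result
    X = (X_h[:3] / X_h[3]).ravel()
    return X   # X[2] is the depth along the reference camera's optical axis
```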

    A One-Threshold Algorithm for Detecting Abandoned Packages Under Severe Occlusions Using a Single Camera

    We describe a single-camera system capable of detecting abandoned packages under severe occlusions, which lead to complications on several levels. The first arises when frames containing only background pixels are unavailable for initializing the background model, a problem to which we apply a novel discriminative measure. The proposed measure is essentially the probability of observing a particular pixel value, conditioned on the probability that no motion is detected; the pdf on which the latter is based is estimated as a zero-mean, unimodal Gaussian distribution from the difference values between successive frames. We show that such a measure is a powerful discriminant even under severe occlusions, and can deal robustly with the foreground aperture effect, a problem inherently caused by differencing successive frames. The detection of abandoned packages then follows at both the pixel and the region level. At the pixel level, an "abandoned pixel" is detected as a foreground pixel at which no motion is observed. At the region level, abandoned pixels are ascertained in a Markov Random Field (MRF), after which they are clustered. These clusters are finally classified as abandoned packages only if they display temporal persistency in their size, shape, position and color properties, which is determined using conditional probabilities of these attributes. The algorithm is also carefully designed to avoid thresholding, the pitfall of many vision systems, and this avoidance significantly improves the robustness of our system. Experimental results from real-life train station sequences demonstrate the robustness and applicability of our algorithm.
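
    A hedged sketch of the pixel-level measure described above: a zero-mean Gaussian is fitted to successive-frame differences and converted to a per-pixel "no motion" likelihood. The estimator and its use are simplified assumptions, not the paper's exact formulation:

```python
# Sketch only: per-pixel no-motion likelihood from a zero-mean Gaussian fitted
# to frame differences (grayscale frames assumed).
import numpy as np

def fit_difference_sigma(prev_frame, curr_frame):
    """Estimate the std of the (assumed zero-mean, unimodal) difference pdf."""
    diff = curr_frame.astype(np.float64) - prev_frame.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

def no_motion_probability(prev_frame, curr_frame, sigma):
    """Per-pixel likelihood of the observed difference under the no-motion model."""
    diff = curr_frame.astype(np.float64) - prev_frame.astype(np.float64)
    return np.exp(-0.5 * (diff / sigma) ** 2)
```

    Pixels that the background model labels as foreground but that carry a high no-motion likelihood would be the "abandoned pixel" candidates of the pixel-level stage.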