
    Dynamic background modeling and subtraction using spatio-temporal local binary patterns

    Traditional background modeling and subtraction methods make the strong assumption that scenes have a static structure with only limited perturbation, and they perform poorly in dynamic scenes. In this paper, we present a solution to this problem. We first extend local binary patterns from the spatial domain to the spatio-temporal domain and introduce a new online dynamic texture extraction operator, named spatio-temporal local binary patterns (STLBP). We then present a novel and effective method for dynamic background modeling and subtraction using STLBP. In the proposed method, each pixel is modeled as a group of STLBP dynamic texture histograms that combine spatial texture and temporal motion information. Experimental results show that, compared with traditional methods, the proposed method adapts quickly to changes in the dynamic background: it detects moving objects accurately and suppresses most of the false detections caused by dynamic changes in natural scenes. Index Terms: background modeling, object detection, spatio-temporal features, local binary patterns
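The abstract does not give the exact STLBP definition, but the idea of extending LBP into the temporal domain can be sketched as follows: each pixel is compared against its eight spatial neighbours in the current frame and against the same pixel in the previous and next frames, giving a short binary code whose histogram over a region serves as a dynamic texture descriptor. The 10-bit layout and the histogram normalisation here are illustrative assumptions, not the authors' exact operator.

```python
import numpy as np

def stlbp_codes(prev, curr, nxt):
    """Compute a simplified spatio-temporal LBP code per pixel:
    8 spatial comparison bits (current frame) + 2 temporal bits
    (previous and next frames), i.e. a 10-bit code in [0, 1024)."""
    h, w = curr.shape
    pad = np.pad(curr, 1, mode='edge')
    code = np.zeros((h, w), dtype=np.uint16)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = pad[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        code |= (neighbour >= curr).astype(np.uint16) << bit
    code |= (prev >= curr).astype(np.uint16) << 8   # temporal bit: previous frame
    code |= (nxt >= curr).astype(np.uint16) << 9    # temporal bit: next frame
    return code

def region_histogram(codes, bins=1024):
    """Normalised histogram of STLBP codes over a region; in the paper's
    scheme, each pixel keeps a group of such histograms as its model."""
    hist = np.bincount(codes.ravel(), minlength=bins).astype(float)
    return hist / hist.sum()
```

A background model would then compare the histogram of incoming frames against the stored group of histograms per pixel region and flag poor matches as foreground.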

    Moving Object Tracking using Color Feature in a Video

    Video processing is one of the most challenging areas of image processing; it deals with identifying an object of interest. Motion detection has been used in many fields, either directly or indirectly. This paper presents an efficient approach to motion detection in video sequences using a color feature extraction operator. Using this approach, we improve background subtraction and detect the moving object with greater accuracy. Background modeling is performed so that the background can be updated under changing illumination and weather conditions, and foreground detection is done before the background model is updated. Color feature extraction is used to handle dynamic background elements such as moving leaves, rain, snow, and rippling water.
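The abstract describes background subtraction with a color-based comparison and a background model that is updated over time. A minimal sketch of that pipeline, assuming a running-average model and a Euclidean colour distance (the paper's exact operator and thresholds are not specified), might look like this:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average update: lets the model follow gradual changes in
    illumination and weather, as the abstract describes."""
    return (1.0 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=30.0):
    """Per-pixel Euclidean distance in RGB colour space; pixels far from
    the background model are flagged as foreground.  The threshold is
    an illustrative assumption."""
    dist = np.linalg.norm(frame.astype(float) - bg.astype(float), axis=-1)
    return dist > thresh
```

In use, `foreground_mask` would be evaluated first and only non-foreground pixels fed to `update_background`, matching the abstract's note that foreground detection precedes the model update.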

    Adaptive background subtraction technique with unique feature representation for vehicle counting

    Vehicle detection is the first step towards a successful traffic monitoring system. Although there have been many studies on vehicle detection, only a few methods deal with complex situations, especially traffic jams. In addition, evaluation under different weather conditions (rainy, foggy and snowy) is particularly important for some countries but is rarely performed. Presently, vehicle detection is mainly performed using background subtraction, yet this approach still faces many challenges. In this thesis, an adaptive background model based on the approximate median filter (AMF) is developed. To demonstrate its potential, the proposed method is further combined with two proposed feature representation techniques, employed in either a global or a local vehicle detection strategy. In the global approach, an adaptive triangle-based threshold method is applied after the proposed adaptive background method. As a consequence, a better segmented foreground can be differentiated from the background regardless of the weather conditions (i.e., rain, fog and snowfall). Comparisons with the adaptive local threshold (ALT) and three-frame differencing methods show that the proposed method achieves an average recall of 85.94% and an average precision of 79.53% with a negligible difference in processing time. In the local approach, some predefined regions, instead of the whole image, are used for the background subtraction operation. Subsequently, two feature representations, i.e. normalized object-area occupancy and normalized edge pixels, are computed and formed into a feature vector, which is then fed into the k-means clustering technique. As the results illustrate, the proposed method shows an improvement of at least 10% in precision and 4.5% in F1 score compared to the existing methods.
Once again, even with this significant improvement, the proposed method does not incur a noticeable difference in processing time. In conducting the experiments, different standard datasets have been used to show the performance of the proposed approach. In summary, the proposed method shows better performance than the three-frame differencing and adaptive local threshold methods.
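The approximate median filter at the core of this thesis is a standard, very cheap background-update rule: each background pixel is nudged one grey level toward the current frame, so over time the model converges to the per-pixel temporal median. A minimal sketch (step size and clipping are conventional choices, not taken from the thesis):

```python
import numpy as np

def amf_update(background, frame, step=1):
    """One approximate-median-filter step: increment background pixels
    that are below the current frame, decrement those above it.  Run
    per frame, the model converges to the temporal median per pixel."""
    bg = background.astype(np.int16)
    f = frame.astype(np.int16)
    bg = bg + step * (f > bg) - step * (f < bg)
    return np.clip(bg, 0, 255).astype(np.uint8)
```

Foreground would then be obtained by thresholding `|frame - background|`; the thesis pairs this with an adaptive triangle-based threshold rather than a fixed value.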

    Sensor Selection and Integration to Improve Video Segmentation in Complex Environments

    Background subtraction is often considered a required stage of any video surveillance system used to detect objects in a single frame and/or track objects across multiple frames of a video sequence. Most current state-of-the-art techniques for object detection and tracking utilize some form of background subtraction that involves developing a model of the background at a pixel, region, or frame level and designating any elements that deviate from the background model as foreground. However, most existing approaches are capable of segmenting a number of distinct components but unable to distinguish between the desired object of interest and complex, dynamic background such as moving water and strong reflections. In this paper, we propose a technique to integrate spatiotemporal signatures of an object of interest from different sensing modalities into a video segmentation method in order to improve object detection and tracking in dynamic, complex scenes. Our proposed algorithm utilizes the dynamic interaction information between the object of interest and the background to differentiate between mistakenly segmented components and the desired component. Experimental results on two complex data sets demonstrate that our proposed technique significantly improves the accuracy and utility of a state-of-the-art video segmentation technique. © 2014 Adam R. Reckley et al.
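The paper's integration rule is not given in the abstract, but the general idea of combining per-pixel foreground evidence from two sensing modalities can be sketched as a weighted fusion. The weighting and threshold below are purely illustrative assumptions; the paper additionally exploits dynamic interaction between object and background, which this sketch does not model.

```python
import numpy as np

def fuse_foreground(video_score, aux_score, w_video=0.5, thresh=0.5):
    """Weighted fusion of per-pixel foreground evidence from two
    modalities (e.g. a video segmenter and an auxiliary sensor map).
    Both score maps are assumed normalised to [0, 1]."""
    combined = w_video * video_score + (1.0 - w_video) * aux_score
    return combined > thresh
```

The point of such a fusion is that a spurious component segmented by the video modality alone (e.g. moving water) receives little support from the second modality and falls below the combined threshold.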

    Real-time outdoor intrusion detection

    This project presents a real-time outdoor intrusion detection system using a background subtraction technique based on pixel-level analysis. The objectives start from an analysis of the current state of the art and then provide a flexible combination of such methods, integrating several functionalities in a scalable scheme that allows individual modules to be replaced without changing the philosophy of the algorithm. Four phases can be distinguished in the background modeling process: defining the nature of the model, initializing the model, updating the model, and detecting the foreground by comparison with the model. For the models, a non-parametric scheme is proposed that stores background and foreground samples using a multilayer approach. The proposed initialization scheme relies on a pixel pre-classification stage to provide local, selective initialization of areas in the models. For the updating process, the system is guided by confidences, i.e. the frequencies of occurrence of the modes, which control selective updating through the use of pixel labels or classes. The models are temporally adapted using a parametric selective updating process that controls their evolution. In the comparison stage, we propose a method that compares each pixel against its spatial neighborhood, which provides robustness to the noise introduced by camera vibrations. The system also integrates a comparative process that provides robustness to shadows and moderate reflections. The system has been evaluated on a public database, which allows a reliable comparison with the current state of the art; the reported results have not undergone any post-processing technique that would improve the classification statistics. The system has been developed and implemented using the object-oriented programming language C++ and the video analysis library OpenCV. Finally, a user interface implemented with the Qt Designer program allows instantaneous interaction with the system, altering its parameters if necessary. The algorithm is integrated into a common processing environment that allows it to be used with the direct output of a camera.
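A non-parametric, sample-based background model with selective updating, as described here, can be sketched as follows. The sample count, match radius, and random-replacement policy below are illustrative assumptions (in the spirit of sample-based models such as ViBe), not the project's exact multilayer, confidence-guided algorithm.

```python
import numpy as np

class SampleModel:
    """Minimal sample-based (non-parametric) background model with
    selective updating: each pixel stores N intensity samples, and only
    pixels classified as background may refresh a sample."""

    def __init__(self, first_frame, n_samples=20, radius=20,
                 min_matches=2, seed=0):
        # Initialise every sample layer from the first frame.
        self.samples = np.repeat(first_frame[None].astype(np.int16),
                                 n_samples, axis=0)
        self.radius = radius
        self.min_matches = min_matches
        self.rng = np.random.default_rng(seed)

    def classify(self, frame):
        """A pixel is background if enough stored samples lie within
        `radius` of its current value; returns True for foreground."""
        diffs = np.abs(self.samples - frame.astype(np.int16))
        matches = (diffs < self.radius).sum(axis=0)
        return matches < self.min_matches

    def update(self, frame, fg_mask, update_prob=1 / 16):
        """Selective update: only background-labelled pixels may
        overwrite one randomly chosen sample layer."""
        do_update = (~fg_mask) & (self.rng.random(fg_mask.shape) < update_prob)
        layer = self.rng.integers(0, self.samples.shape[0])
        self.samples[layer][do_update] = frame[do_update]
```

The project's comparison against the spatial neighborhood and its label-driven confidence mechanism would replace the simple per-pixel match count used here.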

    Carried baggage detection and recognition in video surveillance with foreground segmentation

    Security cameras installed in public spaces or in private organizations continuously record video data with the aim of detecting and preventing crime. For that reason, video content analysis applications, either for real-time (i.e. analytic) or post-event (i.e. forensic) analysis, have gained high interest in recent years. In this thesis, the primary focus is on two key aspects of video analysis: reliable moving object segmentation, and carried object detection and identification. A novel moving object segmentation scheme by background subtraction is presented in this thesis. The scheme relies on background modelling based on multi-directional gradient and phase congruency. As a post-processing step, the detected foreground contours are refined by classifying the edge segments as belonging to either the foreground or the background. Furthermore, a contour completion technique based on anisotropic diffusion is introduced in this area for the first time. The proposed method targets cast shadow removal, invariance to gradual illumination change, and closed contour extraction. A state-of-the-art carried object detection method is employed as a benchmark algorithm. This method includes silhouette analysis that compares human temporal templates with unencumbered human models. The implementation aspects of the algorithm are improved by automatically estimating the viewing direction of the pedestrian, and the algorithm is extended by a carried luggage identification module. As the temporal template is a frequency template and the information it provides is not sufficient, a colour temporal template is introduced. The standard steps of the state-of-the-art algorithm are approached from a different perspective, extended by colour information, resulting in more accurate carried object segmentation. The experiments conducted in this research show that the proposed closed foreground segmentation technique attains all the aforementioned goals.
The incremental improvements applied to the state-of-the-art carried object detection algorithm reveal the full potential of the scheme, and the experiments demonstrate that the proposed carried object detection algorithm outperforms the state-of-the-art method.
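The frequency-template idea underlying the benchmark method can be sketched simply: aligned binary silhouettes are averaged over time into a temporal template, and pixels that persist in the observed pedestrian's template but are absent from the unencumbered human model protrude from the body and become carried-object candidates. The 0.5 frequency threshold and the assumption of pre-aligned templates are illustrative simplifications.

```python
import numpy as np

def temporal_template(silhouettes):
    """Average a stack of aligned binary silhouettes into a frequency
    template: each pixel holds how often it belonged to the silhouette."""
    return np.mean(np.stack(silhouettes).astype(float), axis=0)

def carried_object_candidates(person_template, model_template, freq=0.5):
    """Regions persistent in the pedestrian's template but absent from
    the unencumbered model are candidate carried-object regions.
    Both templates are assumed pre-aligned and equally sized."""
    return (person_template > freq) & (model_template <= freq)
```

The thesis extends this grey-level frequency template with colour information, which this sketch does not include.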