7 research outputs found

    Advanced Video-Based Surveillance

    Over the past decade, we have witnessed a tremendous growth in the demand for personal security and the defense of vital infrastructure throughout the world. At the same time, rapid advances in video-based surveillance have emerged and offered a strategic technology to address the demands imposed by security applications. These events have led to a massive research effort devoted to the development of effective and reliable surveillance systems endowed with intelligent video-processing capabilities. As a result, advanced video-based surveillance systems have been developed by research groups from academia and industry alike. In broad terms, advanced video-based surveillance can be described as intelligent video processing designed to assist security personnel by providing reliable real-time alerts and to support efficient video analysis for forensic investigations.

    A Wireless Sensor Network for Vineyard Monitoring That Uses Image Processing

    The first step in detecting whether a vineyard has any type of deficiency, pest or disease is to observe its stems, its grapes and/or its leaves. Placing a sensor on each leaf of every vineyard is obviously not feasible in terms of cost and deployment. We should thus look for new methods to detect these symptoms precisely and economically. In this paper, we present a wireless sensor network where each sensor node takes images of the field and internally uses image-processing techniques to detect any unusual status in the leaves. This symptom could be caused by a deficiency, pest, disease or other harmful agent. When it is detected, the sensor node sends a message to a sink node through the wireless sensor network in order to notify the farmer of the problem. The wireless sensor uses the IEEE 802.11 a/b/g/n standard, which allows connections over large distances in open air. This paper describes the wireless sensor network design, the wireless sensor deployment, how the node processes the images in order to monitor the vineyard, and the sensor network traffic obtained from a test bed performed in a flat vineyard in Spain. Although the system is not able to distinguish between deficiencies, pests, diseases or other harmful agents, a symptom image database and a neural network could be added in order to learn from experience and provide an accurate problem diagnosis.
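    The kind of on-node check the abstract describes can be sketched with a simple vegetation-index threshold. This is only an illustrative sketch, not the paper's actual algorithm: the function names, the use of the excess-green index, and all thresholds are assumptions.

```python
import numpy as np

def leaf_anomaly_fraction(rgb, green_threshold=0.0):
    """Fraction of pixels whose excess-green index falls below a threshold.

    rgb: H x W x 3 float array in [0, 1]. A healthy leaf is dominated by
    green, so ExG = 2G - R - B is high; yellow/brown regions score low
    and are flagged as possible symptoms.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2.0 * g - r - b              # excess-green vegetation index
    anomalous = exg < green_threshold  # low values suggest unhealthy tissue
    return float(anomalous.mean())

def should_alert(rgb, fraction_limit=0.2):
    """Decide whether the node should message the sink about this image."""
    return leaf_anomaly_fraction(rgb) > fraction_limit
```

    In a deployment like the one described, only the boolean decision would need to cross the 802.11 link, which keeps network traffic far below that of streaming the images themselves.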

    Object Detection in Dynamic Environments for Video Surveillance

    Automated video surveillance is a very active research field, driven by the need for security and control. Certain situations, however, hinder the correct operation of existing algorithms. This thesis focuses on motion detection and addresses several of the usual difficulties, proposing new approaches that, in the great majority of cases, outperform other state-of-the-art proposals. In particular, we study:
    - The importance of the color space for motion detection.
    - The effects of noise in the input video.
    - A new background model, called MFBM, that accepts any number and type of input features.
    - A method to mitigate the difficulties caused by illumination changes.
    - A non-panoramic method to detect motion with non-static cameras.
    Throughout the thesis, several public repositories that are widely used in the motion-detection field were employed, and the results obtained were compared with those of other existing proposals. All the code used has been published openly on the Web. The thesis reaches the following conclusions:
    - The color space in which the input video is encoded has a notable impact on the performance of detection methods; the RGB model is not always the best option. It was also found that weighting the color channels of the input video improves the performance of the methods.
    - Noise in the input video is a factor to take into account when performing motion detection, since it conditions the performance of the methods. Strikingly, although noise is usually harmful, it can occasionally improve detection.
    - The MFBM model outperforms the other competing methods studied, all of them belonging to the state of the art.
    - The problems derived from illumination changes are significantly reduced when using the proposed method.
    - The proposed method for detecting motion with non-static cameras outperforms other existing proposals in the great majority of cases.
    A total of 280 bibliographic entries were consulted; among them we highlight:
    - C. Wren, A. Azarbayejani, T. Darrell, and A. Pentland, “Pfinder: real-time tracking of the human body,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, pp. 780–785, 1997.
    - C. Stauffer and W. Grimson, “Adaptive background mixture models for real-time tracking,” in Proc. IEEE Intl. Conf. on Computer Vision and Pattern Recognition, 1999.
    - L. Li, W. Huang, I.-H. Gu, and Q. Tian, “Statistical modeling of complex backgrounds for foreground object detection,” IEEE Transactions on Image Processing, vol. 13, pp. 1459–1472, 2004.
    - T. Bouwmans, “Traditional and recent approaches in background modeling for foreground detection: An overview,” Computer Science Review, vol. 11–12, pp. 31–66, 2014.
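    The background-subtraction task this thesis studies can be illustrated with a classical running-average background model, the simple baseline family that models such as MFBM improve upon. This is an illustrative sketch, not the thesis's MFBM method; the class name, the update rule, and the `channel_weights` parameter (which merely echoes the thesis's finding that weighting color channels helps) are assumptions.

```python
import numpy as np

class RunningAverageBackground:
    """Minimal running-average background model for motion detection."""

    def __init__(self, first_frame, alpha=0.05, channel_weights=(1.0, 1.0, 1.0)):
        self.bg = first_frame.astype(np.float64)   # current background estimate
        self.alpha = alpha                         # learning rate for updates
        self.w = np.asarray(channel_weights, dtype=np.float64)

    def apply(self, frame, threshold=30.0):
        """Return a boolean foreground mask and update the background."""
        frame = frame.astype(np.float64)
        diff = np.abs(frame - self.bg) * self.w    # weighted per-channel difference
        mask = diff.sum(axis=-1) > threshold       # foreground where change is large
        self.bg = (1 - self.alpha) * self.bg + self.alpha * frame  # slow adaptation
        return mask
```

    A model of this kind adapts to gradual scene changes but, as the thesis notes for this class of methods, it is vulnerable to sudden illumination changes and camera motion, which motivates the more elaborate approaches proposed there.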

    BigBackground-Based Illumination Compensation for Surveillance Video

    Abstract: Illumination changes cause challenging problems for video surveillance algorithms, as objects of interest become masked by changes in background appearance. Such algorithms should maintain a consistent perception of a scene regardless of illumination variation. This work introduces a concept we call BigBackground, a model for representing large, persistent scene features based on chromatic self-similarity. This model is found to comprise 50% to 90% of surveillance scenes. The large, stable regions represented by the model are used as reference points for performing illumination compensation. The presented compensation technique is demonstrated to decrease improper false-positive classification of background pixels by an average of 83% compared to the uncompensated case and by 25% to 43% compared to compensation techniques from the literature.
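    The idea of finding large, chromatically self-similar regions can be sketched with coarse color quantization: bin every pixel by color and keep the bins that cover a large share of the image. This is an illustrative sketch only, not the paper's BigBackground construction; the function name, the bin count, and the coverage threshold are all assumptions.

```python
import numpy as np

def big_background_mask(frame, bins=8, min_share=0.05):
    """Mark pixels belonging to the scene's dominant colors.

    frame: H x W x 3 uint8 image. Each pixel is quantized into a coarse
    RGB color bin; bins covering at least min_share of the image are kept,
    and pixels in those bins form large, chromatically self-similar regions.
    """
    q = (frame // (256 // bins)).astype(np.int64)                   # coarse quantization
    codes = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]  # one code per pixel
    counts = np.bincount(codes.ravel(), minlength=bins ** 3)
    dominant = counts / codes.size >= min_share                     # popular color bins
    return dominant[codes]                                          # boolean membership mask
```

    Regions selected this way are stable across frames as long as the scene's dominant surfaces stay put, which is what makes them usable as illumination reference points.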

    Illumination compensation in video surveillance analysis

    Problems in automated video surveillance analysis caused by illumination changes are explored, and solutions are presented. Controlled experiments are first conducted to measure the responses of color targets to changes in lighting intensity and spectrum. Surfaces of dissimilar color are found to respond significantly differently. Illumination compensation model error is reduced by 70% to 80% by individually optimizing model parameters for each distinct color region, while applying a model tuned for one region to a chromatically different region increases error by a factor of 15. A background model, called BigBackground, is presented to extract large, stable, chromatically self-similar background features by identifying the dominant colors in a scene. The stability and chromatic diversity of these features make them useful reference points for quantifying illumination changes. The model is observed to cover as much as 90% of a scene, and pixels belonging to the model are 20% more stable on average than non-member pixels. Several illumination compensation techniques are developed to exploit BigBackground and are compared with several compensation techniques from the literature. Techniques are compared in terms of foreground/background classification and are applied to an object-tracking pipeline with kinematic and appearance-based correspondence mechanisms. Compared with other techniques, BigBackground-based techniques improve foreground classification by 25% to 43%, improve tracking accuracy by an average of 20%, and better preserve object appearance for appearance-based trackers. All algorithms are implemented in C or C++ to support the consideration of runtime performance. In terms of execution speed, the BigBackground-based illumination compensation technique is measured to run on par with the simplest compensation technique used for comparison, and it consistently achieves twice the frame rate of the two next-fastest techniques.
    Ph.D. thesis. Committee Chair: Wills, Scott; Committee Co-Chair: Wills, Linda; Committee Member: Bader, David; Committee Member: Howard, Ayanna; Committee Member: Kim, Jongman; Committee Member: Romberg, Justin.
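    Using a stable background region as a reference for illumination compensation can be sketched as a global gain correction: measure how much the reference region's brightness has drifted from its nominal value and normalize the whole frame by that ratio. This is an illustrative sketch under stated assumptions, not one of the dissertation's actual compensation techniques; the function name and the single-channel, single-gain simplification are mine.

```python
import numpy as np

def compensate_illumination(frame, reference_mask, reference_value):
    """Normalize a frame using a large, stable reference region.

    frame: H x W float array (single channel for simplicity).
    reference_mask: boolean mask of the stable background region.
    reference_value: the region's mean intensity under nominal lighting.
    """
    observed = frame[reference_mask].mean()  # current brightness of the region
    gain = observed / reference_value        # estimated global lighting change
    return frame / gain                      # undo the change across the frame
```

    Because the reference region is assumed not to contain moving objects, foreground pixels do not bias the gain estimate, which is the property that large, stable regions provide.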