
    Effective shadow detection in traffic monitoring applications

    This paper presents work we have done on detecting moving shadows in an outdoor traffic scene for visual surveillance purposes. The algorithm exploits only foreground photometric properties related to shadows. The input to the system consists of the previously detected blobs and the division image between the current frame and the background of the scene. The proposed method is essentially based on multi-gradient operations applied to the division image, which aim to discover the most likely shadow regions. Subsequently, the "smart" binary edge matching we devised is performed on each blob's boundary and effectively discards those regions inside the blob that are either too far from the boundary or too small. We demonstrate the effectiveness of our method using a grey-level sequence taken from a sunny, daytime traffic scene. Since no a priori knowledge is used to detect and remove shadows, this method is one of the most general-purpose systems to date for detecting outdoor shadows.
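
    The central cue in this abstract, the division (ratio) image between the current frame and the background, can be made concrete with a short sketch. The following is a minimal illustration assuming 8-bit grey-level frames and a fixed shadow ratio band; the band limits are assumptions, and the paper's multi-gradient and edge-matching stages are not reproduced.

        import numpy as np

        def shadow_candidates(frame: np.ndarray, background: np.ndarray,
                              low: float = 0.4, high: float = 0.9) -> np.ndarray:
            """Return a boolean mask of likely shadow pixels.

            A cast shadow darkens the background multiplicatively, so the ratio
            frame / background falls in a band below 1. The limits `low` and
            `high` are illustrative values, not those used in the paper.
            """
            f = frame.astype(np.float32)
            b = np.maximum(background.astype(np.float32), 1.0)  # avoid division by zero
            ratio = f / b                                        # the "division image"
            return (ratio >= low) & (ratio <= high)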

    Cast shadow modelling and detection

    Computer vision applications are often confronted by the need to differentiate between objects and their shadows. A number of shadow detection algorithms have been proposed in the literature, based on physical, geometrical, and other heuristic techniques. Most of these existing approaches depend on the scene environment and object types; those that do not are considered superior, both conceptually and in terms of accuracy. Despite these efforts, the design of a generic, accurate, simple, and efficient shadow detection algorithm remains an open problem. In this thesis, based on a physically derived hypothesis for shadow identification, novel multi-domain shadow detection algorithms are proposed and tested in the spatial and transform domains. A novel "Affine Shadow Test Hypothesis" has been proposed, derived, and validated across multiple environments. Based on it, several new shadow detection algorithms have been proposed and modelled for short-duration video sequences, where a background frame is available as a reliable reference, and for long-duration video sequences, where the use of a dedicated background frame is unreliable. Finally, additional algorithms have been proposed to detect shadows in still images, where a separate background frame is not available. In this approach, the author shows that the proposed algorithms are capable of detecting cast and self shadows simultaneously. All proposed algorithms have been modelled and tested to detect shadows in the spatial (pixel) and transform (frequency) domains and are compared against state-of-the-art approaches, using popular test videos and novel videos covering a wide range of test conditions. It is shown that the proposed algorithms outperform most existing methods and effectively detect different types of shadows under various lighting and environmental conditions.
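
    The abstract does not spell out the "Affine Shadow Test Hypothesis", so the sketch below only illustrates the generic idea of an affine relation between a shadowed observation and its background reference (I ≈ a·B + b with gain a below 1), which is one common way such tests are posed. The function name, thresholds, and the least-squares fit are illustrative assumptions, not the thesis's formulation.

        import numpy as np

        def affine_shadow_test(patch: np.ndarray, bg_patch: np.ndarray,
                               a_max: float = 0.95, resid_tol: float = 10.0) -> bool:
            """Illustrative check: does `patch` look like an affine darkening of `bg_patch`?

            Fits patch ≈ a * bg_patch + b by least squares and flags the region
            as a shadow candidate when the gain a is below 1 and the residual is
            small. This is a generic affine-model check, NOT the thesis's test.
            """
            x = bg_patch.astype(np.float32).ravel()
            y = patch.astype(np.float32).ravel()
            A = np.stack([x, np.ones_like(x)], axis=1)
            (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = np.abs(A @ np.array([a, b]) - y).mean()
            return (0.0 < a < a_max) and (resid < resid_tol)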

    Study of segmentation and identification techniques applied to environments with natural illumination and moving objects

    This thesis falls within the field of computer vision and contributes to solving the problem of automatically segmenting objects in images of scenes acquired in environments where activity is taking place, that is, where the elements composing the scene are moving, and where illumination is variable or uncontrolled. To carry out the developments and evaluate their performance, two problems with different requirements and environmental conditions were addressed. The first is segmenting and identifying the codes of truck containers from images taken at the entrance of a commercial port, which is located outdoors. The aim is to propose segmentation techniques that extract specific objects, in our case characters on containers, by processing individual images. Working with natural illumination is a challenge in itself, compounded by deteriorated elements, widely varying contrast, etc. In this context, the thesis evaluates techniques from the literature such as LAT, Watershed, Otsu's algorithm, local variation, and thresholding for segmenting grey-level images. Building on this study, a solution is proposed that combines several of these techniques in an attempt to successfully extract container characters under all environmental conditions of motion and illumination. A priori knowledge of the type of objects to be segmented allowed us to design filters able to discriminate between noise and characters. The proposed system has the added value that it does not require the user to adjust parameters to adapt to variations in ambient illumination, and it achieves a high level of character segmentation and identification. Rosell Ortega, JA. (2011). Study of segmentation and identification techniques applied to environments with natural illumination and moving objects [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/10863
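
    Of the grey-level techniques the thesis surveys, Otsu's algorithm is the simplest to demonstrate. The sketch below, assuming OpenCV, an 8-bit grey image of a container code, and dark characters on a lighter background, applies a global Otsu threshold followed by a size filter on connected components; the area bounds are illustrative assumptions, and the thesis's combined pipeline and discriminant filters are more elaborate.

        import cv2
        import numpy as np

        def segment_characters(gray: np.ndarray, min_area: int = 50,
                               max_area: int = 5000) -> np.ndarray:
            """Binarise with Otsu's method and keep character-sized blobs.

            `min_area` / `max_area` are illustrative bounds, not values from
            the thesis.
            """
            # Otsu picks the threshold that best separates the grey-level histogram;
            # BINARY_INV assumes dark characters on a lighter container surface.
            _, binary = cv2.threshold(gray, 0, 255,
                                      cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
            n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
            mask = np.zeros_like(binary)
            for i in range(1, n):                     # label 0 is the background
                if min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area:
                    mask[labels == i] = 255
            return mask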

    Vision-Based 2D and 3D Human Activity Recognition


    Extraction of moving objects from their background based on multiple adaptive thresholds and boundary evaluation

    The extraction of moving objects from their background is a challenging task in visual surveillance. Since a single threshold often fails to resolve ambiguities and correctly segment the object, in this paper we propose a new method that uses three thresholds to accurately classify pixels as foreground or background. These thresholds are adaptively determined from the distribution of differences between the input and background images and are used to generate three boundary sets. These boundary sets are then merged to produce a final boundary set that represents the boundaries of the moving objects. The merging step proceeds by first identifying boundary segment pairs that are significantly inconsistent. Then, for each inconsistent boundary segment pair, its associated curvature, edge response, and shadow index are used as criteria to evaluate the probable location of the true boundary. The resulting boundary is finally refined by estimating the width of the halo-like boundary and referring to the foreground edge map. Experimental results show that the proposed method consistently performs well under different illumination conditions, including indoor, outdoor, moderate, sunny, rainy, and dim cases. Comparison with ground truth in each case shows that both the classification error rate and the displacement error indicate accurate detection, a substantial improvement over other existing methods. © 2010 IEEE.
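
    A minimal sketch of the multi-threshold idea, classifying the absolute difference between input and background with three thresholds drawn from the difference distribution, is given below. The percentile choices are illustrative assumptions; the paper's adaptive threshold estimation, boundary merging, and refinement steps are not reproduced.

        import numpy as np

        def three_threshold_masks(frame: np.ndarray, background: np.ndarray,
                                  percentiles=(70, 85, 95)):
            """Return three foreground masks from low, medium, and high thresholds.

            Here the thresholds are simply percentiles of the absolute
            difference image; the paper derives them adaptively from the
            difference distribution.
            """
            diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
            thresholds = np.percentile(diff, percentiles)
            return [diff > t for t in thresholds]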

    Automatic human trajectory destination prediction from video

    This paper presents an intelligent human trajectory destination detection system from video. The system assumes passive collection of video from a wide scene used by humans in their daily motion activities, such as walking towards a door. The proposed system includes three main modules, namely human blob detection, star skeleton detection, and destination area prediction. It works directly with raw video, producing motion features for the destination prediction module, such as position, velocity, and acceleration from detected human skeletons, resulting in several input features that are used to train a machine learning classifier. We adopted a university campus exterior scene for the experimental study, which includes 348 pedestrian trajectories from 171 videos and five destination areas: A, B, C, D and E. A total of six data processing combinations and four machine learning classifiers were compared under a realistic growing-window evaluation. Overall, high-quality results were achieved by the best model, which uses 37 skeleton motion inputs, undersampling on training data, and a random forest. The global discrimination, in terms of area under the receiver operating characteristic curve, is around 87%. Furthermore, the best model can predict the five destination classes in advance, obtaining very good ahead-of-time discrimination for classes A, B, C and D, and reasonable ahead-of-time discrimination for class E. (C) 2018 Elsevier Ltd. All rights reserved. This work is funded by the Portuguese Foundation for Science and Technology (FCT - Fundação para a Ciência e a Tecnologia) under research grant SFRH/BD/84939/2012.
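
    The prediction stage can be sketched with scikit-learn: per-window skeleton motion features feed a random forest over the five destination classes, scored with multi-class ROC AUC. The synthetic data layout, feature count, and split below are assumptions for illustration; the paper's undersampling step and growing-window evaluation are simplified away, and this is not the authors' code.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        # Assumed layout: one row per trajectory window, columns are skeleton motion
        # features (positions, velocities, accelerations); labels are destinations A-E.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(348, 37))       # 348 trajectories, 37 motion features
        y = rng.integers(0, 5, size=348)     # destination classes 0..4 (A..E)

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.3, random_state=0, stratify=y)

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X_train, y_train)

        # One-vs-rest multi-class AUC, the discrimination measure reported in the paper.
        auc = roc_auc_score(y_test, clf.predict_proba(X_test), multi_class="ovr")
        print(f"ROC AUC: {auc:.3f}")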

    Automatic and accurate shadow detection from (potentially) a single image using near-infrared information

    Shadows, due to their prevalence in natural images, are a long-studied phenomenon in digital photography and computer vision. Indeed, their presence can be a hindrance for a number of algorithms; accurate detection (and sometimes subsequent removal) of shadows in images is thus of paramount importance. In this paper, we present a method to detect shadows in a fast and accurate manner. To do so, we employ the inherent sensitivity of digital camera sensors to the near-infrared (NIR) part of the spectrum. We start by observing that commonly encountered light sources have very distinct spectra in the NIR, and propose that ratios of the colour channels (red, green and blue) to the NIR image give valuable information about the impinging illumination. In addition, we assume that shadows are contained in the darker parts of an image in both the visible and the NIR. This latter assumption is corroborated by the fact that a number of colorants are transparent to the NIR, making parts of the image that are dark in both the visible and NIR prime shadow candidates. These hypotheses allow for fast, accurate shadow detection in real, complex scenes, including soft and occlusion shadows. We demonstrate that the process is reliable enough to be performed in-camera on still mosaicked images by simulating a modified colour filter array (CFA) that can simultaneously capture NIR and visible images. Finally, we show that our binary shadow maps can be the input of a matting algorithm to improve their precision in a fully automatic manner.
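
    A minimal sketch of the two cues described here, colour-to-NIR ratios and joint darkness in the visible and NIR, is shown below, assuming aligned float images in [0, 1]. The threshold values are illustrative assumptions; the paper's ratio formulation, in-camera CFA simulation, and matting refinement are omitted.

        import numpy as np

        def nir_shadow_map(rgb: np.ndarray, nir: np.ndarray,
                           dark_thresh: float = 0.3,
                           ratio_thresh: float = 1.0) -> np.ndarray:
            """Boolean shadow map from visible + NIR images in [0, 1].

            Pixels that are dark in both the visible and the NIR, and whose
            visible/NIR ratio suggests skylight-dominated illumination, are
            flagged as shadow. Thresholds are illustrative assumptions.
            """
            eps = 1e-6
            luminance = rgb.mean(axis=2)             # crude visible brightness
            ratio = luminance / (nir + eps)          # colour-to-NIR ratio cue
            dark_both = (luminance < dark_thresh) & (nir < dark_thresh)
            return dark_both & (ratio < ratio_thresh)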

    A mathematical model for computerized car crash detection using computer vision techniques

    My proposed approach to the automatic detection of traffic accidents at a signalized intersection is presented here. In this method, a digital camera is strategically placed to view the entire intersection. The images are captured, processed, and analyzed for the presence of vehicles and pedestrians in the proposed detection zones. Those images are further processed to detect whether an accident has occurred. The mathematical model presented is a Poisson distribution that predicts the number of accidents at an intersection per week, which can be used as an approximation for modeling the crash process. We believe that the crash process can be modeled using a two-state method, in which the intersection is in one of two states: clear (no accident) or obstructed (accident). We can then incorporate a rule-based AI system, which helps us identify that a crash has taken place or will possibly take place. We have modeled the intersection as a service facility that processes vehicles in a relatively small amount of time. A traffic accident is then perceived as an interruption of that service.
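
    The Poisson part of the model is easy to make concrete: with a weekly accident rate λ at the intersection, the probability of observing k accidents in a week is P(k) = e^(-λ) λ^k / k!. The short sketch below evaluates that distribution; the rate value is an assumed example, not one estimated in the thesis.

        import math

        def poisson_pmf(k: int, lam: float) -> float:
            """P(k accidents in a week) under a Poisson model with weekly rate `lam`."""
            return math.exp(-lam) * lam ** k / math.factorial(k)

        # Assumed example rate of 0.5 accidents per week at the monitored intersection.
        lam = 0.5
        for k in range(4):
            print(f"P({k} accidents) = {poisson_pmf(k, lam):.4f}")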